FAST: earlyReading English

Composite

Rating Summary

Classification Accuracy: Full bubble
Generalizability: Moderate Low
Reliability: Half bubble
Validity: Full bubble
Disaggregated Reliability and Validity Data: Full bubble
Efficiency
Administration: Individual
Administration & Scoring Time: 5 minutes
Scoring Key: Computer scored
Benchmarks / Norms: Yes

The Formative Assessment System for Teachers™ (FAST) is online software that requires no hardware or special add-ons. FAST is supported by an extensive set of materials for teachers and students, including self-directed training modules that allow teachers to become certified to administer each of the FAST assessments. The entire FAST assessment package (i.e., reading, math, behavior, and online training) is provided at an annual flat rate of $6 per student.

Testers will require 1-4 hours of training.

Paraprofessionals can administer the test.

Where to Obtain: http://www.fastbridge.org/

Address:

FastBridge Learning
520 Nicollet Mall, Suite 910
Minneapolis, MN 55402-1057

Phone: 612-254-2534

Website: http://www.fastbridge.org/

Training materials are included in the cost of the tool.  Additional, optional on-site and webinar-based training services are available for a fee.

Ongoing technical support is available by calling 612-424-3710 or emailing fast1@umn.edu.

The Formative Assessment System for Teachers (FAST) earlyReading measure is designed to assess both unified and component skills associated with kindergarten and first-grade reading achievement. earlyReading is intended to enable screening and progress monitoring across four domains of reading (Concepts of Print, Phonemic Awareness, Phonics, and Decoding), provide domain-specific assessments of these component skills, and yield a general estimate of overall reading achievement.

The tool is intended for use in grades K-1 or with ages 5-7.

Administration is computerized and takes 5 minutes per student. Scoring is done automatically within the software and requires no additional time.

Available scores include raw scores, local through national percentile scores, local through national growth norms, developmental benchmarks and cut points, accuracy rates, and error analysis.

 

Classification Accuracy

Classification Accuracy in Predicting Proficiency on GRADE (Spring)

 

Columns:
(1) Kindergarten, 15th percentile, n = 212
(2) Kindergarten, 40th percentile, n = 214
(3) 1st Grade, 15th percentile, n = 124
(4) 1st Grade, 40th percentile, n = 212,412

False Positive Rate: 0.12, 0.23, 0.10, 0.08
False Negative Rate: 0.13, 0.20, 0.11, 0.08
Sensitivity: 0.88, 0.80, 0.89, 0.92
Specificity: 0.88, 0.77, 0.90, 0.92
Positive Predictive Power: 0.37, 0.58, 0.42, 0.75
Negative Predictive Power: 0.99, 0.91, 0.99, 0.98
Overall Classification Rate: 0.88, 0.78, 0.90, 0.92
AUC (ROC): 0.95, 0.84, 0.99, 0.98
Base Rate: 0.08, 0.29, 0.07, 0.21
Cut Points: 52, 40, 45, 51
At XX% Sensitivity, Specificity equals: 88% / 0.88, 90% / 0.58, 89% / 0.99, 92% / 0.92
At XX% Sensitivity, Specificity equals: 81% / 0.91, 80% / 0.77, 78% / 0.99, 81% / 0.96
At XX% Sensitivity, Specificity equals: 69% / 0.94, 67% / 0.88, 67% / 0.99, 69% / 0.99
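The rows above all follow directly from a 2x2 confusion matrix of screening decisions versus criterion outcomes. A minimal sketch of those definitions is below; the counts are hypothetical, chosen only to roughly reproduce the kindergarten 15th-percentile column (n = 212, base rate ~0.08), not the study's actual data. They illustrate why positive predictive power can be low (0.37 in the table) even when sensitivity and specificity are both high, once the base rate of at-risk students is small.

```python
def classification_metrics(tp, fp, tn, fn):
    """Screening classification metrics from confusion-matrix counts.

    tp/fn: at-risk students flagged / missed by the screener;
    fp/tn: not-at-risk students flagged / correctly passed.
    """
    pos, neg = tp + fn, fp + tn          # criterion positives and negatives
    n = pos + neg
    return {
        "sensitivity": tp / pos,
        "specificity": tn / neg,
        "false_positive_rate": fp / neg,
        "false_negative_rate": fn / pos,
        "positive_predictive_power": tp / (tp + fp),
        "negative_predictive_power": tn / (tn + fn),
        "overall_classification_rate": (tp + tn) / n,
        "base_rate": pos / n,
    }

# Hypothetical counts approximating the kindergarten 15th-percentile column:
m = classification_metrics(tp=15, fp=23, tn=172, fn=2)
```

With these counts, sensitivity and specificity both round to 0.88, yet positive predictive power is only about 0.39: most flagged students are false positives because so few students are truly at risk.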

 

Generalizability

Description of study sample:

- Number of states: 1 (Minnesota)
- Regions: 1
- Gender: 47% male, 53% female
- Race/ethnicity: 65% White, non-Hispanic; 0% American Indian/Alaska Native; 23% Black, non-Hispanic; 0% Asian/Pacific Islander; 6% Hispanic; 3% other
- Disability status: approximately 11% students with disabilities (SWDs)
- First language: approximately 90% English
- Language proficiency status: 100% English proficient

 

Reliability

Each row lists: type of reliability; grade; n; median coefficient; sample information. No SEM values were reported.

Test-retest (Fall to Winter): K, n = 185, 0.64
Test-retest (Fall to Spring): K, n = 185, 0.70
Test-retest (Winter to Spring): K, n = 185, 0.76
Test-retest (Fall to Winter): 1st Grade, n = 102, 0.92
Test-retest (Fall to Spring): 1st Grade, n = 102, 0.84
Test-retest (Winter to Spring): 1st Grade, n = 102, 0.94

For the six test-retest rows above, the majority of students within the school district were White (78%), with the remaining students identified as African American (19%) or other (3%).

Delayed test-retest reliability: 1st Grade, n = 5,363, 0.91. Winter to Spring data collection.
Delayed test-retest reliability: K, n = 8,726, 0.84. Winter to Spring data collection.
Delayed test-retest reliability: 1st Grade, n = 1,348, 0.89. Winter to May data collection.
Delayed test-retest reliability: 1st Grade, n = 1,120, 0.94. One-month delay.
Delayed test-retest reliability: K, n = 96, 0.81. Winter to Spring data collection. The sample consisted of approximately 118 kindergarten students (42.9% female); the majority (76.7%) were not eligible to receive special education services.
Delayed test-retest reliability: 1st Grade, n = 112, 0.90. Winter to Spring data collection. The sample consisted of approximately 145 first-grade students (45.3% female); the majority (79.7%) were not eligible to receive special education services. The majority of students were White (64.2%), followed by 20.9% Hispanic, 5.4% African American, and 4.1% Asian.
Delayed test-retest reliability: K, n = 191, 0.91. Fall to Winter data collection. The sample consisted of approximately 233 kindergarten students (45.5% female). The majority of students were White (60.5%), followed by 20.2% African American, 9.9% Hispanic, 6.0% Asian, and 3.4% multiracial.

 

Validity

 

Each row lists: type of validity; grade; test or criterion; n; median coefficient; timing.

Predictive: K, GRADE, n = 173, 0.68, Fall to Spring prediction
Predictive: K, GRADE, n = 173, 0.69, Winter to Spring prediction
Concurrent: K, GRADE, n = 173, 0.67, data collected in Spring
Predictive: 1st Grade, GRADE, n = 100, 0.72, Fall to Spring prediction
Predictive: 1st Grade, GRADE, n = 100, 0.81, Winter to Spring prediction
Concurrent: 1st Grade, GRADE, n = 100, 0.83, data collected in Spring

For all rows: in District 1, the majority of students were White (78%), with the remaining students identified as African American (19%) or other (3%); forty to fifty percent of students at each school received free or reduced-price lunch. In District 2, the majority of students were White (53%), with the remaining students identified as African American (26%), Hispanic (11%), Asian (8%), or other (2%).

Construct and content validity depend substantially on the development and selection of stimuli. See below for a description of the content.

Concepts of Print
This measure is designed to assess students’ familiarity with the structure of written English. It assesses whether students recognize how to hold a book (orientation); know that the printed text, not just the picture, carries the message (e.g., “show me where I should start reading”); know that print is read from left to right (e.g., “where do I start reading,” “what word is read next,” “point while I read”); understand the importance of line, word, and letter sequence/order; distinguish between letters, words, and sentences; know when to turn pages; and understand the meaning of different types of punctuation (e.g., quotation marks, some use of double punctuation, capital letters, and accent marks).

Test Construction. The first page contains seven items. The first item reads “Please turn this page over so it is ready for you to read.” The next six items require students to point based on various prompts. Example prompts include “point to the (upper/lowercase) letter standing alone,” “point to the word standing alone,” “point to the sentence,” “point to the two words standing alone,” and “point to where I should start reading.”

Onset Sound
In this phonemic awareness task, children are presented with a set of pictures and are asked either to identify the picture that begins with a particular sound or to generate the initial sound for a particular picture.

Test Construction. Items for all Onset Sound forms were selected from a large word bank. A key factor in determining which words were appropriate to use as items was whether or not they could be illustrated in a simple drawing. For example, illustrations involving arrows pointing to the object of interest were dismissed due to high potential for confusion. A total of 20 forms were created, each with 16 items, all randomized in their order of presentation. The 16 items were broken into 4 sets of 4, in which the same initial sound was not repeated on the same form. The initial sound was also randomized, so as not to correspond with the order of the items. Each item set contains three selection questions (e.g., “Which one begins with /s/?”) and one production question (e.g., “What’s the first sound in the word ‘fruit’?”). The student stimuli form is comprised of four boxes, with each box containing a picture. The item sets are represented in the boxes as follows: upper left, upper right, lower left, and lower right.

Letter Naming
The Letter Naming task assesses the student’s ability and automaticity at naming upper-case and lower-case letters in isolation. The examiner and student each have the same page of letters presented in a random order. As the student names the letters, the examiner marks any errors on his/her copy. The resulting score is the number of letters named correctly.

Test Construction. All 26 letters of the English alphabet were used, each once in upper-case and once in lower-case, before any letter repeats. Each form alternates rows of all lower-case and all upper-case letters (e.g., the first row is all lower-case, the second all upper-case, and so on). Within the first 26 letters, each letter of the alphabet appears in either upper-case or lower-case; the second set of 26 contains the opposite case of each letter. Upper-case and lower-case letter pairs were categorized as “dissimilar” or “same/moderate similarity.” The first two lower-case letters were randomly chosen from the “same/moderate similarity” category, and the third letter from the “dissimilar” category. Each set of three letters thereafter contained one randomly chosen “dissimilar” letter and two “same/moderate similarity” letters, with the order within each set randomized after the first set. After the first appearance of each letter in both cases, letters were chosen at random regardless of similarity. Each form has a total of 10 rows and 10 columns.

Letter Sound
The Letter Sounds task assesses the student’s ability and automaticity with saying the sounds of uppercase and lowercase letters in isolation. The examiner and student each have the same page of letters in a random order. As the student says the sounds of the letters, the examiner marks any errors on his/her copy. The resulting score is the number of letter sounds correctly identified.

Test Construction. All 26 letters in the English alphabet were used. Every form includes each letter once in upper-case and once in lower-case before repeating. Each form alternates rows by case: the first row is all lower-case, the second all upper-case, and so on. The order of letters was randomly chosen and alternated between continuous and stop sounds. After the first appearance of a letter (upper- or lower-case), the order of letters was randomly chosen. The first line of letters did not contain consonants judged to be less useful for beginning readers (j, q, z, y, x, v, and w). The letters c and g and all vowels were excluded from the automaticity portion of the measure and placed separately at the bottom of the form, with directions provided to solicit the soft and hard sounds of c and g and the short and long sounds of each vowel.

Rhyming
In this phonemic awareness task, children are presented with a set of pictures and are asked either to identify the picture that rhymes with a particular word or to generate another word that rhymes with a spoken word. This measure is only available as a screener; no progress monitoring forms are available.

Test Construction. Items for the Rhyming screening form were selected from a large word bank. As with the Onset Sound item criteria, a key factor in determining whether a word was appropriate was whether it could be illustrated in a simple drawing. Each item set contains three selection questions (e.g., “Which one rhymes with wall?”) and one production question (e.g., “What’s another word that rhymes with wall?”). The student stimuli form consists of four boxes, each containing a picture. The item sets are represented in the boxes as follows: upper left, upper right, lower left, and lower right. Selected words were reviewed by the authors and pilot tested.

Word Blending
The Blending task assesses the student’s ability to form a word from individually spoken sounds or phonemes. The examiner says each phoneme in a word and asks the student to say the complete word. The resulting score is 1 if the student says the word correctly and 0 if s/he does not produce the correct word. For example, if the examiner says /t/ /o/ /p/, the score is 1 if the student says top and 0 if the student says anything else.

Test Construction. Items for Blending came from a bank of 210 words. All words with short vowel sounds are decodable consonant-vowel-consonant (CVC) words or decodable words with initial or final blends (CCVC or CVCC). Words with long vowel sounds (9% of the total number of words) contain vowel digraphs or are CVC+e. Half of the total words begin with continuous sounds and half begin with stop sounds. A total of 20 progress monitoring forms were created, each containing 10 words. The first six items have 3 phonemes; items 7-10 have 4 phonemes, with two words with initial blends and two with final blends. Each form also has two words with each vowel in either long or short form. All words on the Blending task are unique across forms, except for the last two forms, which repeat words. Across all sets, 30% of words overlap between the Blending and Segmenting tasks (i.e., the same word may appear in Form 4 of Blending and Form 15 of Segmenting). All consonant blends were available to use except -ng and -nk.

Word Segmenting
The Segmenting task assesses the student’s ability to separate a spoken word into individual sounds, or phonemes. The examiner says a word and asks the student to say any sounds s/he hears in the word. The resulting score is the number of correctly identified sounds. For example, if the examiner says the word stop and the student says /s/ /t/ /o/ /p/, the score for that word is 4 because the student correctly identified the 4 sounds in the word. If the student says /st/ /op/, the score for that word is 2 because the student correctly identified two sound chunks, but not the individual phonemes.
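The scoring rule in the example above can be sketched in a few lines. This is a simplified illustration: the segmenting_score helper is hypothetical, real scoring relies on the examiner's judgment of spoken sounds, and the sketch gives no credit at all for responses containing incorrect sounds.

```python
def segmenting_score(target_phonemes, produced_segments):
    """Score a Word Segmenting response: one point per correct segment.

    A fully segmented word earns one point per phoneme; larger sound
    chunks earn one point each. Simplifying assumption: segments must
    concatenate to the target word in order, or the score is 0.
    """
    if "".join(produced_segments) != "".join(target_phonemes):
        return 0
    return len(produced_segments)

# "stop" fully segmented scores 4; segmented into two chunks scores 2.
print(segmenting_score(["s", "t", "o", "p"], ["s", "t", "o", "p"]))  # 4
print(segmenting_score(["s", "t", "o", "p"], ["st", "op"]))          # 2
```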

Test Construction. Items for the Segmenting task came from a bank of 220 words. All words with short vowel sounds are decodable consonant-vowel-consonant (CVC) words or decodable words with initial or final blends (CCVC or CVCC). Words with long vowel sounds (9% of the total words) contain vowel digraphs (CVVC), silent e (CVC+e), or initial blends+e (CCVC+e). R-controlled words were not used. Half of the total words begin with continuous sounds and half begin with stop sounds. Each progress monitoring form contains 10 words. The first six items have 3 phonemes; items 7-10 have 4 phonemes, with two words with initial blends and two with final blends. Each form also has two words with each vowel in either long or short form. All words on the Segmenting task are unique across forms, except for the last two forms, which repeat words. There is no overlap within three corresponding forms for Blending and Segmenting (i.e., all words are unique across Forms 1, 2, and 3 for both tasks). Across all sets, 30% of words overlap between the Blending and Segmenting tasks (i.e., the same word may appear in Form 4 of Blending and Form 15 of Segmenting). All existing consonant blends were available to use.

Sight Words 50
This measure is designed to assess whether students are able to recognize common high-frequency words. (This is distinct from a decodable word measure in that, though some sight words may be decodable, students recognize them with automaticity rather than utilizing cognitive resources to decode them.)

Test Construction. Item response theory was used to evaluate the difficulty of common sight words. Using the word difficulty estimates obtained in the analysis, the 50 easiest words were classified into the “easy” group. Each progress monitoring form consists of one page of 50 words, randomized on each form.

Sight Words 150
This measure is designed to assess whether students are able to recognize common high-frequency words. This is distinct from a decodable word measure in that, though some sight words may be decodable, students recognize them with automaticity rather than utilizing cognitive resources to decode them. Test construction of the screening form and progress monitoring forms differ slightly.

Test Construction. Item response theory was used to evaluate the difficulty of common sight words. Using the word difficulty estimates obtained in the analysis, the first 50 words were classified into the “easy” group, the second set of 50 into the “medium” group, and the third set of 50 into the “difficult” group. For the screening form, all 50 words in each category were randomly scrambled and placed on one of three pages, ordered “easy” (page 1), “medium” (page 2), and “difficult” (page 3) for administration. Each progress monitoring form also consists of 3 pages with 50 words per page; however, the first line of each form consists of the “easy” words, the second line of “medium” words, and the third line of “difficult” words, each randomized on the page. Each line thereafter continues to alternate in the “easy,” “medium,” “difficult” fashion. The differences between screening and progress monitoring construction support gauging student performance on the full word inventory for screening while reducing bias in progress monitoring.
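The grouping step described above can be sketched as follows, assuming the IRT difficulty estimates from the calibration are already in hand. The difficulty_groups helper and the word-to-difficulty mapping are illustrative, not FAST's actual implementation.

```python
def difficulty_groups(word_difficulty):
    """Split 150 calibrated sight words into easy/medium/difficult fifties.

    word_difficulty maps each word to its estimated IRT difficulty
    (higher = harder); the estimates come from a prior calibration.
    """
    ordered = sorted(word_difficulty, key=word_difficulty.get)
    return ordered[:50], ordered[50:100], ordered[100:150]
```

Sorting by the difficulty estimate, rather than by any surface feature of the words, is what lets the three screening pages progress from easiest to hardest.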

Decodable Words
The Decodable Words task assesses the student’s ability to read phonetically regular words. As the student develops automaticity with letter-sound correspondence, s/he will move from saying the sound of each letter to the goal of reading whole words. The examiner and student each have the same page of words. As the student says the sounds or reads the words, the examiner marks errors on his/her own copy. The resulting scores are 1) the number of letter-sounds correctly identified and 2) the number of whole words read correctly.

Test Construction. A word bank containing 184 consonant-vowel-consonant (CVC) real words was created. The first 15-20 words of each progress monitoring form contained words with unique initial sounds in random order. The remaining 30-35 words were randomly selected from the word bank, for a total of 50 words per form. All words on each form were unique. Each form was organized into ten rows of five words.

Nonsense Words
This measure is designed to assess whether students are able to recognize letter-sound correspondences and blend them automatically. The logic behind a nonsense word measure is that it assesses whether students can decode strings of letters and read them automatically while controlling for potential familiarity that students may have when decoding real words. Nonsense words should be decodable strings of letters that are not established words in the English language but are allowable letter sequences in English.

Test Construction. A word bank containing 378 consonant-vowel-consonant (CVC) or vowel-consonant (VC) pretend words was created. Pretend words beginning with ci- or ce-, or ending in -l, -q, -r, or -y, were excluded from the word bank, as were inappropriate-sounding words. No pretend word has a definition in a standard English dictionary. The first 15-20 words of each progress monitoring form contained words with unique initial sounds in random order. The remaining 30-35 words were randomly selected from the word bank, for a total of 50 words per form. All words on each form were unique. Each form was organized into ten rows of five words.
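The mechanical exclusions above can be sketched as a filter over candidate letter strings. In this illustration, REAL_WORDS is a tiny stand-in for the dictionary screen, no inappropriate-word screen is attempted, and the result is not curated down to 378 items; it only demonstrates the stated pattern rules.

```python
import string

# Hypothetical stand-in for a real dictionary screen:
REAL_WORDS = {"cat", "dog", "sun", "it", "at"}

def build_nonsense_bank():
    """Generate CVC and VC pretend-word candidates under the stated exclusions."""
    vowels = "aeiou"
    consonants = [c for c in string.ascii_lowercase if c not in vowels]
    candidates = (
        [c1 + v + c2 for c1 in consonants for v in vowels for c2 in consonants]
        + [v + c2 for v in vowels for c2 in consonants]
    )
    return [
        w for w in candidates
        if not w.startswith(("ci", "ce"))   # no ci-, ce- onsets
        and w[-1] not in "lqry"             # no -l, -q, -r, -y endings
        and w not in REAL_WORDS             # pretend words only
    ]
```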

Sentence Reading
This measure is designed to assess automaticity while reading connected text. This measure is only available as a screener. For progress monitoring purposes, CBM-Reading is recommended.

Test Construction. The Sentence Reading screening form was created as part of a larger set of passages designed for FAST CBM-Reading. Sixty passages were created using guidelines that included a designated word bank, a strict writing rubric, and an extensive editing process. All passages were designed to target the reading abilities of students who are still learning to read connected text and who need an emphasis on simple sentence structure as well as words that are simply structured or high in frequency. The Sentence Reading passage was pulled directly from this passage set and broken apart across several pages with basic pictures. It is of median difficulty compared to the rest of the passage set. The first three pages each contain one sentence, with progressively more sentences across the last three pages.

Disaggregated Reliability, Validity, and Classification Data for Diverse Populations

Classification Accuracy in Predicting Proficiency on aReading (Adaptive Reading)

 

Columns (Kindergarten):
(1) Winter predicting Spring, 15th percentile, Non-White, n = 50
(2) Winter predicting Spring, 15th percentile, White, n = 83
(3) Winter predicting Spring, 40th percentile, Non-White, n = 50
(4) Winter predicting Spring, 40th percentile, White, n = 83
(5) Winter predicting Winter, 40th percentile, Non-White, n = 50
(6) Winter predicting Winter, 40th percentile, White, n = 83

False Positive Rate: 0.10, 0.17, 0.15, 0.30, 0.17, 0.29
False Negative Rate: 0.30, 0.47, 0.35, 0.32, 0.44, 0.29
Sensitivity: 0.70, 0.53, 0.65, 0.68, 0.56, 0.71
Specificity: 0.90, 0.83, 0.85, 0.69, 0.83, 0.71
Positive Predictive Power: 0.64, 0.45, 0.79, 0.64, 0.79, 0.64
Negative Predictive Power: 0.92, 0.87, 0.74, 0.73, 0.61, 0.77
Overall Classification Rate: 0.86, 0.77, 0.76, 0.69, 0.68, 0.71
AUC (ROC): 0.89, 0.84, 0.89, 0.88, 0.86, 0.87
Base Rate: 0.20, 0.20, 0.46, 0.45, 0.54, 0.42
Cut Points: 43, 47.5, 47.5, 49, 49.5, 50.5
At XX% Sensitivity, Specificity equals: 86% / 0.76, 92% / 0.43, 94% / 0.50, 93% / 0.64, 94% / 0.17, 93% / 0.49
At XX% Sensitivity, Specificity equals: 71% / 0.82, 85% / 0.66, 81% / 0.62, 78% / 0.79, 83% / 0.83, 83% / 0.78
At XX% Sensitivity, Specificity equals: not reported, 69% / 0.79, 75% / 0.87, 63% / 0.90, 72% / 1.00, 79% / 0.84

Classification Accuracy in Predicting Proficiency on aReading (Adaptive Reading)

 

Columns (1st Grade):
(1) Winter predicting Spring, 15th percentile, Non-White, n = 53
(2) Winter predicting Spring, 15th percentile, White, n = 95
(3) Winter predicting Spring, 40th percentile, Non-White, n = 53
(4) Winter predicting Spring, 40th percentile, White, n = 95
(5) Winter predicting Winter, 40th percentile, Non-White, n = 53
(6) Winter predicting Winter, 40th percentile, White, n = 95
(7) Spring predicting Spring, 15th percentile, Non-White, n = 53
(8) Spring predicting Spring, 15th percentile, White, n = 95
(9) Spring predicting Spring, 40th percentile, Non-White, n = 53
(10) Spring predicting Spring, 40th percentile, White, n = 95

False Positive Rate: 0.16, 0.16, 0.13, 0.13, 0.05, 0.23, 0.16, 0.22, 0.43, 0.42
False Negative Rate: 0.36, 0.38, 0.30, 0.26, 0.32, 0.36, 0.32, 0.33, 0.10, 0.17
Sensitivity: 0.64, 0.62, 0.70, 0.74, 0.68, 0.64, 0.68, 0.67, 0.90, 0.83
Specificity: 0.84, 0.84, 0.87, 0.88, 0.95, 0.77, 0.84, 0.78, 0.57, 0.58
Positive Predictive Power: 0.74, 0.52, 0.88, 0.85, 0.96, 0.73, 0.75, 0.47, 0.73, 0.66
Negative Predictive Power: 0.76, 0.89, 0.69, 0.78, 0.62, 0.69, 0.79, 0.89, 0.81, 0.78
Overall Classification Rate: 0.75, 0.80, 0.77, 0.81, 0.77, 0.71, 0.77, 0.76, 0.75, 0.71
AUC (ROC): 0.87, 0.91, 0.87, 0.93, 0.95, 0.88, 0.89, 0.91, 0.87, 0.90
Base Rate: 0.42, 0.22, 0.57, 0.49, 0.64, 0.49, 0.42, 0.22, 0.57, 0.49
Cut Points: 44.5, 45.5, 51, 52.5, 54, 49.5, 52.5, 49, 55.5, 58
At XX% Sensitivity, Specificity equals: 94% / 0.59, 94% / 0.79, 92% / 0.64, 92% / 0.82, 92% / 0.90, 91% / 0.58, 91% / 0.67, 93% / 0.79, 93% / 0.50, 95% / 0.63
At XX% Sensitivity, Specificity equals: 83% / 0.71, 81% / 0.79, 79% / 0.82, 82% / 0.91, 80% / 0.90, 82% / 0.76, 81% / 0.71, 80% / 0.85, 82% / 0.79, 80% / 0.86
At XX% Sensitivity, Specificity equals: 72% / 0.88, 75% / 0.87, 58% / 0.91, 71% / 0.94, 72% / 0.90, 71% / 0.82, 71% / 0.76, 67% / 0.88, 71% / 0.79, 70% / 0.91