FAST: Adaptive Reading

Reading

Rating Summary

Classification Accuracy: Full bubble
Generalizability: Moderate Low
Reliability: Full bubble
Validity: Full bubble
Disaggregated Reliability and Validity Data: Full bubble

Efficiency

Administration: Individual; Group
Administration & Scoring Time: 6-20 minutes
Scoring Key: Computer scored
Benchmarks / Norms: Yes
Cost

Adaptive Reading is part of the Formative Assessment System for Teachers (FAST). It is provided as a bundle of assessments (earlyReading, CBM-R, Adaptive Reading, Adaptive Math) with online administration, score reporting, and database support. The system is free to districts that contribute to the research mission of the FAST Team; a cost of $2 per child is anticipated at some point in the future.

Technology, Human Resources, and Accommodations for Special Needs

Computer and Internet access are required for full use of product services.

Testers will require less than 1 hour of training.

Paraprofessionals can administer the test.

Service and Support

Address:

FastBridge Learning
520 Nicollet Mall, Suite 910
Minneapolis, MN 55402-1057

Phone: 612-254-2534

Web Site: http://www.fastbridge.org/
Email: fast1@umn.edu

Field-tested training manuals are included and provide all implementation information. A network of trainers is available for in-district support.

Ongoing technical support is available through the website, which provides contact information by phone, office address, and email.

Purpose and Other Implementation Information

Adaptive Reading is a broad measure of reading achievement for use across the primary grades (K-5), with a K-12 version available for Fall 2013. As an adaptive measure of broad reading achievement, Adaptive Reading is designed to assess five instructional targets identified by the National Reading Panel: Concepts of Print, Phonological Awareness, Phonics, Vocabulary, and Comprehension. All items are cross-referenced with the national standards for reading. At the time of this submission, Adaptive Reading provided a single score of broad reading achievement. Ongoing research and development will establish subtest scores for skills analysis and options for progress-monitoring functionality; those features are expected for Fall 2015.

Usage and Reporting

Adaptive Reading is delivered with browser-based software; there is nothing to install locally. It is individually administered and scored by the computer. Administrations often occur in a computer lab or at a classroom workstation. A mouse or touchscreen device (e.g., an iPad) and earphones are necessary for administration. Administrations typically take 6 to 15 minutes, depending on grade level, student ability, and other factors.
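Neither the submission nor this summary documents the item-selection algorithm, so the following is only a generic sketch of how a computer-adaptive measure of this kind commonly chooses items: estimate the student's ability, then administer the unadministered item with maximum Fisher information at that estimate under a Rasch model. All item IDs and difficulty values below are hypothetical.

    # Generic maximum-information item selection for a Rasch-model CAT.
    # Illustrative only; not FAST's actual (undocumented) algorithm.
    import math

    def item_information(theta: float, difficulty: float) -> float:
        """Fisher information of a Rasch item at ability theta: I = p * (1 - p)."""
        p = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
        return p * (1.0 - p)

    def next_item(theta: float, pool: dict, administered: set) -> str:
        """Pick the unadministered item whose information is highest at theta."""
        candidates = {item: b for item, b in pool.items() if item not in administered}
        return max(candidates, key=lambda item: item_information(theta, candidates[item]))

    # Hypothetical pool: item IDs mapped to Rasch difficulties (logits).
    pool = {"cp_01": -2.0, "pa_07": -1.0, "ph_03": 0.0, "vo_12": 1.0, "co_05": 2.0}
    print(next_item(theta=0.4, pool=pool, administered={"ph_03"}))  # -> "vo_12"

Because Rasch information peaks where item difficulty equals student ability, this rule administers items the student answers correctly about half the time, which is what keeps adaptive administrations short.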

Available scores include: raw, standard, percentile, IRT-based, developmental benchmarks, equated, and composite scores.

 

Classification Accuracy

Classification Accuracy in Predicting Proficiency Level on the Gates-MacGinitie Reading Test
                                          Grade 1   Grade 2   Grade 3   Grade 4   Grade 5
                                          n = 116   n = 188   n = 159   n = 156   n = 159
False Positive Rate                       0.13      0.20      0.20      0.14      0.17
False Negative Rate                       0.09      0.26      0.14      0.13      0.16
Sensitivity                               0.91      0.74      0.86      0.87      0.84
Specificity                               0.87      0.80      0.80      0.86      0.84
Positive Predictive Power                 0.64      0.49      0.49      0.60      0.56
Negative Predictive Power                 0.98      0.92      0.96      0.96      0.96
Overall Classification Rate               0.88      0.79      0.81      0.86      0.83
AUC (ROC)                                 0.94      0.88      0.92      0.94      0.87
Base Rate                                 0.20      0.21      0.18      0.20      0.20
Cut Points                                317*      381*      418*      439*      439*
At 90% Sensitivity, Specificity equals    0.87      0.65      0.82      0.82      0.67
At 80% Sensitivity, Specificity equals    0.89      0.77      0.82      0.90      0.84
At 70% Sensitivity, Specificity equals    0.91      0.84      0.90      0.94      0.90

*Cut points were set by using students' raw scores on the Gates-MacGinitie Reading Test (GMRT-4th) to compute the 20th percentile by grade; cut points for the C-BAS-R scale were then computed with the technique of Silberglitt and Hintze (2005) to jointly maximize sensitivity and specificity. The C-BAS-R scale ranges from 200 to 800 (M = 400, SD = 75).
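The submission does not reproduce the cut-point computation. One common implementation of "jointly maximize sensitivity and specificity" is Youden's J statistic (J = sensitivity + specificity - 1), evaluated at every candidate cut; whether this matches the exact Silberglitt and Hintze (2005) procedure is an assumption, and the data below are simulated, not FAST data.

    # Choose a screening cut point by maximizing Youden's J.
    # Simulated data on a C-BAS-R-like scale (M = 400, SD = 75).
    import numpy as np

    def metrics_at_cut(scores, at_risk, cut):
        """Sensitivity and specificity when scores below `cut` flag risk."""
        flagged = scores < cut
        tp = np.sum(flagged & at_risk)    # at-risk students correctly flagged
        fn = np.sum(~flagged & at_risk)   # at-risk students missed
        fp = np.sum(flagged & ~at_risk)   # not-at-risk students flagged anyway
        tn = np.sum(~flagged & ~at_risk)
        return tp / (tp + fn), tn / (tn + fp)

    def youden_cut(scores, at_risk):
        """Scan candidate cuts; keep the one maximizing sens + spec - 1."""
        cuts = np.unique(scores)
        js = [sum(metrics_at_cut(scores, at_risk, c)) - 1 for c in cuts]
        return cuts[int(np.argmax(js))]

    rng = np.random.default_rng(0)
    scores = rng.normal(400, 75, 500)                   # screener scores
    at_risk = scores + rng.normal(0, 40, 500) < 330     # noisy criterion outcome
    cut = youden_cut(scores, at_risk)
    sens, spec = metrics_at_cut(scores, at_risk, cut)
    print(f"cut = {cut:.1f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")

The table's trailing rows (specificity at fixed 90%, 80%, and 70% sensitivity) fall out of the same scan by fixing sensitivity instead of maximizing J.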

Generalizability

Description of study sample:

  • Number of States: 1
  • Size: 2,333
  • Gender: Unknown
  • SES: 19%* eligible for free or reduced-price lunch
  • Race/Ethnicity:
    • 70%* White, Non-Hispanic
    • 6%* Black, Non-Hispanic
    • 7%* Hispanic
    • 1%* American Indian/Alaska Native
    • 16%* Asian/Pacific Islander
  • Disability classification: 10.5%* Special Education
  • Language proficiency status: 14%* LEP

*Based on school-wide demographics; demographics for the specific sample were not collected

Reliability

Alternate Forms (Grades K-5; n = 2,333). SEM: 13.5-21.75*; INF: 30.86-11.89.** Post-hoc analysis of test length using real student data; acceptable SEM acquired at 30 items.

Test/Retest (delayed test/retest, 3 mo.) (Grades K-5; n = 2,038). Coefficient range: 0.71-0.87 (Grade 1: 0.71; Grade 2: 0.87; Grade 3: 0.81; Grade 4: 0.86; Grade 5: 0.75); median: 0.79. SEM: 13.5-21.75*; INF: 30.86-11.89.** Growth was measured four times over the academic year.

Cronbach's alpha (Grades K-5; n = 2,333). Median coefficient: 0.95. SEM: 13.5-21.75.* This is a proxy for internal consistency and parallel-form reliability (Samejima, 1994).

*Note. The standard for acceptable SEM is .30 on a logit scale, with a preference to approximate .20 (logit SEMs of .20, .25, and .30 correspond with high-stakes, medium-stakes, and low-stakes decisions, respectively). SEM was transformed to CBAS-R scale units (i.e., logit SEM × 75). A logit SEM of .20 corresponds with a test information value of 25; a logit SEM of .25 with an information value of 16; and a logit SEM of .30 with an information value of 11.1.

**Note. INF = Information.
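These two notes are tied together by the standard IRT identity SEM = 1 / sqrt(information); the submission then rescales logit SEMs to CBAS-R units by multiplying by the scale SD of 75. A minimal sketch reproducing the table's numbers (the function names are ours, not FAST's):

    # SEM <-> information conversions implied by the notes above.
    import math

    def sem_from_information(info: float) -> float:
        """Logit-scale SEM implied by a test information value."""
        return 1.0 / math.sqrt(info)

    def to_cbasr_units(sem_logit: float, sd: float = 75.0) -> float:
        """Rescale a logit SEM to CBAS-R scale units (M = 400, SD = 75)."""
        return sem_logit * sd

    # The table's information range 30.86-11.89 reproduces its SEM range:
    print(round(to_cbasr_units(sem_from_information(30.86)), 2))  # 13.5
    print(round(to_cbasr_units(sem_from_information(11.89)), 2))  # 21.75

    # The decision standards: logit SEMs .20/.25/.30 <-> information 25/16/11.1.
    for sem in (0.20, 0.25, 0.30):
        print(sem, round(1.0 / sem ** 2, 1))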

Validity

Content validity (Grades K-5; criterion: reading teachers/experts). Prior to item parameterization, items were created and revised by reading teachers and experts.

Content validity (Grades K-3; criterion: items administered by theta level; n = 287). Analysis of difficulty levels across the five domains indicates that C-BAS-R items represent the domains as expected, with Concepts of Print administered at the low end of ability and Comprehension at the high end.

Predictive validity (Grades 1-5; criterion: Gates-MacGinitie). n = 125-215 (Grade 1: 125; Grade 2: 215; Grade 3: 165; Grade 4: 175; Grade 5: 181). Coefficients: 0.64-0.84 (Grade 1: 0.83; Grade 2: 0.75; Grade 3: 0.84; Grade 4: 0.78; Grade 5: 0.64); median: 0.78. Scores for subtests of the Gates-MacGinitie, as well as the composite score, were computed for a subset of students and compared to C-BAS-R data. *Note. Grade 2 is based on the Comprehension subtest, whereas all other grades are based on the overall composite score.

Construct validity (Grades 1-5; criterion: Curriculum-Based Measurement of Oral Reading Fluency). n = 55-171 (Grade 1: 55; Grade 2: 171; Grade 3: 108; Grade 4: 114; Grade 5: 103). Coefficients: 0.56-0.83 (Grade 1: 0.83; Grade 2: 0.81; Grade 3: 0.74; Grade 4: 0.80; Grade 5: 0.56); median: 0.80. Scores for Oral Reading Fluency were collected for a subset of students and compared to C-BAS-R data.

Construct validity (Grades 1-5; criterion: Measures of Academic Progress). n = 55-398 (Grade 1: 55; Grade 2: 302; Grade 3: 391; Grade 4: 398; Grade 5: 376). Coefficients: 0.69-0.83 (Grade 1: 0.69; Grade 2: 0.83; Grade 3: 0.83; Grade 4: 0.77; Grade 5: 0.73); median: 0.77. Scores for the reading portion of the Measures of Academic Progress were collected for a subset of students and compared to C-BAS-R data.
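Each coefficient above summarizes the association between C-BAS-R scores and a criterion measure for the same students. The submission does not name the statistic, so Pearson correlation on paired scores is an assumption here; a minimal sketch with simulated data:

    # Validity coefficient as a Pearson correlation between paired scores.
    import numpy as np

    rng = np.random.default_rng(1)
    cbasr = rng.normal(400, 75, 165)      # simulated C-BAS-R scores
    # Simulated criterion built to correlate ~0.8 with the screener:
    criterion = 0.8 * (cbasr - 400) / 75 + rng.normal(0, 0.6, 165)

    r = np.corrcoef(cbasr, criterion)[0, 1]
    print(f"validity coefficient r = {r:.2f}")   # ~0.8 by construction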

 

Disaggregated Reliability, Validity, and Classification Data for Diverse Populations

Disaggregated Classification Accuracy in Predicting Proficiency on Spring CBMreading Scores

 

                                 1st Grade    1st Grade    3rd Grade    3rd Grade    3rd Grade    3rd Grade
                                 Spring       Spring       Winter       Winter       Winter       Winter
                                 40th %ile    40th %ile    15th %ile    15th %ile    40th %ile    40th %ile
                                 Non-White    White        Non-White    White        Non-White    White
                                 n = 49       n = 93       n = 40       n = 81       n = 40       n = 81
False Positive Rate              0.22         0.24         0.38         0.13         0.47         0.26
False Negative Rate              0.15         0.09         0.07         0.45         0.05         0.18
Sensitivity                      0.85         0.91         0.93         0.55         0.95         0.82
Specificity                      0.78         0.76         0.62         0.87         0.53         0.74
Positive Predictive Power        0.81         0.69         0.56         0.58         0.69         0.70
Negative Predictive Power        0.82         0.94         0.94         0.85         0.91         0.85
Overall Classification Rate      0.82         0.82         0.73         0.79         0.75         0.78
AUC (ROC)                        0.86         0.85         0.89         0.84         0.86         0.87
Base Rate                        0.53         0.37         0.35         0.25         0.53         0.42
Cut Points                       459.5        465          478.5        493.5        481.5        498.5
Specificity at ~90% Sensitivity  0.40 (92%)   0.74 (91%)   0.64 (93%)   0.63 (90%)   0.80 (91%)   0.53 (91%)
Specificity at ~80% Sensitivity  0.80 (81%)   0.78 (82%)   0.77 (86%)   0.70 (80%)   0.80 (86%)   0.74 (82%)
Specificity at ~70% Sensitivity  0.93 (69%)   0.80 (71%)   0.95 (71%)   0.81 (70%)   0.87 (71%)   0.91 (74%)

Note. In the last three rows, the parenthetical value is the achieved sensitivity nearest the 90%, 80%, or 70% target.

 

 

Disaggregated Reliability

Type of Reliability   Grade   n     Coefficient (median)   Subjects; Interval
Delayed test-retest   K       39    0.72                   Non-White students; Winter to Spring
Delayed test-retest   K       84    0.80                   White students; Winter to Spring
Delayed test-retest   1st     47    0.78                   Non-White students; Winter to Spring
Delayed test-retest   1st     99    0.81                   White students; Winter to Spring
Delayed test-retest   2nd     38    0.80                   Non-White students; Winter to Spring
Delayed test-retest   2nd     39    0.70                   Non-White students; Fall to Winter
Delayed test-retest   2nd     78    0.80                   White students; Fall to Winter
Delayed test-retest   2nd     78    0.86                   White students; Fall to Spring
Delayed test-retest   2nd     80    0.85                   White students; Winter to Spring
Delayed test-retest   3rd     38    0.83                   Non-White students; Fall to Winter
Delayed test-retest   3rd     33    0.79                   Non-White students; Fall to Spring
Delayed test-retest   3rd     33    0.87                   Non-White students; Winter to Spring
Delayed test-retest   3rd     83    0.80                   White students; Fall to Winter
Delayed test-retest   3rd     84    0.78                   White students; Fall to Spring
Delayed test-retest   3rd     83    0.87                   White students; Winter to Spring
Delayed test-retest   4th     90    0.87                   White students; Winter to Spring
Delayed test-retest   4th     94    0.86                   White students; Fall to Spring
Delayed test-retest   4th     90    0.88                   White students; Fall to Winter
Delayed test-retest   4th     35    0.78                   Non-White students; Winter to Spring
Delayed test-retest   4th     38    0.75                   Non-White students; Fall to Winter
Delayed test-retest   5th     98    0.89                   White students; Fall to Winter
Delayed test-retest   5th     96    0.83                   White students; Fall to Spring
Delayed test-retest   5th     96    0.91                   White students; Winter to Spring
Delayed test-retest   5th     31    0.91                   Non-White students; Winter to Spring
Delayed test-retest   5th     32    0.91                   Non-White students; Fall to Spring
Delayed test-retest   5th     34    0.85                   Non-White students; Fall to Winter
Delayed test-retest   6th     62    0.87                   Non-White students; Fall to Winter
Delayed test-retest   6th     59    0.75                   Non-White students; Winter to Spring
Delayed test-retest   6th     164   0.83                   White students; Fall to Winter
Delayed test-retest   6th     160   0.83                   White students; Fall to Spring
Delayed test-retest   6th     163   0.86                   White students; Winter to Spring
Delayed test-retest   7th     81    0.86                   Non-White students; Fall to Winter
Delayed test-retest   7th     79    0.77                   Non-White students; Fall to Spring
Delayed test-retest   7th     159   0.86                   White students; Fall to Winter
Delayed test-retest   7th     158   0.79                   White students; Fall to Spring
Delayed test-retest   7th     160   0.77                   White students; Winter to Spring

 

Disaggregated Validity

Type of Validity   Grade   Test or Criterion      n     Coefficient   Subjects/Design
Predictive         1st     CBMreading             45    0.70          Non-White students; Winter to Spring prediction
Concurrent         1st     CBMreading             38    0.74          Non-White students; Winter data collection
Concurrent         1st     CBMreading             45    0.71          Non-White students; Spring data collection
Predictive         1st     CBMreading             90    0.75          White students; Winter to Spring prediction
Concurrent         1st     CBMreading             78    0.70          White students; Winter data collection
Concurrent         1st     CBMreading             91    0.76          White students; Spring data collection
Predictive         2nd     CBMreading             37    0.75          Non-White students; Winter to Spring prediction
Concurrent         2nd     CBMreading             39    0.80          Non-White students; Spring data collection
Predictive         2nd     CBMreading             79    0.77          White students; Winter to Spring prediction
Concurrent         2nd     CBMreading             80    0.79          White students; Spring data collection
Concurrent         2nd     SAT-10                 39    0.83          Non-White students; Winter data collection
Concurrent         2nd     SAT-10                 76    0.80          White students; Winter data collection
Concurrent         2nd     TOSREC                 35    0.72          Non-White students; Spring data collection
Concurrent         2nd     TOSREC                 75    0.78          White students; Spring data collection
Predictive         3rd     CBMreading             31    0.81          Non-White students; Fall to Spring prediction
Predictive         3rd     CBMreading             31    0.86          Non-White students; Fall to Winter prediction
Predictive         3rd     CBMreading             31    0.89          Non-White students; Winter to Spring prediction
Concurrent         3rd     CBMreading             31    0.89          Non-White students; Winter data collection
Concurrent         3rd     CBMreading             31    0.83          Non-White students; Spring data collection
Predictive         3rd     CBMreading             80    0.70          White students; Fall to Spring prediction
Predictive         3rd     CBMreading             81    0.71          White students; Fall to Winter prediction
Predictive         3rd     CBMreading             79    0.71          White students; Winter to Spring prediction
Concurrent         3rd     SAT-10                 33    0.70          Non-White students; Winter data collection
Concurrent         3rd     SAT-10                 79    0.85          White students; Winter data collection
Predictive         4th     CBMreading (WRC/min)   37    0.72          Non-White students; Fall to Spring prediction
Predictive         4th     CBMreading             35    0.74          Non-White students; Fall to Winter prediction
Predictive         4th     CBMreading             89    0.73          White students; Fall to Winter prediction
Concurrent         4th     CBMreading             85    0.74          White students; Winter data collection
Predictive         4th     SAT-10                 37    0.71          Non-White students; Fall to Winter prediction
Concurrent         4th     SAT-10                 34    0.81          Non-White students; Winter data collection
Predictive         4th     SAT-10                 82    0.83          White students; Fall to Winter prediction
Concurrent         4th     SAT-10                 79    0.86          White students; Winter data collection
Predictive         4th     TOSREC                 34    0.78          Non-White students; Fall to Winter prediction
Concurrent         4th     TOSREC                 31    0.74          Non-White students; Winter data collection
Predictive         4th     TOSREC                 33    0.71          Non-White students; Winter to Spring prediction
Predictive         4th     TOSREC                 79    0.76          White students; Fall to Winter prediction
Concurrent         4th     TOSREC                 76    0.79          White students; Winter data collection
Predictive         4th     TOSREC                 84    0.73          White students; Winter to Spring prediction

Note. SAT-10 = Stanford Achievement Test Series, Tenth Edition. TOSREC = Test of Silent Reading Efficiency and Comprehension. WRC/min = words read correctly per minute.