US Dept of Ed Wanted Much More Testing from PARCC and SBAC
On February 20, 2015, I wrote a post on the two Common Core State Standards (CCSS) federally-funded assessment consortia, the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (SBAC), and the setting of PARCC and SBAC cut scores using the National Assessment of Educational Progress (NAEP).
The following post was originally a comment written by veteran Cincinnati arts teacher, Laura Chapman.
I value Chapman’s commentary because she has years of professional experience from which to draw, and she is a relentless, insightful researcher.
Here, Chapman offers noteworthy detail on US Department of Education (USDE) intentions regarding PARCC and SBAC, and on how those now-years-old, USDE consortium testing ambitions will not be realized.
In short, USDE could not have all of the testing that it really wanted. Imagine that.
I worked on the first and second NAEP tests in the visual arts, vintage 1970s. The subject has no deep tradition of testing, and there are serious issues in thinking through the role of talent and education in achievement. The most useful part of the two rounds of NAEP tests in the visual arts was the background information, including the dismal record of access to instruction in and beyond school.
That said, I have also looked at the background on the CCSS and federal funding for PARCC and SBAC tests. USDE had a test-funding plan for the CCSS in April 2010, before the CCSS were published in June. By September 2010, grants were awarded to the PARCC and SBAC groups to begin test development at roughly $300 million total.
Then, out of the blue, the two consortia applied for supplementary funds for the curriculum work needed to do the test development. That was a huge snafu in USDE's thinking from the get-go: just do standards and tests, nothing needed in between. In seeking more money, the grant applicants made these statements:
PARCC will “coordinate with the Smarter Balanced Assessment Consortium on… artificial intelligence scoring, setting achievement levels, and anchoring high school assessments in the knowledge and skills students need to be prepared for postsecondary education and careers” (PARCC, 2010, December, p. 3).
The SMARTER Balanced Assessment Consortium (SBAC) asserted:
“SBAC and PARCC are strongly committed to ensuring comparability between their assessments…[including] collaborative standard setting that will facilitate valid comparisons of achievement levels in each consortium’s summative test…” (SMARTER, 2011, January, p. 31).
Sounds to me like a master plan for reporting one national score tweaked from data on two tests of the CCSS: Form A from PARCC, Form B from SBAC. Of course, their references to "achievement levels" meant they would seek comparable cut scores.
Here are some real problems not addressed:
1. The pending reauthorization of ESEA may make the tests and cut scores entirely optional or just obsolete.
2. Major changes in ESEA were introduced for RttT (Race to the Top) in 2010, including the financing of these tests, along with a change so that the phrase "college- and career-ready" replaced the word "proficiency" in earlier versions of ESEA. So, with or without the PARCC and SBAC tests, states are dealing with the idea of college- and career-ready as the new term of art for high stakes. There are other changes in the federal definitions for college- and career-ready. These may settle the issue of cut scores. I doubt that NAEP cut scores will pass muster as a guide.
3. “College- and career-ready (or readiness)” means, with respect to a student, that the student is prepared for success, without remediation, in credit-bearing entry-level courses in an Institution of Higher Education (IHE) (as defined in section 101(a) of the Higher Education Act), as demonstrated by an assessment score that meets or exceeds the achievement standard for the final high school summative assessment in mathematics or English language arts.
(Note the OR on the high school test, one or the other, but not necessarily both).
4. Here is a brief version of the federal definition of an “Institution of Higher Education” (For a full definition, refer to Sections 101 and 102 of the Higher Education Act). An Institution of Higher Education is a school that:
—Awards a bachelor’s degree, or offers not less than a 2-year program that provides credit toward a degree; or,
—Provides not less than 1 year of training toward gainful employment; or,
—Is a vocational program that provides training for gainful employment and has been in existence for at least two years. And meets all three of the following criteria:
—-Admits as regular students only persons with a high school diploma or equivalent, or persons who are beyond the age of compulsory school attendance;
—-Is public, private, or non-profit;
—-Is accredited or pre-accredited, and is authorized to operate in the state.
5. One more definition (note the last line):
“Achievement standard” means the level of student achievement on summative assessments that indicates that (a) for the final high school summative assessments in mathematics or English language arts, a student is college- and career-ready…; or (b) for summative assessments in mathematics or English language arts at a grade level other than the final high school summative assessments, a student is on track to being college- and career-ready. An achievement standard must be determined using empirical evidence over time. [Emphasis added.]
I think that this means the feds have no idea what they are doing with tests.
This seems to say that USDE will be looking at “longitudinal data” to make inferences about cut scores. The data of most relevance will be: (a) either a math test score or an ELA test score; (b) test scores from PARCC or SBAC; and (c) records of students who have taken at least one of these tests and been admitted into a post-secondary program without the need for remedial courses (presumably remedial coursework in ELA or math).
In 2011, NAEP commissioned a study intended to find out what college tests and cut scores determined entry-level students’ need for remedial/developmental courses in reading and mathematics, based on nationally representative data. The study sought evidence of the validity of statements in NAEP reports about 12th grade students’ “academic preparedness for college.”
The conclusion was that NAEP scores are not likely to work as a standard for placement in remedial courses, because there is nothing approaching a national norm: there is too much variability in preferred college entry tests and cut scores.
For other reasons, the reasoning for these tests and their uses is deeply flawed.
Following USDE specifications for lots of formative assessments, PARCC came up with a total of nine tests a year for grade 8: five in ELA, four in math. (In math, four quarterly scores were supposed to be reduced to a summary score. ELA followed the same protocol, with an extra performance measure required but not counted.) The same boilerplate was set forth for all grades. (I made a spreadsheet to discern these ambitions.) So the piloted tests now available are but a shadow of what might have been, and of what USDE wanted.
More reasons for burying the whole of the Obama/Duncan/Gates agenda and questioning the purposes of the PARCC and SBAC tests.