
Fordham Institute Favors PARCC, Neglects Item Readability

February 15, 2016

It seems that the Fordham Institute views itself as the grader of the corporate reform initiatives that it is paid to support. In July 2010, it graded the Common Core State Standards (CCSS) and state standards– and found in favor of CCSS in this highly questionable, slanted report.

In February 2016, Fordham Institute released another report, this time supposedly grading CCSS assessments based on CCSSO (Council of Chief State School Officers) criteria. Why the CCSSO criteria matter is anybody’s guess. Still, Fordham Institute has the time and money to publish whatever it will, and what Fordham Institute, uh, “found,” was that the two federally-funded, consortium-developed CCSS assessments are tops:

As our benchmark, we used the Council of Chief State School Officers’ Criteria for Procuring and Evaluating High-Quality Assessments. We evaluated the summative (end-of-year) assessments in the capstone grades for elementary and middle school (grades 5 and 8). (The Human Resources Research Organization evaluated high-school assessments.)

Here’s just a sampling of what we found:

  • Overall, PARCC and Smarter Balanced assessments had the strongest matches to the CCSSO Criteria.
  • ACT Aspire and MCAS both did well regarding the quality of their items and the depth of knowledge they assessed.
  • Still, panelists found that ACT Aspire and MCAS did not adequately assess—or may not assess at all—some of the priority content reflected in the Common Core standards in both ELA/Literacy and mathematics.

The same day that Fordham Institute released its report, ACT responded with the following sharp commentary at the heart of its press release:

The finding that ACT Aspire assessments adequately assess many but not all of the priority content reflected in the Common Core standards is not surprising. Unlike other assessments included in the study, ACT Aspire is not and was never intended to measure all of the CCSS. Rather, ACT Aspire is designed to measure the skills and knowledge most important in preparing students for college and career readiness. This is a significant philosophical and design difference between ACT Aspire and other next generation assessments. ACT has made the choices we have to align with college and career readiness standards, rather than specifically to the Common Core, and we intend to keep it that way.

According to ACT, it has not bent its assessments to fit CCSS– and it apparently does not consider CCSS to be interchangeable with “college and career readiness.”

As one might expect, floundering, item-vending PARCC is happy with the Fordham Institute finding. It even issued a press release that finally acknowledges Hanna Skandera as PARCC chair.

However, the Fordham Institute promotion of PARCC and Smarter Balanced is a loon’s celebration for what it omits, not the least of which is the grade-level appropriateness of assessment passage readability.

The remainder of this post was written by Louise Law, Director of Elementary Education for the Union #38/Frontier Regional School District in Western Massachusetts. She has served in public schools for over 30 years in a variety of roles including classroom teacher, assistant principal, principal, curriculum coordinator, Title I Director, Director of English Language Learning, and Director of Elementary Education/Assistant Superintendent.


Law has also led numerous workshops for teachers and administrators throughout New England and is certified by Phi Delta Kappa as a curriculum auditor. She teaches graduate-level courses in curriculum and administration through the Collaborative for Educational Services in Northampton, MA.

Louise Law understands the importance of considering the appropriateness of readability level in assessment– especially high-stakes assessment.

Fordham Institute needs to take a lesson.

PARCC is Out of Line

Louise Law

This month the Thomas B. Fordham Institute published a report evaluating the content and quality of “next generation” assessments, i.e., four different standardized tests for students in grades 3–12. It should surprise no one that in its report, despite its authors’ claim of not being advocates for any particular test, this Gates Foundation-supported organization ($8.2 million since 2003) identified the PARCC test as a superior assessment instrument.

However, a significant omission in the report calls the conclusions of this study into serious question: The Fordham Institute authors did not evaluate whether the reading passages in the PARCC test are appropriate for the grade level tested. In fact, this test contains reading passages that are peculiarly inappropriate for the actual grade levels of students taking the tests – and, amazingly, no evaluation of the PARCC tests has addressed this issue.

Fordham Institute’s supposedly unbiased researchers readily admit that this criterion for test design, known in test-design parlance as “text complexity,” is significant, but they say, simply, “We were unable to include text complexity data in our analysis” (pp. 39 and 49). Instead, they explain, they accepted the test-designers’ own evaluation of the complexity of reading passages on these tests and concluded that the reading passages were therefore suitable for the tested grade levels.

The reading passages found in PARCC are far beyond the grade levels of the students being tested, and it is difficult to believe that the evaluators were unaware of that fact. The reading difficulty of any text depends on qualitative variables such as sequencing, language complexity, topic, and theme, and on quantitative factors such as word and sentence length. Teachers know this principle — and so do the writers and editors who choose the reading passages and compose the questions for all these tests. A variety of well-established, research-based formulas readily available online can be used to determine the readability level of a given text. By any number of such formulas, several reading passages in the 2015 PARCC test are beyond the grade level being tested, some by several years.
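To illustrate just how mechanical these quantitative measures are, here is a minimal sketch of one of the best-known such formulas, the Flesch-Kincaid Grade Level, in Python. The syllable counter below is a crude vowel-group heuristic added purely for illustration; commercial measures such as Lexile use more sophisticated, proprietary methods, so treat the output as a rough estimate, not as what any test designer actually computed.

import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels,
    # subtracting one for a silent trailing 'e'. Real readability
    # tools use pronunciation dictionaries or better heuristics.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Opening sentence of The Wonderful Wizard of Oz (public domain):
sample = ("Dorothy lived in the midst of the great Kansas prairies, "
          "with Uncle Henry, who was a farmer, and Aunt Em, who was "
          "the farmer's wife.")
print(f"Estimated grade level: {flesch_kincaid_grade(sample):.1f}")

Anyone can run a formula like this over the publicly released PARCC passages and check the grade-level claims for themselves.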

A review of the recently released questions from this PARCC test reveals the following:

The PARCC test asked fourth grade students to respond to questions based on passages from The Wonderful Wizard of Oz by L. Frank Baum. According to the widely used Lexile measure of reading level, this book has a readability score of 1030, which means that these passages are suitable for an average eighth grader. Few fourth graders can read them with the comprehension required by the test questions.

The third grade PARCC includes Native American myths which, according to several readability measures, are appropriate for sixth grade. The fifth grade PARCC includes an informational passage appropriate for a ninth grade reading level, and the sixth grade PARCC test contained passages with average readability levels of 10th and 12th grade.

On the mathematics portion of the PARCC test, questions require students to read a great deal of language before getting to the computational part of the question, and much of this language is similarly difficult.

I am the elementary curriculum director for a small school district in Massachusetts, a state whose students have consistently ranked at the top of the nation on the NAEP tests. Our district serves a predominantly middle-class population, most of our schools’ free and reduced lunch rates are below 20 percent, and students in our schools generally remain with us from kindergarten through high school graduation. Our schools are well equipped, our families are supportive, and there is a high retention rate among our teaching staff.

Our teachers work hard to ensure that our curriculum and instructional practices align with the Massachusetts standards, and our children generally score well on state tests. In 2011, when the Massachusetts curriculum standards were rewritten to incorporate the new Common Core standards, we devoted considerable time and resources to updating our curriculum materials and to reviewing our instructional strategies, in order to ensure that we would meet those new standards. Our students continued to perform well above the state average.

Despite the many advantages our school community enjoys, our teachers have been astonished by the striking difficulty of the PARCC questions. Like our own state tests (the MCAS, or Massachusetts Comprehensive Assessment System), the PARCC tests include concepts that are appropriate to grade level, and they are consistent with Common Core guidelines, but the material students must read in order to answer questions showing their competence is conspicuously out of line with their grades’ developmental levels.

How did this seemingly gratuitous complexity of PARCC test questions arise?

The reality is that students’ performance on these tests will affect how teachers and schools are evaluated. For high school students, a passing score is required for graduation — a requirement underscored by the very name of the PARCC: “readiness for college and careers.” The stakes are enormous.

However, passages that students cannot read are not a useful educational tool. Tests designed this way create anxiety for children as young as eight years old and frustrate teachers. Meanwhile, as students, teachers and schools are insidiously and incorrectly identified as “failing,” publishers will reap tremendous profits selling remedial and test prep materials to school districts eager to help their students score satisfactorily. At the same time, as the public is convinced of the false narrative that our public schools are failing, the proliferation of for-profit businesses that manage charter schools will continue, and the march to privatization of our schools will accelerate.

Assessments based on PARCC should be suspended until the questions have been more carefully vetted and the tests have been validated by education professionals who are not even remotely affiliated with organizations funded by those promoting a particular agenda. Until that time, we are serving the interests of corporate profit rather than of students’ academic and emotional growth, and we are wasting our time with an exercise that undermines teaching and learning.

I’m thinking neither Fordham Institute nor PARCC will be adding Law’s pointed words regarding unsuitable PARCC item readability to its website.


__________________________________________________________

Schneider is a southern Louisiana native, career teacher, trained researcher, and author of the ed reform whistleblower, A Chronicle of Echoes: Who’s Who In the Implosion of American Public Education.

She also has a second book, Common Core Dilemma: Who Owns Our Schools?.


Don’t care to buy from Amazon? Purchase my books from Powell’s City of Books instead.

8 Comments
  1. It should be noted that teachers are not allowed to see the tests, and evidence that a teacher has seen any portion of the test is grounds for reprimand and possible loss of position or even license. Tests are to be administered with attention to whether students are on task, but without the proctor looking at the computer screens.

  2. Thanks for another great post, Mercedes.

    It’s worth noting that Nancy Doorey, the primary author of the Fordham study, used to work for both Smarter Balanced and ETS: https://www.linkedin.com/in/nancydoorey. Here is the SBAC Field Test report that she authored on behalf of SBAC in 2014: http://www.smarterbalanced.org/field-test/

    And here is a description of her work at ETS until just last year (yes, the same ETS that holds a $240 million contract to administer the SBAC in California):

    “Director of Programs
    The K-12 Center at ETS // 2009 – 2015 (6 years)
    Assessment: The K-12 Center at ETS

    In 2009 the CEO of Educational Testing Services (ETS) decided to launch a new center for the purpose of driving advances in K-12 assessment. The Executive Director and I were charged with formulating the vision, the strategic plan and the positioning of the new Center. Since then, we:
    • Organized and executed 5 research symposia and 1 national research conference concerning assessment opportunities and challenges underlying the Race to the Top Assessment Program and the designs of the assessment Consortia
    • Developed a website that quickly was drawing more than 300 hits per day and an email distribution list of more than 6,000 subscribers, and
    • Produced the field’s most widely used, Consortia-approved summaries and graphical illustrations of the five next-generation assessment systems being developed by Consortia of states, entitled, “Coming Together to Raise Achievement…”

    It’s odd then (or perhaps fraud then), that the Smarter Balanced website would describe the Fordham study as an “external evaluation.” http://www.smarterbalanced.org/news/national-evaluations-again-confirm-quality-and-alignment-of-smarter-balanced-end-of-year-test/

  3. Is Louise Law or anyone with her knowledge participating in DESE’s workgroups on the new tests?

Trackbacks & Pingbacks

  1. ACT’s College and Career Readiness Standards | deutsch29
  2. Mercedes Schneider: The Fordham Assessment Study Did Not Consider Readability of the Tests | Diane Ravitch's blog
  3. ACT’s College and Career Readiness Standards – 13news.co
