Ed Next: If Feds Allow Opt Out, “One Cannot Assess School Performance”
On July 28, 2015, Education Next editor-in-chief Paul Peterson and executive editor Martin West published an article entitled, “Public Supports Testing, Opposes Opt Out, Opposes Federal Intervention.”
In their article, Peterson and West discuss the current reauthorization of the Elementary and Secondary Education Act of 1965 (ESEA). The versions that passed the House (the Student Success Act, or SSA) and the Senate (the Every Child Achieves Act of 2015, or ECAA) will head into a House and Senate conference committee in September to be negotiated into a single bill.
In SSA, the House includes a blanket opt-out provision. In ECAA, there is also the possibility of opt-out, but states must decide individually on their opt-out policies.
Peterson and West want the resulting ESEA compromise bill to ditch the federal opt-out provision. Here is their reasoning:
One cannot assess school performance accurately unless nearly all (or a representative sample of) students participate in the testing process.
In “nearly all,” Peterson and West are referring to the 95 percent of students that the federal government requires states to test under the current yet effectively defunct No Child Left Behind (NCLB).
However, 95 percent of students need not test in order for a state to “assess school performance accurately.”
Consider the research of Peterson and West.
In their article, Peterson and West report results of a survey that they plan to release in full in the near future, and they base their conclusions about what all of America wants regarding federal testing and federal opt-out upon the responses of 4,000 people.
They did not need to survey 95 percent of the American public, nor anywhere close to it, so long as the participants were randomly selected. Even so, some people who were randomly selected likely chose not to participate.
Survey research always deals with participants “opting out.”
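The statistical point can be illustrated with a quick back-of-the-envelope sketch. The 4,000-respondent figure comes from the article; the 95 percent confidence level and worst-case proportion are standard assumptions of mine, not Peterson and West's:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of size n
    at roughly 95 percent confidence (z = 1.96).  p = 0.5 maximizes
    p * (1 - p), so this is the most conservative estimate."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of 4,000 respondents, as in the Peterson/West survey:
moe = margin_of_error(4000)
print(f"Margin of error: +/- {moe:.1%}")  # roughly +/- 1.5 percentage points
```

In other words, a well-drawn random sample of a few thousand can speak for hundreds of millions within a point or two, which is exactly why testing 95 percent of all students is not a statistical necessity for gauging performance.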
As for interest in specific subpopulations: if they wanted to be sure to capture the responses of a given subpopulation, they could have stratified their sample to capture it.
And if they were not able to capture that subpopulation, that would become a limitation of the survey (just as some randomly selected participants choosing not to respond is a limitation), but the survey itself could still be useful.
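As a sketch of what stratification looks like in practice, assuming hypothetical group labels and sizes invented purely for illustration:

```python
import random

def stratified_sample(population, strata_key, per_stratum):
    """Draw a fixed number of respondents from each stratum so that
    small subpopulations are guaranteed representation."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for group, members in strata.items():
        k = min(per_stratum, len(members))  # a small stratum is a noted limitation
        sample.extend(random.sample(members, k))
    return sample

# Hypothetical respondent pool in which one group is badly outnumbered:
random.seed(0)
pool = [{"id": i, "group": "urban" if i % 10 else "rural"} for i in range(1000)]
sample = stratified_sample(pool, lambda p: p["group"], per_stratum=50)
print(len(sample))  # 100: 50 urban + 50 rural
```

A plain random draw from this pool would yield roughly 90 percent urban respondents; stratifying guarantees the rural group an equal voice in the sample.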
So, the idea that a federal opt-out provision would interfere with the ability “to assess school performance accurately” is undercut by Peterson and West’s own practice: they use random sampling without guaranteed participation to report with confidence what the entire American public (and all teachers) believe regarding federal testing and federal opt-out.
Add to that their finding that only 32 percent of parents supported the idea of a federal-level, blanket opt-out provision, which they discount as “just 32%.”
Let’s go theoretical for a moment:
If one in three parents surveyed support the federal opt out, one might conclude that two out of three would allow their children to participate in federal tests. (If they have children that might be opted out. Peterson and West do not provide the details.)
If that were to happen, then the federally-sought 95 percent federal-test participation could drop to 66 percent.
A state does not need even 66 percent of all students to test in order to randomly sample from those who choose to complete the test and create a test-based gauge of state performance. The 66 percent participation would be a limitation, but it would be workable. (Randomly sampling from within the theoretical 66 percent would also allow the state to randomly deselect overrepresented subgroups in order to balance subgroup sizes, if it wanted to.)
And if a subpopulation is underrepresented due to opting out, then that is simply reported as a limitation. Even now, many states must meet privacy requirements that prohibit reporting exact statistics when a subgroup includes too few individuals. And yet the world goes on.
But what of bias, of the resulting outcomes somehow being altered by those who purposely choose not to participate? That is easy enough to gauge by comparing the demographics of students who choose to opt out with those who do not, and such a comparison should be reported with the findings. Including the limitations of a study with the study contributes to a fuller picture, to “accuracy.”
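One simple version of that demographic comparison can be sketched as follows; the subgroup labels and counts are hypothetical, invented only to show the mechanics:

```python
def subgroup_shares(counts):
    """Convert raw counts per subgroup into proportions of the total."""
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical counts of tested vs. opted-out students by subgroup:
tested    = {"low_income": 400, "not_low_income": 600}
opted_out = {"low_income": 50,  "not_low_income": 150}

for label, counts in [("tested", tested), ("opted out", opted_out)]:
    shares = subgroup_shares(counts)
    print(label, {g: f"{s:.0%}" for g, s in shares.items()})

# If the two sets of shares differ markedly (here, 40% vs. 25% low-income),
# the opt-outs are not demographically representative of the tested group,
# and that should be reported as a limitation of the results.
```

Nothing fancier is required to flag nonparticipation bias; the flag, once raised, belongs in the report alongside the findings.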
Even in surveys conducted using random sampling, issues of bias are often ignored when researchers do not account for the demographics (or other preferences) of those who purposely choose not to participate in the survey. Generally speaking, survey research has very low completion rates; a 66 percent response rate would be very good. (That is, 66 percent “opt” to complete the survey.) Even with a much lower response rate, researchers often report the results without any caveat regarding how those who chose not to participate might have biased the survey result.
Again, discussing limitations of research helps readers better interpret the result, which does contribute to accuracy.
There is yet another issue with the Peterson and West survey finding of “little public sympathy” for opt-out. In its opt-out provision in SSA, the House is not telling parents that they must opt out. It is simply allowing parents to make the decision for themselves. Though 52 percent of parents opposed allowing other parents to opt out, one might easily say that it is the parent’s decision, and if 32 percent of parents favor opting out, then 32 percent of parents should be able to choose to opt out. (Note: I am not sure of the exact number of “parents.”)
The 52 percent who opposed it could simply “opt in,” if they even have children who test. Again, I am not sure about this, since Peterson and West do not clarify exactly how many parents responded or whether the parents in the study were even asked if they have children attending public school in the tested grades.
I also wonder how the survey result might have been influenced by Peterson and West’s wording of the survey item, which makes it more personal: “Should you be able to opt your children out of federally-mandated tests?” Wording is important; using “parents” instead of “you” adds some distance between the respondent and the issue, and such distance could influence the result.
Here’s another possible item: “Under what conditions might you opt your child out of mandated tests?” This item could include several response options, including “no conditions” and “other” with a request to briefly explain.
I do not know if Peterson and West asked the two questions above because they include very little survey information in their article.
Another point worthy of note:
The SSA provision for opting out is not a case of the state telling kids to stay home on test day in order to manipulate state test score outcomes. It is the allowance for parents to decide to opt their own children out of federally-mandated testing without the state incurring a penalty for honoring the decisions of its parents.
Education Next promotes school choice, yet it would snuff out a federal provision honoring parental choice in the form of opting out.
A final thought:
Even if the resulting ESEA compromise bill ditches the SSA’s federal opt-out provision, that does not mean that parents will not choose to opt out. It only means that the federal government would have chosen to make no blanket provision for it at the federal level.
Peterson and West reported it themselves: One in three parents supports a federal-level, blanket opt-out provision.
I consider that noteworthy. The House and Senate should, too.
Schneider is a southern Louisiana native, career teacher, trained researcher, and author of the ed reform whistleblower book, A Chronicle of Echoes: Who’s Who In the Implosion of American Public Education.
She also has a second book, Common Core Dilemma: Who Owns Our Schools?, newly published on June 12, 2015.