Hart, Weingarten, and Polling About Common Core
In March/April 2013, Hart Research Associates conducted a poll of American Federation of Teachers (AFT) members regarding opinions about the Common Core State Standards (CCSS), which have been declared “the solution” and “what kids need to learn.” A finding publicized by AFT President Randi Weingarten is that “75% of AFT teachers polled support CCSS.” I took issue with both the finding itself and the manner in which the finding was reported. One of my criticisms regards the small sample size: only 800 AFT members were polled.
Sample size aside, if Hart’s and AFT’s goal was truly to discern whether or not teachers support CCSS, their sampling misses the mark. I would like to detail my position in this post.
Opinion polling is a tricky business. Care needs to be taken in obtaining a representative sample. Gallup is an established name in opinion polls, and one of the standards used by Gallup is the stratified random sample. A stratum is a subgroup; the random sampling happens within the defined subgroup.
CCSS is a nationally promoted education agenda. However, it is adopted at the state level. There is no uniform, national protocol for adopting CCSS, a critical issue in the years of transition as states decide how to approach ultimate implementation. Thus, using the state as the unit of adoption of CCSS, some number of the 45 states (and DC) adopting CCSS should have constituted the strata used in the AFT poll. As it stands, Hart Research Associates used general random sampling from the AFT membership, then ruled out members who did not identify themselves as “teachers.” (This information I gleaned from reading the actual survey instrument: Hart AFT survey.) In their rebuttal to my post, Hart admits that 36% of the sample respondents were from a single state: New York. That’s 288 out of 800 individuals surveyed.
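To make the stratification idea concrete, here is a minimal sketch of proportional stratified allocation by state. The state names and membership counts below are hypothetical placeholders for illustration, not actual AFT figures:

```python
# A minimal sketch of proportional stratified allocation by state.
# NOTE: the state names and membership counts below are hypothetical
# placeholders for illustration; they are NOT actual AFT figures.
membership = {"New York": 120_000, "Illinois": 40_000, "Texas": 25_000}
total_sample = 800

total_members = sum(membership.values())
allocation = {
    state: round(total_sample * n / total_members)
    for state, n in membership.items()
}
# Each state's quota is proportional to its share of total membership;
# random sampling then happens within each state stratum.
print(allocation)
```

Each stratum gets a quota proportional to its membership share, and respondents are then drawn at random within the stratum, so no one state can silently dominate the result.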
Now here is another tricky part. It is possible that 36% of all AFT members live in New York. (There is no public information available on the web to verify or refute this, so I must take Hart’s word for it.) So, Weingarten’s statement that the survey result “represents” AFT members might be correct. Yet when the public hears Weingarten state that “75% of AFT teachers surveyed support CCSS,” they do not get to hear, “over one third of survey respondents live in New York State.” This survey result is biased toward the opinions of New York teachers. In their rebuttal, Hart Research Associates comments that the New York teachers did not favor CCSS as much as other teachers did; Hart reports that 82% of the other teachers favored CCSS. Since the overall 75% is a weighted average of the 36% of respondents from New York and the 64% from elsewhere, that means roughly 63% of New York teachers favored CCSS. The public also does not get to hear that New York began implementing CCSS in 2013, sooner than required. (This information I have from reading the actual Hart survey instrument.) Thus, New York teachers are likely more familiar with CCSS than are teachers in many other states, more familiar with exactly what Common Core is, and also less likely to favor CCSS.
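The 63% figure can be checked by treating the overall support rate as a weighted average of New York and non-New York respondents, using only the numbers Hart itself reported:

```python
# Back-solving the New York support rate implied by Hart's reported numbers:
# 75% overall support, 36% of respondents from NY, 82% support among non-NY.
overall_support = 0.75
ny_share = 0.36
other_support = 0.82

# overall = ny_share * ny_support + (1 - ny_share) * other_support
ny_support = (overall_support - (1 - ny_share) * other_support) / ny_share
print(round(ny_support * 100))  # roughly 63
```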
The fact that CCSS is differentially implemented in New York (and that Hart knew as much) lends support to the argument for stratifying the survey by state.
And here is another important sampling query: I wonder how many of those New York teachers are English language arts (ELA) and math teachers.
The above question leads to a third tricky issue: In their survey, Hart defines CCSS as “a set of academic standards in English language arts (ELA) and math for students in grades K through 12 that have been adopted in most states.” In the survey protocol, teachers who do not teach grades K through 12 are excluded from the survey. (See the survey instrument for this rule-out.) However, teachers of subjects other than ELA and math are not excluded.
If any teachers are likely to be familiar with CCSS, especially in these years of transition, it would be the ELA and math teachers.
Why not focus the survey on those whose positions are used in defining CCSS: K through 12 ELA and math teachers? This is a critical sampling issue, one sure to affect survey results. (I realize that other teachers, administrators, and entire schools will be affected by such heavy emphasis on standards in only two subjects. Yet the ELA and math teachers remain those immediately and directly affected, by definition.)
In my original post, I used AFT teacher membership data by state as provided by a group called Union Facts. I was criticized for using this site on the grounds that Union Facts is an extremist group. However, the information I used was only demographic; it was not used to promote any extremist view. In addition, I searched the web for an official AFT accounting of such information and found none.
I also realized that using the demographics provided by Union Facts allowed me to write a post that AFT could counter with corrected demographic information. As it was, AFT did not offer corrected stats, and Hart Research Associates offered only limited correction, such as the stat that 36% of respondents were New York AFT teachers.
I have no problem with having AFT provide me with correct demographics by state regarding its teacher membership. I will accordingly correct any inaccuracies in my post.
According to Union Facts, AFT has a teacher membership presence in 31 states (though some states have memberships recorded as zero; last updated October 2012). As for CCSS, it has been adopted by 45 states, plus DC, four territories, and the Department of Defense Education Activity. (I omit the territories and Dept. of Defense from continued discourse.)
If a stratified random sample with equal allocation were drawn across 31 states, and Hart surveyed 800 teachers, then Hart could survey only 25 or 26 teachers per state for a phenomenon that was adopted at the state level, not the national level.
Since Hart Research Associates admits that a proportional 36% (288) of AFT teacher members surveyed live in New York, that leaves 800 – 288 = 512 teachers to divide among what might be 30 remaining AFT states. Keep in mind that more equal state representation is important, since CCSS is not “more important” or “less important” in any adopting state, and since assuming uniformity of both publicizing CCSS and implementing CCSS across states (i.e., “nationally”) is unfounded.
512 / 30 = 17 or 18 teachers surveyed in each of the remaining 30 states (given that New York accounted for 288 of the 800 teachers surveyed).
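The per-state arithmetic above, assuming equal allocation across the 31 AFT states (per the Union Facts count) and Hart's reported 288 New York respondents:

```python
# Per-state arithmetic, assuming equal allocation across states.
sample = 800
ny_respondents = 288   # 36% of 800, per Hart's rebuttal
aft_states = 31        # states with AFT teacher membership, per Union Facts

print(round(sample / aft_states, 1))                           # 25.8 per state if all 31 shared equally
print(round((sample - ny_respondents) / (aft_states - 1), 1))  # 17.1 per remaining state
```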
Hart Research Associates took me to task for suggesting that it should have had a larger sample for its survey. I even suggested 10% of AFT members. Hart noted that it is common for polls to have 800 respondents.
According to Gallup, for their national polls, they use a sample of 1,000 to 1,500 respondents. Yet the CCSS situation is not a national situation in the sense of a presidential election or a general opinion poll about television habits. For CCSS, the unit of adoption is the state. Thus, several hundred teachers per state should have been randomly surveyed. According to Gallup,
Using common sense and sampling theory, a sample of 1,000 people is most likely going to be more accurate than a sample of 20.
Yet for a phenomenon whose transitioning implementation differs at the state level, Hart has a potential average of 17 or 18 teachers surveyed in each state other than New York.
If AFT has a presence in all 45 states adopting CCSS plus DC, the average number of teachers polled per state except New York declines:
512 / 46 = 11 or 12 teachers per state plus DC.
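For a sense of what such tiny per-state samples would mean, here is the standard worst-case 95% margin of error for a simple random sample. This is a sketch only; it ignores any weighting or design effects in Hart's actual methodology:

```python
import math

# Worst-case (p = 0.5) 95% margin of error for a simple random sample.
# A sketch only; it ignores weighting and design effects in Hart's methodology.
def moe(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (12, 18, 500, 1000):
    print(n, round(moe(n) * 100, 1))  # margin in percentage points
```

A state sample of 11 or 12 teachers carries a margin of roughly ±28 percentage points; 17 or 18 carries roughly ±23; Gallup's 1,000 carries roughly ±3. A per-state estimate built on a dozen respondents is essentially uninformative.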
Gallup suggests a sample size of at least 500 before returns begin to diminish, yet settles on 1,000 to 1,500 “because they provide a solid balance of accuracy against increased economic cost.” But let’s say that Hart Research Associates settled on surveying 500 teachers per state (a low number) for a possible 31 AFT states. That would mean 500 x 31 = 15,500 teachers surveyed.
Even with 500 teachers surveyed in each of 31 states, proper reporting still would need to emphasize that only two-thirds of CCSS states were included in the survey. This is a more straightforward means of conveying the limitations of the survey than merely saying “AFT teachers surveyed.” Details are important for enabling consumers of survey research to critically weigh the result.
In their rebuttal, Hart Research Associates comments that “not many surveys would be conducted” if “20 million voters” needed to be surveyed.
But I am not referring to some issue on a national ballot. I am referring to a situation pushed at the national level but differentially addressed on the state level.
AFT is not broke. Its total assets for years 2007 through 2011 exceeded $100 million.
I think if it had wanted to, AFT could have surveyed 500 teachers (specifically, 500 K-12 ELA and math teachers) in each of the 45 CCSS states plus DC:
500 x 46 = 23,000 teachers.
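The totals for that proposed design, along with the approximate worst-case 95% margin of error each 500-teacher state sample would carry (standard simple-random-sample formula at p = 0.5):

```python
import math

# Total sample for 500 teachers in each of the 45 CCSS states plus DC,
# and the worst-case 95% margin of error each 500-teacher stratum would carry
# (standard simple-random-sample formula at p = 0.5).
per_state = 500
strata = 46

total = per_state * strata
print(total)  # 23000

per_state_moe = round(1.96 * math.sqrt(0.25 / per_state) * 100, 1)
print(per_state_moe)  # roughly 4.4 percentage points
```

At 500 respondents per state, every state-level estimate would be accurate to within about ±4.4 points, consistent with Gallup's observation that returns begin to diminish around a sample of 500.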
What a robust sample for capturing an overall result for a state-adopted and differentially-addressed educational issue.
This would have enabled AFT to truly understand, on a “national” level, a set of standards whose unit of adoption is the individual state (or DC).
An example of better poll conduct and reporting than the AFT poll: http://www.wpri.com/dpp/news/local_news/mcgowan/union-poll-finds-little-support-for-education-commissioner-deborah-gist
This is a poll conducted by the Rhode Island Federation of Teachers (RIFT) and the National Education Association Rhode Island (NEARI) regarding RI Education Commissioner Deborah Gist. Notice that the article includes not only percentages but exact numbers, and that the polling issue is uniform across the entire state. The sample is a little small (402 members), but it does represent almost 4% of all RI public school teachers (approx. 10,500). I would have liked to know whether respondents were randomly selected and whether strata were used, such as school district, to ensure that respondents did not hail from a concentrated geographical area within RI. I also would have liked to know whether respondents were drawn from the population of all RI teachers or only from union members.