
Weingarten Wants Me to Want the Common Core State Standards

May 8, 2013

When I was very young, three or four years old, I used to ride with my father as he drove his New Orleans city bus on its morning route. This was before the age of seat belts, and my father used to allow me to stand next to him and operate the money crank. Riders would put their fare in a glass-topped container; my father would verify the amount was correct, and then I was allowed to pull the handle so that the money would drop into a metal container below.

One day, a lady gave me some change, and I put it in the glass container and cranked it. The lady said, “Oh, no! That money was for you to keep!” I remember feeling instant panic and loss at having cranked what was to be a gift to me.

My father immediately replied, “It’s okay. She would rather crank the money than keep it.” Suddenly all was well because I believed what my father had said about me. He said I had rather operate the crank, and his saying as much made it true.

That was over 40 years ago.

Last week, I read a press release by Randi Weingarten in which she stated that most teachers support the Common Core State Standards (CCSS). The tenor of her report was such that she assumed the issue of retaining CCSS was settled.

Weingarten wants me to believe that I support CCSS.

This is not my father’s bus.

I did not buy it.

I have not met American Federation of Teachers (AFT) President Randi Weingarten in person, but from what I have read about her, I have learned that she has chosen to “play to the middle”– to appear to support both traditional public school teachers and corporate reform at the same time. And now, Weingarten has positioned herself to appear to stand against Common Core via her “moratorium” while simultaneously standing with it by reporting that “75% of teachers support the new standards.” Here are the exact words:

…A recent poll of AFT members reveals that 75 percent of teachers support the Common Core.

Seventy-five percent sounds overwhelmingly impressive.

In the case of Weingarten’s survey, that’s roughly 600 teachers– give or take 28, per the survey’s own margin of error– or maybe more, since that error runs higher among subgroups.

Weingarten presents the results of her survey in suspiciously general terms in a 12-page PowerPoint-style PDF.

I would like to discuss some key points regarding why these results are suspect. It is important to consider what is really in this document– and what is absent– since Weingarten is using this “survey” as evidence that “most teachers” want the CCSS.

I should point out that Gates also wants the CCSS. Consider the content of this chummy article, which carries the notation, “Sponsored content by the Bill & Melinda Gates Foundation and American Federation of Teachers”:

The Bill & Melinda Gates Foundation launched the Measures of Effective Teaching (MET) study in 2009 to identify effective teaching using multiple measures of performance. The foundation also invested in a set of partnership sites that are redesigning how they evaluate and support teaching talent.

And the AFT has developed a continuous improvement model for teacher development and evaluation that is being adapted in scores of districts to help recruit, prepare, support, and retain a strong teaching force.
From our research, and the experiences of our state and district partners, we’ve learned what works in implementing high-quality teacher development and evaluation systems:
 
1. Match high expectations with high levels of support.
2. Include evidence of teaching and student learning from multiple sources.
3. Use information to provide constructive feedback to teachers, as befits a profession, not to shame them.
4. Create confidence in the quality of teacher development and evaluation systems and the school’s ability to implement them reliably.
5. Align teacher development and evaluation to the Common Core State Standards.
6. Adjust the system over time based on new evidence, innovations, and feedback. [Emphasis added.]
 
Thus, Weingarten’s survey result already evidences a conflict of interest. Gates wants the CCSS, and Weingarten is doing business with Gates.
 
However, the entire report is shaky because its sample is problematic.  Consider frame 2, “Survey Methodology.”  This study is a “telephone survey of 800 K-12 teachers who are members of the American Federation of Teachers.” According to this union facts sheet, updated October 2012, AFT membership numbered 873,454, living in 31 states and 2 districts (one American embassy and Guam).
 
First of all, teachers in 19 states could not possibly participate in the survey, for AFT has no presence there.  That right there excludes scores of teachers potentially affected by Weingarten’s proclamation of CCSS acceptance by “teachers.” Second, in the states represented, AFT has units in a limited number of cities. For example, the AFT presence in Texas is limited to Houston alone. Third, most AFT units are in a single state, New York (37 units totaling 833,093 members, or 95% of AFT membership). This information should have been included as a limitation to interpreting survey results.
 
Given AFT’s predominant presence in New York, it seems that an AFT survey is a New York State survey.
 
That fact alone seriously limits the survey results.
 
But there are also other limiting factors.
 
Let us consider the 800 survey completers. The study does not include information on the number of AFT members who were called and who chose not to participate. Nonparticipation can be very telling. For example, it could indicate displeasure with either the organization conducting the survey or the subject of the survey (in this case, CCSS). Therefore, nonparticipants might be voicing a negative opinion by hanging up. Such goes unrecorded here, but in sound research studies, information on the total number of attempted calls is reported.
 
Did AFT (or Hart Research, on AFT’s behalf) have to make thousands of calls in order to finally reach the desired 800 participants? How did AFT/Hart select those to be called? What time of day did researchers call? Did they attempt to call back if no answer? How many times?
 
We really don’t know much about these 800 participants. They are AFT members. Most teach elementary school. Almost half teach high school. We aren’t provided with specific numeric breakdowns. We don’t know where these teachers live, or how many years they have been teaching, or whether they are currently employed. We don’t know their certification areas or educational levels. We don’t know exactly where they live, in what states, or the specific numbers teaching in urban, suburban, or rural locales. We don’t know their ages, or gender, or ethnicity. We don’t know their incomes, or whether they have families.
 
I am not talking about reporting percentages. I am talking about reporting actual numbers in the specific demographic categories noted above. And such should be presented in summary form at the outset of the study.
 
And we don’t know why the number 800 was selected.  Given that AFT has a verifiable teacher membership of 873,454, this means that AFT/Hart only surveyed nine one-hundredths of a percent of the AFT membership (.09%).
 
A membership that is only in 31 of 50 states and overwhelmingly in one state.
 
Randi Weingarten draws conclusions about the reception of CCSS by teachers in general (she has already concluded that CCSS is suitable for all teachers) based upon the opinions of a fraction of one percent of AFT teachers.
 
Weingarten notes that 75% of teachers surveyed are fine with the CCSS. The truth is that the actual, very limited number is 600 out of 800 teachers.
 
That is seven one-hundredths of one percent of teachers who are AFT members (.07%).
 
Please don’t miss this. AFT did not survey even 10% of its membership before forming an opinion of teacher acceptance of CCSS. AFT didn’t even survey 1% of its membership. AFT didn’t even survey one tenth of one percent (.1%) of its membership. (See my subsequent post on polling and the need for stratified sampling.)
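For readers who want to check the arithmetic, here is a minimal sketch in Python; the only inputs are the membership figure from the union facts sheet and the sample size already cited above.

```python
# A minimal sketch of the arithmetic above; no additional data are assumed.
aft_membership = 873_454
sample_size = 800

print(f"Share of membership surveyed: {sample_size / aft_membership:.2%}")              # ~0.09%

approving = round(0.75 * sample_size)                                                   # 600 teachers
print(f"Teachers reported as approving: {approving}")
print(f"Approving teachers as share of membership: {approving / aft_membership:.2%}")   # ~0.07%
```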
 
Yet Weingarten proclaims that teachers are in support of CCSS.
 
This survey should already be an AFT embarrassment.
 
But there’s more.
 
Let’s consider the noted margin of error. Frame 2 includes a comment about a ±3.5% margin of error. For 800 participants, this is in effect saying, “For general survey respondents, any result we report could be off by 28 people in either direction.” So, if 75% of teachers “favor” CCSS (600 teachers), the actual number could be anywhere between 572 and 628.
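Translated into head counts, that arithmetic looks like this (a minimal sketch using only the figures reported in the AFT slides):

```python
# Head-count translation of a +/-3.5% margin of error on 800 respondents.
n = 800
approval_rate = 0.75
margin = 0.035

approving = round(approval_rate * n)   # 600 teachers
swing = round(margin * n)              # 28 teachers
print(f"Reported as approving: {approving}")
print(f"Plausible range: {approving - swing} to {approving + swing}")   # 572 to 628
```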
 
How is it that a research firm only handling 800 surveys cannot get a more precise reading of the data than this? Error is introduced in a lack of either question quality or precision in answering format, or both. Based upon the AFT report, these survey responses are dichotomous (only two choices). This is the crudest level of measurement and allows for the least precise measurement outcome.
The next level up is a Likert-type scale response. Allowing participants to select from five levels, assuming the questions are well-worded, would offer a more precise measurement and would, in turn, reduce measurement error.
 
(A word here regarding margin of error and measurement error: In a calculation of margin of error whereby the response is categorical [yes/no, for example], the proportion of respondent answers for the two categories is part of the margin of error formula. Thus, measurement precision matters.  If there are more than two categories and the question is suited to the number of categories [that is, all categories are utilized by respondents], the margin of error will likely decrease.
 
If respondents do not understand the question, or if respondents do not have sufficient answer selection options, then the researcher is not measuring what he/she purports.  Furthermore, since Hart offers no reliability data on their survey, one cannot know whether the survey measures anything at all consistently. If the survey is of low reliability, such measurement error will affect not only calculations of margin of error but also survey result usefulness in general.)
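For the curious, here is a minimal sketch of the standard margin-of-error calculation. I am assuming the usual 95%-confidence formula for a simple random sample, since Hart does not publish its exact computation; the point is simply that the margin depends on both the sample size and the response proportions.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95%-confidence margin of error for a sample proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 800
# The worst case (p = 0.5) reproduces the reported +/-3.5% figure.
print(f"p = 0.50: +/-{margin_of_error(0.50, n):.1%}")   # ~3.5%
# As responses concentrate away from an even split, the margin shrinks.
print(f"p = 0.75: +/-{margin_of_error(0.75, n):.1%}")   # ~3.0%
print(f"p = 0.90: +/-{margin_of_error(0.90, n):.1%}")   # ~2.1%
```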
 
 Notice also that frame 2 states that the error is “higher among subgroups.” In poor research form, the researchers offer no specifics on just how much “higher” this “subgroup” error is.
 
This AFT study is lousy research.
 
Weingarten could have just dropped the insulting, shoddy “research,” cut to the chase, and said, “Bill and I have already decided to endorse CCSS.”
 
Forget the moratorium.
 
Teachers never asked for this federally-imposed curriculum in the first place.

31 Comments
  1. S.M. Winkel permalink

    Brilliant analysis. I wonder how many of Weingarten’s respondents already had been chained to the CCSS rock by their state’s deal with the RTT devil? Bill helped our governor snag a portion of that stimulus money the third time around by donating $250,000 toward the RTT application process. Arizona was awarded a whopping $25M. Now our State Superintendent of Public Instruction is asking the AZ legislature for $600M to cover the upgrade in technology that is required under the CCSS mandates. Oh, and guess what vendor will be supplying all of the new software for the project?

  2. 2old2tch permalink

    Thank you for your thorough analysis of Weingarten’s “definitive” survey on teacher support of CCSS. I couldn’t imagine that anyone was enjoying hours of coding lesson plans. As much as I miss teaching, I do not miss that particular major waste of time.

  3. Lies, Damned Lies, And MicroSoftisms …

  4. “Error is introduced in a lack of either question quality or precision in answering format, or both.”

    The question is lousy. What does it mean to “support” the CCSS? I believe adopting the standards was a mistake and that they will be used as a stalking horse for increased testing and homogenized corporate curriculum, but in some sense I have to support them by just doing my job every day. Also, most English teachers I know have very little idea what the difference is between the old Massachusetts state frameworks and the new CCSS–because really there aren’t many differences, and we don’t yet know what the significance of those differences is!

  5. The survey sample is similar to all national surveys and Hart Associates is a well-regarded company … bashing the survey because you don’t like the results is biased.

    • The survey lacks demographic information; the measurement instrument is crude; the sample is severely restricted, and the implication is misleading. Call it what you will, but your response offers nothing substantial to support your claim that I am biased.

    • Jen permalink

      The survey is misleading to people because it’s false information. I don’t believe her analysis of the survey is biased at all but based on all the evidence she gathered to prove so.

  6. Yvonne Siu-Runyan permalink

    CCSS and testing will enrich the rich!

  7. I was wondering some of these same things. Wonderfully put.

  8. Beautiful analysis, Ms. Schneider!

    Weingarten is among the most perverse of characters in this spooky gallery of players because her role is to protect and defend teachers against this corporate take-over of education, and instead, she chooses to cooperate and cut deals behind closed doors. I’d love to use more colorful metaphors here to describe how she has behaved, but I’d better not . . . .

    I love your commentary on Diane’s blog.

    Sincerely,
    Robert Rendo
    NBCT
    http://thetruthoneducationreform.blogspot.com/?view=snapshot

  9. Marcus Mrowka permalink

    Hart research is a very reputable polling firm with a number of progressive and union clients. 800 people is a significant sample size and the poll used random sampling to ID the teachers.

    I would also like to note that the union facts site linked to in this post is an anti-union site and should not be taken seriously or credibly.

    One can disagree about the merits of the Common Core but this poll was based on scientific research and polling methodology that is used by nearly every reputable pollster in the nation.

    Full disclosure- I work for AFT. If you had questions about the poll, you could have asked AFT or Hart.

    • The reader should not have to ask for the details behind your research. I stand by my commentary.

  10. I regularly oversee market research for major corporations. And, in my experience, I’m sure the research was reasonably well executed given Hart’s reputation. But in my reading the true error is FAR, FAR more significant.

    The key question on which they are placing the entire reputation of the survey is extremely vague and leading. As a result, none of the Hart statistics are meaningful. If the survey question is misunderstood, then the results are only accurate to within +/-100% (in other words, they should be thrown out).

    The Powerpoint doesn’t tell us the real question – only the part that sounds best. From what I can pull together, the actual question was:

    “Common Core State Standards are a set of academic standards in English language arts and math for students in grades K-12 that have been adopted in most states.

    “Based on what you know about the Common Core State Standards* and the expectations they set for children, do you approve or disapprove of your state’s decision to adopt them?”

    Now remember, this is done on the phone. On the phone, it’s likely that many interviewees did not understand that “Common Core” is capitalized and refers to one very specific set of new standards that are beginning to be implemented. Instead, based on my experience in surveys like this, a significant portion of respondents ACTUALLY answered:

    “Most states have adopted the common standards. Based on what you know about these standards, do you approve or disapprove of your state’s decision to adopt them?” (That’s what the human mind does while listening to a question over the phone.)

    Further, there is the phrase “the expectations they set for students”. Expectations? Isn’t that being sure they behave? That they ask before going to the bathroom? That they don’t bully? Or is that a Dickensian phrase? It’s leading and, again, vague.

    These aren’t “expectations”. They are specific measurements of each student’s ability in the subject used to establish a program of “continual improvement”.

    • Thank you, Doug. I have issues with the sample. And with the fact that if Randi Weingarten says 75% of teachers surveyed want the CCSS, then the public believes that such applies to all teachers. Weingarten needs to say that 800 teachers were surveyed and 600 want CCSS. To simply go to the media with a percentage sounds deceivingly global.

      AFT boasts of 1.5 million members. Approx half are teachers. Yet they stake acceptance of CCSS on 600 teachers. I have a real problem with this.

      If AFT is primarily concentrated in NY, then the result is primarily a NY result. Again, this needs to be clarified.

      • I agree with you entirely about sample. But even with the sample they chose, the question wording is extraordinarily flawed – so even if they fixed the sample it would still be a survey of error.

        Thanks for publishing your analysis…

    • Exactly. My colleagues know virtually nothing about the CCSS, and the question is absurd. The only thing the poll shows us is that opponents of the CCSS have a lot of work ahead of them! (And that IS a significant result!)

  11. S.M. Winkel permalink

    Mr. Garnett,
    Depending on when this survey was conducted, the teachers polled may have known very little about CCSS. It makes sense that many could have thought the pollster was asking them about common standards. The teachers in our Arizona district are discovering the truth about CCSS this week (during a half-day workshop on new grading policies), and they are devastated.

  12. Marcus Mrowka permalink

    Here’s a memo from the pollster addressing some of your points. Again this was a scientific study that used solid methodology. http://www.aft.org/pdfs/teachers/ccss_survey-method2013.pdf

    TO: American Federation of Teachers
    FROM: Guy Molyneux, Hart Research Associates
    DATE: May 10, 2013
    RE: Methodology for Common Core Survey

    Following are some facts about the methodology for AFT’s recent survey of AFT K-12 teachers on Common Core implementation that may help to answer the criticisms and questions raised by Mercedes Schneider. Schneider’s objections speak to two distinct questions: 1) does the survey reflect the views of AFT K-12 teachers?, and 2) if so, can the AFT results be extrapolated to all U.S. teachers? The answer to the first question is “yes,” for reasons explained below. The answer to the second question is “not necessarily.” When Randi Weingarten refers to what “teachers” think about the Common Core, she is referring to AFT teachers. This shorthand is not meant to deceive anyone; if it were, the press release and various poll materials would not have stated so clearly and repeatedly that the survey was conducted only among AFT members. (Indeed, even the quote highlighted by Schneider mentions “a recent poll of AFT members.”)

    In fact, it is likely that a survey of all U.S. teachers would report results broadly similar to what we found among AFT members, for reasons explained below. However, it is true that we cannot be sure of this unless further research is done among non-AFT teachers. Such research would be welcome.

    The survey employed a standard sampling methodology, used in countless surveys by many polling organizations. On behalf of AFT, Hart Research Associates conducted a telephone survey of 800 AFT K-12 teachers from March 27 to 30, 2013. Respondents were selected randomly from AFT membership lists. This process of random selection produces a representative sample, allowing us to generalize from the survey respondents to the larger population being sampled (in this case, all AFT teachers). There is nothing unusual or controversial about this method.

    A sample size of 800 teachers is appropriate and common. Schneider notes that “AFT/Hart only surveyed nine one-hundredths of a percent of the AFT membership (.09%),” and adds for emphasis: “Please don’t miss this. AFT did not survey even 10% of its membership before forming an opinion of teacher acceptance of CCSS.” In fact, a survey sample size of 800 is reasonable and quite common: for example, most national media surveys interview between 800 and 1,000 registered voters. Moreover, researchers understand that survey samples are not properly evaluated as a percentage of the underlying population. By randomly selecting respondents, a relatively small sample can provide an accurate measurement on a much larger population. If Schneider’s 10% standard were correct, pollsters would need to interview 20 million U.S. voters to conduct a single survey of registered voters. Needless to say, not many surveys would be conducted.

    A reported margin of error of +/-3.5 percentage points does not indicate a lack of precision or poorly written questions. Schneider asks “How is it that a research firm only handling 800 surveys cannot get a more precise reading of the data than this? [a +3.5% margin of error]” and notes that “error is introduced in a lack of either question quality or precision in answering format, or both.” The margin of error reflects the possibility that any single survey sample will not be perfectly representative of the full population. In this case, there is a 95% chance that a survey of all AFT teachers would yield results within 3.5 percentage points of those found in this survey. Schneider is correct that this means that AFT teachers’ approval of the Common Core State Standards could be as low as 71% or as high as 79% (and a 5% chance the proportion is even higher or lower). The margin of error has nothing whatsoever to do with question wording.

    The survey sample is demographically similar to the population of AFT teachers. In terms of age, gender, school type, and other demographic factors, the survey respondents closely resemble the larger population of AFT teachers. This information is available to anyone upon request. Schneider guesses that 95% of respondents reside in New York State, and criticizes the failure to disclose this “fact.” In reality, 36% of survey respondents live in New York, reflecting the geographic distribution of AFT members. As it happens, approval of the CCSS is actually somewhat higher – 82% – among AFT teachers outside of New York.

    A demographic breakdown of the survey sample, and precise question wording for all questions, is available upon request. Schneider claims that “Weingarten presents the results of her survey in suspiciously general terms” and faults her failure to provide comprehensive demographic information “at the outset of the study.” These survey results were presented not in a refereed academic journal, but in a simple Powerpoint slide show designed for a lay audience. There is no obligation to burden readers with exhaustive methodological details there. What is required is disclosure of this information upon request. The AFT does that. Schneider could have received answers to many of her questions – and saved herself a lot of time – by sending an email.

    It is likely that non-AFT teachers have similar views as AFT members, but we can’t be sure. AFT teachers are not demographically representative of all U.S. teachers: for example, they are more likely than average to teach in urban school districts. And of course they are union members. However, the survey reveals support for the CCSS that is generally similar across most relevant demographic categories. For example, within AFT, 76% of urban teachers and 73% of non-urban teachers approve of the CCSS. For that matter, 71% of urban teachers and 78% of non-urban teachers share the worry that they will be held accountable for results on new assessments before instructional practice is aligned with the new standards. In general, the outlook of urban and non-urban AFT teachers on these issues appears to be more similar than different. The same is true in terms of region of the country. So it is likely that a survey of non-AFT teachers would yield similar findings. However, we can’t know that for sure without further research.

    • I saw this and replied to it on Diane Ravitch’s blog. The letter fails to address most of my points and still does not include demographic information on the study. The letter does not include the specific questions asked of the teachers. The letter speaks of a random sample, but that sample cannot be generalized beyond the scope of AFT membership (not “nationally”). This is a problem for Weingarten’s use of the AFT sample to promote a nationally accepted curriculum. The letter does not fully address issues of margin of error, including specifics regarding margins of error for “subgroups.” The letter does not disclose information on “hang up” calls or the number of calls made but not completed before Hart reached its 800.

      The sample size of 800 is still problematic. Even if all 45 states adopting CCSS were included in sampling, that would mean an average of 17 or 18 teachers surveyed per state. Such a small sample is likely to capitalize on the idiosyncrasies of individuals in the sample rather than representing the population of AFT members.

      The study should have been more rigorous given the implications of the results. I give it a C.

    • Related to my point above… A well-executed survey methodology is not a guarantee of accuracy, because it presupposes that the interviewee understands the questions exactly as the interviewer thinks they are understood.

      Very, very often this is not the case. We see that in political research where small wording changes dramatically shift percentages by LARGER amounts than margins of error.

      I see this in my work for corporations – where even a survey firm’s lack of specific product knowledge can lead it to misinterpret open-ended questions so that findings are 100% different… All within the same +/-3% margins of error.

      Once more: A margin of error might be significant…and it might have no significance at all.

      My expectation is that the survey process itself was well executed. But that doesn’t mean the survey is reliable or accurate or true.

      • Your last line has me laughing.

        Hart could have done so much more to firm up the results of its survey.

        And not offering the details of the survey to the public, including a link to exact survey script and detailed demographics– moreover, not even offering such as a part of their rebuttal to me– is sooo suspect.

        They do admit 36% of their sample comes from a single state. That means that the remaining states had on average between 12 and 18 teachers surveyed. (I discuss this in a comment below.)

        This survey “result” is being peddled as obvious “support” of teachers for CCSS, and that without discussion of caution or limitation.

        I hold both Hart and Weingarten responsible.

        If you have read the commentary on Hart and Weingarten posts on Diane Ravitch’s blog, you can see that overwhelmingly, it isn’t pretty for Hart or Weingarten.

  13. Annie permalink

    Not that I disagree with your analysis but, just to set the record straight, I live and teach in San Antonio, TX, and I am an AFT member. There are plenty of AFT members in TX that don’t live in Houston.

    • Thank you, Annie. The problem is that quality research readily provides demographic information. Quality researchers don’t tell their readers, “You could have asked me for this info.” They provide it from the outset.

      I was left to seek demographic information on my own. Even in their rebuttal, Hart did not include detailed AFT membership information by locale. This is poor form.

      Hart did admit that over one third of respondents (36%, or 288 out of 800) were from a single state: New York. Thus, the survey result will be biased toward New York.

      If AFT is present in 31 states and NY has 288 out of 800 teachers surveyed, that means that the remaining 512 represent 30 states. That is 17 or 18 teachers surveyed PER STATE. And what if AFT is present in all 45 states using CCSS? Then that’s worse representation for the remaining 44 states (excluding NY): 11 or 12 teachers surveyed PER STATE.

      75% of 17 or 18 teachers is 13 teachers.
      75% of 11 or 12 teachers is 8 or 9 teachers.
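      A quick back-of-the-envelope check of those averages (a sketch; it assumes the non-New York respondents are spread evenly across the remaining states):

      ```python
      n = 800
      ny = round(0.36 * n)        # 288 respondents from New York, per Hart's memo
      remaining = n - ny          # 512 respondents everywhere else

      for other_states in (30, 44):   # AFT states minus NY, or CCSS states minus NY
          per_state = remaining / other_states
          print(f"{other_states} other states: ~{per_state:.0f} surveyed per state, "
                f"75% of which is ~{0.75 * per_state:.0f} teachers")
      ```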

      There is no power in reporting, for example, “AFT surveyed teachers across Texas and found that 13 support CCSS.”

      So, even if AFT is all over Texas, representation on this survey remains suspect.

      Keep in mind that I have calculated averages. This means that for any state where Hart surveyed more than the average, it had to survey fewer than the average in another state.

      It is easier to hide such sketchy surveying behind percentage reporting (“75% of AFT teachers support CCSS”) than it is to report exact numbers. Hart is not readily offering its exact numbers. Neither is Weingarten.

  14. Ms. Schneider,

    I would also love to know if these teachers polled were Unity members, or heavily connected to or influenced by Unity.

    I would also, if I could be omnipotent, love to know if such teachers were given literature from the AFT that advocated favorably for the CCSS, or certainly if such literature was used as a persuasive or pressure tactic to get the teachers to respond a certain way. I would love to know how many of these teachers received funding from the AFT (funding that would come from the Gates Foundation, to start with) or were affected in some proximate way by this funding.

    All of these factors potentially stood to play a role in how the teachers responded.

    And, of course, the wording in the questions was never universal or dynamic to test for diversity of views.

    I think you know how I feel about Randi Weingarten ever since I did a turn-around a few years ago.

    Sincerely,
    Robert Rendo
    http://thetruthoneducationreform.blogspot.com/2013/05/getting-slammed-six-easiest-breeziest.html?view=snapshot

  15. Three Out Of Four Dentists Recommend Common Core™ For Their Patients Who Chew Standards

  16. John Young permalink

    Reblogged this on Transparent Christina.

  17. How credible is someone in education if he or she has never been in the classroom as a teacher?
    From 1991 until 1997, and with the exception of a six-month full-time teaching load in the fall of 1994, Randi Weingarten taught on a per diem basis at Clara Barton High School in Crown Heights, NY. Total experience? Six years, but this short experience is six years more than many of the educational reformers who participated in the creation of the Common Core State Standards (CCSS) can claim.

    In order to demonstrate support for the CCSS, Weingarten tweeted the following on June 29, 2013:

    “Teachers were part of the development of #CCSS from the beginning” http://youtu.be/y1DlNpaKW38

    She was putting up a link that demonstrated that teachers, real classroom teachers with hands-on experience, had been involved in the standards from the beginning. The link was to a YouTube video featuring ELL classroom teacher Lisa Fretzin, who reflects on how she “…was part of the review process starting in August looking at the first draft”:

    While Ms. Fretzin certainly has classroom experience to qualify her to participate in developing the CCSS, her participation was not exactly at the “beginning” of this process. According to her statement on the video, she was not present at the creation; she was asked to “review,” which is different from being there “from the beginning.” Furthermore, her name is not on the list of participants who did create the CCSS for English Language Arts (or on the feedback group), which clearly identifies only four of the 50 participants as “teachers.” The remaining 46 participants are identified with titles such as “author,” “consultant,” “specialist,” “professor,” “supervisor,” “director,” or “senior fellow.” In all fairness, perhaps many of these participants had worked in the classroom before moving into higher-ranking positions, as one would hope, but their hands-on classroom work experience is unclear.

    The classroom work experience of the lead authors of the English Language Arts CCSS, Susan Pimentel and David Coleman, is zero. Pimentel has a law degree and a B.S. in Early Childhood Education from Cornell University. Coleman (termed the “Architect of the Common Core”) gained his only classroom experience tutoring students in a summer program at Yale. He later founded Student Achievement Partners and is currently serving as the President of the College Board.

    Weingarten’s tweet is disingenuous when she indicates that “teachers were part of the development” when, to the contrary, there is much more evidence that the ratio of teachers to individuals bearing “education titles” was weighted disproportionately toward those without classroom experience. There is even evidence that experts for entire grade levels (pre-K to Grade 3) were not included. Ultimately, experienced teachers have had limited say in the standards they would be implementing day in and day out in their classrooms at every grade level, and maybe that is why there has been pushback from teachers who will be held accountable for having students meet these same standards.

  18. akgreenberg permalink

    Excellent post! How about creating a new survey here to determine what the majority really wants?

    I Do NOT support the implementation of CCSS. Trash it!

Trackbacks & Pingbacks

  1. Mercedes Schneider: Do Most Teachers Support Common Core? | Diane Ravitch's blog
  2. If the Hart Research poll method is so reliable, then TEST only .09% of ALL children. | Teachers' Letters to Bill Gates
