
In Ravitch’s Defense: Milwaukee Voucher Study Found Wanting

April 2, 2013

I hold a Ph.D. in applied statistics and research methods from the University of Northern Colorado (2002).  The title of my dissertation is A Monte Carlo Study of the Type I Error and Power Associated with Descriptive Discriminant Analysis as a MANOVA Post Hoc Procedure.

Why am I beginning this post with information about my credentials?

Yesterday, Dr. Patrick Wolf of the University of Arkansas’ Department of Education Reform (which is heavily funded by an influential Arkansas family bent upon educational reform– guess who?) published a scathing attack on my colleague Diane Ravitch, accusing her of venturing academically “beyond her element,” so to speak, in her commentary on his work. Wolf states that

…commentators such as Ravitch spend their time and energy attacking me as a person because that demonstrates that they don’t have the ability to critique the methodological rigor and quality of my actual research.  [Emphasis added.]

Let me first comment that Ravitch in no way attacked Wolf  “as a person.”  Here is an excerpt from the recent Ravitch post, to which Wolf links:

The “independent evaluator” of the Milwaukee and D.C. voucher programs is Patrick J. Wolf of the University of Arkansas. As we learned during school choice week earlier this year, Wolf is a strong supporter of school choice and he even wrote an editorial saying that his home state of Minnesota needs more school choice because it was in danger of falling behind Arkansas in doing so. How much more independent can an evaluator be? It is perhaps also noteworthy that the University of Arkansas is generously funded by Arkansas’s biggest philanthropy, the Walton Foundation, which pours millions every year into charters and vouchers and anything that has the possibility of undermining public schools. [Emphasis added.]

While she does not personally attack Wolf, Ravitch certainly exposes Wolf’s conflict of interest in evaluating a program obviously supported by his funders.

I agree with Ravitch that this conflict of interest is noteworthy for its undeniable potential in “shaping” study reporting and outcomes.

Wolf also appears to be fixated on Ravitch’s holding a doctorate outside of research and statistics. As one who does hold a doctorate in statistics and research, let me underscore that I find Wolf’s arrogance to be an unpalatable professional embarrassment and that I do not share his foolishness in assuming that one must hold a research and stats degree in order to assess the limitations of his work. That said, I want to assure him that I do professionally possess “the ability to critique the methodological rigor and quality” (or lack thereof) of his research, as per his imbecilic stipulation that Ravitch does not. In addition, let me add that I am receiving no questionable funding from any foundation to conduct this examination.

I don’t even shop at Wal-Mart.

Now, to consider Wolf’s (et al.) research:

In his tasteless post, Wolf alludes to his study with other researchers, Student Attainment and the Milwaukee Parental Choice Program: A Final Follow-Up Analysis, which he claims possesses “methodological rigor and quality.” Ravitch notes (rightly so) that a serious limitation of this study is the attrition rate of students initially receiving vouchers in 2006:

The Wolf evaluations claim an advantage for voucher students in graduation rates. But consider this. In Milwaukee, according to this analysis (see the summary here) of Wolf’s evaluation, 75% of the students who started in a voucher school left before graduation. So of the 25% who persisted, the graduation rate was higher than the Milwaukee public schools. But what about the 75% who dropped out and/or returned to MPS? No one knows.

The summary to which Ravitch alludes is the work of Casey Cobb of the University of Connecticut, published by the National Educational Policy Center. In it, Cobb’s concerns regarding the attrition rate of Wolf’s study are noted:

By 12th grade, [Cobb] notes, roughly three out of four of the original 801 MPCP 9th graders were no longer enrolled in a participating private school. The sample attrition “severely clouded” the inferences that could be legitimately drawn about MPCP [Milwaukee Parent Choice Program]’s real impact on graduation rates. [Emphasis added.]

Wolf attacked both Ravitch and NEPC for reporting the 75% attrition rate of MPCP students. The version of the study currently available reports a 56% attrition rate, which is still high. (This information does not appear until the close of the study report, an issue I will address shortly.) However, according to NEPC’s rebuttal published today, written by NEPC Director Kevin Welner, a 75% attrition rate was the statistic in the original report by Wolf and his colleagues:

Yesterday, after this was posted, I received an email from one of the EdNext readers, pointing me to Wolf’s critique. I immediately went to page 16 of Wolf’s report. Could we have made such a mistake?!  Actually … we didn’t. Here’s what it said on page 16: “A second caveat is that the majority of students (approximately 75 percent) who were enrolled in 9th grade in MPCP were not enrolled there by the time they reached 12th grade.”

So I followed the link in the Education Next piece and downloaded the same report. Here’s what it says on page 16: “A second caveat is that the majority of students (approximately 56 percent) who were enrolled in 9th grade in MPCP were not enrolled there by the time they reached 12th grade.”

That was certainly odd. Then on the third page of the pdf I’d just downloaded, I found the following: “Updated and Corrected March 8, 2012.” It doesn’t say what specifically was updated or corrected, but clearly one change was on page 16.

So here’s the timeline:
1. February 2012: Wolf and his colleagues publish the SCDP [School Choice Demonstration Project] report, stating that “approximately 75 percent” of the voucher students enrolled in 9th grade “were not enrolled there by the time they reached 12th grade.” (On February 24th, NEPC sent the report to Prof. Cobb for a review.)

2. March 8, 2012: The SCDP changes that sentence, substituting “56” for “75”.

3. April 19, 2012: NEPC publishes the Cobb review, pointing to (among other things) the 75% figure as evidence of the study’s limitations. Nobody had thought to go back and see whether Wolf or his colleagues had changed important numbers in the SCDP report.

4. April 1, 2013: Wolf attacks Diane Ravitch and NEPC for CORRECTLY quoting Wolf’s own report.

Here is an elephant-in-the-room question: Did Wolf and/or his colleagues knowingly and intentionally alter the original 75% attrition figure to the more acceptable (though still high) 56% in order to make the voucher student sample in the study appear more solid than it actually was?

As a former assistant editor of research for the flagship professional counseling journal, the Journal of Mental Health Counseling, I can tell you that I would not have approved a piece for publication had the researchers drastically and positively altered the study’s attrition rate without also having requested to see the data that support the accuracy of the change.

As I read Wolf’s tacky lashing of Ravitch, I noticed a single comment at the end of the post:

Dr. Patrick, Please hurry and de-identify the data you used in your papers and provide it to independent researchers. I have the ability to critique the methodological rigor and quality of your actual research. I am very very much looking forward to it.

Sincerely,

Julian Vasquez Heilig, Ph.D.

Regarding Dr. Heilig’s request: It is proper research etiquette to make one’s data available to fellow researchers wishing to verify the results of a study. The “de-identifying” piece means that identifying information for study participants is removed for privacy purposes before the data are passed on to other researchers.

Assuming the data set is released in whole (as it should be), what that data set should reveal is the exact nature and degree of the sample attrition for the voucher students.

In short, the data would answer the question as to whether or not the researchers are being honest concerning the voucher student attrition rate. It would also reveal when voucher students are leaving. (For example, are a number of voucher students returning to public school within the first year of receiving vouchers? Within two years?)

Wolf offers no such information as part of his study.

As Ravitch correctly laments, “No one knows.”

No study is better than its data. Furthermore, no study is better than the integrity and suitability of its sample. Wolf can puff up all he likes, but his (et al.) sample is the fatal flaw in his (et al.) research. The premise of his (et al.) sampling– that it is sufficient to separate two groups of students based upon their initial enrollments as either voucher or traditional public school attendees, to ignore these students’ movement between the voucher and traditional public sectors over the next several years, and then to pretend that outcomes measured four years later are the direct result of a single choice made several years earlier but not adhered to by most participants (voucher or nonvoucher)– yields nothing useful. Nothing.

Here is what Wolf and his fellow researchers offer regarding the sample of voucher students in his study:

The 801 MPCP students are the entire 9th grade cohort of students who we determined to be valid voucher-using students after examining the Wisconsin Department of Public Instruction audited list of voucher recipients based on the 3rd Friday count (September 15, 2006). [Emphasis added.]

One must carefully consider what Wolf et al. are saying here; otherwise, one will miss it. “The entire 9th grade cohort” means all students who began as voucher students but who did not necessarily finish. Proper sample reporting should include information regarding the sample attrition, at least a table showing how many voucher students continued as voucher students from year to year. It is arguably misleading to omit clear, detailed information regarding sample attrition in a longitudinal study. And it is not that I believe Wolf and his colleagues do not know this. I am sure they do.
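A year-to-year continuation table of the kind just described is trivial to produce once one has enrollment records. Below is a minimal Python sketch; the yearly counts are hypothetical (chosen only to land near the reported 56% figure), not data from the study:

```python
# Sketch: a year-by-year attrition table for a longitudinal voucher cohort.
# All enrollment counts below are hypothetical, not data from the study.
cohort = 801  # 9th-grade voucher users in fall 2006

# Hypothetical counts of cohort students still enrolled in a voucher school.
still_enrolled = {
    "9th (2006)": 801,
    "10th (2007)": 610,
    "11th (2008)": 480,
    "12th (2009)": 352,
}

print(f"{'Grade':<12}{'Enrolled':>10}{'Retained':>10}{'Attrition':>11}")
for grade, n in still_enrolled.items():
    retained = n / cohort
    print(f"{grade:<12}{n:>10}{retained:>9.0%}{1 - retained:>10.0%}")
```

A table like this, one line per year, is all it would take to tell readers exactly how the cohort dwindled.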

The single statement regarding sample attrition at the end of the study (75% or 56%, depending upon which version of the study one reads) is shoddy research reporting. Wolf et al. follow this statement of attrition by writing,

The results of this paper as a whole should therefore be interpreted as the effect of “exposure” to the MPCP rather than long-term persistence in that sector.

Voucher “exposure”? Are you kidding me? What kind of foolishness is this? It is not as though “exposure” to vouchers leaves some established and undeniable change, as would exposure to asbestos or radium or Ebola. So, if students are choosing to forsake school choice for the traditional public school classroom, one way to excuse this attrition is to say that vouchers are so powerful that initially having a voucher supersedes continuing with the voucher.

“Exposure” to vouchers is a convenient focus for a study with 75% (or 56%) voucher student attrition.

The two principal samples compared in the Wolf et al. study are 1) 801 students who were given vouchers in 9th grade (2006) but most of whom chose not to continue with the vouchers, and 2) 801 students who were not part of the voucher program in 9th grade (2006) and who attended traditional public school.

A further confounding sampling issue is the possibility that those 801 in the traditional-public-school-in-9th-grade-(2006) sample could have opted for a voucher in subsequent years. Since Wolf does not include detailed information regarding voucher use within the study’s samples, readers simply cannot know the degree to which these two purportedly separate samples are actually similar via voucher use at some point in the high school career.

Perhaps Dr. Heilig will be able to tell us once Wolf releases the data set.

If the two supposedly-mutually-exclusive samples in this study are in serious question– not the actual individuals in each sample but the mutual exclusivity of the school choice experiences of those in the two samples– how is it possible to “compare samples?”  It is not.

To recap:  Students in the “voucher” sample initially received vouchers, but most (56%? 75%?) did not continue using vouchers; students initially not using vouchers but attending traditional public school could have, at a later time, opted for vouchers. Therefore, the two samples are confounded in that both have the potential for some combination of student voucher use and voucher nonuse (i.e., public school preference).
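The confounding just recapped can be demonstrated with a toy simulation. Every number below (baseline graduation rate, effect size, crossover rates) is invented for illustration; the point is only that when both groups end up containing a mix of voucher users and nonusers, the simple group comparison is diluted well below the effect of actually using a voucher:

```python
import random

random.seed(1)

N = 801
TRUE_EFFECT = 0.10  # hypothetical graduation-rate boost from actually using a voucher
BASE_RATE = 0.70    # hypothetical baseline graduation rate

def simulate(initial_voucher, crossover_rate):
    """Simulate one student's graduation given initial assignment and crossover."""
    uses_voucher = initial_voucher
    if random.random() < crossover_rate:
        uses_voucher = not uses_voucher  # student switches sectors later
    p = BASE_RATE + (TRUE_EFFECT if uses_voucher else 0.0)
    return random.random() < p

# Suppose 56% of the voucher group leaves and some public-school students opt in.
voucher_grads = sum(simulate(True, 0.56) for _ in range(N)) / N
public_grads = sum(simulate(False, 0.20) for _ in range(N)) / N

observed_gap = voucher_grads - public_grads
print(f"observed gap = {observed_gap:.3f} vs. true effect = {TRUE_EFFECT}")
# The observed gap shrinks toward zero because both groups are mixtures
# of voucher users and nonusers.
```

Under these invented crossover rates, the between-group comparison measures only a fraction of the effect it claims to measure, which is precisely why the unreported crossover matters.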

I must say, this convoluted sampling kills any utility that this study might have otherwise offered.

Why not simply compare “voucher completers” (students who accepted vouchers in 9th grade and used them consistently through 12th grade) with “voucher noncompleters” and/or “voucher nonusers”?

(These possible samples noted above qualify as mutually exclusive. For example, one cannot be classed as a voucher completer and also have some degree of voucher nonuse. Or, one cannot be classed as a voucher noncompleter but possess some degree of voucher completion.  This makes for clean sampling, an indispensable condition for research rigor and quality.)
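Given per-year voucher-use records, sorting students into these mutually exclusive categories takes only a few lines. A sketch with hypothetical records (the data layout is my assumption, not the study’s):

```python
def classify(voucher_years):
    """Classify a student by voucher use across grades 9-12.

    voucher_years: list of four booleans, True if the student used a
    voucher that year. The categories are mutually exclusive by construction.
    """
    if all(voucher_years):
        return "completer"
    if any(voucher_years):
        return "noncompleter"
    return "nonuser"

# Hypothetical records: one list of four yearly flags per student.
students = [
    [True, True, True, True],      # used a voucher all four years
    [True, True, False, False],    # left the program after 10th grade
    [False, False, True, True],    # opted in later -- also a "noncompleter"
    [False, False, False, False],  # never used a voucher
]
print([classify(s) for s in students])
```

Note that a public-school student who opts into vouchers later lands in "noncompleter," not in a supposedly voucher-free comparison group.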

Was the number of voucher completers too low? Was it an embarrassingly low number?

No one knows.

Was there some effort to compare voucher completers to voucher noncompleters, but the idea was perhaps nixed because publicizing the concept of “voucher noncompleters” reflects poorly on those pro-voucher folks who are funding this research?

No one knows.

I had planned to examine specific results of this study by Wolf and his colleagues. However, the sampling issues previously discussed render the results useless. I would like to highlight a couple of the points Cobb raises in his review, as I believe the poorly conceived, poorly constructed, and inadequately detailed samples inevitably contributed to his assertions:

Summary statements found in the report’s executive summary and conclusion, while not inaccurate, do invite conclusions about MPCP effects that are likely not warranted by the data presented. For example, the report concludes that the “results here suggest that students who used a voucher to attend private school in 8th or 9th  grade were more likely to graduate high school” (p. 16). The problem is that we do not know exactly where they graduated high school or for how long they were enrolled in a voucher program school. This one caveat alone calls into question the usefulness of nearly the entire study. …

A significant point to note about this report is that there really aren’t many, if any, differences to report here. Even if there were, the research design is not robust enough to inform the reader about the causal effects of a voucher program. [Emphasis added.]

In other words, the researchers keep readers in the dark regarding voucher attrition rates, and crediting graduation rates several years after the initial acceptance of a voucher without requiring voucher school follow-through is a stretch.

My conclusions:  Wolf and his colleagues present nothing in this study to justify disruption of a school system via voucher use.  The one distinguishing factor of voucher use appears to be the high attrition rates– that is, the high rate of student return to the traditional public schools.

Perhaps the “life altering impact” of that initial voucher receipt in 9th grade is the realization that traditional public schools aren’t so bad, after all.

I get to write as much since no one is funding me.

Can’t wait until Dr. Heilig is able to verify my summations using that actual Wolf-provided data set.

I’m also looking forward to reading that public apology I am sure Wolf will send to Diane Ravitch since he now knows that she is not angry.

She is simply correct.


45 Comments
  1. 2old2tch permalink

    Bravo. I only have a masters in ed therapy and I understood your arguments.

  2. BOOM! Thank you so much for exposing the smoke and mirrors. This is definitely a case of garbage in garbage out.

  3. Suzanne Winkel permalink

    Brilliant! Brilliant! Brilliant! My only regret is that you don’t live in Arizona.

  4. Perfectly clear.
    I suspect if I could take research methods and stats classes from you, I might be a PhD rather than a disillusioned ABD…

  5. Wolf is to education reform as RJR scientists were to the tobacco industry.

  6. Chris Lubienski permalink

    Good points. Also, the SCDP study doesn’t capture peer effects, if I remember correctly, and acknowledges that voucher impacts might actually be due to the introduction of a new testing accountability regime… explaining why there was no measurable impact in the previous 4 studies.

  7. “Why not simply compare “voucher completers” (students who accepted vouchers in 9th grade and used them consistently through 12 grade) with “voucher noncompleters” and/or “voucher nonusers”?”

    Why not? Two words: selection bias. Wolf did the only thing that professional researchers would do: use an intent-to-treat analysis. It’s misleading to suggest that this analysis was somehow shady or wrong or “confounds” the analysis, when the exact opposite is the truth.

    • Intent to Treat Analyses (ITT) are suited to randomized clinical trials in medical research. In such trials, participants do not select which treatment they will take, nor can control group members “opt into” the treatment at some point during the study. The work of Wolf et al. is not a randomized clinical trial. The Wolf et al. “treatment” is voucher selection, which the public school group (“control”) is able to opt into and out of, just as the “treatment” group can, the only difference being that the “control” didn’t take vouchers at the beginning of the school year in 2006. Thus, the two groups of students are confounded.

      See http://www.ncbi.nlm.nih.gov/pmc/articles/PMC28218 for more information on ITT as it is appropriately applied in medical research.

    • ITT analyses are not limited to medical trials, but are standard practice in social science, to the point that it would be unprofessional either not to do an ITT analysis or to do what you suggested (look at completers by themselves). Moreover, ITT still works even when there is treatment crossover.

      • Provide evidence for what you are writing.

        The use of ITT for assessing voucher success is ridiculous. Groups are not randomly assigned, and a chief premise for use of ITT is the preservation of random assignment to groups despite patients’ not following a medical treatment protocol.

        The two student groups are confounded. Wolf et al. present no details regarding the degree of this confounding. That is suspicious, coupled with the fact that the attrition rate was altered from 75% to 56%.

        Why not use your time commenting to address this suspicious alteration of attrition?

      • ITT is not just for random assignment studies. It is used routinely in matching studies too, like the Milwaukee study.

        Evidence: all of the standard works on social science research methods, like Shadish, Cook, and Campbell.

        Or for that matter, check out this completely non-controversial report from the Coalition for Evidence-Based Policy:
        http://coalition4evidence.org/wp-content/uploads/2012/01/Validity-of-comparison-group-designs-updated-Feb-2012.pdf

        The report explains how a social science study with a non-randomized control group should be analyzed. The report cautions that such a study should “follow the same practices that a well-conducted randomized controlled trial follows in order to produce valid results (other than the actual random assignment).” For example, the study should “use an ‘intention-to-treat’ analysis.”

        Note: Diane Ravitch sits on the board of the organization that published that advice.

      • From the link you provided above:

        “The study follows the same practices that a well-conducted randomized controlled trial follows…”

        In “well-conducted, randomized controlled trials,” participants from the control group cannot “opt into” the treatment. However, “cross over” in the Wolf et al. voucher study is occurring in the control (comparison) group. That renders it useless as a control group.

        The “control group” in Wolf et al. is corrupted. Students in the control group also had access to the “treatment” (vouchers).

        And this quote from the study produced by the board on which Diane Ravitch sits directly contradicts your assertion that “ITT still works even when there is treatment crossover”:

        “…the study should have an adequate sample size, use valid outcome measures, prevent “cross-overs” to or “contamination of” the comparison group….”

        The only two conditions in Wolf et al. are students’ using vouchers or not using vouchers. Both conditions are present in both groups of students: In both groups, students are alternately using or not using vouchers, at a rate not publicized by the authors.

        I am done with this argument.

        The only additional information I would like to hear is information regarding the altering of attrition from 75% to 56%. Please provide details regarding this manuscript alteration.

      • I have no idea about the alteration. Seems odd, but typos do happen.

        The report from Ravitch’s own organization does say that it’s a good idea to prevent crossovers, but the implication you take from that is wrong. The whole reason to use intent-to-treat analysis (both in randomized trials and in matching studies) is that it is the only correct method to use when there turns out to be attrition/crossover. If you can prevent attrition/crossover from occurring, then that’s the one situation where you wouldn’t need intent-to-treat analysis. Otherwise, intent-to-treat is the only way to go.

        Crossover does happen in medical trials too. One medical article even calls intent-to-treat analysis the “gold standard” approach for use when there is crossover in a randomized trial (which, again, makes it the practice that Ravitch’s own organization recommends without reservation): http://www.massbio.org/events/calendar/1582-bsdm_ct_the_impact_of_crossover_on_statistical/event_detail

      • I stand by the criticisms offered in my original post, including the NEPC criticisms, and those offered once again in this exchange. I will not repeat them here.

        You should investigate that 75% to 56% attrition alteration with the same tenacity you used to defend applying ITT to a voucher study, or with the same rude vigor with which Wolf attacked Ravitch for “getting it wrong.”

        Perhaps you are Wolf, JB, or someone working for him.

        And I challenge you to honor Dr. Heilig’s request for access to the data set used in the Wolf et al. study. It sure would be nice to know just how far from that “gold standard” these two groups strayed.

        Finally, let me add that your group’s (I assume you are of their camp) decision to not investigate students who choose to begin with vouchers and use consistently until graduation makes me wonder whether those funding the venture really don’t want to know the answer. Or perhaps it was investigated but not publicized (?)

        No one (on this side of the study) knows.

    • My question to you, JB, is: Who are you, i.e., what work have you done that would qualify you as an “expert” in this quite fine toothed discussion on research methods? What is your name? I ask because it is easy to hide one’s own hidden agenda through an anonymous blog name. You know who I am:
      Duane

  8. I am not Wolf or anyone who is now or ever has been affiliated with the Milwaukee research team. I’m merely concerned that so many people don’t seem to understand intent-to-treat analysis, why it’s used, and why it is so improper to look only at completers.

    As for your claim that Wolf and his team didn’t look at students who stayed in the voucher program, you need to read his response: http://educationnext.org/update-on-the-milwaukee-school-choice-evaluation-dust-up/ He links to a peer-reviewed article where that subject is discussed. But like professional social scientists, Wolf and his team point out that students who stuck with vouchers for all 4 years were “higher performing students,” and therefore it would be improper (there is no doubt about this) to compare them to public school students.

    In other words, Wolf avoided the very comparison that you recommended above, precisely because your method would have been too favorable to vouchers. Think on that before you accuse Wolf of being biased towards vouchers.

    Wolf also points out that he and colleagues published a peer-reviewed article specifically about all the students who left the voucher program.

    • The point you have inadvertently argued is that vouchers are not for all students. Those who “stick with it” are the “cream.” Therefore, let’s conduct the study on the “cream” and show as much clearly. Then, let’s advertise them to the public as ultimately a benefit not to students in general, but as a public-funding-of-private-school “leg up” for the more capable students.

      The idea that “exposing” students to vouchers is some magic inducement for future achievement is then shown as a lie. It is not “voucher exposure” that matters. The “high achievers” already are who they are.

      • JB

        Jason Bedrick.

        Policy Analyst.

        Cato Institute.

        Confirm or deny.

      • Deny. It’s a pseudonym. My identity isn’t important. As a matter of logic, an argument is valid or invalid based on the merits, not based on who a person is.

      • George Buzzetti permalink

        JB is absolutely correct an argument must have merits no matter who it comes from. Always look at the arguments, see if they are valid, then look at who made them and then figure out if they make sense all things considered.

      • JB is a coward who advocates a destructive reformist agenda and who knows his revealed identity will somehow expose his motives.

      • I advocate a destructive reformist agenda? Talk about putting words in my mouth. I’m not a fan of the Milwaukee voucher program overall, but the worst anyone says is that its results are equal to the public schools, and the best possibility is that it increases graduation rates.

        Anyway, the only agenda I’ve put forth is this: 1) people who aren’t familiar with intent-to-treat analysis in the social sciences shouldn’t ridicule it; and 2) people who aren’t even minimally aware of all the publications Wolf has written about Milwaukee shouldn’t criticize him for not writing about things that he wrote about in copious detail.

      • Also (cough*Ravitch*cough), don’t lie about what Wolf found. That is, don’t say his study found no test score effects when it did. Ravitch can still disagree with the Milwaukee research design, or with the very usefulness of test score effects, but it’s beyond the pale to say there were no test score effects in his study at all. (Actually, she probably wasn’t lying, in the strict sense: she doesn’t seem to have read Wolf’s work at all, so she’s speaking from ignorance rather than deliberate disregard of something that she actually read and understood.)

      • Jason Bedrick permalink

        Hello, I am the real Jason Bedrick. I do not know who “JB” is. I work for the Cato Institute and I always publish under my own name. (Using the initials “JB” to mask my identity would not be very clever.) You can see that I’ve posted under my own name on Diane Ravitch’s blog in the past:

        http://dianeravitch.net/2013/03/01/welfare-for-the-rich/

        http://dianeravitch.net/2013/03/31/how-vouchers-worked-in-cleveland/

        I would greatly appreciate it if “JB” would unmask himself. While I agree with George that an argument should be considered on the merits, there is really no need for anonymity here.

        Dr. Schneider, you can email me at jbedrick@cato.org to discuss this further if you would like. I may have a way of demonstrating that I am not “JB”.

      • George Buzzetti permalink

        Thank you Jason. That is why I also publish under my own name and George1la. Many know who I am. I understand why many use other names especially if they are presently in the education system as I have a lot of experience with those retributed on. I met some more tonight at a fund raiser for Ratliff who can totally change the makeup of the LAUSD Board of Education.

        I would ask all to help to elect her. LAUSD can then go from a 4-3 for the devils to a 4-3 for the angels. The corporate privatizers are pulling in money and support nationally to stop this so what is wrong with turning the screws the other way like I am suggesting concerning the “Parent Trigger?”

        Jason and I are on different political spectrums but that does not matter on this issue being raised in this response. We both publish under our real names so the message is clear along with where it comes from. He and I obviously do it on purpose for the same reasons. I want them to know where it came from. That is important. I think Jason believes the same thing. Since 1995 I have worked with employees who have had their lives ruined by retribution and false charges including child abuse. We now have a data base of over 600. In 1997 I had this audited by the State of California. Oct, 1997, 96121. I understand school employees. So, if you are still in the system continue to use your alter identification and please give us your experiences and input. We are with you.

      • Jason, I look forward to viewing evidence that you are not JB. If you do not mind, please offer such evidence here.

    • Jason Bedrick permalink

      Since we both post on your site, you should be able to tell where we’re both posting from using our IP addresses. I live just outside of Phoenix.

    • Jason Bedrick permalink

      Also, I’ve written about this very topic elsewhere under my own name.

  9. Support for the Wisconsin “corporate reform” engine is starting to sputter. Thanks for your insightful dissection of the recent “voucher study”. WI is on the verge of expanding vouchers to communities around the state, with an “independent authorizer” (appointed by the voucher-supporting, traditional-public-school-defunding, Republican rock star Scott Walker). Your work will speak to reasonable people to help rein in the “reform engine”.

  10. George Buzzetti permalink

    Since the dropout rate is actually taking the ADA of the 9th grade 4 years before they are in the 12th grade and comparing the ADA of that 12th grade, I come up with a drop out rate of 75%. Not bad, if you are after total loss, not advancement. How can this guy defend this? It is typical. He is the kind of person we called, when I was in the Aerospace business a long time ago, an “Educated Idiot.” This came as a result of engineers who were magna cum laude from Cal Tech and such who could not read their own prints or know what to do. It applies everywhere. I have also found that PHD’s give more excuses for robbery from students and corruption than people from the inner city. People in the inner city live every day with scams and spot them easily. In my neighborhood, upscale, the locals tell us they do not care what happens to the public schools as they spend $20-30,000/year for each of their children to go to private school. I tell them, “Real smart, when your child drives out with the rest of us they face the guns and car jackers also.”

    • George, some Ph.D.s write excellent blog posts in rebuttal against other, self-serving, Ph.D.s. :)

      –Mercedes… PhD

  11. Excellent review of the voucher stat mess, thank you! Very readable and convincing.

  12. George Buzzetti permalink

    I am not knocking a PHD but my experience tells me that does not mean much in this business. It is more about who you are and what you really do. As my friends grandfather taught him “I hear real good, but I see a whole lot better.” I have seen too many PHD’s who make excuses and are destroyers of worlds. So “Show me.”

