This morning, I read a post on education historian Diane Ravitch’s blog about an influential nonprofit in New York, Families for Excellent Schools (FES). It seems that nonprofit is wielding its influence to advance charter schools in New York City. As Ravitch writes:
Perdido Street blogger asks why it is impossible to find out who contributed to the lobbying group Families for Excellent Schools, which spent $6 million this year to prevent Mayor Bill de Blasio from regulating the charter school sector and won a law that forces the city to pay the rent of charters not located on public school grounds.
The blogger quotes extensively from the business magazine Crain’s New York, which described how this lobbying group exploited loopholes to avoid complying with state laws that require disclosure of donors to political action committees. “Group is visible,” the article’s title says, “but not its donors.”
FES became a nonprofit in April 2012. Between July 2012 and June 2013, it reported an “income” of just over one million dollars.
About those FES “donors”: It might appear that all FES donors are invisible, but they are not.
Not if they are using other nonprofits to support FES.
There is a wonderful, donor-supported search engine for nonprofit tax forms, citizenaudit.org. The search engine will allow the public 40 free views per year. (I just exhausted my free views.)
One of the beauties of this search engine is that it searches for terms within tax forms.
I searched for “families for excellent schools.” I found two nonprofits, a 501(c)3 named Families for Excellent Schools, Inc., and its accompanying lobbying arm, the 501(c)4, Families for Excellent Schools Advocacy, Inc.
501(c)3 nonprofits are limited in their lobbying, but donors may take a deduction for donating to a 501(c)3. In contrast, 501(c)4 nonprofits are free to lobby as much as they like, but donors cannot take a deduction for donating to them. Thus, a 501(c)3 may also run a 501(c)4, allowing donors to donate to the 501(c)3 for the work of the associated 501(c)4. It’s as easy as the 501(c)3 “donating” its cash to the 501(c)4. And it introduces an extra layer of money changing hands– one that makes it a little more difficult for the public to follow who is conducting the lobbying and who specifically is paying for it.
In my citizenaudit.org search of FES, I also found a listing of several other nonprofits that mention the organization as a grant recipient:
Eli and Edythe Broad Foundation (see pg. 24)
Moriah Fund, Inc. (see pg. 14)
(Note: Once a viewer exhausts 40 views per year, a number of the links above default to the citizenaudit.org sign-up page. Not all, since I sought elsewhere for alternative links.)
WNYC reporter Robert Lewis captured much of the above FES grant information in his March 2014 article:
The Walton Family Foundation, of Walmart fame, has given more than $700,000 over the past two years. …
According to the records that are available, other large donations to the organization (FES) include $200,000 in 2012 from the Broad Foundation; $200,000 from the Peter and Carmen Lucia Buck Foundation in fiscal year 2012-13; $100,000 in 2012 from the Moriah Fund; $25,000 from the Ravenel and Elizabeth Curry Foundation in fiscal year 2011-12; $19,000 in fiscal year 2011-12 from the Tapestry Project; $50,000 in fiscal year 2012-13 from the Vanguard Charitable Endowment Program; and $1,000 in 2012 from the Dalio Foundation.
Lewis also notes the following:
Families for Excellent Schools shares an address with the New York arm of StudentsFirst….
This sharing of an address is strong evidence that FES is Astroturf reform from its outset. Yet there is a bossy center around which FES and its fiscal feeders appear to revolve. Consider a few board connections from among organizations listed above.
Tapestry Project’s executive director is attorney Eric Grannis.
And Eva Moskowitz sits on the StudentsFirst NY board.
And FES shares an address with StudentsFirstNY:
345 Seventh Avenue, Suite 501, New York, NY.
Eva at the bossy center. But that center is very much a collaboration involving Moskowitz, and StudentsFirst, and money from both philanthropies and hedge-fund managers.
Before I ended the post, I thought I’d see what organizations shared the Seventh Avenue address. I came up with the two expected:
I also found the 2012-2014 election spending reports for another group:
NYPSF is a hedge-fund, charter-school-promoting PAC. (One must pay to access the link to this Capital New York article. I did not pay, so I do not know if the NYPSF hedge funders are named, but I’m thinking at least some are.)
On September 5, 2014, NYPSF contributed $19,700 to “Friends of Kathy Hochul,” Andrew Cuomo’s running mate.
There are numerous other detailed expenditures.
825 K Street, 2nd Floor, Sacramento, CA.
But back to New York.
A few final thoughts regarding 345 Seventh Avenue, Suite 501, and the organization that instigated this post, FES:
FES appears to be little more than a Moskowitz- and StudentsFirst-associated mushroom organization designed to offer the illusion of multiple (grass roots) organizations rallying behind “school choice.”
Though it might seem that it is not possible to know who is supporting FES, it is possible to make some telling connections via examination of 990s and physical addresses– especially shared addresses.
Those trying to hide from public view behind one organizational front might find themselves exposed via association with another.
The perils of layered corruption, eh?
Schneider is also author of the ed reform whistleblower, A Chronicle of Echoes: Who’s Who In the Implosion of American Public Education
In February 2012, then-new Louisiana Superintendent John White told Rick Hess of the American Enterprise Institute (AEI) about how he planned to work a marvel of renovation and preservation for New Orleans’ McDonogh High School by allowing Steve Barr, CEO of Green Dot Charters in Los Angeles (a businessman with no vested interest in the New Orleans community) to assume control of the school and work hand in hand with the locals:
RH: Post-Katrina, there were concerns about outsiders invading New Orleans schooling. There have been intense racial politics. How did you negotiate that during your time at the RSD, and how does that shape your approach going forward?
JW: It’s extremely important as a leader to never give up on your ideals. But on the other hand, never give up on respecting everyone at the table. That gives you a baseline of credibility off of which to operate.
RH: Can you offer an example of how you do this?
JW: Yes, at John McDonogh High School. When I first came to New Orleans, the word on the street was we were going to shut down the building of that 100-year-old high school. And now we’ve announced that we are spending $35 million to renovate it. Steve Barr [founder of Green Dot Public Schools] and teachers now are going to take over the school’s management…By staying at the table, by sitting through the discussion, by always insisting that this can be a college and career school, we had a compelling vision that attracted great partners, and are in a position to turn one of the lowest performing high schools in the country into a real beacon for change.
A compelling vision of attracting great partners to create a beacon of change? Teachers taking over school management?
Not if that “great partner” decides he wants out.
The short of it: No *beacon*. No renovation of McDonogh happened; California-based Steve Barr (of Future Is Now charters– formerly Green Dot) pulled out, citing a “facilities decision” as his reason. Never mind that New Orleans has a number of unoccupied school buildings that a money man like Barr could have renovated– or even demolished in favor of temporary buildings– to house students while McDonogh was renovated. However, such effort would have been a mark of one invested in the community more than in himself.
I teach at a 100-year-old high school that was systematically renovated over the course of several years– all while the school remained a functioning school of approximately 1,800 students.
But such projects require investment in the community. Our school was not being long-distance “managed” by someone with a short-term, profit-focused commitment.
Barr’s principal concern is not how his decision to bail on his commitment affects the community on the receiving end. It is his “venture.”
McDonogh closed in June 2014. As a part of washing its hands of New Orleans, Barr’s ironically-named Future Is Now (FIN) left behind equipment that the Recovery School District (RSD) (another ironic name) is auctioning off in the aftermath of the FIN-RSD divorce.
On October 11, 2014, RSD auctioned off laptops that still had student information on them, including student social security numbers.
And so we have yet another example of the problems introduced by “charter churn”: the changing of hands of equipment and the opening of fresh doors by which the security of student data might be breached.
In his weak attempt to explain the breach, Recovery School District (RSD) Superintendent Patrick Dobard offers the following to Danielle Dreilinger of the Times-Picayune:
Dobard said his office has trained charter staff on property-disposal procedures but not checked up on devices until now. “We relied on the operators actually following the protocol,” he said.
And there we have yet another problem with lack of oversight in the name of charter *freedom*: “Trusting” that those preparing for disposal the property from the school that “just didn’t make it” will actually “follow procedure.”
But why should the “charter staff” from a folded operation care about procedure? Barr’s chain did not care enough to invest in keeping the “100-year-old high school” open. In fact, Dreilinger could get no word from the FIN spokesperson other than an “it’s not my problem” response:
Former Future Is Now spokesman Gordon Wright said the organization had no response because it no longer exists. [Emphasis added.]
No more FIN in New Orleans for Gordon Wright to worry about. Moreover, as of June 2014, he is with Education Post, the new, well-financed corporate reform blog where Wright recently wrote a post “protecting” two teachers from education historian Diane Ravitch on a blog that worships US Secretary of Education Arne Duncan. (Yeah, I know. Funny.)
But back to McDonogh, where students will not fall into new schools in the way that corporate reform promoters like Wright are able to fall into new, corporate-reform-promoting jobs.
In sum, we have a 100-year-old New Orleans high school that was supposed to be renovated; a California-based charter manager who decides not to follow through on his commitment because it was “too hard”; the haphazard liquidation of property that resulted in a breach of student data; no charter manager to answer for the breach because the charter manager is no longer the charter manager, and a textbook example of unforeseen problems associated with “charter churn.”
Nevertheless, Dreilinger’s article includes a strategically-placed statement that downplays this inevitable and ever-present churn:
The Recovery system oversees about 50 charters in New Orleans, plus more in Baton Rouge. Every year, a small number of charters have closed. [Emphasis added.]
If “every year” a “small number” close, that is a lot of churn. The result is a district that never stabilizes and remains forever open to the uncommitted, nonresident Steve Barrs to come and go relatively unscathed while critical property– not the least of which is student personal information– ends up sold on the auction block of apathy-birthed incompetence.
Before the proverbial ink is dry on the assessments to be given in 2014-15 by both federally-funded testing consortia wed to the never-piloted Common Core State Standards (CCSS), along comes yet another *philanthropic* organization with the Next Great Idea: To structure statewide accountability systems around CCSS.
The Hewlett Foundation *convened* a group of individuals, some of whom I readily recognize as key players in the CCSS-and-assessments game, to formulate this *new accountability system.* Though a version was previously released in September 2014, the group officially released its report on October 16, 2014, entitled Accountability for College and Career Readiness: Developing a New Paradigm. The report is credited to Linda Darling-Hammond, Gene Wilhoit, and Linda Pittenger. Darling-Hammond is a Stanford education professor and senior research advisor for one of the two CCSS consortia, Smarter Balanced. Wilhoit is the former CEO of the CCSS copyright holder, the Council of Chief State School Officers (CCSSO). It was Wilhoit and CCSS “lead writer” David Coleman who asked billionaire Bill Gates and his wife to fund CCSS. Wilhoit is now with the University of Kentucky Center for Innovation in Education (CIE), which Gates paid one million dollars in February 2013 to help launch expressly “to advance implementation of the common core.” Pittenger, also formerly of CCSSO, is with Wilhoit as CEO of CIE.
There is also an extended group of individuals who contributed to the Hewlett-induced, CCSS-centered, statewide accountability discussion. Among them are Michael Cohen of Achieve (Achieve was the nucleus of CCSS development, along with ACT, College Board, and Student Achievement Partners); Carmel Martin of the Center for American Progress (Though decidedly for CCSS because it’s “better,” Martin, who participated in the Intelligence Squared debate on CCSS in New York on September 9, 2014, did not know CCSS was copyrighted.), and Phillip Lovell and Charmaine Mercer, both from the Alliance for Excellent Education (AEE), a group dependent upon the Gates Foundation “for general operating support” to the tune of $6 million since 2012 for that purpose alone, plus multiple millions more since 2003.
Thus, it is no stretch to note that all of these folks are clearly motivated to promote CCSS and its attendant assessments as the center of a massive “accountability” push.
Nevertheless, their Hewlett-funded report includes this disclaimer:
The final product — authored by Linda Darling-Hammond, Gene Wilhoit, and Linda Pittenger — reflects the individual and collective insights of the participants, but it does not reflect an endorsement by any of these individuals or the organizations with which they are affiliated. [Emphasis added.]
Now that’s just over-the-top funny to me: “We took the money, we wrote the report, we are promoting the report publicly, but we aren’t endorsing the report.”
Here’s the knee-slapper: The report is a call to accountability, but the writers and contributors want an exit clause from being held accountable for promoting it.
I must say, before this group of influential individuals rushes ahead to promote the idea that states should construct complete accountability systems around CCSS, it seems that a report holding CCSS and its diehard promoters accountable should be issued.
Indeed, the CCSS assessments haven’t even been administered yet. How about focusing those “generous” philanthropic dollars on a report on the rollout of the PARCC and Smarter Balanced tests?
Or does educational research now bow to what those with the dollars state as “accountability-worthy”?
Let’s back it up even further: The Hewlett-purchased “new accountability” report hinges on the too-oft-repeated premise that CCSS *ensures* college- and career-readiness.
That CCSS “ensures” anything is only opinion.
This truth makes any proposal constructed on a CCSS center nothing more than crackers crumbling.
The October 16, 2014, version of the Hewlett-funded report is over 40 pages long. That’s how many pages it apparently takes to “get the conversation started” on the CCSS-centered, statewide accountability system. The authors state that “considerable discussion and debate will be needed before a new approach can take shape,” yet nowhere do they pause long enough to consider that “discussion and debate” are not justification enough for constructing a CCSS-centered statewide accountability system: Piloting is needed.
Forget all else that those 40-plus pages try to sell (for it is a sale).
Piloting was needed for CCSS, and it never happened. Instead, overly eager governors and state superintendents signed on for an as-of-then, not-yet-created CCSS. No wise caution. Just, “let’s do it!”
That word “urgency” was continuously thrown around, and it makes an appearance in the current, Hewlett-funded report. No time to pilot a finished CCSS product. Simply declare that CCSS was “based on research” and push for implementation.
This is how fools operate.
America has been hearing since 1983 that Our Education System Places Our Nation at Risk. I was 16 years old then. I am now 47.
America is not facing impending collapse.
We do have time to test the likes of CCSS before rushing in.
Darling-Hammond, Wilhoit, and Pittenger, how about an accountability report on CCSS?
Let’s go back a bit more.
How about an accountability report on No Child Left Behind (NCLB) and its strategic placement on a life support that enables former-basketball-playing US Secretary of Education Arne Duncan to hold states hostage to the federal whim?
The Hewlett-funded report notes that between 2000 and 2012, PISA scores have “declined.” Those are chiefly the NCLB years and beyond, with the continued “test-driven reform” focus. It is the test-driven focus that could use a hefty helping of “accountability.”
And let us not forget the NCLB-instituted push for privatization of public education via charters, vouchers, and online “education.” An accountability study on the effects of “market-driven,” under-regulated “reform” upon the quality of American education would prove useful.
There is also the very real push to erase teaching as a profession and replace it with temporary teachers hailing from the amply-funded and -connected teacher temp agency, Teach for America (TFA). A nationwide accountability study on the effects of the teacher revolving door exacerbated by TFA would be a long-overdue first of its kind.
I will leave it to those willing to read the Hewlett-produced 40-plus page report. But know that it serves a practical purpose for those advancing CCSS:
As CCSS continues to falter, fail, and face rejection by both locals and the politicians who realize they are elected by locals, the Hewlett report provides CCSS proponents with the ultimate “faulty implementation” exit:
CCSS failed because it lacked complete system support.
But don’t you buy it.
Tell Hewlett et al. to back it with the first step in a true accountability system:
The pilot test.
Then, leaving “urgency” where it belongs– in the toolbox of manufactured panic– we can calmly and rationally take it from there.
There’s a new blog in town, and it is fortified by millions in corporate-reformer cash in order to spread the message that “the reforms are working.” It’s called Education Post. I first wrote about it in this September 3, 2014, post.
EdPost is keen on US Secretary of Education Arne Duncan. (More discussion on that point in this September 18, 2014, post.) And Duncan is keen on the Common Core State Standards (CCSS), a product for which “lead writer” and non-educator/edupreneur David Coleman is now famous. (Duncan and Coleman go back to at least 2001 and Duncan’s time as Chicago Schools CEO. I detail this in my upcoming book on the history and development of CCSS– due for publication in April 2015.)
In 2007, Coleman started a company called Student Achievement Partners (SAP). In 2011, SAP became a nonprofit. Today, SAP deals solely in CCSS. As for Coleman, in 2012, he rode a non-educator/edupreneur wave right into the presidency of College Board.
EdPost promotes “conversation” that promotes Duncan and Coleman and their pet CCSS.
Let us switch gears for a moment.
Consider a Real Clear Education piece in which two teachers defend CCSS. The piece includes the all-too-familiar CCSS propaganda:
The Common Core State Standards exist to make sure that our students graduate high school ready for and able to attend colleges and universities or enter a career. What students should be able to do is part of a progression, a staircase of understanding and skills that leads from kindergarten to 12th grade graduation. …
The clear, high bar set by the standards ensures that my students are on the path to college and career when they leave my classroom at the end of kindergarten, so that is the bar I aim for.
Sounds great. I like the “staircase” analogy. But it’s the “ensures” part that betrays that this message is propaganda, for CCSS has never been tested. So, anyone stating that CCSS “ensures” anything is peddling fiction.
Now, it happens that the teacher quoted above is taking issue with criticism of the idea that CCSS requires kindergarteners to count to 100. The teacher is fine with the requirement– after all, it is part of non-educator/edupreneur David Coleman’s CCSS– and CCSS is magically guaranteed to set those kindergarteners to climbing the “stairs” of CCSS so that twelve years later, they will be (of course, of course) “college and career ready.”
In her response, this teacher notes, “It is simply not ok with me that educators like [New York principal Carol] Burris choose what is too easy or too hard for their students.”
But apparently it is okay to follow the untested choices of others on the matter.
No mention of what happens to the children who cannot count to 100 by the end of kindergarten. Surely there are some who cannot.
Are they, at the tender age of five, declared to be “behind in college and career readiness”? Are they– or their schools– or their teachers– “failing”?
On October 11, 2014, education historian Diane Ravitch reacted to this “count-to-100″ foolishness on her blog– and her reaction was a strong one. Here is an excerpt:
This is one of the silliest, most embarrassing articles I have read in a very long time. It was allegedly written by two teachers as a rebuke to Carol Burris, the experienced high school principal who has made a hash of Common Core in her many writings for Valerie Strauss’s Answer Sheet in the Washington Post.
The teacher who says she teaches kindergarten wants to make sure that her 5-year-old students are “college-and-career-ready.” Really? So if a 5-year-old can’t count to 100, they won’t have a career or go to college? Surely, she jests.
Has she ever heard of “Defending the Early Years,” an organization of early childhood experts who believe the Common Core standards are indeed developmentally inappropriate? In this article, Professor Nancy Carlsson-Paige says that it is “ridiculous” to expect little children to count to 100. So what if they learn to do it next year or the year after?
I don’t remember counting to 100 by the end of kindergarten. I have a Ph.D. in stats. Neither does my sister, who is an electrical engineer. Kindergarten was not even required when I started school (1972). I do know that I could count to 13 at the beginning of the year, and I did know a classmate who could count to 100 at the beginning of the year.
I remember being shocked to learn that there were more than 13 numbers.
In kindergarten, my nephew tested in the 97th percentile for perceptual reasoning. Furthermore, he could count to 100 by the end of kindergarten. But he had trouble mastering his colors and his alphabet.
He would have flunked the CCSS stairs.
I didn’t even have stairs.
In addition to this “kindergarteners should master counting to 100 because CCSS says so,” Ravitch also took issue with the fact that the teachers in the Real Clear Education article were associated with Coleman’s CCSS-promoting SAP.
EdPost to the rescue.
In an article entitled, “Why Is Diane Ravitch Belittling These Teachers?,” J. Gordon Wright offers a blurb in response to Ravitch. (Wright’s defense really is just a burp of a response. Below is the entire post.)
In a recent blog post, education historian Diane Ravitch belittled two teachers who happen to disagree with her and school principal Carol Burris on the merits of higher standards.
Honest differences of opinion are one thing. But describing these teachers’ words as “silly” and “embarrassing” simply because they don’t share your views should be out of bounds. No matter where you stand on the issues, we can all agree that we need to make it absolutely safe for teachers to express themselves without feeling bullied or humiliated by people with a large public platform.
Ravitch points out concerns from some in the early childhood community about meeting Common Core standards with young children. As a father of young public school students and someone who worked alongside some of the nation’s leading experts in child development, I think an ongoing debate about the best way to raise standards in the early grades is needed and important. But belittling comments stifle rather than support that debate.
It’s also disingenuous for Ravitch or anyone to suggest that someone lacks credibility solely because of an affiliation to a certain organization. Whether you are linked to foundations, unions, or others, what matters is the strength of your argument, not the source of your funds. [Emphasis added.]
Let’s examine the idea of “feeling bullied or humiliated by people with a large public platform.”
In November 2013, in response to opposition to the CCSS he was decidedly promoting, Duncan made the following statement “from a large public platform” and in an obvious effort to scapegoat (“bully”? “humiliate”?) a specific group– and their children.
“It’s fascinating to me that some of the pushback is coming from, sort of, white suburban moms who — all of a sudden — their child isn’t as brilliant as they thought they were and their school isn’t quite as good as they thought they were, and that’s pretty scary,” Duncan said. [Emphasis added.]
But where is J. Gordon Wright’s article entitled, “Why Is Arne Duncan Belittling These White Suburban Mothers and Their Children?”
Nonexistent, you say?
Let’s do another.
David Coleman tailored the CCSS ELA to suit a particular literary analysis called New Criticism, which completely discounts the experiences that readers bring to texts. (For an excellent discussion on this point, see New York professor Daniel Katz’s September 19, 2014, post.) And yes, Coleman’s preference for New Criticism does indeed drive the curriculum associated with CCSS, for it completely ignores another prominent form of literary analysis, Reader Response.
In defense of his preference for the New Criticism-shaped CCSS, in a talk entitled, “Bringing the Common Core to Life,” Coleman once told an audience,
As you grow up in this world you realize that people really don’t give a shit about what you feel or what you think… it is rare in a working environment that someone says, “Johnson I need a market analysis by Friday but before that I need a compelling account of your childhood.” That is rare. [Emphasis added.]
Coleman did apologize to his audience before he said it. But he still said it, and it clearly is an affront to the sensibilities of his audience.
Who cares what you feel, right?
But where is J. Gordon Wright’s article, “Why Is David Coleman Belittling the Personal Experiences of Both His Professional Audience and People in General?”
Nonexistent again, you say?
And again I say, indeed.
In the Real Clear Education article in which the two SAP-affiliated teachers defend CCSS, the second teacher is a high school English teacher. She states that CCSS “is not a curriculum.” However, Coleman has made it clear that *his* CCSS ELA is purposely not associated with Reader Response criticism– which means that CCSS ELA is driving curriculum away from Reader Response criticism. Thus, to state that “teachers have the freedom to choose their own curriculum” is deceptive, for CCSS ELA purposely restricts what curriculum fits it.
The same is true for CCSS math.
CCSS math “chair” Phil Daro admits that CCSS math has been written specifically to drive the math curriculum in (his preferred) direction “for building a new kind of instructional system.” It just so happens that Daro’s “new instructional system” is the one that has produced EngageNY/Eureka Math– a curriculum over which Daro had the last word.
CCSS drives curriculum.
The two SAP teachers aren’t telling the curriculum-driving part of the CCSS story. Perhaps they do not know it. But they should know it before wholeheartedly promoting CCSS as associates of SAP.
EdPost’s J. Gordon Wright craftily dismisses affiliations and funding in his statement,
It’s also disingenuous for Ravitch or anyone to suggest that someone lacks credibility solely because of an affiliation to a certain organization. Whether you are linked to foundations, unions, or others, what matters is the strength of your argument, not the source of your funds.
Ravitch doesn’t state that the two teachers “lacked credibility solely” for affiliation with SAP. Prior to her focus on the teachers’ SAP affiliation, Ravitch first addresses the “kindergarteners should count to 100 because CCSS says so” issue.
That noted, an organization’s “source of funds” certainly does matter. The source of EdPost’s funding is the reason that only the funders’ “side” of that supposed education “conversation” makes an appearance on the EdPost blog.
EdPost clearly promotes CCSS acceptance. That’s their predetermined “conversation.”
As authentic as staged spontaneity.
If the purpose of the EdPost blog is to promote the likes of Duncan and Coleman (which it is), it should at least stop trying to snow the public with the repeated use of the term “conversation.”
Then again, coming clean is a lot to expect of yet another handsomely-funded, top-down propaganda vehicle.
We are in an age of so-called “data-driven” education reform. Numbers and analyses are being worshiped as the end-all, be-all evidence of education quality. Student standardized test scores are at the center of the majority of such analyses. Moreover, in order to “drive” the privatization of public education using quantitative data analysis, corporate-reform-bent philanthropies and businesses are dumping money into “institutes,” groups of often questionably-credentialed individuals who promote attractive reports full of impressive numbers and analyses meant to wow the public into believing that test-driven “reform” is working.
The edge that these institutes (and other corporate-reform-promoting “nonprofits”) have in wielding statistical analyses is that neither the public nor the media is able to critically examine the quality of the work. Therefore, both are susceptible to swallowing whole the institute’s summation of its findings.
After all, if the physical appearance of the report is attractive, and if the report comes from *An Institute*, it must be trustworthy.
The public often does not critically consider the agendas of those financially supporting an institute; it often does not know the qualifications of those producing the reports, and it cannot discern whether the report outcomes amount to little more than a propaganda brochure “finding” in favor of the favored “reforms” of institute donors.
All of this I had in mind as I read the retracted, October 1, 2014, Cowen Institute report, Beating the Odds. On October 10, 2014, Cowen Institute removed the report from its website due to “flawed methodology.”
I wrote about the retraction in this October 10, 2014, post. I did not go into great detail on Cowen’s error because I needed to think of how to communicate it to readers in a way that is not too technical.
I will try to do so in this post.
What Cowen Institute did wrong was a major blunder– the kind that skilled researchers do not make. Cowen Institute’s researchers apparently thought they were conducting a value-added modeling (VAM) analysis. Instead, they conducted a more basic analysis known as multivariate linear regression (MLR)– and even that, they botched.
Not only did Cowen Institute conduct the wrong statistical analysis, and not only did it misuse and misinterpret the more basic analysis it conducted, but Cowen Institute also did not even realize its gross error until a week beyond publication.
I am baffled at how this happened. The incompetence astounds me. When I first read in the Times-Picayune that Cowen Institute had withdrawn the report due to “flawed methodology,” I expected a more sophisticated error. In fact, when I realized that Cowen had conducted (and interpreted, and published) the wrong analysis, I doubted my own senses.
It made me wonder just how crappy the rest of Cowen’s research actually is.
In its flawed study, Cowen Institute stated that it had produced predicted values on three outcome measures (EOC passing rates, an ACT index, and cohort graduation rate) for all Louisiana high schools. The stated goal of the study was to compare predicted outcomes with actual outcomes to determine which schools performed better, worse, or as predicted.
The focus was on actual school performance as compared to a predicted performance. Though many think of VAM in terms of evaluating the teacher based upon students’ test scores, Cowen was attempting to “VAM” the schools based upon student test scores and grad rates.
But the Cowen analysis was not VAM.
Before I proceed, let me note that I am convinced VAM cannot work. In December 2012, I analyzed Louisiana’s 2011 VAM pilot results and explained how erratic (and therefore useless) VAM is. Student test scores cannot measure teacher quality; neither can student test scores measure school quality, and attempting to hold teachers and schools hostage to statistical predictions on their students is lunacy.
VAM should not be connected to any high-stakes evaluation, period.
That noted, allow me to offer a brief word on VAM and MLR.
Both VAM and MLR (the basic analysis that Cowen actually conducted, albeit poorly) assume that lines can be used to capture the relationship among the variables. Thus, both rely upon the basic equation for a line. (Perhaps you remember it from algebra days gone by: y = mx + b.)
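To make the line idea concrete, here is a minimal sketch in Python (using the numpy library and entirely made-up numbers, not Cowen’s data) of fitting a single best-fit line, y = mx + b, to a handful of points:

```python
import numpy as np

# Hypothetical data: five schools' poverty rates (x) and test scores (y).
# These numbers are invented for illustration only.
x = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
y = np.array([85.0, 78.0, 70.0, 64.0, 55.0])

# np.polyfit with degree 1 returns the slope (m) and intercept (b)
# of the least-squares line through the points
m, b = np.polyfit(x, y, 1)
predicted = m * x + b
```

With these invented numbers, the slope comes out negative (higher poverty, lower scores), which is all a fitted line can tell you: a trend across points, not a verdict on any single point.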
One can think of VAM as a more sophisticated version of MLR. VAM has levels of lines to it because it considers layers, such as those evidenced when one considers that students are in classes; classes are in schools, and schools are in districts. One might think of VAM as having equations within equations. In contrast, MLR operates only on one level (i.e., no equations within equations).
The Cowen researchers conducted their analyses on one level, and they used MLR. Their report includes three separate MLRs, one for each of its three outcomes of interest: EOC, and ACT, and grad rates. In an attempt to predict these three outcomes, the researchers used five measures: 1) percentage of students who failed LEAP tests, 2) percentage of students who are over-age for their grade level, 3) percentage of students on free/reduced lunch, 4) percentage of students in special education, and 5) whether the school is a selective admissions school.
Two Cowen researchers thought these three MLRs were VAM.
In order to conduct school-evaluating VAM on the outcomes of EOC, and ACT, and grad rates, the researchers should have incorporated previous measures of EOC, and ACT, and grad rates, into their analysis. Makes sense, doesn’t it? For example, in order to predict future EOC scores for a given school, one must consider previous EOC scores for that school. Yet no such incorporation of previous scores is present in the Cowen study.
Really, really not good.
It gets worse.
Not only did the two Cowen researchers use the wrong analysis; they didn’t even use MLR well. That gets me more than any genuine-yet-botched attempt at actual VAM would have. MLR is an analysis with which anyone with a stats and research background should be familiar, and the Cowen folks botched even that.
When used appropriately, MLR can be used for either of two purposes: to explain or to predict an outcome. If I have a theory about what factors contribute to a certain outcome, I can use MLR to test my theory and determine the degree to which my theory explains some outcome.
MLR for explanation does not evaluate individuals. It actually evaluates the researcher’s theory about what factors contribute to some outcome.
The more common usage of MLR is to predict. I have a friend from college, a fellow stats major, who tried to come up with an MLR equation to predict winners of horse races. (It seems that many stats people dabble in gambling since gambling is all about probability, as is stats.) In coming up with his equation, my friend tried to determine which predictor measures could help him determine which horses would win future races. As such, he wanted a useful equation, one that could predict a future outcome: the winner of a future horse race.
Now this is important: Even if one of the predictors of future wins is some tabulation of past wins, the purpose of the MLR was not to evaluate jockey “effectiveness” based upon the horse’s performance. That would have been a VAM goal (and a futile one, as previously noted). Instead, my friend’s focus was on the usefulness of his equation at predicting the winner. That’s an MLR goal.
Cowen tried to evaluate individual schools based upon an MLR prediction equation. This was wrong to do.
Had the Cowen Institute researchers properly conducted an MLR for prediction, here’s how their study generally might have looked:
First, the research question would have focused on the utility of the MLR equation in predicting future outcomes, not on evaluating the schools.
Second, in order to produce an MLR prediction equation, the researchers should have used at least two random samples: one to develop the equation, and at least one other to test the usefulness of the equation.
Finally, the researchers could have then decided whether to make (or suggest, if not possible to make) adjustments to the equation in an effort to improve prediction, or they could have decided that the equation is satisfactory.
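The develop-then-test approach above can be sketched in a few lines of Python. Again, the data are invented for illustration; the point is that the equation is built on one sample and judged on a different one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Invented predictor and outcome for a pool of hypothetical schools
x = rng.uniform(0, 100, n)
y = 50 + 0.3 * x + rng.normal(0, 10, n)

# Randomly split into a development sample and a held-out test sample
idx = rng.permutation(n)
dev, test = idx[:100], idx[100:]

# Step 1: fit the prediction equation on the development sample only
m, b = np.polyfit(x[dev], y[dev], 1)

# Step 2: judge the equation's usefulness on the held-out sample
pred = m * x[test] + b
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - np.mean(y[test])) ** 2)
r2_test = 1 - ss_res / ss_tot  # out-of-sample R-squared
```

Only after step 2 would a researcher decide (step 3) whether the equation is satisfactory or needs adjustment.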
There you have it.
One should not test a prediction equation using the same sample data one used to arrive at the equation, because the equation has been tailored to fit that sample as closely as possible. Nevertheless, in the case of the Cowen study, it seems that the “actual” outcomes were the very same ones used to arrive at the predictions.
And the three MLR prediction equations had error issues of their own.
A major factor in determining the usefulness of an MLR prediction equation is the proportion of the differences (the variance) in outcome scores accounted for by the equation. This value is called R-squared. A perfect prediction equation would have an R-squared of 1.0. This utopian result would mean that, in development, the prediction equation accounted for all differences in the outcome, and all actual data points fell perfectly on the line of prediction. Such does not happen in reality. However, it is possible for an MLR equation to have an R-squared close to 1.0, such as .98.
The lower the R-squared value, the more unaccounted-for “noise” in the prediction equation, and the less likely the equation will be useful for predicting future outcomes.
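R-squared is simple to compute once one has actual and predicted values. A small Python illustration with invented numbers:

```python
import numpy as np

def r_squared(actual, predicted):
    """Proportion of the variance in `actual` accounted for by `predicted`."""
    ss_res = np.sum((actual - predicted) ** 2)   # unexplained "noise"
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1 - ss_res / ss_tot

actual = np.array([18.0, 20.0, 22.0, 24.0])

# Perfect prediction: every point on the line, no noise
print(r_squared(actual, actual))  # → 1.0

# Noisier prediction: R-squared drops toward 0
noisy = np.array([19.0, 20.5, 21.0, 24.5])
print(r_squared(actual, noisy))   # → 0.875
```

The farther the predictions stray from the actual values, the larger the unexplained “noise” term and the lower the R-squared.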
In their analysis, Cowen Institute reported three values of R-squared, one for each of its three MLR lines: .684 (EOC), .768 (ACT), and .412 (grad rates).
The highest R-squared, .768, means that for the Louisiana high schools in the year that this analysis was conducted, approximately 77% of the differences in ACT scores can be accounted for by the predictor variables that the researchers included in the analysis (mentioned previously: 1) percentage of students who failed LEAP tests, 2) percentage of students who are over-age for their grade level, 3) percentage of students on free/reduced lunch, 4) percentage of students in special education, and 5) whether the school is a selective admissions school).
An R-squared of .768 indicates that approximately 23% of schools’ differences in ACT scores remains unaccounted for by the Cowen MLR prediction equation. Generally speaking, this R-squared is modest. The research focus should have been on reconsidering the five predictor variables in order to increase R-squared and improve the equation.
Improve the equation, not evaluate the individuals in the sample.
The remaining two R-squared values (of .684 and .412) are not as impressive, with .412 being, in fact, useless. (An R-squared of .412 means that the MLR equation is mostly “noise.” A waste.)
Researcher discussion should have been on the utility of their equations at accurately predicting future EOC scores, or ACT index results, or cohort graduation rates– not on evaluating actual results, present or future.
I shake my head.
A word to “choice” promoters wishing to showcase their product using VAM: Do not do what Cowen did.
The clouds of individual data points around the three MLR lines in the Cowen report (pages 15 – 17) are not to be used to evaluate the points above the line as “better” and those below as “worse.” No, no. Those clouds are to be used to evaluate the MLR lines themselves as modestly- to poorly-fitting.
In developing an MLR prediction equation, the better the line, the more tightly the data points cluster around it– fewer points far from the line, and those off the line sitting nearer to it.
Again, this has nothing to do with evaluating individual data points (remember, the data points here represent Louisiana high schools).
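A quick Python demonstration of why points above and below the line are an artifact, not an evaluation: when a least-squares line with an intercept is fit to any cloud of points (invented data below), the residuals (actual minus predicted) sum to essentially zero, so some points are guaranteed to land above the line and some below:

```python
import numpy as np

rng = np.random.default_rng(2)

# Any invented cloud of points will do
x = rng.uniform(0, 100, 50)
y = 60 + 0.2 * x + rng.normal(0, 8, 50)

# Fit the least-squares "line of best fit"
m, b = np.polyfit(x, y, 1)
residuals = y - (m * x + b)

# With an intercept, least-squares residuals sum to (essentially) zero,
# so some points must sit above the line and some below, by construction
above = int(np.sum(residuals > 0))
below = int(np.sum(residuals < 0))
print(above, below, round(float(residuals.sum()), 6))
```

Calling the above-the-line points “beating the odds” is like praising the half of a class that scored above the class average: someone has to.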
This Cowen report could be a case study in bad research on several levels, not the least of which is the researchers conducted the wrong analysis– research dysfunction at its finest.
I think I have written enough.
The Cowen Institute at Tulane University has been promoting the New Orleans Charter Miracle since 2007. Cowen Institute has been trying since then to sell the “transformed” post-Katrina education system in New Orleans.
The results are tepid. Still, Cowen tries to sell this New Orleans “transformation.” Consider this excerpt from Cowen’s history:
[Following Hurricane Katrina] the majority of schools reopened as charter schools, which are publicly-funded and operated by nonprofit organizations or universities, giving New Orleans a greater percentage of students in charter schools than any other district in the United States. Education entrepreneurs and veteran educators from around the country flocked to the city to participate in the greatest public school renaissance in the country. …
…the new model of delivering education to the city’s youth has begun to yield results. Parental involvement, teacher quality, and community engagement have all improved. Between the 2006-07 and the 2007-08 school years, student achievement rose for nearly every school in the city – and across all school types. Overall, the schools collectively saw a 15 percent increase in school performance scores from 2005-2008. Even so, New Orleans still ranks 65th out of 68 school districts in Louisiana, a state which has some of the lowest public school achievement levels in the country. While public schools in New Orleans are still performing at a level far below where they need to be, the improvements they have shown since Hurricane Katrina is very promising. New Orleans, once ranked as one of the worst school districts in the country, has the potential to develop a model for unprecedented innovation in public education. [Emphasis added.]
I’m sorry, but “65th out of 68 school districts” is hardly a “promising,” “innovative” “renaissance.”
Still, Cowen has been producing its reports with a mind to “chronicle the transformation of public education in New Orleans” in order that it might be “utilized by key stakeholders across the New Orleans area, the state, and the entire country.”
Remember, the New Orleans “miracle” was meant to be reproduced, allowing other “choice” districts to become 65th out of 68 in their states. (Tongue in cheek, of course.)
In its efforts to feature New Orleans charter success at “beating the odds,” Cowen took upon itself the task of “VAMing” Louisiana’s schools. VAM (value-added modeling) is a statistical procedure that attempts to predict test score outcomes that (in this case) schools “should” achieve if they are to be considered “on target.”
It is no secret that VAM does not work. To read more about the problems with trying to use VAM to “predict” education “success,” read Dr. Audrey Amrein-Beardsley’s blog, Vamboozled!
For whatever reason– perhaps because VAM is test-driven-reform “vogue”– Cowen tried to use it, anyway.
According to Cowen’s supposed “VAM” report, released on October 1, 2014, a number of the New Orleans charters are doing a stellar job– if one considers “stellar” to mean surpassing so-called “VAM” predictions that Cowen created. Not exactly practical, real-world-translatable “success,” but instead, a theoretical hand-clap of, “Hey, look at us, we’ve improved!”
And yet, before the New Orleans Miracle celebration had time to die down, Cowen embarrassingly removed the VAM report from its website; ever so briefly stated that it was removed due to “flawed methodology” and “inaccurate conclusions,” and added, “The report will not be reissued.”
That must have been some error.
And indeed, it was.
Without getting too technical, what it looks like Cowen is calling VAM is nothing more than three basic multiple linear regression models fitted to state data to produce a “line of best fit” for the cloud of data points of each of the three regression models. Here’s the problem: In fitting a “best” line to a cloud of data points, some points must be above the line and some, below.
Therefore, those charters “exceeding” their prediction are nothing more than an artifact of the analysis.
This is not even VAM.
Cowen featured the report for over a week, during which time some charter sales notables sang about it as evidence that Choice Is Working. As the October 10, 2014, Times-Picayune reports:
Before the retraction, the report had leaped to prominence in some New Orleans education circles, touted by everyone from state Recovery School District Superintendent Patrick Dobard to Leslie Jacobs, a former member of the Louisiana Board of Elementary and Secondary Education and a driving force behind the state’s education reforms of the past 20 years. It also was highlighted via social media by leaders of the charter management organization Collegiate Academies, which runs Carver Prep and Carver Collegiate, and of the nonprofit charter support group New Schools for New Orleans.
(Leslie Jacobs’ celebratory response can be found here.)
Alas, however, the celebration is over. All guests have been asked to leave, and Cowen has removed its report.
But I still have it, cited above.
I think it might make an interesting case study for a number of researchers to whom Louisiana Superintendent John White and his department of education deny access to data.
And, of course, Amrein-Beardsley might like to give it an expert once-over.
It seems such a shame to just erase it… you know, like the pro-“choice” folks are trying to do with traditional public education….
Let’s talk the Common Core State Standards (CCSS) copyright. CCSS is owned by the National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO). It’s theirs to do with whatever they wish. So saith the copyright– which they have already modified once.
NGA and CCSSO can do what they wish with the CCSS license and not be held responsible for any outcome:
THE COMMON CORE STATE STANDARDS ARE PROVIDED AS-IS AND WITH ALL FAULTS, AND NGA CENTER/CCSSO MAKE NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. [Emphasis added.]
So saith the copyright.
There is also the CCSS memorandum of understanding (MOU), drafted by NGA and CCSSO and signed by most governors and state superintendents in 2009. Now, CCSSO CEO Chris Minnich, a political science major who was once employed by Harcourt (now Pearson), stated on October 8, 2014, in Real Clear Education, that the CCSS MOU was void as of 2010. However, though it was signed in 2009, the CCSS MOU includes information on CCSS usage, as well as future revision and expected public promotion. US Secretary of Education Arne Duncan has even accepted the CCSS MOU beyond 2010 as evidence of state agreement to abide by CCSS as part of the Race to the Top (RTTT) application’s “common standards” component. Thus, the CCSS MOU was not only for “development,” as evidenced by both its content and its usage.
But back to another NGA/CCSSO document– the CCSS copyright.
Minnich says that the copyrighting of CCSS was done “to protect the states.”
But “the states” are not the owners of CCSS. Two organizations are: NGA and CCSSO:
NGA Center/CCSSO shall be acknowledged as the sole owners and developers of the Common Core State Standards, and no claims to the contrary shall be made. [Emphasis added.]
Minnich says that since NGA and CCSSO are “ultimately run by [their] membership,” “the states” own the CCSS copyright. The problem with such a statement is that it assumes that all NGA and CCSSO state “members” want CCSS. State leadership changes. Some state leaders’ opinions about CCSS have changed. But NGA and CCSSO remain the “sole owners.”
And no claims to the contrary shall be made.
Let’s talk money. Minnich states that neither NGA nor CCSSO is profiting from the copyright:
We (CCSSO) also don’t make any money off the copyright, neither does the NGA. We don’t charge licensing fees. Individuals are free to use the standards as long they tell us they’re using them.
Let’s just set aside the fact that NGA has accepted over $2 million, and CCSSO over $17 million, from billionaire Bill Gates alone to “implement” CCSS– though the “making money” part is definitely present for both NGA and CCSSO. Sure, the money is not from licensing fees. Not right now.
But it could be, if and when NGA and CCSSO so choose.
According to the CCSS license,
NGA Center and CCSSO reserve the right to release the Common Core State Standards under different license terms….
But let’s move beyond even the possibility of NGA and CCSSO charging a fee for CCSS usage. Let’s talk sale. NGA and CCSSO can alter “licensing terms”– and outright sale of CCSS would certainly qualify as “releasing” CCSS “under different license terms.”
After all, they are the sole owners.
In short, two organizations–NGA and CCSSO– hold all of the CCSS cards.
The power that NGA and CCSSO wield via the CCSS copyright is not an issue that Minnich wants the public to thoughtfully consider. On the contrary, Minnich is working hard to promote the image of NGA and CCSSO as “protectors of the states” via the CCSS copyright:
…The biggest thing for us is to protect the states. If the standards weren’t copyrighted, a publisher could’ve taken the standards and sold them back to the states. We did this on behalf of the state agencies and governors…. [Emphasis added.]
What Minnich fails to acknowledge is that NGA and CCSSO could do the very same: Sell CCSS to a third party– a publisher that could take the standards and sell them back to the states.
A publisher already deeply invested in cornering the CCSS market– like, say, Pearson.
Minnich mentions hoping for some of that “market force” that Gates is so fond of promoting. In his spin, Minnich tries to promote the idea that CCSS standardization is good for smaller publishers– presumably to turn those smaller publishers into bigger publishers:
One of the things we were hoping to do with the standards was open up the market to more publishers. Having a common set of standards across the country allows smaller publishers to come to the forefront.
Before the Common Core, each publisher had to produce 50 different sets of textbooks for 50 different standards – one for each state. The bigger publishers had the capital and resources to do that – to write to the different standards for different states, the smaller publishers really couldn’t. The smaller publisher would probably only be able to write to one, and hope that the other states would buy them. Now, given the more common standards, we would hope the smaller publisher would be able to get into that market and really increase the innovation that’s going on with materials.
The problem with this “smaller publishers to the forefront” idea is that the bigger publishers– like the Pearson from which Minnich happens to hail– are already big. They are the Walmarts among the mom-and-pop corner stores.
We know what happens when a Walmart moves in on mom-and-pop territory.
Goodbye mom. Goodbye pop.
And the terms of that CCSS copyright– terms that clearly benefit NGA and CCSSO and not individual states– would allow NGA and CCSSO to make the sale, if ever and whenever they wish.