
Louisiana’s VAM: Quantitative Bungling on Display

May 23, 2019

This post is about value-added assessment (VAA), also called value-added modeling (VAM). I saw that Louisiana’s “father of VAM,” LSU psychology professor George Noell, and others had recently published a VAM reflection related to VAM usage with Louisiana’s teacher preparation programs (TPPs), entitled, “Linking Student Achievement to Teacher Preparation: Emergent Challenges in Implementing Value Added Assessment,” and I just had to write about it.

(You can read the article for free by signing up for a 14-day trial here; meanwhile, I have contacted the publisher for permission to link to the full article. Stay tuned.)

Notice that the Noell et al. title includes the carefully selected term, “linking,” because it is a tricky game to establish that VAM proves causation and not just correlation. However, the big buzz about VAM usage in education is that VAM is often reverently consulted in decisions about the professional fates of teachers, schools, and TPPs.

VAM is used to judge, and in those judgments, the judges assume that the teacher, or the school, or the teacher prep program “caused” some associated VAM score. If the VAM score is deemed pleasing, then good for you, teacher, school, or TPP.

If not, well, you better fix whatever needs fixing (though VAM is not precise enough to inform on this point) or you could be *correlated* right out of professional existence.

The first piece I wrote as a public education advocate was this 2012 “VAM Explanation for Legislators.” I did so at the behest of a fellow Louisiana teacher and advocate, who asked if I would write something that our Louisiana legislators could understand, “on the eighth-grade level.”

Not sure if I hit the appropriate grade-level readability, but I did rip the erratic results of Louisiana’s VAM pilot study published in February 2011 by George Noell and Beth Gleason.

Interestingly, Noell was no longer associated with the project and had concerns about the outcome, but he was keeping quiet about it publicly.

In 2015, Noell resurfaced to promote VAM for the Louisiana Department of Education (LDOE), and I ripped Louisiana’s VAM once again in this January 11, 2015, post.

In this post, I do not delve deeply into the shortcomings of VAM as a measure of teacher or school or TPP “value” in student learning as captured in the narrow space of standardized test scores; I simply focus on two truths about the limits of VAM, limits that Noell et al. illustrate in their 2019 article about VAMming TPPs:

  • VAM outcomes connecting student test scores to teachers or TPPs do not establish causation, though the VAM name implies that they do, and
  • VAM lacks the precision to advise those affected by it on viable next steps.

First, as to causation: The very name, “value added,” implies causation, with the resulting outcome some number supposedly demonstrating how much “value” was “added” to a teaching candidate’s student test scores by the TPP, or to the student test scores by their teacher. However, as the American Statistical Association (ASA) notes, “VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes.”
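For readers who like to see the gears, here is a minimal sketch of the kind of regression that typically sits behind a “value added” score: current test scores regressed on prior scores plus a teacher indicator. This is my own toy illustration, not Noell et al.’s actual model, and every variable name and number in it is invented. The teacher coefficient that falls out is the “value added,” and it is a conditional correlation, nothing more.

```python
# Toy sketch of a covariate-adjusted value-added model (illustrative only).
# A typical VAM regresses current test scores on prior scores plus teacher
# indicators; the teacher coefficient is the "value added." Nothing in the
# math establishes that the teacher *caused* that difference.
import numpy as np

rng = np.random.default_rng(0)

n = 200  # students, split evenly between two hypothetical teachers
prior = rng.normal(50, 10, n)            # prior-year test scores
teacher_b = np.repeat([0, 1], n // 2)    # 0 = Teacher A, 1 = Teacher B

# Simulated "truth": scores depend on prior achievement plus unmeasured
# factors (home support, class composition) that also differ by teacher.
unmeasured = 2.0 * teacher_b + rng.normal(0, 5, n)
current = 5 + 0.9 * prior + unmeasured

# Fit: current ~ intercept + prior + teacher_b (ordinary least squares)
X = np.column_stack([np.ones(n), prior, teacher_b])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

print(f"Estimated 'value added' for Teacher B: {coef[2]:.2f} points")
# The model dutifully reports a number, but in this toy setup the entire
# "effect" came from unmeasured factors correlated with teacher assignment:
# correlation dressed up as "added value."
```

Run it and the model happily assigns Teacher B a positive “value added,” even though, by construction, none of that difference was anything the teacher did.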

Why conduct VAM if the results are not intended to get those judged by it to “do” something to try to “improve” future VAM results? Indeed, the underlying belief that those VAMmed are being directly measured using student test scores, and are therefore responsible for taking action to improve future VAM outcomes, is undeniable.

Second, among the Noell et al. VAM disclaimers is the lack of VAM information detailed enough to inform those judged by VAM outcomes.

From Noell et al. (and let us throw in a causation disclaimer, while we’re at it):

As these TPPs (the ones with low VAM aka VAA outcomes) initiated their self-studies, they requested a number of detailed subgroup reports about their data. … As data were split more finely, results were inevitably less stable, but some programs did identify patterns that they found suggestive… For example, one program identified that students taught by their graduates were performing poorly on essay assignments while also performing relatively strongly on assessments of usage, spelling, writing conventions…. Although these types of analyses cannot provide causal guidance, they may be valuable in connecting teacher educators to what happens once their graduates leave the program. (Emphasis added.) …

The initial reporting of VAA results made clear that VAA-TPP data were sufficient to motivate change and that they lacked detail to guide what needed to be changed or how to accomplish that change. Our experience with the programs that wanted to make changes to improve VAA results hammered home the importance of communicating clearly about what VAA cannot do and being willing to support teacher education leaders as they begin exploring potential solutions for poor VAA results. …

…These subgroup, descriptive and subscale reports may be slicing the data so thinly that it can lead program faculty to begin contemplating programmatic changes in response to transient phenomena and chance variation. Follow-up reports should be accompanied by appropriate cautions regarding their use.
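Noell et al.’s caution about thin slicing is easy to demonstrate with a toy simulation (again, mine, not theirs, with invented numbers): draw subgroups of shrinking size from one and the same population of scores, and the subgroup averages swing more and more wildly from nothing but chance.

```python
# Quick demonstration of why thinly sliced subgroup reports are unstable:
# smaller samples from the SAME population produce wildly varying averages.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(50, 10, 100_000)  # one big pool of identical "students"

for size in (500, 50, 5):
    # Draw 1,000 subgroups of this size and look at the spread of their means.
    means = [rng.choice(population, size).mean() for _ in range(1000)]
    print(f"subgroup size {size:>3}: means range from "
          f"{min(means):.1f} to {max(means):.1f}")
# Every subgroup comes from the same population, yet the smallest slices
# show "patterns" that are nothing but chance variation.
```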

*VAM has its limits. Not our fault.*

And on top of this, a reiteration of the Big May-be:

The data suggest that TPP-VAA may be responsive to programmatic changes in the TPP. (Emphasis added.)

VAM is high-stakes, yes, and those affected by it must fish in the dark for how to improve VAM scores, but, *disclaimer*, We Who Reside on the Untouchable Side of VAM cannot be sure if their efforts (and the time, money, and other disruption behind such efforts) actually did anything to influence subsequent VAM outcomes.

And there is more:

Noell et al. note that the cycle of feedback (from fishing-in-dark program alterations to VAM score result) may take years.

This just keeps getting better and better.

Well, TPPs, VAM can damn you, but it cannot offer information precise enough to assuredly advise you on the point of the VAM game: improving future VAM scores. Moreover, “results cannot prove” (direct quote) that changes made in the name of attempting to un-damn your VAM score actually impacted your VAM result.

AND (of course, of course) you may not see any influence on VAM (for which we’ve already read the disclaimer) for YEARS.

Well, then. I have a word for this entire process:

Asinine.

[Image: a jackass. Caption: The embodiment of VAM]


4 Comments
  1. Harlan Underhill permalink

Granted that a VAM score based on comparing a standardized test score from one year to a standardized test from the next is shaky ground on which to base salary and promotion decisions, even if a class of kids is made up of the same identical persons the second year as the first, and granted even more so that changes to teacher preparation programs based on VAM scores of their graduates are even more tenuous, BUT that still doesn’t dispose of the common-sense notion that a teacher who knows more SHOULD do her students more good over a year than one who knows less, is less competent at classroom management, or is weakly prepared in various topics of knowledge or pedagogy. We know from anecdote that some teachers simply are better than some others. If so, why can’t those differences be validly measured? What is your answer to the question? Or do you completely reject the premise that student performance can be affected by teacher performance and that student performance can be assessed by statistically valid and reliable tests?

    • Laura H. Chapman permalink

      I wonder what you might propose as a visual arts test for grade four students that can be managed by a competent, highly experienced teacher of art who has a master’s degree and who sees about 800 students, all fourth graders, once every week for about 50 minutes.

      You frame the question about testing as if the primary responsibility of the teacher is the transmission of knowledge in forms that can be tested with relative ease.

      I am not opposed to all forms of testing. I worked on the very first NAEP tests in the Visual Arts and studied the results of tests…administered about once in every decade and now only in grade 8 when many students are not even enrolled in art classes. These tests reveal the role of family wealth in access to varieties of art experience outside of school in addition to the bearing of time allocations for in-school instruction (less significant).

      National and state-wide tests have been reified, and VAM, even if finally killed as it should be, has left a lot of collateral damage.

  2. Reblogged this on What's Gneiss for Education and commented:
    Thank you, Mercedes, for your hard work.
