
Louisiana School Grades: Adjusting the Farce for Student Growth

June 6, 2021

Louisiana state superintendent Cade Brumley has proposed increasing the impact of “student growth” upon Louisiana’s school letter grades.

The expectation based upon computer simulation is a 50-percent cut in the number of schools that would otherwise receive D and F letter grades.

The June 4, 2021, Advocate first reported on Brumley’s efforts to increase the student growth proportion from 25 percent of a school’s grade to at least 38 percent, which the Advocate reports is more in keeping with national school grading trends.

However, the Advocate misleads readers with its definition of student growth, one that makes student growth appear independent of student test scores and even of test-score comparisons with peers. From the Advocate:

Since 2017 student growth – whether students meet learning targets regardless of test scores and how they compare to their peers – has accounted for 25% of the score.

On the contrary, student growth is test-score dependent at its core, and growth determinations also depend upon peer comparisons, as noted in the Louisiana Department of Education (LDOE) document regarding use of student growth as a school performance component beginning in 2017-18:

From the Louisiana Department of Education (LDOE) website (linked above):

Beginning in 2017-2018, for school performance scores, growth of students will be measured in two ways – Value-Added Model and Growth to Mastery.

Growth to Mastery measures the distance between the student’s current ELA or math scaled score, and the scaled score required to achieve Mastery by grade 8 in elementary/middle schools (750) and by the second high school LEAP assessment in high schools.

For students who are already at Mastery, Continued Growth measures the distance between the student’s current ELA or math scaled score and the scaled score required to achieve Advanced on the same timelines listed above. …

(At end of doc) If a student does not achieve the Continued Growth target, the school is awarded points based on the student’s performance compared to similar peers.

The Value-Added Model (VAM) measures students’ success compared to similar peers year to year. … The VAM anticipates how well students will perform on the test in comparison to their peers with similar prior test scores and background.

So, let’s be clear: Louisiana’s school grading, including the student growth component, is heavily test-score dependent and comparison-dependent.

According to the 2019-20 formula, test scores account for 90 to 95 percent of the K8 school grade and 45 percent of the high school grade. (Fifty percent of high school grades are tied to graduation rates and diploma types; all K12 schools derive 5 percent of the school score from an “interests and opportunities” component.)
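The high school weighting above is simple enough to sketch in a few lines of code. To be clear, the component indexes below are hypothetical placeholders of my own; the actual LDOE point scales are not reproduced in this post.

```python
# Sketch of the 2019-20 high school weighting described above:
# 45% test scores, 50% graduation rates/diploma types, 5% "interests
# and opportunities." Component indexes (0-100 here) are illustrative.

def hs_school_score(test_index, grad_index, interests_index):
    """Combine the three high school components using the weights
    reported in the post."""
    return 0.45 * test_index + 0.50 * grad_index + 0.05 * interests_index

# A strong test component cannot rescue a weak graduation component:
print(hs_school_score(90, 60, 100))  # 75.5
```

The point of the sketch: half of a high school’s grade never touches the growth component at all, which bounds how much any growth reweighting can move high school grades.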

Increasing the student growth component will contribute to moving school grades toward the center. Lower grades (F and D) will “improve,” and higher grades will “decline” unless some sort of handicap is instituted to account for scoring ceilings (i.e., the point at which “improvement” becomes either practically negligible or numerically impossible).

The Advocate article includes no reference to the impact upon higher school grades if the student growth component is allotted greater weight:

Kathy Noel, deputy assistant superintendent for assessments, accountability and analytics, said Louisiana falls into the lower quartile of states in how much credit they give students for yearly academic gains.

Noel said simulations show about 50% of the state’s D- and F-rated public schools would improve a letter grade under the new ratings. A total of 23% of public schools were rated D or F in 2019, the latest snapshot.

I emailed Noel and asked about simulation results related to the student growth change upon higher-scoring schools (A, B, and C). I will add any response she sends to this post. (UPDATE: Noel response on June 07, 2021: “Good morning.  I have attached a summary of the simulations across all letter grades.  There is not a substantial change for A schools.  Minor improvement is noted in B and C schools.” Link will not load.)

Another issue: In this whole scheme of district/school/teacher grading, little thought (and no accounting) is given to the concept of measurement error; test results are wrongly assumed to be exact, indisputable, spot-on, end of story. Consider this example from Louisiana’s “studentgrowth” link (see above) concerning how VAM works, and how the example stops short of any discussion of measurement error:

VAM is a prediction of how a student is expected to score on state assessments for the current year relative to the student’s peers including prior achievement and demographics.

The actual score for each student is compared to the expected score to determine if he or she has made more, less, or an expected amount of progress. The following example illustrates how these variables would apply to a student.

Suzy scored Approaching Basic in ELA each of the past three years with no grade retention. As a result, she is expected to score Approaching Basic (719) this year.

Suzy has a speech/language disability. All students with speech/language disabilities scored, on average, 1.5 points below their peers. Thus, her expected score is reduced to 717.5.

Suzy missed ten days of school. All students missing ten days of school scored, on average, 1.5 points below their peers. Thus, her expected score is further adjusted to 716.

No other influencing VAM characteristics apply to Suzy, so they do not impact her expected score.

Suzy’s expected score is simply the average implied by the limited set of characteristics included in her VAM calculation, with no means of accounting for individual uniqueness (Suzy is now an “average”) and no adjustment for measurement error. Suzy is expected to score 716, not 715. If she doesn’t “grow” to specs, her school’s grade is negatively impacted.
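The VAM arithmetic in the Suzy example boils down to a baseline plus a handful of average adjustments. A minimal sketch, using the 719 baseline and the two 1.5-point deductions from the LDOE example above (the characteristic labels are illustrative):

```python
# The Suzy example, as arithmetic: expected score = peer-group baseline
# plus each applicable average adjustment for student characteristics.
# Note what is absent: any error band around the result.

def vam_expected_score(baseline, adjustments):
    """Return the VAM expected score as a single point value."""
    return baseline + sum(adjustments)

suzy_adjustments = {
    "speech/language disability": -1.5,  # peers with this trait averaged 1.5 lower
    "missed ten days of school": -1.5,   # peers missing ten days averaged 1.5 lower
}

print(vam_expected_score(719, suzy_adjustments.values()))  # 716.0
```

The output is a bare 716.0, treated as exact; a score of 715 counts against the school even though no confidence interval was ever computed.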

Next, consider how Growth to Mastery likewise includes no acknowledgement of measurement error. It is a “simple calculation” and a timeline for reaching the specific score that constitutes Mastery:

Growth to Mastery is a simple calculation of the scaled score points a student needs to improve each year to reach Mastery status by the grade 8 or second high school assessments.

It consists of the prior year performance, the distance to Mastery, and the number of years left to grade 8 or the second high school assessments. This measure is known in advance. …

For students scoring Mastery the previous year… if a student does not achieve the Continued Growth target, the school is awarded points based on the student’s performance compared to similar peers. …

For students scoring Advanced (the highest possible rating) in the prior year… if the student drops to the Mastery level or below, the school is awarded points based on the student’s performance compared to similar peers.

Divide the number of points needed to reach Mastery by the number of years left to do so, and voilà! We know what that kid needs to score per year. No individual considerations involved. By the grade 8 or second high school assessment date, reach Mastery. Score a point below, that’s not to specs, and the school score is negatively affected.
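That “simple calculation” really is this simple. In the sketch below, the grade-5 student and the 710 starting score are hypothetical; only the 750 Mastery cut score (by grade 8 in elementary/middle schools) comes from the LDOE document quoted above.

```python
# Growth to Mastery as described above: divide the remaining distance
# to the Mastery cut score by the number of assessments left. Again,
# no allowance for measurement error or individual circumstance.

def annual_growth_target(current_score, mastery_score, years_left):
    """Scaled-score points a student must gain per year to reach
    Mastery on schedule."""
    return (mastery_score - current_score) / years_left

# A hypothetical grade-5 student scoring 710, with three assessments
# remaining before the grade 8 test (Mastery cut score 750):
print(annual_growth_target(710, 750, 3))  # ~13.33 points per year
```

Everything in that division is known in advance, which is why the LDOE calls the measure predictable; it is also why a one-point miss reads as failure rather than as noise.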

Growth points are awarded based on how a student compares to “similar peers.”

Chin up, though: If everyone declines in concert, the school can get some points out of it– and that appears to be the magic of student growth in bolstering the F or D school grade.

Grading schools is a bad idea for many reasons. The following December 10, 2010, Associated Press opinion on Louisiana school grades includes a number of those reasons. The piece also illustrates problems with comparing grades across the years, since the grading criteria alluded to in this 2010 article are obsolete:

Analysis: Letter grades for Louisiana schools too complex

BATON ROUGE — Letter grades seemed so simple in school. When your teacher gave you an “A,” it meant you did really great work, a “C” was average and an “F” meant you had failed at the tasks on which you were being judged.

Leave it to Louisiana’s education leaders to make a direct letter grading scale so messy and confusing. Of course, state lawmakers and Gov. Bobby Jindal had a hand in the problem as well, suggesting that it’s a straightforward task to grade Louisiana’s public schools – and providing little guidance on how it should be done.

It seems like a great idea: Assign a letter grade from “A” to “F” for the nearly 1,300 schools so parents can understand what type of education their children are receiving.

“People can relate to grades,” said Penny Dastugue, president of the Louisiana Board of Elementary and Secondary Education.

The idea is where simplicity ends, however.

The road to BESE determining how to assign grades got wrapped up in issues of poverty, performance improvements and the other struggles that face school superintendents, principals and teachers every day.

Should a school be rewarded for how much it improved its students’ achievement rates and given a better grade even if its overall results still show a large percentage of students performing below their grade level and the state’s standards?

Is it fair for a school in a poor neighborhood, where many students don’t have parental support and don’t get basic reading training before they enter school, to be graded against a school in a wealthier neighborhood where more students start off with greater advantages?

If you curve the system, will it really provide any useful information to parents and will it meet the intent of what lawmakers and the governor wanted out of the grading scale?

Does a letter grading system in some cases discount the strides a school is making or the hard work its teachers are doing? Could it damage morale and make it harder for a lower-graded school to attract strong teachers and education leaders to help improve it?

BESE wrangled with those difficult questions before backing a grading scale Thursday, in a 6-4 vote. In the end, the board went with a tougher letter-grading system than was proposed by district superintendents, a panel of educators and Superintendent of Education Paul Pastorek.

The first letter grades will be assigned to schools in October 2011, when the latest school performance scores are released by the state education department. The letters will replace a previous grading system that had involved a series of stars.

Schools will be graded “A” through “F” based on the performance score they receive in the state accountability system, which considers student standardized test scores, attendance rates and dropout rates.

A “plus” will be added to the grade if a school meets its annual improvement goal, while a “minus” will be added if the school’s performance declines.

Pastorek wanted a different structure that would give schools that improved their growth score a letter boost, but BESE members objected. School district superintendents wanted a more generous grading scale than what got approved.

Dastugue acknowledged that even the revised, tougher scale “is a pretty generous curve,” though she also called it reasonable and balanced.

Schools that receive an “A” can have as many as 12 percent of their students performing below “basic,” or below grade level. Schools with a “B” can have as many as 23 percent below grade level, those with a “C” up to 36 percent, and those with a “D” up to 61 percent.

In other words, at a “C” level school, one in three students can fall behind the state standard of where they should be performing. Is that average? Is a “D” school where 6 in 10 students aren’t performing at their grade level really a passing school?

To know what the grades really mean, parents will still have to do a bit of homework.

Grading schools is a bad idea for many reasons, not the least of which is the instability of grading criteria across time. And let’s be realistic: If people have to “do homework” to try to understand “what the grades really mean,” then school letter grade “simplicity” is a farce, and the grading needs to go.

________________________________________________________________

No time like the present to sharpen your digital research skills!  See my latest book, A Practical Guide to Digital Research: Getting the Facts and Rejecting the Lies, available for purchase on Amazon and via Garn Press!

Follow me on Twitter @deutsch29blog

5 Comments
  1. I call this TABTATIT —
    The Anything But Teaching Approach To Improving Teaching

  2. Christine Langhoff permalink

    What an utter farce.

  3. Measurement error???

    Really? How can there be error in something that does not exist? There is no measurement being done in standardized testing.

  4. Dang, hit the post button before complete. As Paul Harvey would say: “Here’s the rest of the story!”

    The most misleading concept/term in education is “measuring student achievement” or “measuring student learning”. The concept has been misleading educators into deluding themselves that the teaching and learning process can be analyzed/assessed using “scientific” methods which are actually pseudo-scientific at best and at worst a complete bastardization of rationo-logical thinking and language usage.
    There never has been and never will be any “measuring” of the teaching and learning process and what each individual student learns in their schooling. There is and always has been assessing, evaluating, judging of what students learn but never a true “measuring” of it.
    But, but, but, you’re trying to tell me that the supposedly august and venerable APA, AERA and/or the NCME have been wrong for more than the last 50 years, disseminating falsehoods and chimeras??
    Who are you to question the authorities in testing???
    Yes, they have been wrong and I (and many others, Wilson, Hoffman etc. . . ) question those authorities and challenge them (or any of you other advocates of the malpractices that are standards and testing) to answer to the following onto-epistemological analysis:
    The TESTS MEASURE NOTHING, quite literally when you realize what is actually happening with them. Richard Phelps, a staunch standardized test proponent (he has written at least two books defending the standardized testing malpractices) in the introduction to “Correcting Fallacies About Educational and Psychological Testing” unwittingly lets the cat out of the bag with this statement:
    “Physical tests, such as those conducted by engineers, can be standardized, of course [why of course of course], but in this volume , we focus on the measurement of latent (i.e., nonobservable) mental, and not physical, traits.” [my addition]
    Notice how he is trying to assert by proximity that educational standardized testing and the testing done by engineers are basically the same, in other words a “truly scientific endeavor”. The same by proximity is not a good rhetorical/debating technique.
    Since there is no agreement on a standard unit of learning, there is no exemplar of that standard unit and there is no measuring device calibrated against said non-existent standard unit, how is it possible to “measure the nonobservable”?
    THE TESTS MEASURE NOTHING for how is it possible to “measure” the nonobservable with a non-existing measuring device that is not calibrated against a non-existing standard unit of learning?????
    PURE LOGICAL INSANITY!
    The basic fallacy here is confusing and conflating metrological measuring (metrology is the scientific study of measurement) with measuring that connotes assessing, evaluating and judging. The two meanings are not the same, and confusing and conflating them is a very easy way to make it appear that standards and standardized testing are “scientific endeavors”-objective and not subjective like assessing, evaluating and judging.
    These supposedly objective results are used to justify discrimination against many students for their life circumstances and inherent intellectual traits.

  5. The way it is used in psychometrics, measurement error is nothing more than mental masturbation.
