Second Letter to White/BESE Regarding Scoring Bias
My second effort to reach White/BESE regarding scoring bias. DOE employee Jennifer Baird sent an email that did not address the issue but instead defended DOE:
December 1, 2012
Mr. White and BESE Board Members:
In response to my letter dated November 21 concerning scoring bias in the 2012 school performance scores, Mr. White had Dr. Jennifer Baird send to me the email I forwarded to you with this document. I am not sure if Mr. White blind-copied it to you, so I have attached it here to be sure you have read it.
Dr. Baird’s email amounts to a flimsy attempt by DOE to justify the bias rather than confront it. It is as though Mr. White said, “I know: I’ll have someone with a Ph.D. write to Dr. Schneider and tell her that we’re right.” In my letter dated November 21, I presented thorough and undeniable evidence of scoring bias. Mr. White’s response via Dr. Baird is really no response at all.
I have been reading archived editions of Bulletin 111. One source of bias in favor of high schools/combination schools involves the graduation index. In October 2010 (pre-letter grade), the graduation index was set at 65 (pg. 2242). However, in August 2011 (also pre-letter grade), the index was raised to 80. An explanation on pg. 3200 (November 2011 reprint of the August 2011 bulletin) states, “Changes in Bulletin 111, Chapter 6, provide detail for the change in the calculation of the graduation rate adjustment factor to eliminate a negative effect on schools with a graduation rate above the state goal or current grade target.” Two issues here: 1) Moving the graduation index will either inflate or deflate scores; thus, the scores have less to do with true “performance” and more to do with unstable measurement criteria; and 2) the term “negative effect” is another way of saying “scoring bias”; therefore, BESE/DOE recognized that scoring bias could pose problems in school performance scores prior to the application of school letter grades.
In July 2012, under John White and the current BESE, the graduation index threshold was once again set at 65, yet no evidence of any check for bias is offered. No examination was done of the effects of this change on outcome scores. The evidence of such negligence lies in the inflated 2012 outcomes when one compares elementary/middle schools (which derive no benefit from a graduation index at all) with high/combination schools (which benefit from a lowered graduation index). Furthermore, given the Bulletin 111, Chapter 6 comment above, is there indeed “a negative effect on a subgroup of high schools, namely, those with a graduation rate above the state goal or current target”? It is poor measurement procedure not to have examined and addressed such issues prior to letter grade release.
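The direction of this effect can be shown with a toy calculation. The adjustment function below is an assumption invented purely for illustration (the actual graduation rate adjustment factor is defined in Bulletin 111, Chapter 6); it shows only that lowering a target mechanically improves the standing of schools sitting between the two targets:

```python
def grad_adjustment(grad_rate, target):
    """Toy adjustment, assumed for illustration only: a school below the
    target loses the shortfall in points; at or above the target, it
    loses nothing. The real formula appears in Bulletin 111, Chapter 6."""
    return min(0, grad_rate - target)

# A hypothetical school graduating 70% of its cohort:
print(grad_adjustment(70, target=80))  # -10 under the 2011 target of 80
print(grad_adjustment(70, target=65))  #   0 under the restored target of 65
```

Identical graduation performance, yet the school’s adjustment improves by ten points solely because the target moved.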
Common sense tells me that if I set an A as 94-100 one semester, then lower the threshold to 90-100 the next, I will have more A’s that second semester, and that the increase cannot be attributed to student performance so much as to my lowered criteria. I can congratulate my students on their great performance for yielding such an increased number of A’s in the second semester as I write in my Schneider EdConnect publication or as I interview for an Advocate article, and I can also brag that this increase in A’s happened on my watch (and therefore must be evidence of my superior performance) as I face an upcoming annual evaluation. But it is a lie. All that I have shown is that grades, and all associated rewards and penalties, are at the mercy of my capriciousness.
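The grade-threshold analogy is easy to verify with numbers (the scores below are hypothetical, invented for illustration):

```python
# Hypothetical semester averages; not actual student data.
scores = [88, 90, 91, 92, 93, 95, 96, 97]

a_first = sum(1 for s in scores if s >= 94)   # A defined as 94-100
a_second = sum(1 for s in scores if s >= 90)  # A lowered to 90-100

print(a_first, a_second)  # prints: 3 7
```

Identical scores, more than double the A’s: the increase is entirely an artifact of the moved threshold, not of performance.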
The 2012 school performance scores and corresponding letter grades are inflated, and this 65-no-80-no-wait-65-again threshold bouncing of the graduation index is an undeniable contributor to the inflation.
Another issue is the lack of calibration of the EOC scores to the GEE scores. Mr. White, via Dr. Baird, may write, “We’ve used transition scores before,” but that does not erase the evidence of bias in the 2012 school performance scores. Proper calibration was not completed.
How do I know this? The answer: I know how to conduct a proper calibration. I know that if I am measured in inches, I am 64. If some value judgment is attached to my height in inches, that judgment is calibrated to inches. So, if the value judgment notes that 60 to 80 is “good” and 80+ is “excellent,” I can inflate this result by being measured not in inches, but in centimeters, since centimeters will “look” larger if one looks at the numeric value alone. In centimeters, I am 163. Without proper calibration, I can “advance” from “good” (64) to “excellent” (163) without having grown at all. In order to avoid the inflated value judgment, I can either 1) convert the centimeters (EOC) into inches (GEE), or 2) convert the value scale into centimeters (EOC). I cannot convert the inches (GEE) into centimeters (EOC) and leave the value scale in inches (GEE). This is what DOE/BESE has done with its “transition baseline.”
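The height example can be written out directly. The unit conversion is standard (1 inch = 2.54 cm); the “good”/“excellent” value scale is the hypothetical inch-based scale from the paragraph above:

```python
CM_PER_INCH = 2.54

def judge(value):
    """Hypothetical value scale defined in INCHES:
    60 to 80 is 'good', above 80 is 'excellent'."""
    if value > 80:
        return "excellent"
    if value >= 60:
        return "good"
    return "below scale"

height_in = 64
height_cm = height_in * CM_PER_INCH  # the same height, about 163 cm

print(judge(height_in))  # 'good' -- units match the scale
print(judge(height_cm))  # 'excellent' -- raw centimeters fed to an inch scale
```

Without converting back to the scale’s units, I “advance” from “good” to “excellent” without having grown at all, which is precisely the uncalibrated comparison of EOC scores against a GEE-based scale.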
There is an additional layer to this “transition baseline” issue. I postulate that DOE knows it did not correctly calibrate the scores, because it tried to hide the “transitional baseline” in plain sight in the 2012 school performance score spreadsheet on the DOE website. First, the “transitional baseline” is hidden on the second page of the spreadsheet. “Surely,” one could argue, “this is not intentional; after all, it’s so much data. It was better divided into two spreadsheets.” That may stand as a valid argument. However, on that second page, the “transitional baseline” is hidden under a false name: 2011 Baseline School Performance Score. Now, there IS a real column with this name on the first page of the spreadsheet, and it really does contain the 2011 baseline scores. Who would think to compare the 2011 baseline as listed on the first page to the 2011 “baseline” on the second page? Who would figure out that the mislabeled column is really the “transitional baseline”? Who would realize that the high/combination school scores are inflated from looking at a single, mislabeled column on the second page, if one even thought to check for a second page to the spreadsheet?
Not most people.
One could certainly argue that this false labeling of the “transitional baseline” constitutes intent to deceive.
The score inflation is most obvious when one compares the 2011 baseline to the transition baseline. That is where I noticed it first, in the clearly labeled spreadsheet sent to BESE prior to 2012 school performance score release.
Once again, I write and ask DOE/BESE to face the issue of bias in the 2012 school performance scores. Mr. White, please don’t send me any more lame attempts via messenger to justify your position. I will continue to refute them and go public with my responses.
Mercedes K. Schneider, Ph.D.
Applied Statistics and Research Methods