
Florida Teacher Faces Dismissal Over Computerized Music Testing

October 30, 2016

In Florida, music teachers must administer computerized tests to assess themselves.

It is a setup.

If students have difficulty signing on to the computer, or with submitting a response, the teacher assisting them could be accused of cheating.

It’s called “zero tolerance.”

On October 28, 2016, the Tampa Bay Times published a story about a music teacher who was accused of cheating on the computerized test designed to test her.

Never mind that the outcome of the test was terrible. Zero tolerance.

Never mind that using computerized tests to determine fine arts outcomes is imbecilic on its face. The State of Florida wants to test all teachers in their subject areas, and the narrow, unrealistic practice of using computerized standardized tests to determine teacher worth might as well be extended to its illogical extreme.

Instruments down, kids. Time to grade the music teacher. Have a seat at the computer.

From the Tampa Bay Times story on Hillsborough County music teacher, Vanessa Lewis:

Lewis, who was used to a format in which she played music and read questions, now had to administer the test via computer. She is not good at technology. And she missed a training session, relying on the school’s art teacher to fill her in on the instructions.

Her kids, who had in some cases sat through weeks of testing already, were exhausted.

“They were done,” she said.

Still, her job depended on their scores.

Children told their teachers that Lewis fed them some of the answers.

“My own students said it took forever because they had to sit with their hands raised until she came to them and checked their answers,” third-grade teacher Kim Galang told the district. And it was a good thing, a child reportedly told Galang, “because I had like five wrong.”

“She pointed to one and said, ‘That’s wrong,’ ” a second-grader told the principal.

Lewis rebutted their statements, one by one. She said she did not tell the second-grader that the answer was wrong, but that the child had not submitted the answer properly. She admitted that she told a fourth-grade child to “read the question regarding beats.” But she insisted she did not give the answer.

She said the seven children interviewed did not reflect all 800 she taught. That teachers who made statements against her were not even there. That little kids can’t be expected to navigate a test like that without help.

Lewis points to the test scores as evidence she didn’t cheat. They were terrible, she said. “In the toilet.”

She hired an attorney to represent her in the termination case that is now under way. He is asking for an open hearing before the School Board in December.

Whether she actually meant to cheat may not matter.

Hillsborough has a zero-tolerance policy for testing irregularities.

Even helping a kid catch a blank question can get you fired.

In August 2016, ABC News reported that Hillsborough County had the largest number of teacher shortages in Florida as of July 2016 and was actively recruiting from as far as Puerto Rico to address the problem.

Hillsborough County would do well to stop setting its current teachers up for cheating accusations by forcing them into precarious, “zero tolerance” situations involving nonsensical computerized testing.



Released July 2016 – Book Three:

School Choice: The End of Public Education? 


Schneider is a southern Louisiana native, career teacher, trained researcher, and author of both A Chronicle of Echoes: Who’s Who In the Implosion of American Public Education and Common Core Dilemma: Who Owns Our Schools?.


Don’t care to buy from Amazon? Purchase my books from Powell’s City of Books instead.

  1. Laura H. Chapman permalink

    I attended schools in Hillsborough County, so I always perk up when news comes from there. Lately it is mostly about truly outrageous policies and practices. Online testing is tough; in this case it is an outrageous requirement. The teacher has 800 students in multiple grades. The assignment is not unusual for any “specialist.” The test expert chooses to describe the arts as “boutique” subjects. (The Common Core called them “technical subjects.”)

    In the name of accountability, we are called upon to teach the kids to master the tests, even if they learn to hate the subject. That absurd outcome is possible in every subject, but it is really exposed to view in the arts. It is also differently absurd for music versus dance, theater, visual arts, and the new hybrid called “media arts.”

    Back in the days of Race to the Top, Tennessee was leading the way in modifying the student learning objective (SLO) process of evaluating arts teachers to accommodate a portfolio system for demonstrating “growth.” Growth has acquired the truncated meaning of “an increase in a score from one point in time to another.”

    The Tennessee initiative received state approval in 2012. The modified SLO process included a masked peer review of student work called “evidence collections.” Art teachers assembled these collections in a digital portfolio that also included other documents for the evaluation. A dedicated online site facilitates the process.

    Many of the criteria for submitting a portfolio were linked to the concept of “purposeful sampling.” For example, the teacher had to select samples of student work from two points in time (comparable to a pretest and posttest) in order to represent student “growth.”

    A teacher had to submit five evidence collections. Each collection had to be coded to identify the specific state standards for arts learning addressed in the lessons or units. The evidence collections had to include targets for learning in three of the four major domains in state arts standards: Perform, Create, Respond, and Connect. The online template offered guidance for submitting portfolios and understanding how the scoring system worked.

    In this system, the art teacher rated the evidence collections—a form of self-evaluation. This self-evaluation became part of the portfolio. Then two exemplary art teachers who had job-alike experience independently rated the portfolios—a form of masked peer review. These raters had been trained to use rubrics for the evaluation. They rated digital portfolios, which meant that they were also relying on the teacher’s skill in digital photography, video, and audio recording to evaluate the submissions of student work. The final rating placed the teacher into one of six levels of performance from “significantly below expectations” to “significantly above expectations.” A third rater could be enlisted to ensure that the final rating reflected a consensus.

    In Tennessee, the “student growth” measure counted for 35 percent of a teacher’s overall evaluation. By 2013, 1,500 art teachers had been evaluated by this method. Judging from several accounts on the internet, the process is so time-consuming that only a few districts are using it. It is not easy to scale into a statewide system. The plan is still a work-in-progress in another sense. There are new and grade specific national standards in all of the arts. Tennessee appears to have adopted these, with all of the spillover effects of reworking the online submission system, teacher re-orientation, and modifying the rubrics for judges.

    Computers are doing more than determining what and how students are tested and teachers are evaluated. In a recent conversation with a former student who is teaching visual art, I learned that her school is required to use Schoology software and the data dashboard it provides. They are being asked to evaluate students based on standards, and a four-point scale indicating degree of “mastery.” This teacher also has 800 elementary students, but they are all in the same grade.

    There is an expectation within the software for a “progression of learning” from earlier grades, through the current grade, and thence to the next grade and so on. In addition to that false assumption, the system seems to assume that a teacher is only effective if all students, at every grade, have “mastered” content and skills “aligned” with grade-level standards.

    The system fails to acknowledge that instruction in any one of the arts, especially in the elementary grades, might be introductory and exploratory, with the teacher focused on nurturing affinities for the subject, acknowledging and cultivating individual interests, taking advantage of unscheduled but fortuitous opportunities for learning, and leaving a life-long impression that art is wonder-full, not a “boutique” subject.

    From the Schoology website and my extended conversation with this teacher, the dashboard system will generate bar graphs of performance for each student, each class, etc., accessible to parents and feeding into a judgment of this teacher’s “effectiveness.” This teacher is less concerned about her own rating than about the way students and parents will perceive a new four-point rating system. (My child got a 1 and not a 4 in art? Why?)

    Schoology software embeds a host of assumptions about proper education, class scheduling, enrollment patterns, and so on. The art teacher recognizes this and is in agony trying to communicate her concerns to her colleagues and administrators. The investment has been made. The software developers are not educators; they built the platform from a desire to share college course notes. Color-coded charts and graphs notwithstanding, the software is pedagogically a farce.

    Teacher and student evaluation in Ohio is a farce in another sense. Ohio still has teacher evaluations that are “a hot mess” of data points fed into a state reporting system. SLOs are still in use (vintage 1999), along with value-added measures (VAM). Neither is reliable or valid.

    A portfolio component is offered as part of an “alternative” for teacher evaluation. It has a draconian writing and rationalizing scheme. (Think warm-up for a master’s degree thesis on how and why I did everything last year, and what I learned from all of the student outcomes.) The portfolio option also calls for training independent raters until their ratings are “calibrated.” There are more of those pseudo-scientific sub-routines and specifications. They are a throwback to, and an homage to, programmed instruction of the late 1950s.

  2. Schoology is brought to you by guess who? Pearson!

  3. Keep fighting, Vanessa!
