Publicizing Test Scores
On Tuesday, lohud.com, a Gannett Company publication covering New York’s lower Hudson Valley, published an editorial: Teacher Evaluation Data Must Be Made Public.
I agree with the Tweeter who posted it and, of course, with the person whose response is included:
I love the rhetorical question… where to start, where to start?
There is, of course, the worry of being data driven rather than data informed, a key difference I see growing here in Wisconsin, particularly as laws change and increase the feeling that being data driven is necessary for teachers and schools to survive with their reputations intact.
We won’t even get into the money involved in this. My goal someday is to really study Pearson to figure out how much revenue they, among many companies, generate through tests.
But right now, I worry about the individual teachers involved.
The posting includes this paragraph:
“Release of the data was eye opening, and in unexpected ways. In compelling detail, reporting by The New York Times and WNYC, among others, showed how some data could make even standout teachers — teachers whose students typically achieve at high levels — look like abject failures, based on the anomalous poor showing of a few students. One teacher’s bad report card was directly tied to the failure of a student who, the night before an exam, had to care for a family member.”
I assume this paragraph was included for credibility, stating, in essence: look, we see the problems. We see that this teacher was treated unfairly. We recognize there are worries.
This credibility-building is enhanced, potentially, by adding:
Randi Weingarten, president of the American Federation of Teachers, told The Wall Street Journal in February that release of the New York City data was “outrageous,” adding that it “amounts to a public flogging of teachers based on faulty data.”
I say potentially because pulling out the nugget “outrageous” tilts Ms. Weingarten’s quote toward sounding a bit more hysterical, a purposeful move, I believe, on the editor’s part.
But the editor never addresses those worries. The editor never looks at the misinformation that tests will propagate. The editor never questions the impact of the “public flogging” of teachers.
I wonder why people would enter a career where their reputation and their ability to advance depend so much on others who have no stake in the matter (the student test takers) and on data proven to be so erroneous.
So I posted, too.
The paragraph about the teacher’s evaluation being tied directly to a poor test by a student who needed to care for a family member is an interesting insert into the text. It reminds me of the case of Pascale Mauclair, a highly accomplished teacher labeled failing by New York’s public release of data. The fact that multiple, random, and compounding factors influence test scores, and then the public reputation of a teacher, is hinted at through that paragraph but never addressed. The margin of error on scores is so great that I wonder what, besides parent demand, would warrant the release? This article asserts accountability, but that assumes the test results are fair and indicative. If parents would like to know about a teacher, as we all do, they can ask at playgrounds, court-side, and sidelines. Unless released test scores decrease the margin of error currently associated with them, they do not enhance accountability. Instead, they obscure. They demoralize. They demonize.