Mea Culpa: My latest mistake

Today I apologized to my students. Yesterday I had two girls leave my class crying. The two are related.

It is the end of debates and two weeks before AP testing. I get nervous and anxious, trying to press kids on what to improve. I forget how nerve-wracking it is to debate in public. I forget how hard it is for kids to organize their debate team around sports and music and work and life. I forget. And in my desire to make sure the next group improves, I (and the class) point out errors.

And then a crying child (or worse, two) brings me back to reality. Why do I not praise what is going well? Students tune in far better to specific praise, and more praise, followed by a little focused criticism. Why do I let my anxiety make me forget my better nature?

The girls who left my room crying after “losing” a debate deserved better, and they had real strengths that, once they left, I realized I should have named.

So today I apologized. Today we looked for positives. And the day felt better.

And then tonight I went into my Google Calendar and put a banner across the weeks that debate will likely be next year, reading “tell them what they are doing well.” I think I will tape that message to my desk as well.


Publicizing Test Scores

On Tuesday, lohud.com, a Gannett Company publication covering New York’s lower Hudson Valley, published an editorial: Teacher Evaluation Data Must Be Made Public.

I agree with the tweeter who posted it, and, of course, with the person whose response is included:

[Image: the tweet and its response]

I love the rhetorical question. Where to start, where to start?

There is, of course, the worry of being data driven rather than data informed, a key difference I see growing here in Wisconsin, particularly as laws change and increase the feeling that teachers and schools must be data driven to survive with their reputations intact.

There is, too, the real worry that teachers will narrow the curriculum in response to the test, a claim backed up by multiple sources.

We won’t even get into the money involved in this. My goal someday is to really study Pearson and figure out how much revenue they, among many companies, generate through tests.

But right now, I worry about the individual teachers involved.

The posting includes this paragraph:

“Release of the data was eye opening, and in unexpected ways. In compelling detail, reporting by The New York Times and WNYC, among others, showed how some data could make even standout teachers — teachers whose students typically achieve at high levels — look like abject failures, based on the anomalous poor showing of a few students. One teacher’s bad report card was directly tied to the failure of a student who, the night before an exam, had to care for a family member.”

I assume this paragraph was for credibility, stating, in essence: look, we see the problems. We see that this teacher was treated unfairly. We recognize there are worries.

This credibility-building is enhanced, potentially, by adding:

Randi Weingarten, president of the American Federation of Teachers, told The Wall Street Journal in February that release of the New York City data was “outrageous,” adding that it “amounts to a public flogging of teachers based on faulty data.”

I say potentially because isolating the word “outrageous” tilts Ms. Weingarten’s quote toward sounding a bit hysterical, a purposeful move, I believe, on the editor’s part.

But the editor never addresses those worries. The editor never looks at the misinformation that the tests will propagate. The editor never questions the impact of this “public flogging” of teachers.

I wonder why people would enter a career where their reputation and their ability to advance depend so heavily on others who have no stake in the matter (student test takers) and on data proven to be so erroneous.

So I posted, too.

The paragraph about the teacher’s evaluation being tied directly to a poor test by a student who needed to care for a family member is an interesting insert into the text. It reminds me of the case of Pascale Mauclair, a highly accomplished teacher labeled failing by New York’s public release of data. The fact that multiple, random, and compounding factors influence test scores, and then the public reputation of a teacher, is hinted at through that paragraph, but never addressed. The margin of error on scores is so great that I wonder what, besides parent demand, would warrant the release. This article asserts accountability, but that assumes the test results are fair and indicative. If a parent would like to know about a teacher, as we all do, ask at playgrounds, court-side, and sidelines. Unless test scores decrease the margin of error currently associated with them, they do not enhance accountability. Instead, they obscure. They demoralize. They demonize.