Melissa Tooley
Director, Educator Quality
Ever since the federal Race to the Top competition and Elementary and Secondary Education Act (ESEA) waivers began incentivizing states to develop more rigorous, multi-measure teacher evaluation systems, much of the surrounding debate has focused on the inclusion of student learning growth as one of the measures within those systems. Meanwhile, classroom observation measures, on which the bulk of most teachers' final evaluation ratings are based, have received much less attention. Recently, however, several education groups have recommended improvements to the design of observation measures. Better observation design is important, but is it sufficient to ensure that classroom observations positively impact teacher quality and student learning?
TNTP's most recent report is the latest call to rethink how new observation measures are designed. The report advocates improving teacher observation scoring rubrics by: 1) ensuring they assess whether appropriate-level content is being taught, and 2) focusing on a handful of skills students should be demonstrating, or outcomes they should be accomplishing, during a lesson, rather than on which strategies teachers are using to try to elicit those skills and outcomes. To inform teacher development, TNTP recommends that each student-centered objective still be paired with a list of potential strategies teachers could employ to achieve it. The organization plans to release a prototype of such a rubric early next year for public comment.
TNTP contends that changing rubrics in these ways will improve observations in several respects.
Some might wonder whether states and districts should consider making changes while many schools are still getting comfortable with the observation measures currently in place. But TNTP says four actions can be taken now without major disruption to current systems: 1) ensuring that lesson content carries significant weight in the observation rating; 2) consolidating observation items that are redundant; 3) eliminating items that can't be directly observed in a classroom visit (e.g., collegiality); and 4) giving observers, and their managers, data on the quality and quantity of feedback they provide to teachers, and eventually factoring these data into observers' own performance evaluations.
The first of these near-term actions is immensely important for ensuring that all students are taught the content they need to master at their grade level, and it has been missing from most discussions about aligning teacher evaluations with the Common Core State Standards (CCSS). The second and third actions just seem like common sense. But it's the goal of the fourth action, to focus on how observations are implemented and used, that I'm afraid still isn't getting enough attention. As TNTP's report acknowledges, while evaluation design is important, implementation is even more so, and thus "rubrics are only as effective as the observers who use them and the systems that support them." Which raises the question: can classroom observations, regardless of how well they're designed, be used to effectively evaluate and improve teachers' performance if there aren't incentives for evaluators to perform observations with fidelity and use the results to help teachers improve?
Stakeholders who often don't see eye-to-eye seem to agree that observers (who are most often school principals) should be well-qualified to do both of these things. But, to date, most state and local policymakers have not taken steps to ensure that they are. Some states require observers to pass a certification test, and a few others mandate that observers be held accountable for delivering constructive feedback and development plans. But many states leave observer training up to individual school districts, regardless of district capacity, which is perhaps not surprising given that many states require districts to design their own evaluation systems and play only a small role in implementation. And many districts are more focused on ensuring that observers complete observations and submit ratings than on ensuring that observers provide feedback that helps improve instruction.
How can we encourage more states and districts to pay attention to the important issues of how teacher observations are implemented and used? The answers aren't readily available, and perhaps this is why many have opted to focus on aspects of observation measures (like design) that seem easier to fix. But I'll throw out a few ideas. As the U.S. Department of Education continues its ESEA waiver monitoring process, observer quality could be an area it requests information about and provides feedback on as it scrutinizes states' implementation of teacher and principal evaluation systems. And if there is a next round of federal grant competitions like Race to the Top, observer quality is an area worth putting some muscle behind.