When talks between the U.S. Department of Education and teacher-preparation programs over performance-based accountability collapsed last month, no one should have been surprised. Like most other accountability efforts in education, the department’s proposed plan was more stick than carrot. But those of us in the higher-education field of teacher preparation must accept our share of responsibility for the breakdown in negotiations as well.
The department had proposed that states evaluate teacher-preparation programs using graduates’ job placement, teachers’ job retention, the achievement gains of their students, and satisfaction surveys of graduates and of the students they teach. Those assessments would have been used to classify programs’ impacts, and only those in the “effective” or “exceptional” groups would have been eligible for the federal Teacher Education Assistance for College and Higher Education (Teach) Grant Program, which provides financial support for students planning to teach in schools that serve low-income families.
In this continuing ritual of education-reform point/counterpoint, teacher-preparation programs, state education agencies, deans of schools of education, and teacher unions have raised legitimate concerns: The state tests used to estimate individual teachers’ contributions to student test-score gains (what is known as a teacher’s “value added” to student achievement) are administered in only about half of grades and subjects; state-standards tests are narrowly focused and fail to assess all of the ways teachers contribute to student learning; “value added” scores lack reliability and validity; many teachers-to-be are prepared in one state but teach in another; programs have little control over whether their graduates remain in the teaching profession long term; and the rules might “decommission” the very programs that supply teachers to high-need schools.
These concerns are not unreasonable. In fact, to say that carrying out the proposed rules would have been a mess is an understatement. For example, because not all grades and subjects would be tested, a large number of education graduates would have no test scores to use as evidence of their performance. And how can a program prepare teachers to teach effectively to a wide range of different state standards when not all graduates stay put? Combine those challenges with the immediacy and high stakes attached to the rules, and you have a process that would only have diverted time and energy from credible program improvement.
But the tug-of-war that marked this rule-making process, like nearly all of the political debates around teacher performance, reflects the fact that teacher unions, higher education, and even alternative teacher-preparation routes such as Teach for America have ceded responsibility for building credible and open internal quality controls. Education schools and other preparers of teachers have failed to build competency- and knowledge-assessment systems that identify and measure the skills teachers need to perform successfully. Such systems would publicly verify that teachers had met known performance benchmarks before entering the profession, and passing would signal a high likelihood that a graduate’s students would make academic progress.
State agencies today certify teachers using an accumulation of academic credits and assessments that do not discriminate between good and poor performers. Nearly all graduates pass criteria that have no known association with teaching and learning in elementary and secondary classrooms. But when teacher-preparation organizations say that state-standards tests and value-added metrics are neither reliable nor valid, they sound like unions arguing against teacher evaluation—placing blame on imperfect assessments rather than finding alternatives and testing them.
Those of us who work in the fields of teacher training and education research could have been ready—with assessments, internal quality controls, data systems for follow-up, and mechanisms to feed back to programs the results of our graduates’ work with students. That someone else has come along and said, “Do this or else,” should come as no surprise. The Education Department’s error, and it is not a trivial one, was attaching high stakes and penalties to what should be a process of using information to systematically guide improvement and evaluation of programs.
We in teacher preparation need to know the learning gains of the students taught by our graduates, and we should know how well our graduates teach. Even if the tests are lousy, narrow, and imperfect, they matter to employers and provide one form of information that could be useful for program-driven planning, quality control, and evaluation.
Many states and teacher-preparation programs are working on protocols for assessing their graduates, and it is vitally important that such assessment work continue. It is equally important that the Education Department develop and adopt policies that support the use of assessments and accountability for teacher preparation.
The rule-making process is not over. The Education Department may now put forward whatever teacher-preparation assessment criteria it likes; however, department officials have indicated that they will be receptive to contributions from those in the field of teacher preparation. We should certainly take better advantage of this opportunity than we did of the last one.
If we cannot counter the federal plan with a cohesive set of our own solutions, then we should follow the Education Department’s procedures for a period of, say, three years, but with two caveats: that no penalties be attached during that time, and that the information obtained from applying the rules be made public. During those three years, we should work out the kinks in measurement, develop and test parameters of accountability that could inform policy, and use the teacher-preparation process to begin improvements not only in our elementary and secondary classrooms but in higher education as well.