It’s that time of year when faculty members have to produce the letters of recommendation that we’ve promised to write for students applying to graduate programs. The task can be a pleasure. We get to take a moment to reflect on strong students, and in some cases, reveal tales of spectacular growth, clear achievement, or even touching redemption.
In the past, students would provide a folder of materials to move the process along, including a list of institutions, their respective admission deadlines, and stamped and addressed envelopes. Faculty members could plan on devoting a couple of hours to doing the job properly and punctually. However, bearing witness to high-quality performance is no longer a simple matter of drafting and mailing letters that capture a student’s academic journey in a given course or major.
Technology has overcomplicated the process. It’s not the letter that is the problem; it’s all the additional information that online admissions systems demand. Now, beyond the actual letter, we have to answer extensive survey questions and rate the student on various scales. The joy of vouching for talented students quickly fades when faculty members know that each request will become an annoying time drain as we tailor our perceptions to each university’s preferred format. So while an online system may have increased efficiency on the admissions end, it’s made life more difficult for the recommenders.
Some graduate-admissions committees use centralized systems; others invent their own reporting formats. Consequently, we may complete as many as 10 unique surveys about the same student. One of us recently spent three hours wrestling with numerous online obligations and obstacles, just to submit recommendation letters on behalf of one student.
That experience led us to the following conclusion: Digital systems have spoiled what used to be a lovely — and useful — ritual.
Most faculty members craft a recommendation letter carefully to explain the nature of the teacher-student relationship and capture what makes the student stand out from others. Each time we submit a letter online for a student, we may be required to fill out tedious and repetitive details about our own backgrounds: How long have you been teaching? What type of students do you teach? How well do you know the student? Here’s our favorite question: “How many undergraduates have you taught during your teaching career?” For many of us, that’s an unknown and unknowable number. (We’ve been too busy to count.)
For each letter we submit, we’re also routinely asked now to compare or rank the student’s achievements, abilities, and personality against those of their peers. We are expected to make finely tuned judgments about where the undergraduate falls, in percentage terms, on motivation, writing skills, persistence, integrity, and teaching potential, among other characteristics (e.g., “This student ranks in the top 5 percent of research-capable students I have taught during my career.”).
But many rating scales are problematic. By what measures are we supposed to figure out whether a student was in the top 5 percent or the top 1 percent? Some surveys include as many as 20 separate ratings on qualities that, most likely, are already addressed qualitatively in the letter of recommendation.
Sometimes rating systems ask us to specify the nature of the comparison group for the percentage judgment: Are we comparing this student’s writing skills or motivation with undergraduates at private colleges and universities, with first-year graduate students, or with master’s-level students?
The result of all this “data” collection: Graduate-admissions committees often end up comparing apples, oranges, and grapes in trying to make sense of a student’s relative quality and standing against imagined others.
Rating systems are always going to reflect some response biases of the recommender. It is easy enough to indicate that a particular student is “better” on some criteria “than 85 percent of the students I have taught over the last five years,” but it’s an estimation that can’t be verified and so is of questionable legitimacy. Some faculty members are generous in rating a student and others are more critical, just as we vary between being easy and tough graders. Indeed, well-intentioned recommenders may overestimate student ability on these online ratings out of faith in student potential rather than actual performance.
The variability is yet another factor that makes the online ratings next to useless — yet still attractive to admissions committees looking for a fast, quantifiable way to assess a student’s future prospects in a graduate program. We suspect that most faculty members — faced with these vague online forms and uncertain how the information will be used — simply maximize the ratings to give students the best chance of admission.
We note, too, that some graduate programs make it optional to submit a separate recommendation letter — even though it’s a rich, detailed source of information particular to the student — while requiring faculty recommenders to fill out the quicker, if more questionable, comparison ratings. It is especially disheartening to spend time crafting substantive letters only to have them designated as “optional” with an accompanying suspicion that the letter is likely to go unread.
Not all online systems are sufficiently evolved to allow one set of ratings to work for all the universities that might be participating in that system. Even if a student has targeted programs that share the same online application system, ratings submitted for one graduate program within a system don’t necessarily transfer to all relevant applications. Each request means we have to start over again.
This year, one online system asked us to evaluate an undergraduate based on a five-level rating of student “goodness.” That was followed by a dropdown box asking for confirmation of the rating we had just provided. At least we think that is what we were being asked. We aren’t really sure. These sorts of extra steps are especially insensitive, given the amount of time this process already demands from professors.
What could be done to improve this brave new world of online graduate-school admissions? We have a few less-is-more ideas.
- If we must provide ratings, can we simplify them? Perhaps offer a brief prediction about a student’s long-term success in graduate studies or in the profession, with a short explanation. Our goal should be to make the case for the student in the fewest words possible.
- Alternatively, can higher education agree about the essential criteria that should be included in a recommendation letter? And then allow the letter to make the case for the student’s admission, rather than relying on vague survey questions and ratings to try to quantify what is essentially unquantifiable?
- While we’re at it, how about improving our recommendation letters, too? Could we agree that each letter should identify not just a student’s most compelling strengths but also one or two areas for improvement? And can we be assured that any mention of weakness will not move the student’s application to the rejection pile or to the certain Siberia that is the wait list?
- Give faculty members an honorable way to opt out of digital ratings systems. Permit us, if we so choose, to simply send a carefully crafted digital letter in place of the tedious and unreliable ratings.
- Most boldly, should faculty members collectively refuse (politely) to fill out separate rating scales for each student? “See attached recommendation letter for pertinent information” would seem to be sufficient. If we all adopted that attitude, perhaps we could force changes that would make the process more palatable and more honest.
An individually crafted letter humanizes an applicant and provides a more complete picture of what that student can actually do. Faculty members still want to help undergraduates further their education. Yet now, when students approach with a hopeful look in their eyes and a list of universities to which they want us to sing their praises, the experience is no longer joyful but one of impending annoyance leading toward the digital ether.