Higher education costs a lot. But inside the walls of colleges and universities, the real economy is based on prestige, not dollars. Prestige is the “money” of higher education — it’s what motivates everyone from institutional leaders on down.
The prestige economy that surrounds doctoral programs is almost entirely defined by research. That’s a constricted way to assess those programs, and an approach fraught with problems — the biggest one being that it doesn’t look at enough variables.
The National Research Council’s influential ranking of doctoral programs, for example, has an exceedingly narrow ambit: It measures research output only, and it ranks programs by the scholarly productivity of their professors. Worse, the ranking system itself is poorly handled.
The NRC gathers data via questionnaire, and the results don’t measure achievement. Instead, they casually survey reputation. Professors are asked to identify high-ranking programs in their field based on nothing more than the impressions of the moment when they happen to be filling out the survey. (I know that from personal experience: I once started filling out the questionnaire, realized its limitations, quit in frustration, and never filled out another.) Because no evidence is solicited for respondents’ rankings, NRC findings resemble the results of the hoaxes that routinely rank Princeton’s (nonexistent) law school in the top 10.
Ranking graduate programs solely according to their research output — even if that were actually being done fairly and accurately — would be like ranking cars solely by their engines. The engine is important, certainly, but anyone who drives knows that other parts of the car matter, too, like suspension, brakes, and safety features.
Our graduate schools need to be assessed by more student-centered measures. Yes, prospective students should know something about the influence of the research performed by a program’s faculty. But graduate school isn’t just a research pageant. It’s also school — and the way a program treats its students is bound to be important to someone deciding whether to spend years as one of them.
Applicants need to know how well a program professionalizes its students, for example. How is the advising? What about the career counseling and job preparation — for both academic and nonacademic careers? What kinds of support (financial and otherwise) does the program offer students, and for how long? What is the typical time to degree? The rate of attrition? Friendliness to students from disadvantaged groups?
Those are just a few of the questions increasingly asked by would-be graduate students. Right now they can’t turn to any single place to get the answers. They can only investigate programs one by one. Moreover, individual programs sometimes don’t make available the kinds of information that students need — such as the career outcomes of graduates.
What if graduate-school applicants could get that kind of student-centered information in one place, on one website? That information matters to students at least as much as the number of citations the faculty have earned for their research publications, and probably more.
A new assessment system wouldn’t need to rank programs against each other according to some kind of absolute (and probably unsound) scale. Instead, it could gather information and present it transparently.
In short, prospective graduate students need their own Yelp.
As anyone with a cellphone or a laptop knows, Yelp operates as a clearinghouse of information on restaurants, bars, barbers, and just about anyplace where people might consider shopping. The customers supply the information and base their ratings on a five-star system. But the proprietors also interact with the commenters, and the Yelp site provides links to company websites for both reviewers and the reviewed.
As of now, there is no agreed-upon format for collecting information on the career outcomes of M.A.s and Ph.D.s, and no expectation that graduate programs should provide it. Many programs rely on a “greatest hits” approach: They list only the most enviable jobs landed by their graduates. But how salient is that data if most of the program’s graduates don’t get those jobs?
Research rankings also cast a long shadow over this selectively curated presentation, because the faculty jobs that make the greatest-hits chart are usually located at research universities. The question is: How best to move away from such a thorough reliance on this narrow measure?
A Yelp-like national website that rated — not ranked — programs and graduate schools would go a long way toward solving the problem. The information on it could be supplied by departments, and also by students and graduates. Yelp for graduate students would save more than time: It would offer applicants (and current students) the kinds of relevant information they need, in an easy-to-use format.
It would also create an accountability that doesn’t exist now. A graduate program’s reputation could be based on real data from the providers and broad views from the users — instead of solely on off-the-cuff, unstudied impressions.
Yelp for graduate students could rate programs as they seek to achieve a variety of goals. This endeavor could also provide an agenda for foundations to reward graduate schools and their departments. Among the areas that could use outside funding:
- Self-assessments of graduate programs, including student and alumni surveys.
- Data-keeping on career outcomes of Ph.D.s.
- Innovative recruitment strategies to achieve a more diverse student cohort, through collaborations among academic institutions and through grants based on financial need and membership in underrepresented groups.
- Career- and professional-development seminars.
- Nonfaculty internships, not just ones outside the ivory tower but also those that might take place on a campus in student services, fund raising, communications, deans' offices, and university relations.
- Teaching opportunities at institutions other than research universities.
- Concrete proposals for improving graduate students' rates of completion and time to degree.
A lot of those elements are already under discussion in recent debates over graduate-education reform, and some are being tried by programs, graduate schools, and national organizations. Their efforts deserve attention — and when they work, replication. A collective, student-centered clearinghouse on the Yelp model would help keep consistent attention on what needs reforming.
A website like this could begin at any institution, with or without foundation support. Think of the free advertising that would come from the association with such a project — and the good will it would generate. A resource for student-centered information about graduate programs would be a public service. It wouldn’t replace research-based rankings, but it would provide a counterweight to them.
What if a doctoral program’s prestige arose, in part, from the way that it treated its students? We should dare to dream of such a thing.
Leonard Cassuto, a professor of English at Fordham University, writes regularly about graduate education in this space. His latest book is The Graduate School Mess: What Caused It and How We Can Fix It, published by Harvard University Press. He welcomes comments, suggestions, and stories at lcassuto@erols.com. His Twitter handle is @LCassuto.