Who’s Going to Write the Next ‘Academically Adrift’?

A few weeks ago I was at a luncheon that featured Virginia Foxx, chair of the U.S. House of Representatives Higher Education subcommittee, as the speaker. Her prepared remarks were straightforward and unsurprising—Congress needs to insist on more transparency for consumers and crack down on wasteful spending, unless that means using more transparency to crack down on wasteful spending in the for-profit sector, in which case not—until she got to the section of her remarks about student learning. At that point she stopped, looked up from her notes at the people around the room finishing their Cobb salads, and said, “You’ve all read Academically Adrift, right?” Foxx then proceeded to decry the sorry state of higher education learning, using words like “scandal” and “ripped off” and so forth, based on the findings in Richard Arum and Josipa Roksa’s book.

I was reminded of this last night when I picked up the latest issue of The New Yorker from the pile of mail on my kitchen table and saw that Louis Menand had written the featured book review, which focuses substantially on Academically Adrift. Now, I don’t know the exact number, but I’m going to go out on a limb and guess that the number of quantitative research studies written by university-based sociologists and published by academic presses that get the No. 1 slot in The New Yorker book-review section is pretty small. The point being: like it or not, Academically Adrift is now officially the first and last word on the state of college student learning in America, and that won’t change until somebody else conducts a study that tackles similar questions with a comparable level of rigor and care.

The book has been criticized by some, which is how it should be with a subject this important. Student learning is a fantastically complex construct, subject to philosophical disputes and measurement error. No single method of studying it can be definitive. And, in fact, nobody has been a stronger advocate for more research in this area than Arum and Roksa themselves, who recently wrote in The New York Times:

College trustees, instead of worrying primarily about institutional rankings and fiscal concerns, could hold administrators accountable for assessing and improving learning. Alumni as well as parents and students on college tours could ignore institutional facades and focus on educational substance. And the Department of Education could make available nationally representative longitudinal data on undergraduate learning outcomes for research purposes, as it has been doing for decades for primary and secondary education.

One important opportunity here is the federal Beginning Postsecondary Survey, which is administered periodically (the last three began in 1990, 1996, and 2004 and followed students through college) and is due to start anew in 2012. Previous versions of the BPS have gathered a wealth of information but haven’t included any measures that would allow researchers to estimate how much students learn between the time they begin college and the time they leave. Adding such a measure would cost money and would undoubtedly raise controversy among higher-education lobbyists, who are opposed to such impertinent questions on general principle. But without such measures, higher education will be stuck with Academically Adrift. With growing scrutiny over the price of higher education and what students and taxpayers are getting in return, that seems like a less-than-optimal place to be.