Former Mayor Ed Koch of New York City used to be fond of asking anyone who would listen, “How am I doing?” There is a growing debate in higher education about how we are doing, but the debate is scattered across many fronts and unevenly joined.
The code name for the debate is, of course, “accountability,” and the point of reference is all too often the late and sometimes-lamented Spellings Commission. I am convinced that the debate is as consequential for individual institutions as it is for the field of higher education as a whole.
The problem is that if it is higher ed as a whole that is being assessed, nobody’s institutional ox is gored. Of course some institutions, especially those that answer closely to legislative oversight, are laser-focused on institution-specific assessment. But private institutions are not immediately answerable to government agencies, even though they are theoretically under the purview of regional accrediting agencies. And many private institutions, especially the most selective and elite, apparently feel that they are so self-evidently successful at promoting student learning that they do not have to institute specialized assessment instruments.
I have written elsewhere about the need for longitudinal assessment, but I was again reminded of the problem last week when I received a release on the new SNAAP (sounds like a Dutch beverage, doesn’t it?) instrument. The Strategic National Arts Alumni Project has been developed by the Indiana University Center for Postsecondary Research (George Kuh and the folks who brought us NSSE) in partnership with the Curb Center for Art, Enterprise and Public Policy at Vanderbilt University. SNAAP will be a long-term longitudinal survey of arts alumni to show, among other things, “how students in different majors use their arts training in their careers and other aspects of their lives.”
My own university, like many others, is investing heavily in both facilities and faculty for undergraduate art-making in the visual and plastic arts, and in performance (dance, music, theater). Many of us feel intuitively that such an investment will pay off in the enrichment of liberal education, just as advocates for arts education in the schools have long claimed (supposedly drawing on the work of Howard Gardner) that precollegiate student learning capacity will be enhanced by engagement with the arts.
But social science research has yet to prove the intuition definitively correct in the schools, and we frankly don’t have a clue how useful an analogous arts approach will prove in colleges. The information garnered from SNAAP will be helpful, but it will not be available for a long time, and it does not appear to address questions about the impact of the arts on undergraduate education.
But shouldn’t we care whether arts education, freshman seminars, service learning, or any of the other “enhancements” to undergraduate education actually improve student cognition and capacity to achieve? I think so, and I think we need to worry about such new forms of assessment sooner rather than later if we are to be taken seriously with regard to our “product.” That is probably not going to happen, however, unless faculty wake up to the need. If we care about educational innovation, we should care whether it is consequential. This may be too important a matter to be left to university presidents, at least in the private sector of higher ed.