In early 2022, after six months as the new president of Hofstra University, my senior leadership team and I embarked on an exercise to choose peer institutions. This was done in the context of a multifaceted, yearlong initiative to gather input from students, faculty, staff, alumni, and trustees to kick off strategic planning. How do we define our university, we asked stakeholders, and what are our ambitions for the future?
Coincidentally, as we were completing this peer exercise last month, The Chronicle published a visualization showing nearly 1,500 colleges’ self-identified peer institutions. The data was drawn from responses to an annual query from the U.S. Department of Education’s Integrated Postsecondary Education Data System, or Ipeds, for a list of comparable institutions. The visualization shows not only which institutions a college identified as its peers, but also which institutions chose that college as a peer.
The visualization is fascinating and reveals a lot about how institutions think about their peers and themselves. Harvard University selected only three peer institutions: Yale, Princeton, and Stanford. But 22 institutions, including Bowdoin, named Harvard as a peer. Bowdoin, a small, liberal-arts college with about 1,800 undergraduate students and no graduate programs, chose 98 “peers,” including the entire Ivy League and many large universities, some of which enroll more than 10,000 students. Bowdoin itself was picked by 35 institutions as a peer. All of them were small, liberal-arts colleges or universities that primarily serve undergraduates.
Another issue that arises is whether public and private institutions can be peers. For example, the University of Michigan at Ann Arbor identified 59 universities as peers, 27 of which selected Michigan back (all of them public). None of the 17 private universities that Michigan chose reciprocated. Many public universities, including Purdue University and the Universities of Alabama at Tuscaloosa, of Arizona, of Nebraska at Lincoln, and of Texas at Austin, chose only other publics as peers, while others, such as the University of North Carolina at Chapel Hill, included privates. In a sampling of private universities, only one chose a public university as a peer (assuming Cornell is considered private): the California Institute of Technology, which chose the University of California at Berkeley. My sampling included Brown, Case Western Reserve, Rice, Stanford, and Vanderbilt Universities; the Universities of Pennsylvania and of Southern California; and Washington University in St. Louis. Yet even when universities stick to their kind, so to speak, the overlap between the peers they chose and the institutions that chose them back never reaches 50 percent.
The mismatch between whom an institution chose as peers and the colleges that reciprocated pervades the data set. It raises the question of how institutions designate peers, which is a mystery. In some cases it is likely to be decided by someone in the Office of Institutional Research or the provost’s office in response to the Ipeds survey, while in others perhaps some process leads to a consensus among administrators. Regardless, there is clearly no shared definition of what constitutes a peer institution.
Hofstra historically responded to the Ipeds query by listing some combination of whom we thought we looked like, whom we wanted to look like, and whom we wanted others to think we looked like. And, like virtually all of the other institutions, we found discrepancies between whom we chose and who chose us.
This year, however, we did something different: We undertook a data-driven process to identify peers. I believed that in the hypercompetitive world of higher education, particularly private higher education, benchmarking against similar institutions could inform our strategic planning and help us think about our ambitions. It would provide us with a group of universities that we would get to know and to which we would consistently return to measure our own progress, and perhaps learn a few things. What I did not anticipate at the outset was how beneficial the process itself would turn out to be.
Like other universities, we already did a lot of benchmarking. Enrollment managers know who their competitors are and, to the extent possible, keep their eyes closely trained on those institutions’ financial-aid packages, marketing practices, and application overlap. Deans and department chairs of individual schools and programs, particularly at the graduate and professional level, do the same. Research offices know the research expenditures of their competitors. Financial-affairs offices compare their endowment size and their industry ratings. And competition is the name of the game, so to speak, for college athletics programs.
But is it possible to sum up one college in a way that allows it to be fairly compared to another?
Various ranking organizations, mostly magazines, purport to sum us up. U.S. News & World Report is the most prominent, with its annual ranking of universities and colleges, a highly questionable and much-maligned effort whose shortcomings were recently cataloged in The Chronicle. If colleges had faith in such rankings, they might just defer to them to identify peers, perhaps defining a peer as a college that is within five points above and below them according to U.S. News. And yet that is not what happens. Apart from some overlap among U.S. News’s top 25 universities and colleges, there is very little overlap between the rankings and the self-reported peer choices that colleges make. It is not how most, if any, institutions choose their peers.
At Hofstra we began our process by brainstorming names of colleges that we thought were similar enough to be peers. The only constraints were that they were private universities with Division I athletics programs. I thought we should end up with five to eight universities if we were really going to get to know them, but the brainstorming session generated a list of about 50.
There was only one way to eliminate 85 to 90 percent of the list: We had to determine which metrics mattered most in defining a peer, and then measure the institutions against them. Choosing those metrics turned out to be the conversation that revealed how we define ourselves.
Some members of the senior leadership team (a group of 14 that includes the vice presidents, the chief diversity officer, and the dean of medicine) thought that location was a key metric — and that, as a suburban university in a large metropolitan area (New York City), we could not be considered similar to a rural college or a college in the middle of a big city. The number of students living on campus was key for others, because that affects the social life of students, a significant part of the college experience. Other people on the team focused on diversity as a key factor in defining the student experience and culture.
Still others believed that size mattered and that we could not compare ourselves to a college that was not within 30 percent of our enrollment, and that the ratio of undergraduate to graduate students was also key because it creates a particular academic culture on the campus. Academic culture can also be measured by the student-faculty ratio, the diversity of the faculty, the university’s research profile, and the number and types of graduate and professional schools.
Some felt that we had to include the percentage of students eligible to receive Pell Grants because it said something about our mission and affected the amount and allocation of financial aid. There was general agreement that cost was important and we should compare ourselves only to colleges that were within a certain tuition range.
As we worked through this exercise, we found that a lot of Roman Catholic colleges were on the list, and that generated a conversation about whether a nonsectarian university like Hofstra could be a peer of a sectarian college, on the assumption that the latter has a particular mission that affects other aspects of its operation.
These conversations were invaluable. As a new president who had spent her career at large public universities, I learned a lot about the culture of our midsize private university and how it fit into the larger ecosystem. I witnessed longtime colleagues exchange views about the importance of different aspects of the university and teach one another about the intricacies of their own units. Sharing such a wide variety of data and deciding which type of data was most important in defining our peers, and therefore ourselves, helped us to build consensus about our university’s breadth, strengths, and challenges. Though not intended as such, the process also turned out to be an effective team-building exercise as we engaged in a focused, data-driven discussion about what matters in describing our university.
In the end we settled on 18 metrics and measured every college on our list against them, agreeing that some metrics carried more weight than others. Some colleges on the original list were completely unaligned with those metrics; others had significant commonalities but fell well short on the metrics that mattered most. Eventually, we identified eight peer colleges.
It would be disingenuous to assert that the exercise was scientific. We used mostly Ipeds data, which provides an incomplete picture of the reality of any institution. For example, the cost of attendance can vary significantly because of tuition-discount rates, which are not reported. We also found that, in terms of enrollment, we had to be careful that we were not comparing ourselves to a university with a large percentage of online courses or degrees (the pandemic notwithstanding). To counteract those concerns, once we had a short list, we looked at university websites to corroborate some information. We were very cognizant of the need to compare apples to apples, which sometimes came at the expense of using more specific data.
We made room for gut instinct as well. In one instance, while discussing a university whose data supported identification as a peer but perhaps not as strongly as others, several participants spoke up strongly for inclusion, stating that they had been on that campus and knew people there — and it really felt like Hofstra. That university made the list.
I hope we will learn from our peer universities as we observe how they tackle the broad array of challenges facing higher education in a post-pandemic world: everything from the coming enrollment “cliff” to free-speech issues to the growing diversity of our campuses to the regulatory changes upending college sports.
But I am certain that the greatest benefit of choosing peers is the process of building consensus around key metrics. At Hofstra this exercise brought us closer as a leadership team as we shared and discussed critical data about who we are. The peer-selection process is already informing the much longer conversation, grounded in those data, about what we want to strive for in the future.