To the Editor:
In "A Study to Measure Value of Community Colleges Falls Short" (The Chronicle, October 18), Donald Heller, dean of the College of Education at Michigan State University, criticized a report written by two of us that relies heavily on wage data from the company PayScale. We believe that Heller overstates the weaknesses of the data used in the report.
Although the report, "What's the Value of an Associate's Degree?," may have created a false sense of precision in our estimates of lifetime earnings and return on investment, our analysis makes it clear that students graduating from some of the institutions studied are experiencing far higher monetary rewards than others.
Heller raises five concerns about the data, which we'll address here:
Availability of PayScale data. While it is true that the full PayScale data set is not openly available, there are two compelling reasons for this. One is that PayScale must protect the privacy of the users who have shared their data. The second is that PayScale is a private, for-profit company; opening up the complete data set would undermine its ability to do business.
That said, PayScale has compiled a unique data set that addresses the growing concern about the return on investment from a college education. The earnings data that PayScale produces and rigorously validates are an important piece of the puzzle when examining the value of education. The data set is not perfect (none is), but it is the broadest, most inclusive data set available for measuring higher-education return on investment.
Furthermore, PayScale is working to make its data more accessible to colleges, researchers, and policy makers. Indeed, "What's the Value of an Associate's Degree?" is an example of how its data are being made available to shed light on the success of graduates from more institutions than has heretofore been possible.
Representativeness of PayScale samples. Given the wide spread in pay depending upon one's major, we understand that over- or underrepresenting a given major group can bias the median pay reported for a college. However, for each institution included in PayScale's reports, the breakdown of majors is compared with the number of completions reported by IPEDS, the federal Integrated Postsecondary Education Data System. In most cases no systematic bias is observed, and in the few cases where one exists, the sample is adjusted to account for it.
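The adjustment described above can be illustrated with a minimal sketch. PayScale's actual methodology is not public, so the function below is hypothetical: it blends per-major pay figures using each major's share of completions as reported to IPEDS, so that an over- or undersampled major does not skew the college-level number.

```python
# Hypothetical sketch of the majors adjustment described in the letter.
# PayScale's actual method is not public; this simply reweights per-major
# pay by each major's share of IPEDS-reported completions.

def reweighted_pay(sample_pay, ipeds_completions):
    """sample_pay: major -> median pay observed in the survey sample.
    ipeds_completions: major -> number of degrees the college awarded (IPEDS).
    Returns a completions-weighted blend of the per-major pay figures."""
    common = set(sample_pay) & set(ipeds_completions)
    total = sum(ipeds_completions[m] for m in common)
    return sum(sample_pay[m] * ipeds_completions[m] / total for m in common)

# Example: the survey oversamples engineers relative to what the college
# actually graduates, so the blended figure leans toward philosophy pay.
pay = {"engineering": 70000, "philosophy": 45000}
completions = {"engineering": 100, "philosophy": 300}
print(round(reweighted_pay(pay, completions)))  # -> 51250
```

In this illustration an unweighted average of the two majors would suggest $57,500, while weighting by actual completions yields $51,250, closer to what a typical graduate of this hypothetical college earns.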
College sample size. As is well known, it is not necessarily the size of a sample that matters but its quality. Therefore, PayScale does not place a strict requirement on sample size for a college's inclusion in its higher-education reports. Instead, an error band reflecting the level of uncertainty is calculated around the median pay figures; institutions with an error greater than 10 percent are not included. This band factors in both sample size and spread in pay. Together, this error calculation and the per-school analysis of majors help PayScale report median pay values that are representative of the true population.
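One way to see how such a band can fold in both sample size and spread in pay is a bootstrap sketch. PayScale's exact error calculation is proprietary, so everything below is an assumption for illustration: resampled medians give a band that widens with small samples and volatile pay, and a college is excluded when the band exceeds 10 percent of its median.

```python
# Hypothetical sketch of an error band around a median pay figure; PayScale's
# exact calculation is proprietary. A bootstrap band naturally widens with
# both small sample size and high spread in pay, the two factors the letter
# names, and a 10 percent threshold drops colleges whose band is too wide.
import random
import statistics

def median_error_band(salaries, n_boot=2000, seed=0):
    """95 percent bootstrap band around the median of the salary sample."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(salaries, k=len(salaries)))
        for _ in range(n_boot)
    )
    return medians[int(0.025 * n_boot)], medians[int(0.975 * n_boot)]

def include_college(salaries, max_relative_error=0.10):
    """Keep a college only if its band stays within +/-10% of the median."""
    med = statistics.median(salaries)
    lo, hi = median_error_band(salaries)
    return max(med - lo, hi - med) / med <= max_relative_error

# A large, tight sample passes; a tiny, widely spread one is excluded.
print(include_college([50000] * 30))                    # True
print(include_college([20000, 100000, 30000, 90000]))   # False
```

The point of the sketch is the one the letter makes: a modest sample with consistent pay can clear the bar, while a larger but highly dispersed sample might not.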
Self-reported salary figures. Heller is concerned about PayScale's model of data collection, asking, "Do [survey participants] round up to the nearest $5,000? $10,000? Do they inflate the figure in order to make their alma mater look a little better?" However, PayScale would not have a business if it were not able to validate the accuracy of the data it collects.
First, PayScale does not offer incentives to complete a survey. Since the main motivation for completing a PayScale survey is to find one's labor-market price, there is little reason to lie or to inflate figures to flatter one's alma mater.
Second, for the limited attempts at fraud that occur, there is a set of automatic and manual validity tests that each survey undergoes. The data are also regularly compared with other sources of compensation data, both publicly and privately available, and no systematic bias has been observed. In fact, there are strong correlations when doing apples-to-apples comparisons.
College vs. major. Heller argues that "the greatest variations in earnings are not among colleges, but within individual colleges—with the differences driven primarily by students' majors and occupational choices."
To a large extent, we agree. As a result, PayScale publishes reports examining alumni earnings not only by institution but by major as well. However, not all students applying to college have chosen a major, and thus a general ranking of future potential earnings may be useful when narrowing down college choice. And overall earnings by institution are still important, as tuition does not currently differ across programs of study. Whether you are a computer-science major or a philosophy major, you pay the same tuition.
Unfortunately, the federal government is not yet collecting wage data, and neither are most colleges. PayScale data sets have limits—but so would any linked wage/student data that the government would eventually release. So, do we use and improve upon existing paths to measure postgraduation employment, or do we wait until some hypothetically perfect data set comes into being?
While this point is debated, millions of students each year are making decisions about what colleges to attend and how much to pay (and borrow) without sufficient information about the likely returns on their investment.
Our goal in reporting the variation in wage outcomes was not only to help students avoid bad investments but also to encourage institutions to serve their students better. Collaboration among the data providers, colleges, and policy makers can only help students who are facing difficult challenges when it comes to making smart choices about their futures.
Mark S. Schneider
Vice President and Institute Fellow
American Institutes for Research
Lead Economist and Director of Analytics
Jorge Klor De Alva
Nexus Research and Policy Center