While the return to requiring scores from an institution that assiduously avoided using the words “test optional” when announcing its test suspension isn’t surprising, it is disappointing that Schmill’s 1,400-word public explanation offers contradictory statements, vague equity claims, and no new research. The post includes 25 superscripts and footnotes that give it the appearance of a research paper, but only a handful of the notes actually link to research. Those hoping to find a convincing, data-supported argument for the tests’ value at the institution won’t find it here.
“Our ability to accurately predict student academic success at MIT is significantly improved by considering standardized testing,” Schmill writes, further stating that “the word ‘significantly’ in this bullet point is accurate both statistically and idiomatically.” The lack of any quantitative data to support this makes the statement impossible to evaluate. Additional posts and articles haven’t done much to clarify matters, as at various points the institution indicates that it both has and lacks confidence in its ability to predict student success without test scores.
Of students who don’t take a standardized test, Schmill has said that the admissions office is concerned that it “won’t have enough information to be confident in their academic readiness when they apply” and that the office “cannot reliably predict students will do well at MIT unless we consider standardized test results alongside grades, coursework, and other factors.” But he also has said that “for students who don’t have an SAT score, there was something else that gave us confidence that the students would succeed here” and “students who were accepted when test score requirements were waived had done well so far.”
More troubling than those contradictions is that the few research studies referenced in the footnotes of the blog post are far from settled fact. A University of California report that found using standardized-test scores in addition to grades helped predict undergraduate performance better than grades alone is of questionable relevance due to the university system’s dissimilarity to MIT and, more importantly, because the report has been contested by several UC researchers and by a member of the task force that issued it. Similarly, Sonia Giebel, co-author of a cited Stanford University paper and a Ph.D. candidate at Stanford’s Graduate School of Education, took to Twitter to contest the relevance of her paper to MIT’s conclusions, saying “our paper does not offer explicit conclusions about how essays factor into admissions decisions.”
MIT appears to be reiterating the trite argument that testing helps discover “diamonds in the rough,” citing research that doesn’t actually focus on the high-scoring, high-achieving students who would consider applying to MIT. Schmill’s referenced studies focus on students scoring a 20 composite on the ACT and 1060 on the SAT, well below MIT’s standard (its 25th-percentile marks haven’t been below 34/1490 in the past three years).
Not only is the research cited not particularly relevant to MIT, but the College Board and ACT show that more underrepresented students are hurt by the test than are helped by it. If MIT had data to suggest otherwise, sharing it, as the University of Missouri did when it voted recently to renew its test-optional policy, would help resolve the issue of the tests’ relevance and observed biases.
As Schmill wrote, the institution has a “unique education and culture,” and “all MIT students, regardless of intended major, must pass two semesters of calculus, plus two semesters of calculus-based physics.” Whether or not requiring admission tests is the right thing for MIT, the college is a very different place from the vast majority of institutions, where the math requirement can be satisfied with algebra, statistics, or quantitative reasoning. Most colleges do not compete with MIT for the same type of student, and the ones that do, like Caltech and Georgia Tech, are unlikely to set their policy based on what MIT does.
While the changes in MIT’s policy are significant for its 35,000 or so hopeful applicants, recent changes at several large public-university systems across the country are much more significant. Those changes will directly impact the lives of hundreds of thousands of applicants and have reverberations across the collegiate landscape.
The permanent test-free admission policy for the California State University system, following the same policy at the University of California, means that two of the five largest public-university systems in the country will not consider tests in their admissions review. Together they receive more than 24 times as many applications as MIT. These policies will lower the number of California students who bother taking the SAT and ACT and will influence decision-making at colleges in all states that enroll California high-school graduates.
Other universities are making similar moves. The University of Georgia system was one of the last state systems to adopt a test-optional policy in 2020 and was one of the first to announce, in 2021, that it would reinstate the test requirement. However, in recent weeks, most of its campuses have once again instituted test-optional policies. And during a recent board meeting in which the University of North Carolina system renewed its testing waiver, competition with peer institutions and access were raised as considerations. MIT was not mentioned.
Given the diversity of location, variety of institutions, and number of students in these university systems, their policies will have more impact on student behavior and enrollment than whatever happens at MIT or in the Ivy League. The MIT decision is neither a referendum on test optional nor a canary in the coal mine for higher education.