It is impossible to ignore the dangerous and growing structural inequalities in our society. These have long been central concerns of the left; they have now also become concerns of many elements of the center and right, animating populist sentiments across the spectrum and thrusting themselves into the 2016 presidential election. The various and growing social and economic chasms that separate so many Americans betoken a loss of a common culture, or even the possibility of one, and a loss of access to the dignity of work, affecting huge portions of our society. Although these inequalities have a great many sources, those of us who work in higher education need to consider what contribution we might be making to this state of affairs. Specifically, we should consider the adverse effects, some of them highly ironic, of our society’s commitment to an ideal of meritocracy.
That is a troubling thought. There is no more quintessentially American ideal than the belief that no one’s prospects in life should be determined by the conditions of his or her birth, and that individuals should advance strictly on their merit and not because of any other external advantage. From its beginning, America banished titles of nobility and other hereditary distinctions that had long been characteristic of European aristocratic society. Even if we too often honor that belief in the breach rather than the observance, we do continue to honor it, and periodically redouble our efforts to honor it more fully. But even the most equality-affirming polity has to find a way to select the most talented and effective leaders while remaining true to its professions of egalitarianism. How can that be done?
For republican America, the answer had to be sought in what Thomas Jefferson and John Adams called a “natural aristocracy”: Those with demonstrable talents and experience, those seen to possess the skills, knowledge, and character requisite for “the best,” would be those most deserving of high standing and high responsibilities.
In theory, rule by merit would simply provide equal opportunity for individuals’ natural ability to shine, and permit individual merit to emerge from even the most unpromising conditions of birth. It would be an aristocracy, meaning rule by the best, the aristoi, but it would accomplish this by observing the democratic canon that advancement must be open to all. In fact, because of its openness to new talent wherever it appears, a true meritocracy could plausibly claim to be more fully the “rule of the best” than any hereditary aristocracy.
But as the historian Joseph F. Kett has argued, there are at least two strikingly different ways in which “merit” has been understood in American history. The founding generation itself thought in terms of what Kett calls “essential merit,” merit that rests on specific and visible individual achievements that in turn reflect an estimable character, quite apart from the individual’s social “rank.” “Merit” was the quality that propelled achievement and revealed “essential character.” Those who did the achieving were known as “Men of Merit,” a term one frequently encountered in the period’s writings and speech, as in this 1778 Fourth of July oration by a South Carolina politician, David Ramsay, an ode to the new nation as a land of opportunity: “All offices lie open to men of merit, of whatever rank or condition; and the reins of state may be held by the son of the poorest man, if possessed of abilities equal to the important station.”
But soon a different way of understanding merit began to emerge, an ideal Kett calls “institutional merit.” This new form of merit was concerned not with questions of character but with the acquisition of specialized knowledge — the knowledge that could be taught in schools, tested in written examinations, and certified by expert-staffed credentialing bodies. This new approach to the demonstration of merit manifested itself, Kett argues, in the proliferation of “precedent-laden legal briefs, peer-reviewed scientific articles, command of the principles of military organization learned in war colleges, and diplomas from [reputable] educational institutions.”
The testability and measurability of this form of merit are of key importance. We see precisely this understanding of the term at work today in the title of our National Merit Scholarship. This is among the most prestigious awards open to high-school students, and colleges and universities compete energetically to attract such students and recruit them into their freshman classes. But it is an award based entirely on a student’s getting a high score on a national standardized multiple-choice aptitude examination, with all the advantages and disadvantages that such standards of mass commensurability bring to the educational enterprise.
So institutional merit generally refers to the assessment of promise and aptitude through testing rather than through a record of achievement. Despite the evidence that such testing favors test-takers from particular cultural and educational backgrounds, its growing influence has been the guiding spirit behind the proliferation of standardized and quantitative evaluation meant to sort out and rank “objectively” the capacities of otherwise unranked and undifferentiated individuals.
And thus we have come into the reign of what has come to be called “meritocracy.” The term itself is relatively new, dating back to the British sociologist Michael Young’s 1958 dystopian satire, The Rise of the Meritocracy; but the practice it designates is very old, extending back to the Han Dynasty of the second century BC, which used written examinations to evaluate and rank officials in the imperial civil-service bureaucracy.
Young understood that such practices were also particularly well suited to the needs of modern society, in which the old ways of assigning place by ascription or inheritance or patronage would no longer serve the purposes of rational functionality and optimal efficiency. In Young’s “fable,” set in the future, a series of educational reforms in the late 20th century had established the principle of “merit” as the primary means of social sorting, and the ultimate criterion for advancement into the courts of political authority. And “merit” above all else meant the possession of a sufficiently high IQ, a high score on a standardized test. Young’s meritocracy was the fulfillment of Plato’s dream of philosopher-kings, engineering the means by which political and cultural power could be turned over to a cognitive elite. An aristocracy, yes, but to all appearances a thoroughly natural one.
But as Young knew would be the case, and as we see happening in our own times, the meritocratic idea has proved highly problematic, particularly in a society that has not abandoned its democratic ideals and aspirations. In Young’s fiction, the “losers” in the meritocratic race, those wretched souls who had been systematically deprived of any plausible excuse for their failures by the impersonal and procedural “objectivity” of the tests by which they were excluded, were naturally consumed with resentment and shame. They eventually engineered a successful revolt in the year 2034.
The rebels were referred to by Young as the “populists,” and they comprised an odd alliance of lower-class men who had been relegated to menial work and upper-class women who had become imprisoned by the imperative of raising high-IQ children. These two very different dissident groups had in common a yearning to free society from its dependence upon “a mathematical measure” of all things — a measure that they believed had the effect of systematically penalizing them. And in the end, they brought the meritocracy down. In Young’s telling, the meritocracy brought about its own demise through its internal contradictions, generating the discontents that made its downfall inevitable.
Could something like this be happening in our own time? The political turbulence affecting both political parties in the 2016 presidential election, as well as the rising antagonisms directed toward the highly meritocratic postnational elites who run the institutions of the European Union, suggests that it might very well be. At a minimum, though, we have arrived at a point where the possibility that our meritocracy might in the long run be incompatible with any meaningful sense of democracy has become impossible to ignore.
It’s an intrinsic problem, anticipated and summed up with great concision by the sociologist Daniel Bell, in a 1972 essay, “On Meritocracy and Equality”:
There can never be a pure meritocracy because high-status parents will invariably seek to pass on their positions, either through the use of influence or simply by the cultural advantages that their children inevitably possess. Thus after one generation a meritocracy simply becomes an enclaved class.
He might well have added that assortative mating practices, which tend to bring members of the cognitive elite together in marriage and family formation, will in the fullness of time only increase the divide between their offspring and the rest of the population. In addition, the families in which such children are raised will likely impart to them the habits of reading, study, learning, and discussion, as well as a fairly uniform set of social skills and approved attitudes, tastes, and political views, thus bestowing upon them a cluster of advantages that those without them will find hard even to comprehend, let alone duplicate. Thus does the meritocratic elite glide into being an enclaved elite, one that can claim with utter sincerity, even though the system of selection is absurdly skewed, that it is still a genuinely meritocratic elite. And so it will seem, for the most part, to the young people who, after all, have generally worked very, very hard to gain entrance into Yale or Harvard. But it will not seem so to those for whom even the formulation of such impossibly remote goals would be inconceivable and, sadly, futile.
This growing tendency toward the erection of harder and higher barriers separating the new cognitive elite from the rest of humanity is a variation on the same theme that Young sounded nearly 60 years ago, and it is reflected in a great deal of the most penetrating social commentary of our own time. Charles Murray’s Coming Apart, Robert Putnam’s Our Kids, Bill Bishop’s The Big Sort, Angelo Codevilla’s The Ruling Class, Joel Kotkin’s The New Class Conflict, J.D. Vance’s Hillbilly Elegy: All of these recent books, and many others, concern themselves with the ways that the clustering of the like-minded, and particularly the self-segregation of the best-off and most flourishing enclave elites and their insulation from the diminishing prospects of much of the rest of middle-class and working-class America, has gravely weakened our common life and wrought havoc with our politics.
To blame the concept and practice of meritocracy itself for these large developments surely oversimplifies the matter. But its critical role in providing the metrics that guide our unprecedented way of sorting out and segmenting our population seems beyond dispute. We have to face the uncomfortable fact that meritocracy, while highly democratic in its intentions, has turned out to be colossally undemocratic in its results.
And when one considers the steep decline of opportunity for those Americans who must live outside the magic circle of meritocratic validation — those middle- and working-class Americans who must deal with the steady erosion of unskilled and semi-skilled jobs, the downward pressure on wages and employment caused by the steady export of jobs and steady import of immigrants competing for the diminishing number of low-skill jobs that remain, and the open condescension with which such people’s anxieties and fears are regarded by meritocratic elite culture — it is not surprising that a growing edge of bitterness and anger, even rage, has crept into what passes for our national discourse.
As Murray has observed, “During the past half-century of economic growth, virtually none of the rewards have gone to the working class. … The real family income of people in the bottom half of the income distribution hasn’t increased since the late 1960s.” That state of affairs cannot stand for much longer without eventually provoking a strong reaction, of which what we have already seen is perhaps only a foretaste.
Those who dwell inside the magic circle of validation can still continue to entertain the view that it was not their advantages, but their own meritorious effort, that placed them there.
But in fact many of them do suspect the more complex truth, in ways that gnaw at them and manifest themselves in surprising ways. A personal anecdote is perhaps not out of order here. I gave a public lecture at Middlebury College in early May, a wonderful experience that also allowed me to spend a lot of time talking with faculty and students about the eruption that occurred on that campus in March, when Charles Murray was invited to speak there. I was especially impressed by my conversations with students. In addition to their thoughtful ambivalence on the subject of free speech and its limits, it was not hard to detect a powerful undercurrent of other, more complicated feelings at work in their souls. One of them told me, movingly, that he believed there was a powerful streak of what he called “self-hatred” operating in his fellow students, a sense of overpowering guilt over the advantages they possess, as privileged individuals who have gained entry into this Vermont paradise of talent, intelligence, and storybook beauty. I am sure he is right about that, and it is not surprising that students should feel that way, notwithstanding the hard work that got them there. After all, Middlebury enrolls more students from the top 1 percent of the nation’s income distribution than from the bottom 60 percent. Those numbers speak for themselves.
What is to be done? Can we recover an understanding of meritocracy as something much looser, something less systematic and less centrally administered, something less obsessed with credentials and aptitudes, but that instead seeks to preserve the simpler aspiration that “all offices lie open” to those willing to seek them?
Furthermore, we should consider whether the colossus of American higher education as it now exists is really such an effective and efficient sorting mechanism for the staffing and administration of a society. Leave aside the question of whether those of us who work in higher education actually want it to be such a sorting mechanism, rather than a ladder of opportunity and a source of world-expanding enlightenment and beauty, and whether its operation as such a mechanism constitutes a violation of its essential character. Ask a simpler question: Does it actually do the job well? Do the results justify the extravagant, even crippling, expense of college and professional education, at a time of extremely scarce entry-level employment for recent graduates, a time when the aggregate student-loan debt in the United States (as of the first quarter of 2017) has reached $1.4 trillion, with roughly eight million debtors in default — surely a ticking time bomb?
Our meritocratic ideal was not always so dysfunctional. There was a time well within the memory of many living Americans when one’s advancement in life was not so heavily determined by the credential of where, or even whether, one attended college. Among the greatest of America’s 20th-century presidents — and one of the most literate and historically informed presidents since the time of the founders — was Harry S. Truman, whose lack of a college degree did not impede his rise to the highest office in the land. By way of contrast, one of the more curious episodes in the early part of the 2016 presidential campaign was the intense controversy over whether Scott Walker, Wisconsin’s governor, should be considered unqualified for the presidency, notwithstanding his extensive experience in public life, because he had never completed college. Such a concern could take on such disproportionate importance only in a credentials-mad society.
That less organized and more decentralized America of Truman’s day had many faults, and I do not for a moment wish to ignore them, or romanticize them away. But in fact the worst of its faults was its failure to extend equally to all Americans the opportunities that a Truman enjoyed. That is a fault that only serves to confirm the worthiness of the ideal itself, the ideal of merit earned by accomplishments made possible in an atmosphere of openness and opportunity.
We need to find ways to restore and preserve a less regimented, less class- and status-stratified, less school-sorted, more open-ended America, one less attentive to meritocratic pedigree and more respectful of men and women of all stations and educational levels. Why could we not restore the practice of bringing talented and ambitious young people into professions such as the law through apprenticeships, as was done in the era of the founders, instead of insisting that they expend hundreds of thousands of dollars for a law-school credential that means less and less with each passing year, and only serves to delay their entrance into the work force and the productive life of the community? Why could we not do the same with engineers, accountants, teachers, health-care professionals, and the like? Could not such changes move us back in the direction of a restoration of essential merit?
Critics may say that such suggestions are an exercise in unrealistic nostalgia, and that the needs of our technologically advanced civilization, with its constant drive toward innovation and its dependency on various forms of highly specialized knowledge, dictate the way that our educational institutions must operate, as agencies of work-force preparation. In this view, meritocracy, with all its problems, is baked into our cake. Such a view echoes the technocratic ideal, which envisions the guidance of society by organized and accredited experts, who in turn direct an enveloping web of bureaucrats, engineers, civil servants, doctors, and other professionals and technicians trained in the universities and employed by a vastly enlarged and all-embracing state apparatus.
Such an ideal has its virtues, but they exist in enduring tension with the inherent messiness of democracy, which also has its virtues. In this respect, the problem of expert knowledge closely resembles the problem of meritocracy; both need to be challenged and ventilated by forces beyond them. We need and want expert knowledge, in an ever-flowing stream. But we also need to acknowledge that the operation of democratic public opinion provides a legitimate counterforce, in fact an essential counterpoint, to the pronouncements of specialized communities of experts, which even in the best of circumstances may reflect limited perspectives and parochial interests. We need such counterforces for the very same reason we need civilian control of the military.
We also need them because expert knowledge is not always a sure guide, and, like the military, it may need to be directed toward ends larger than itself. Let me give two different examples of this.
The first concerns science. Early in his administration, President Obama promised to “restore science to its rightful place,” and decried the fact that, in the Bush years, “our government has forced what I believe is a false choice between sound science and moral values,” when in fact, “the two are not inconsistent.” Unfortunately, though, it is not hard to think of ways that science has been, and is likely to be, both entirely “sound” and completely oblivious to “moral values.” That was precisely the point of the physicist J. Robert Oppenheimer’s famous admission that the pursuit of the atomic bomb had been a task too “technically sweet” for scientists to resist. “You go ahead and do it,” Oppenheimer confessed, “and you argue about what to do about it only after you have had your technical success.” Given the universality of such all-too-human motivations, it is foolish to imagine that there will not be many more occasions, in laboratories and clinics across the land, in which the “false choice” Obama decried is an all too real one, and human ends are in danger of being subordinated to technically sweet (and career-advancing) means. We do not have to conjure up images of renegade scientists out of James Bond novels; such occasions will turn up in the regular course of respectable scientists’ work. Insight as to “what to do about” technical successes will be needed, and will need to come from elsewhere.
Second is a situation from which we are still emerging, in which the putative experts turned out not to be experts at all. No social-scientific field has been more confident of its scientific accuracy than economics, and none has been filled with a greater sense of self-importance. Then came the global financial crisis of a decade ago. “Why,” demanded an exasperated Queen Elizabeth of British economists, “did no one see the crisis coming?” A panel of experts assembled by the British Academy offered the following answer to her question:
Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. … [It] was principally a failure of the collective imagination of many bright people.
But there is more. Robert Shiller, a professor of economics at Yale University who had advised the Federal Reserve Bank of New York in the years leading up to the Great Recession, confessed that he had kept quiet about the dangerous housing bubble he thought he saw developing:
While I warned about the bubbles … I did so very gently, and felt vulnerable expressing such quirky views. Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated.
In other words, the “communities of the competent” sometimes behave more like communities of the conforming. What Shiller describes with admirable honesty is a situation that was rendered intellectually and morally dangerous not by the fact of expert consensus itself, but by the fear — including a self-interested fear of being fired — that dissenting views could not be safely aired. We see this scenario enacted again and again regarding issues of great public moment. Nothing is more corrosive of the true spirit of science and truth-seeking than an attitude of bullying dogmatism. Nothing is more stultifying to thought than the forced imposition of an intellectual monoculture.
Experts should have a voice in our democracy, and a resounding and respected one. But they cannot speak ex cathedra or expect to be automatically funded, let alone obeyed, merely for the meritocratic letters after their name, or the meritorious sheepskins on their wall, or the professional associations to which they belong. They also have to persuade, to speak a public language, to bring themselves up (or down) to the level of the democratic bar and make their case patiently and respectfully, in a way that passes muster with their fellow citizens. An impenetrably technocratic top-down solution to social or economic problems that does not meet that standard, and does not seek the informed consent of the governed, is unacceptable — not only because it betrays hubris, but because it betrays the very idea of a democratic republic, however “democratic” its avowed ultimate intent.
We cannot and should not deny the power of specialized knowledge in all spheres of inquiry. That would be madness. But we also should not exaggerate that power, and allow it to colonize our lives. We should not forget that specialized knowledge is always a means, never an end. And we should be careful to preserve room for those free and unaccredited voices, with their barbaric yawps sounding over the rooftops of the world, ungainly, unhousebroken, unvetted, and oblivious to the prevailing consensus. They just might be speaking the truth.
Wilfred M. McClay is the director of the Center for the History of Liberty at the University of Oklahoma, where he also teaches in the departments of Classics and Letters and History. He presented a version of this essay in May at the Conservative-Progressive Summit at Grand Valley State University’s Hauenstein Center.