In August, Yale University announced a new undergraduate program in human rights. It joins numerous other human-rights programs, institutes, and clinics that have spread like kudzu across campuses in the United States and around the world. By one count, the number has increased from one in 1968 to almost 150 in 2000, with most of the growth in the 1990s. The U.N. provides links to more than 300 academic institutions that offer human-rights instruction. These programs raise serious questions about the role of political advocacy in the university.
What explains the rise of human rights on campuses? It is in part the result of international human-rights law and the worldwide human-rights movement. Most of the major human-rights treaties were ratified in the 1970s and ’80s. These treaties helped create an international system of committees, commissions, and courts that churn out reports and opinions. As a result, “human rights” has become an idiom in the diplomatic vernacular of states. Scholars in turn have put it on their research agendas. The use of “human rights” in English-language books has increased 200-fold since 1940; the phrase now appears 100 times as often as terms like “constitutional rights” and “natural rights.”
But the increasing prominence of human rights cannot by itself explain the growth of interest at universities. Something else is going on. One factor is the interest among donors, students, and professors in forging an international image, as shown by the feverish opening of campus colonies in foreign lands. In the era of globalization, a university that pursues traditional academic research in a purely national setting risks looking provincial.
Another factor is the vast ideological appeal of human rights. It’s become the lingua franca for political action, and the always-present temptation for professors and students to use the university as a vehicle for political advocacy has found its motor in human-rights law. Under the guise of teaching and studying human rights, academics can use university resources to engage in political activism.
Some programs are clear about this. The Carr Center for Human Rights Policy, at Harvard, says its mission is “to partner with human rights organizations to help them respond to current and future challenges. … Thus, we seek to expand the reach and relevance of human rights considerations to all who influence their outcomes.” The Institute for the Study of Human Rights, at Columbia, fashions itself a “leader in bridging the academic study of human rights and the worlds of advocacy and public policy.” The director of Yale’s program denies that it is a training program for human-rights advocates.
But these and other university programs frequently form partnerships with law-school human-rights clinics, and these clinics, animated by law schools’ mission to train advocates, are not shy about their agendas. According to its mission statement, Yale Law School’s Schell Center for International Human Rights “promotes activism through summer and postgraduate fellowships … and fosters human rights activities throughout Yale University.” The University of California at Berkeley’s clinic says it “is engaged in cutting-edge research, policy work, and advocacy.” The international human-rights clinic at my institution, the University of Chicago Law School, “works for the promotion of social and economic justice globally, including in the United States.”
What does it mean for law-school clinics, and other university programs, to “promote” human rights? Should the university be involved in such advocacy? Law-school clinics, like clinics in medical schools, are designed to give students practical skills that cannot be taught in normal classroom settings. Most legal clinics allow students to participate in the representation of criminal defendants and in other forms of legal advocacy, as well as to draft contracts, fill out legal forms, and interview clients. In fulfilling those roles, students both learn skills and help people who cannot afford a lawyer. While many students are motivated by political and ideological goals—many clinics, for example, assist defendants in capital cases because professors and students oppose the death penalty—the projects are rooted in the legal system, which ensures that students learn skills consistent with the pedagogic mission of the law school.
But human-rights clinics are different. They engage in a bewildering array of programs and strategies that have little in common but a left-wing orientation. These include helping undocumented migrants obtain asylum; developing a best-legal-practices guide for responding to domestic violence in Mexico and Guatemala; advocating for public housing in New Orleans after Hurricane Katrina; drafting a petition to the Inter-American Commission on Human Rights after a local government denied dialysis funding to certain immigrants; and writing reports to help Haitian residents in the Dominican Republic who suffer from political repression and discrimination. (These examples are taken from an academic article.)
Other examples include a successful effort to persuade the City Council of Chicago to recognize in a resolution that domestic violence is a human-rights concern; traveling to Congo to help ensure that mining profits are shared with citizens; teaching residents of California’s Central Valley that their rights to housing, water, and political participation have been violated, and that international institutions could be helpful in vindicating them; and issuing a report that argues that laws intended to ban sex-selective abortions in various U.S. states are actually intended to reduce the number of abortions.
In short, a human-rights clinic can do anything it wants, as long as it can argue that its project will (or is intended to) benefit (some) people. In some cases, participants do not even pretend that their projects advance human-rights law—there is no international human right to abortion, and the report (issued by the clinic at the University of Chicago Law School) does not suggest otherwise. But in most cases, a clinic can link its political advocacy to human-rights law because, thanks to the extraordinary proliferation of human rights in treaties and other legal instruments, virtually every facet of human existence is governed by one or another international human right.
How did this come about? A bit of background may be helpful. The core idea in modern human-rights law is that governments are responsible for the well-being of the people living in their territories. While this idea is actually very old, and received a significant boost during the Enlightenment, its embodiment in international law dates back only a few decades. And its development was accompanied by intense controversies—controversies that were never resolved but instead papered over for the sake of diplomatic progress.
The story begins with the Universal Declaration of Human Rights. After World War II, the victorious Allies sought to repudiate Nazism and establish the principle that governments owe obligations to their citizens. But a cleavage immediately opened up. The United States sought to enshrine the liberal democratic values that its Constitution (imperfectly) embodied, but which were utterly inconsistent with the totalitarian regime of the Soviet Union. By contrast, the U.S.S.R. advocated economic and social rights—jobs, income, health care, education, and the like. Europe and developing countries sought something of a middle path. As a compromise, the Universal Declaration contained all these rights but in vague and aspirational terms. It was not regarded as a treaty; instead, it was issued in 1948 by the United Nations General Assembly, which does not possess the authority to make law. The U.S.S.R. and a handful of other countries abstained from the vote.
Negotiations toward a treaty regime then followed two tracks. Political and civil rights were set forth in a treaty that was eventually known as the International Covenant on Civil and Political Rights, while economic and social rights were embodied in what became the International Covenant on Economic, Social, and Cultural Rights. Those treaties went into force in 1976.
Over the following decades, countries negotiated and ratified numerous other treaties that, among other things, banned torture and discrimination against women, and recognized rights for children and disabled people. Not all countries ratified all these treaties. Notably, the United States never ratified the covenant on economic, social, and cultural rights. But nearly all countries ratified nearly all the treaties, leading many scholars to claim that the body of human-rights law was effectively an international bill of rights, binding on all countries, even those that have not ratified some of the treaties.
The treaty regime has given rise to a fantastically complex set of international institutions that monitor countries’ compliance with their human-rights obligations, interpret the law, and propose reform. However, these institutions possess no enforcement power. And as a large body of empirical research has shown, many countries completely disregard their treaty obligations. Indeed, there is little evidence that human-rights treaties have improved the well-being of people or even resulted in respect for the rights in those treaties.
There are a number of reasons for this. Merely by ratifying a treaty, a country with a poor human-rights record does not gain an incentive to improve that record. Domestic institutions like courts rarely enforce treaties even in developed countries, and the courts in countries with poor human-rights records rarely have the independence or power to do so. Treaties may help motivate domestic groups, dissidents, and others to pressure their governments to improve human-rights practices, but those groups are typically well motivated before any treaty is ratified.
The real impetus for complying with a treaty comes from foreign countries—mostly, Western countries—which use aid, diplomatic pressure, and other carrots and sticks. But Western countries have limited resources and other priorities, so they do not, in fact, devote significant effort to forcing other countries to improve their human rights, and indeed do not always respect human rights themselves.
But there is a deeper explanation. The cleavage between the United States and the Soviet Union over political and economic rights was just one of many disagreements about what the human-rights regime should look like. The United States and Europe also differed. Europeans reject the strong emphasis by the United States on political and civil rights—above all, the strict protections that Americans give to free speech. The European view is that speech must be balanced against other considerations, like public morality and order; that is why Europeans are comfortable with hate-speech laws, censorship of Nazi-related advocacy, and laws that expunge search-engine results that violate people’s privacy. At the same time, Europeans believe that the death penalty and America’s other harsh penal practices violate human rights.
Europeans also put a great deal more emphasis on social, economic, and cultural rights than Americans do. And the developing countries go further than the Europeans. Many of them, led by China, advocate a “right to development,” which allows countries to give priority to economic growth and poverty reduction over political rights. They have also advocated a “right to security,” which gives priority to crime prevention over civil rights. Islamic countries have campaigned for a right against “defamation of religion,” which would bar types of expression that offend religious groups. Indeed, many Islamic countries see no contradiction between human rights and conservative interpretations of Islamic law—and so it is possible for Saudi Arabia to ratify the Convention on the Elimination of All Forms of Discrimination Against Women, and at the same time to prohibit women from driving.
As one examines the controversies over human rights, one realizes that the treaties resolve hardly any disagreements at all. There are a number of reasons for this. First, treaties rarely define rights with any specificity; that allows countries to advance narrow or expansive definitions as they please. Second, many rights—like that of freedom of expression—are explicitly limited by other considerations, like public morality and order. Third, the treaties contain hundreds of rights without providing any guidance as to how rights should be traded off. Yet all countries have budget limitations that prevent them from ensuring that all rights are respected, and so they pick and choose. For example, a poor country might realize that trying to prevent the local police from torturing people is very difficult, if not impossible—it would require monitoring and firing police officers, raising salaries, retraining, and so on—and so it would be better to use that money to build schools or medical clinics. It’s hard for outsiders to criticize that choice.
The upshot is that the treaty regime is a failure, but the issue of human rights lives on, indeed flourishes, as “discourse.” Government leaders, bureaucrats, lawyers, advocates, NGOs, dissidents, scholars, and everyone else argue about everything in terms of human rights, confident that any particular policy they seek to advance can be grounded somewhere in the vast nebula of human-rights talk.
It should be clear by now why university programs and law-school clinics that “promote human rights” are pretty much free to advocate any political position—aside from promoting fascism or genocide—because virtually anything is compatible with some subset of human rights. It would be better, or at least more candid, to recognize that human-rights programs are simply vehicles through which students and professors engage in political activism. A human-rights clinic is not merely attempting to enforce the law—as, for example, a law-school immigration clinic does by giving legal advice to noncitizens—though sometimes it does that too. Human-rights programs exploit just this ambiguity: They represent themselves as enforcing international law when, because international human-rights law is infinitely capacious, they are able to advance whatever political cause inspires them.
And that raises a number of questions. Law students receive credit for their work in human-rights clinics, and no doubt undergraduates receive credit for doing work in human-rights programs that are little different from advocacy. Is it appropriate for the university to award credit to students for engaging in political advocacy? An argument could be made that political advocacy has pedagogical value. That is possible. But if the goal is political, the pedagogical value is secondary, and likely to be compromised.
Similarly, is it appropriate for the university to award teaching credit to professors for engaging in political advocacy? I think nearly everyone would say no. Professors should teach and do research, not corral students for a favored cause.
Finally, there is the question of whether the political advocacy performed in human-rights programs actually does any good. Do students who travel to Congo in order to ensure that mining profits are distributed fairly actually ensure that mining profits are distributed fairly? Do reports on Cambodian garment factories actually improve workplace conditions and wages in those factories? Do advocacy efforts to improve public housing in New Orleans actually result in new housing units, and, if so, is it at the expense of some other impoverished group with a more legitimate demand on the city budget? Will the Chicago City Council’s resolution on domestic violence actually reduce domestic violence? Human-rights websites proudly display reports, articles about meetings with officials, interviews of clients, and recaps of travel abroad to exotic places, but we rarely learn what happens next.
Eric A. Posner is a professor at the University of Chicago Law School. His new book, The Twilight of Human Rights Law, is published by Oxford University Press.