The World’s Best Philosopher of Linguistics

Yesterday while tidying my study I discovered something shocking: The world’s most brilliant, insightful, and prescient philosopher of linguistics died four months ago, and I didn’t know.


I was unwell in March, recovering from minor but painful surgery. Popping opiates like M&M’s, I would fall asleep while reading, and then lie awake in pain all night (my heart still aching from Tricia’s recent death). Yesterday I shifted a pile of papers and uncovered the March 26 issue of The Economist, open at page 61. I had dozed off before reaching the obituary on page 94. Flicking to it now, I learned the sad news. (The Chronicle announced it, like hundreds of other sources; I had missed them all.)

Hilary Putnam’s achievements in a range of distinct disciplines were enormous. He worked in pure mathematics (collaborating on a proof that no algorithmic test exists for the solvability of exponential Diophantine equations in integers, a major component of the proof that Hilbert’s 10th problem is unsolvable); in logic and computer science (devising the Davis-Putnam algorithm for determining boolean satisfiability); in semantics and the philosophy of language (holism; the Twin-Earth argument for externalism; the argument that model-theoretic semantics does not capture meaning); in philosophy of mind (functionalism and multiple realizability); in epistemology (the brain-in-a-vat Gedankenexperiment); and in metaphysics (realism vs. idealized rational acceptability). Overshadowed by all this, his observations on the philosophical problems of linguistic science went largely unnoticed. Even well-informed obituaries like this one ignored them.
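A parenthetical illustration of one item on that list: the splitting-plus-unit-propagation style of satisfiability search that descends from the Davis-Putnam procedure can be sketched in a few lines. What follows is a minimal toy in the later (DPLL-style) form, not the original 1960 resolution-based algorithm; the integer clause encoding is the standard DIMACS convention.

```python
def dpll(clauses, assignment=frozenset()):
    """Toy Davis-Putnam-style SAT search: unit propagation plus splitting.
    clauses: iterable of frozensets of nonzero ints (DIMACS convention:
    k means variable k is true, -k means variable k is false).
    Returns a satisfying set of literals, or None if unsatisfiable."""
    clauses = [frozenset(c) for c in clauses]
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment = assignment | {unit}
        new_clauses = []
        for c in clauses:
            if unit in c:
                continue               # clause satisfied; drop it
            c = c - {-unit}            # remove the falsified literal
            if not c:
                return None            # empty clause: contradiction
            new_clauses.append(c)
        clauses = new_clauses
    if not clauses:
        return assignment              # every clause satisfied
    # Split: pick an arbitrary literal and try both truth values.
    lit = next(iter(clauses[0]))
    return (dpll(clauses + [frozenset({lit})], assignment)
            or dpll(clauses + [frozenset({-lit})], assignment))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) is satisfiable:
print(dpll([frozenset({1, 2}), frozenset({-1, 3}),
            frozenset({-2, -3})]) is not None)   # -> True
# x1 and (not x1) is not:
print(dpll([frozenset({1}), frozenset({-1})]))   # -> None
```

Unit propagation forces every one-literal clause; once none remain, the solver guesses a literal and recurses on both truth values.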

Over six decades Putnam presented arguments and observations revealing his grasp of the big questions underlying linguistics. He foresaw with amazing clarity the key problems future linguistic science would confront. I have space for only three examples here (please forgive some unexplained technicalities and the absence of a bibliography).

1.  Putnam’s paper at a symposium on applied mathematics in April 1960 (“Some Issues in the Theory of Grammar”) contained a compelling discussion of the inseparability of syntax from semantics and the importance of the fact that we can understand “deviant” sentences, and included this remark: “It is easy to show that any recursively enumerable set of sentences could be generated by a transformational grammar in Chomsky’s sense.” The result he called “easy” was indeed proved by mathematical linguists — nearly a decade later, after years of toil. It is known today as the Peters-Ritchie theorem. Even for those with a good grasp of Turing machines, the reasoning is complex. To Putnam it was immediately obvious. It is an important and worrying result because a theory of grammar that permits a grammar for every recursively enumerable set of sentences is too powerful to yield any testable claims (beyond the truism that languages can be finitely characterized). Not even parsability is guaranteed.
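To give a feel for the generative power at issue: type-0 (unrestricted) rewriting grammars generate exactly the recursively enumerable sets, and even a tiny one steps outside context-free territory. The sketch below, with names of my own choosing (`generate`, `rules`), enumerates the terminal strings of such a grammar breadth-first; it illustrates the grammar class at stake, not transformational grammar itself.

```python
from collections import deque

def generate(rules, start, max_len=9):
    """Breadth-first enumeration of the terminal strings (no uppercase
    letters, i.e. no nonterminals) derivable from an unrestricted
    rewriting grammar, up to a length bound. The bound is a safe cutoff
    here because no rule in this grammar shortens the string."""
    seen = {start}
    queue = deque([start])
    terminal = set()
    while queue:
        s = queue.popleft()
        if not any(ch.isupper() for ch in s):
            terminal.add(s)
            continue
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:             # rewrite at every occurrence of lhs
                t = s[:i] + rhs + s[i + len(lhs):]
                if t not in seen and len(t) <= max_len:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    return terminal

# A textbook type-0 grammar for {a^n b^n c^n : n >= 1}, a language no
# context-free grammar can generate:
rules = [("S", "aSBC"), ("S", "aBC"), ("CB", "BC"),
         ("aB", "ab"), ("bB", "bb"), ("bC", "bc"), ("cC", "cc")]
print(sorted(generate(rules, "S"), key=len))
# -> ['abc', 'aabbcc', 'aaabbbccc']
```

Rules like `CB -> BC`, with more than one symbol on the left, are what lift such systems past context-free power; with no restriction on rule shape at all, the generated sets range over everything recursively enumerable, which is exactly why Putnam's observation is so damaging to testability.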

2.  A little-known 1963 paper of Putnam’s (“Probability and Confirmation”) considers the idea of a perfect inductive learning machine, guaranteed to learn the regularities in its environment. He showed (by diagonalization) that such a machine is impossible. The reasoning readily generalizes to the question of whether any device could reliably identify the linguistic regularities in the environment to which it was exposed, and thus learn a language perfectly from exposure to examples. Putnam thus anticipated key elements and proof strategies in Mark Gold’s 1967 paper “Language Identification in the Limit,” the most influential work ever published on the mathematical formalization of evidence-based language learning. (Gold’s paper has been misunderstood as showing that language learning without built-in innate assistance is impossible. The truth is more complex: See this discussion.)
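The flavor of that diagonal argument is easy to exhibit. In the toy sketch below (the names `adversarial_stream` and `majority_predictor` are mine, and the construction is an illustration, not Putnam’s own), any deterministic next-bit predictor is fed a stream built to contradict it at every step, so it is wrong forever:

```python
def adversarial_stream(predictor, length=20):
    """Diagonalization against a predictor. Given any deterministic
    next-bit predictor (a function from the history so far, a tuple of
    bits, to a predicted next bit), build a bit stream on which the
    predictor errs at every single step."""
    history = []
    for _ in range(length):
        guess = predictor(tuple(history))
        history.append(1 - guess)   # emit the opposite of the prediction
    return history

# Example "learner": predict the majority bit seen so far (ties -> 1).
def majority_predictor(history):
    return 1 if sum(history) * 2 >= len(history) else 0

stream = adversarial_stream(majority_predictor)
errors = sum(1 for i in range(len(stream))
             if majority_predictor(tuple(stream[:i])) != stream[i])
print(f"{errors} errors in {len(stream)} predictions")
# -> 20 errors in 20 predictions
```

The same move works against any candidate "perfect" learner you plug in, which is the heart of the impossibility result: no single machine can be guaranteed to latch onto the regularities of every environment.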

3.  In a 1967 paper in Synthese (“The ‘Innateness Hypothesis’ and Explanatory Models in Linguistics”) and a conference paper a few years later (“What is Innate and Why”) Putnam addressed Chomsky’s celebrated claims that human language learning must have an innate basis. He raised essentially all of the issues that are still under discussion today, after 50 years of debate: that the claimed similarities between human languages are either not that surprising or not that clear; that linguistic universals don’t necessarily stem from innate characteristics of the human mind; that the evidence does not establish that language learning is either intractable or independent of general intelligence; that simply asking “What else could account for it?” is not an argument. “Let a complete 17th-century Oxford University education be innate if you like,” he writes; “Invoking ‘Innateness’ only postpones the problem of learning; it does not solve it.”

(Theoretical linguists should consider Putnam’s remark on page 295 of the Piattelli-Palmarini volume: “H1 is indeed simple, but its inverse is horribly complicated.” That is a brilliant insight. I wish I had the space to explain why.)

Hilary Putnam’s death from mesothelioma last March 13 deprived us of a polymathic genius, a philosophical giant. And one consequence is that we will see no more of his insights on linguistic theory. Four months late, I deeply mourn his passing.
