It’s a truism that the more you learn, the more you realize how much you don’t know. That mentality holds in most traditional academic disciplines, where humility is a survival strategy that keeps us from stepping into ankle-breaking intellectual sinkholes known only to experts in unfamiliar fields and subfields and sub-subfields.
And then there’s Silicon Valley.
In a recent episode of the New York Times technology podcast Hard Fork, Demis Hassabis, the chief executive of Google DeepMind and a recent Nobel laureate, described his team’s progress toward creating artificial general intelligence. While the majority of the training is happening through Google’s general model, Gemini 2.5, there are many special cases that require specific data and domain expertise. “But underlying it,” he continued, “when you crack one of those areas, you can also put those learnings back into the general model. And then the general model gets better and better. So it’s a very interesting flywheel. And it’s great fun for someone like me, who’s very interested in many things. You get to use this technology and go into almost any field that you find interesting.”
As a committed generalist myself, I respect Hassabis’s breadth of curiosity. But the kind of generalism that Google, OpenAI, Anthropic, and other companies are trying to create reflects an understanding of knowledge particular to business schools and the broader executive culture of which they’re a part. We can see traces of the business episteme in a couple of the buzzwords Hassabis uses.
The first is “flywheel,” a word that entered the executive suite through one of the most widely read business books of the 21st century, Jim Collins’s Good to Great. A flywheel is a device for maintaining a steady flow of energy; picture a large metal wheel that, through its inertial heft, smooths out fluctuations in the force that goes into spinning it. The notion behind the metaphor is that success builds on success; the hardest work is in the early going, when you’re getting the flywheel cranked up. If you can, in Collins’s words, “get the right people on the bus” and build up the right energy, your business will be able to achieve a kind of momentum that’s hard to stop.
The metaphor might hold up in the commercial world, where once you’ve developed a desirable product, shored up your supply chain, figured out a sustainable way to balance revenue and expenses, and built market share, you’re theoretically in for many years of profit. But Hassabis isn’t talking about building a business. He’s talking about educating an artificial intelligence — about learning.
Or, actually, about “learnings,” which is the other word that concerns me. Much of the time, we treat learning as a verb, as in “I’m learning chemistry.” Or we treat it as a “mass noun”: a noun that isn’t really singular or plural, because it’s an abstract concept we can’t pin down to a discrete number of things, as in the phrase “her learning is immense.” Learning, in this sense, can’t be quantified, coded, counted, or put into a spreadsheet.
Making “learning” into a plural noun — i.e. truly a number of things — isn’t entirely new, but there’s a strange flavor to the way Hassabis and his executive brethren use it. The Oxford English Dictionary traces this usage back to the late 14th century, even citing Shakespeare, from Cymbeline: “The king ... puts to him all the learnings that his time could make him the receiver of.” In this sense, “learnings” are lessons or perhaps insights. The word feels archaic, fusty, self-aggrandizing.
Perhaps that’s why it’s found a natural home in the language of business. One finds, on the websites of various management programs, student blogs advertising the “MBA Learnings” that successful graduates can leverage. These are essentially listicles made out of bullet-pointed platitudes (“Failing Fast > Failing > Not Trying”) and inane tautologies (“Feedback is a gift that keeps on giving”). Cliché masquerading as knowledge.
One of the paradoxes of the Trump administration’s attack on higher education is that these supposed hotbeds of radical leftism have long been intimately woven into the global corporate order. As early as 1918, in The Higher Learning (not Learnings) in America, Thorstein Veblen objected to “the Conduct of Universities by Business Men” and saw “the pursuit of business, with the outlook and predilections which that pursuit implies” as the organizing ethos of American academe.
This partnership only accelerated through the rise of what would become known as “the science of learning,” an educational philosophy whose roots are inseparable from those of artificial intelligence. In 1991, Northwestern University hosted the first conference named for this concept of applying scientific standards to the conduct of education: The International Conference on the Learning Sciences. This new conference, however, was actually the fifth “International Conference on Artificial Intelligence and Education,” simply retitled to “open the conference to a broader audience” and, ultimately, to erase the difference between teaching machines and teaching humans.
With the rising cost of college education and the idea that learning could be scientifically optimized, only a fool would pursue the imprecision and poor return on investment of a degree in literature, or history, or political theory. How could you even know that you’d learned anything in such a course of study? What would be your “learnings” if they couldn’t be itemized and assessed?
And so under the guise of entrepreneurship and innovation, an authoritarian style emerged in our knowledge practices, in engineering and business schools, in college presidents’ offices, in corporate training, and in the underlying ideology of the self-styled “accelerationists” developing AI systems they claim will set us all free from labor. Learning simply became capital, a bunch of things that could be accumulated and leveraged into more of themselves. And as learning has been commodified and enclosed by intellectual-property law, we have experienced a gradual but profound shift from learning as the practice of democracy to learning as the power that rules over us and denies our agency as citizens. That is the real flywheel upon which we’re dizzily spinning, right into Silicon Valley’s sci-fi dystopia.
Of course, learning has another history. That would be the history of teachers performing the social knowledge work of helping students learn with one another; the political knowledge work of communities negotiating their way to shared goals. That would be the story of the emotional and cultural labor that helps us make sense of our place in the world. Much of this history happened underground: enslaved people seeking the tools (reading, writing, oratory) of freedom, Indigenous land practices, hedge schools in Ireland … I could go on and on. We don’t have time for that story here. We don’t seem to have time for that story anywhere, anymore.
I don’t know what to call the kind of person who would stand in the depths of a valley, look around, and say they’ve achieved a panoramic view as they count up their “learnings.” President? That a man as smart as Hassabis has had his understanding of that complex thing we call knowledge so corrupted by business-speak should terrify us all.
Late in the Succession creator Jesse Armstrong’s new film, Mountainhead, one of the tech CEO protagonists shouts that their recent failures have given them “a ton of big, big learnings!” What have they failed at? They have just screwed up their second attempted murder of their friend and fellow CEO Jeff, and they are resolving to get it right the third time. Mountainhead is above all a satire of the sophistic reasoning Silicon Valley elites employ to justify their pursuit of still more power and wealth, their capacity to represent any kind of social harm as merely a necessary stage in a process (“progress”) that will leave everyone better off.
Here we find the most critical difference between “learning” and “learnings”: Learning is necessarily connected to accumulated human wisdom. In line with the historical mission of universities, learning aims to preserve knowledge, even knowledge that may seem in a given moment to be antiquated or ideologically out of step with prevailing social and cultural mores. Universities have, much to their detriment, failed to hold the line on this principle, acceding to the corporate logic of learnings — sometimes in ways that may appear politically progressive on the surface.
For learnings come and go. They line up behind fashionable ideas and justify absurdities or worse. In contrast to the cumulative approach to knowledge that learning implies, learnings fuel the business logic of “longtermism,” the claim that even if some set of technological or market developments seems to be causing harm in the short term, we simply need to wait for the larger, longer-term benefits (“Failing Fast > Failing > Not Trying,” again and again). Learnings encourage a selective analysis in which we consider evidence only in the context of a strategic goal and organize our knowledge practices accordingly.
When you hear university administrators uncritically talk about their learnings — I have — it’s time to sound the alarm. When the MBAs infiltrate our education system, our capacity to understand knowledge in all its unruly, conflicted beauty is at risk.