First Person

From Technologist to Philosopher

Why you should quit your technology job and get a Ph.D. in the humanities

By Damon Horowitz July 17, 2011
Illustration by Brian Taylor for The Chronicle

How does someone become a technologist?

In my case, it happened in college. I was an undergraduate at Columbia University, reading and discussing what were once unrepentantly called “the classics.” I really wanted to understand what the great thinkers thought about the great questions of life, the human condition, the whole metaphysical stew. And the problem was: We didn’t seem to be making much progress.

The great questions of philosophy have a way of defying easy resolution. Confronting them, we all seemed like such feeble thinkers—students and teachers and dead white males alike. We make mistakes, we are prone to inconsistencies, we equivocate. This was very frustrating to an impatient undergraduate.

Happily, in my case, fate intervened—in the form of my mother telling me, in no uncertain terms, that I should take a computer-science class, because if all else failed, then I could get a job at the phone company.

So in my sophomore year I learned to program a computer. And that was an intoxicating experience.

When you learn to program a computer, you acquire a superpower: the ability to make an inanimate object follow your command. If you have a vision, and you can articulate it in code, you can make it real, summon it forth on your machine. And once you’ve built a few small systems that do clever tasks—like recognizing handwriting, or summarizing a news article—then you think perhaps you could build a system that could do any task. That is, of course, the holy grail of artificial intelligence, “AI.”

To a young undergraduate, frustrated with the lack of rapid progress on tough philosophical questions, AI seemed like the great hope, the panacea—the escape from the frustrations of thinking. If we human beings are such feeble thinkers, perhaps philosophy is best not left to human beings. We could instead just build better thinkers—artificially intelligent machines—and they could answer our questions for us.

Thus I became a technologist. I earned my first graduate degree at the Massachusetts Institute of Technology, then went on to build several start-up companies around my specialization, a branch of AI called “natural language processing,” or, more simply, “getting computers to understand what we are talking about.”

It’s fun being a technologist. In our Internet-enabled era, it is easy for technologists to parlay creative power into societal power: We build systems that ease the transactions of everyday life, and earn social validation that we are “making the world a better place.” Within a few years I had achieved more worldly success than previous generations could have imagined. I had a high-paying technology job, I was doing cutting-edge AI work, and I was living the technotopian good life.

But there was a problem. Over time, it became increasingly hard to ignore the fact that the artificial intelligence systems I was building were not actually that intelligent. They could perform well on specific tasks; but they were unable to function when anything changed in their environment. I realized that, while I had set out in AI to build a better thinker, all I had really done was to create a bunch of clever toys—toys that were certainly not up to the task of being our intellectual surrogates.

And it became clear that the limitations of our AI systems would not be eliminated through incremental improvements. We were not, and are not, on the brink of a breakthrough that could produce systems approaching the level of human intelligence.

I wanted to better understand what it was about how we were defining intelligence that was leading us astray: What were we failing to understand about the nature of thought in our attempts to build thinking machines?

And, slowly, I realized that the questions I was asking were philosophical questions—about the nature of thought, the structure of language, the grounds of meaning. So if I really hoped to make major progress in AI, the best place to do this wouldn’t be another AI lab. If I really wanted to build a better thinker, I should go study philosophy.

Thus, about a decade ago, I quit my technology job to get a Ph.D. in philosophy. And that was one of the best decisions I ever made.

When I started graduate school, I didn’t have a clue exactly how the humanities investigated the subjects I was interested in. I was not aware that there existed distinct branches of analytic and continental philosophy, which took radically different approaches to exploring thought and language; or that there was a discipline of rhetoric, or hermeneutics, or literary theory, where thinkers explore different aspects of how we create meaning and make sense of our world.

As I learned about those things, I realized just how limited my technologist view of thought and language was. I learned how the quantifiable, individualistic, ahistorical—that is, computational—view I had of cognition failed to account for whole expanses of cognitive experience (including, say, most of Shakespeare). I learned how pragmatist and contextualist perspectives better reflect the diversity and flexibility of our linguistic practices than do formal language models. I learned how to recognize social influences on inquiry itself—to see the inherited methodologies of science, the implicit power relations expressed in writing—and how those shape our knowledge.

Most striking, I learned that there were historical precedents for exactly the sort of logical oversimplifications that characterized my AI work. Indeed, there were even precedents for my motivation in embarking on such work in the first place. I found those precedents in episodes ranging from ancient times—Plato’s fascination with math-like forms as a source of timeless truth—to the 20th century—the Logical Positivists and their quest to create unambiguous language to express sure foundations for all knowledge. They, too, had an uncritical notion of progress; and they, too, struggled in their attempts to formally quantify human concepts that I now see as inextricably bound up with human concerns and practices.

In learning the limits of my technologist worldview, I didn’t just get a few handy ideas about how to build better AI systems. My studies opened up a new outlook on the world. I would unapologetically characterize it as a personal intellectual transformation: a renewed appreciation for the elements of life that are not scientifically understood or technologically engineered.

In other words: I became a humanist.

And having a more humanistic sensibility has made me a much better technologist than I was before. I no longer see the world through the eyes of a machine—through the filter of what we are capable of reducing to its logical foundations. I am more aware of how the products we build shape the culture we are in. I am more attuned to the ethical implications of our decisions. And I no longer assume that machines can solve all of our problems for us. The task of thinking is still ours.

For example, at my most recent technology start-up company (called Aardvark), we took a totally new approach to the problem of search. We created what we called a social search engine. When you have a question, we connect you to another person who can give you a live answer. That arose from thinking about the human needs that people have when asking questions. Instead of defining a query as an information-retrieval problem, and returning a list of Web pages, we treat it as an invitation to a human engagement. That humanist approach is largely responsible for Aardvark’s success with users—and for Google’s decision to acquire the company last year, to explore how this perspective might inform other traditional business problems.

So why should you leave your technology job and get a humanities Ph.D.?

Maybe you, too, are disposed toward critical thinking. Maybe, despite the comfort and security that your job offers, you, too, have noticed cracks in the technotopian bubble.

Maybe you are worn out by endless marketing platitudes about the endless benefits of your products; and you’re not entirely at ease with your contribution to the broader culture industry.

Maybe you are unsatisfied by oversimplifications in the product itself. What exactly is the relationship created by “friending” someone online? How can your online profile capture the full glory of your performance of self?

Maybe you are cautious about the impact of technology. You are startled that our social-entertainment Web sites are playing crucial roles in global revolutions. You wonder whether those new tools, like any weapons, can be used for evil as well as good, and you are reluctant to engage in the cultural imperialism that distribution of a technology arguably entails.

If you have ever wondered about any of those topics, and sensed that there was more to the story, you are on to something. Any of the topics could be the subject of a humanities dissertation—your humanities dissertation.

The technology issues facing us today—issues of identity, communication, privacy, regulation—require a humanistic perspective if we are to deal with them adequately. If you actually care about one of those topics—if you want to do something more serious about it than swap idle opinions over dinner—you can. And, I would venture, you must. Who else is going to take responsibility for getting it right?

I see a humanities degree as nothing less than a rite of passage to intellectual adulthood. A way of evolving from a sophomoric wonderer and critic into a rounded, open, and engaged intellectual citizen. When you are no longer engaged only in optimizing your products—and you let go of the technotopian view—your world becomes larger, richer, more mysterious, more inviting. More human.

Even if you are moved by my unguarded rhapsodizing here, no doubt you are also thinking, “How am I going to pay for this?!” You imagine, for a moment, the prospect of spending half a decade in the library, and you can’t help but calculate the cost (and “opportunity cost”) of this adventure.

But do you really value your mortgage more than the life of the mind? What is the point of a comfortable living if you don’t know what the humanities have taught us about living well? If you already have a job in the technology industry, you are already significantly more wealthy than the vast majority of our planet’s population. You already have enough.

If you are worried about your career, I must tell you that getting a humanities Ph.D. is not only not a danger to your employability, it is quite the opposite. I believe there is no surer path to leaping dramatically forward in your career than to earn a Ph.D. in the humanities. Because the thought leaders in our industry are not the ones who plodded dully, step by step, up the career ladder. The leaders are the ones who took chances and developed unique perspectives.

Getting a humanities Ph.D. is the most deterministic path you can find to becoming exceptional in the industry. It is no longer just engineers who dominate our technology leadership, because it is no longer the case that computers are so mysterious that only engineers can understand what they are capable of. There is an industrywide shift toward more “product thinking” in leadership—leaders who understand the social and cultural contexts in which our technologies are deployed.

Products must appeal to human beings, and a rigorously cultivated humanistic sensibility is a valued asset for this challenge. That is perhaps why a technology leader of the highest status—Steve Jobs—recently credited an appreciation for the liberal arts as key to his company’s tremendous success with their various i-gadgets.

It is a convenient truth: You go into the humanities to pursue your intellectual passion; and it just so happens, as a by-product, that you emerge as a desired commodity for industry. Such is the halo of human flourishing.
