Jonathan Haidt says he knows what’s driving the teen mental-health crisis: addictive, distraction-laden, horror-filled apps like Instagram and TikTok. That’s the thesis of the New York University social psychologist’s book, The Anxious Generation, an instant sensation in parenting circles and a New York Times bestseller this spring.
Then a study appeared in a scientific journal. Chris Ferguson, a psychologist at Stetson University, in Florida, reported that he’d analyzed dozens of experiments and reached a conclusion that undermined Haidt, whom he named. “Reducing social-media time,” he announced on X, formerly Twitter, “has NO impact on mental health.”
Haidt responded by tearing into what he called the "fundamental flaws" of Ferguson's study. The science did support logging off, Haidt argued on his Substack in August, citing an analysis of his own, led by a research assistant in the Czech Republic. "Experiments consistently show benefits from reducing social-media use" if they last long enough, Haidt fired back on X.
The war was on.
In the debate over whether technology is harming Generation Z, The Anxious Generation’s recommendations — no social media before 16, no smartphones before high school — are gaining traction with parents (and celebrities like Prince Harry and Oprah Winfrey). Haidt is being cited in legislative efforts to keep minors off social media in Alabama, Florida, and Australia, and schools are banning phones, another favorite reform of his.
But scholars who study how technology affects youth and well-being — topics on which Haidt has done almost no academic research — accuse him of exploiting parents' fears. Many other researchers have failed to find persuasive evidence that social media is a major factor in mental-health outcomes. The Anxious Generation, critics say, conflates correlation with causation: Just because smartphone use rose over a period of growing teen depression and anxiety, "the great rewiring of childhood" is not necessarily "causing an epidemic of mental illness," as the book's subtitle asserts. Mark Zuckerberg, Meta's CEO, has noted that research "has not shown a causal link," even as dozens of states have sued Meta, accusing it of making Facebook and Instagram addictive and harmful to children.
Haidt and Ferguson’s clash, in other words, had real-world implications for policy, industry, and parenting. Others jumped into the fray with their own arguments: a graduate student at the University of Connecticut, a Johns Hopkins University psychologist. Their debate — in which all four ultimately revised their positions to some degree — unfolded in journals and newsletters, on blogs and X and preprint servers, demonstrating how, in today’s information ecosystem, peer review never really ends. Every scientist’s voice can be heard.
But the loudest voice in this debate is Haidt, whose book has been a bestseller since it was published in March and who has more than 430,000 followers on X. (The other three scientists have, as of this writing, a combined 16,600.) After his multi-part rebuttal to Ferguson, which went out to his 100,000-plus newsletter subscribers, skeptics have questioned whether Haidt is acting like an academic committed to transparent dialogue or like a celebrity wielding his academic credentials to hawk a product. "It's not really the social-media debate anymore, it is The Anxious Generation debate," Ferguson told me. "You can't have this debate without immediately thinking about Jonathan Haidt — it's just him. And I'm always scared whenever any one person becomes the name associated with a research field. It's never good."
Haidt, for his part, said he was bewildered by the backlash. All he was doing, he told me, was "trying to work everything out in public."
The Anxious Generation rattles off a barrage of frightening statistics. In the United States, from 2010 to 2021, depression rates rose 145 percent in teen girls and 161 percent in teen boys. Anxiety rates rose 139 percent among young adults during that period, too, and rates of suicide and of emergency-room visits for self-harm increased among adolescents. Haidt cites similar spikes in Canada, the U.K., Australia, New Zealand, and Scandinavia. “There was little sign of an impending mental-illness crisis among adolescents in the 2000s,” he writes. “Then, quite suddenly, in the early 2010s, things changed.”
What changed, Haidt posits, was the birth of the "phone-based childhood." The years 2010 to 2015 ushered in Instagram and iPhones with front-facing cameras. Facebook and Twitter posts were being liked and retweeted and push-notificationed into virality. By 2016, surveys showed, 79 percent of American teenagers owned a smartphone, and one out of four reported being online "almost constantly" — numbers that have only climbed since. Because young people got sucked into cyberspace, Haidt says, they lost sleep and their ability to concentrate, stopped playing with friends in person, and got addicted to their screens. Girls, he adds, are especially vulnerable to social media's harms. "The only plausible theory I have found that can explain the international decline in teen mental health," Haidt concludes, "is the sudden and massive change in the technology that teens were using to connect with each other."
Leading up to and after The Anxious Generation’s release, Haidt laid out this theory in his newsletter, social-media posts, interviews, and an army of Google Docs that he continually updates and invites others to contribute to. Before then, Haidt, a professor of ethical leadership at NYU since 2011, was best known for his research on the psychology of morality, moral emotions, and political polarization. Ferguson told me that he admires Haidt for warning that free speech is under threat in higher education, as he did in The Coddling of the American Mind, a 2018 book written with the president of the Foundation for Individual Rights and Expression. Haidt also cofounded Heterodox Academy, a nonprofit aiming to foster ideological diversity.
But on the topic of screen time, Haidt has “gotten far, far ahead of the data,” Ferguson, who studies media’s effects on behavior, wrote this spring.
While the U.S. Surgeon General has warned that social media can “pose a risk of harm” to children, analyses of multiple studies show the correlations between technology use and mental-health problems to be small or mixed. Experiencing stressful life events, having a family history of mental disorders, and having a poor relationship with your parents are all stronger risk factors for depression.
Other factors may also complicate the picture. Gen Z doesn’t consider internal struggles taboo like their elders did, and depression and anxiety rates that are based on self-reports, not clinical diagnoses, could be skewed. So could studies based on self-reported screen time. Changes to medical record-keeping in the mid-2010s could help explain the national rise in suicide attempts — a spike that occurred among virtually all age groups in the U.S. in the 2010s. Teen suicides also did not increase everywhere social media was used that decade; in some European countries, like France, Portugal, and Denmark, they fell among people aged 15 to 19 from 2010 to 2019, according to the World Health Organization. “Social media is not inherently beneficial or harmful to young people,” the American Psychological Association stated last year. “Contrary to the current cultural narrative that social media is universally harmful to adolescents,” concluded a report from the National Academies of Science, Engineering, and Medicine, “the reality is more complicated.”
Researchers on both sides of the debate agree that tech companies should make their products safer for young users by limiting targeted advertising, verifying users' ages, and teaching digital literacy. But critics of Haidt — and of others who contend that smartphones "destroyed a generation" — worry that so much emphasis on the apps' harms distracts from other potential causes of teens' mental distress. It could also deprive teens of social media's benefits: communicating with friends, learning, consuming media, playing games, and, for marginalized groups, finding community.
Correlational studies, like the ones Haidt frequently cites, don't directly answer the critical question of whether social media depresses people or depressed people use social media more. Earlier this year, Ferguson decided to look at a different kind of evidence to try to get closer to an answer. He rounded up 27 randomized controlled trials that asked some participants to modify their social-media use and others to use it as usual, then compared markers of their psychological well-being. It was unclear, he added, whether these studies could even demonstrate that the experiments caused any mental-health changes.
Nevertheless, his meta-analysis — a study of studies — found the combined evidence for causal effects to be “statistically no different than zero.” Published in Psychology of Popular Media in early May, it went on to be cited by outlets such as The Atlantic. “All right, so this seems like a mark against my book,” Haidt recalls thinking.
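To unpack the jargon: a meta-analysis converts each experiment's result into a comparable effect size, pools those into one weighted estimate (more precise studies count more), and tests whether the combined estimate is distinguishable from zero. A minimal sketch of that pooling, with invented numbers rather than Ferguson's actual data:

```python
import math

# Hypothetical standardized mean differences from five social-media-
# reduction experiments, with standard errors. All numbers are invented
# for illustration; none come from Ferguson's meta-analysis.
effects = [0.30, -0.05, 0.12, 0.02, -0.10]
ses     = [0.15,  0.20, 0.10, 0.08,  0.25]

# Pool by inverse-variance weighting: bigger, more precise studies
# (smaller standard errors) count more toward the combined estimate.
weights = [1 / se**2 for se in ses]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Two-sided z-test of the pooled effect against zero -- the sense in
# which combined evidence can be "statistically no different than zero."
z = pooled / pooled_se
p = math.erfc(abs(z) / math.sqrt(2))

print(f"pooled d = {pooled:.3f} (SE {pooled_se:.3f}), z = {z:.2f}, p = {p:.3f}")
# -> pooled d = 0.075 (SE 0.054), z = 1.39, p = 0.166: not distinguishable from zero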
Then he and Zach Rausch, his NYU research assistant, came across a lengthy critique of Ferguson's study by David Stein, the author of a newsletter about suicide rates. "We thought that this was a really important post to get out more widely," Rausch said, and the Substack series he wrote with Haidt would go on to draw "heavily," in their words, on Stein's arguments.
Haidt told me that he has for years emailed with Stein, who read a few early chapters of The Anxious Generation and is named in the acknowledgments. Haidt described Stein as an independent scholar in the Czech Republic with a background in “math and math education,” though he wasn’t sure if that included a Ph.D. (Stein describes himself as a “former graduate student” on his blog.) Haidt said that they’ve never met, seen each other, or spoken, as Stein is “extremely shy.” This summer, the professor hired Stein as a part-time researcher. “He loves to dig into the details of an analysis and find errors,” Haidt said. He trusts Stein, he continued, because “his judgment seems very good. I mean, he gives us detailed analyses, and we read them, and we say, ‘Wow, you know, this seems right.’ And this is on matters that we understand, about the psychology. On matters of meta-analysis, he’s more expert than we are; I have no expertise in it.”
I emailed Stein with an invitation to talk on Zoom. He declined, saying that statistics can be difficult to discuss, and also that he had a heavy accent and hadn’t spoken much English recently. So I sent over questions about what he did for work, whether he was affiliated with any universities, where he lived in the Czech Republic, which degrees he held, and where he’d learned about statistics. He said that he was also a math tutor, but otherwise did not provide answers. Instead, he likened my questioning to “the investigations by the House Committee on Un-American Activities.”
Another time, I asked Haidt and Rausch if they wanted to respond to a criticism someone had raised. My email was forwarded to Stein, who then accused me of not being an impartial journalist. “David has a correspondence style that I disapprove of,” Haidt told me, when I asked if Stein spoke for him. “I think it is always important to be civil and to give people the benefit of the doubt. I have asked David to do this as well. David works for us, in terms of the analyses he has done for us, but he does not speak for us.”
In an August 29 post on their Substack, After Babel, Haidt and Rausch, with Stein's help, started laying out their objections to Ferguson's work. They found fault with the way he combined experiments that, in their view, asked participants to do drastically different tasks. Some had directed people to use social media in a lab for a few minutes, and the pair argued that whatever mental-health effects those produced shouldn't be blended with effects from experiments in which people took a break from the apps for days or weeks. And the latter did show benefits, at least when looked at a certain way: Cutting back for up to a week led to declines in mental health, but doing so for at least two weeks yielded "consistent" benefits. This makes sense, Haidt and Rausch wrote, since "withdrawal symptoms" from "any addictive substance or activity" frequently last "up to two weeks."
Matthew B. Jané, a University of Connecticut graduate student who studies quantitative psychology, and meta-analyses specifically, saw that re-analysis on X. “I thought it was begging me to respond,” he told me. On his blog that day, he wrote about the faults he saw in both camps. He thought that Ferguson had improperly bundled all mental-health-related measures into a single measure, an issue also raised by the other side. (Ferguson said that isolating the measures individually could have distorted the findings.)
But Haidt and Rausch lacked “adequate statistical rigor,” Jané wrote. They hadn’t weighted bigger studies more heavily than smaller ones, nor had they preregistered their analysis, as Ferguson had, meaning they hadn’t documented ahead of time how they planned to do it. Jané also noted that the results could look quite different with slight tweaks to the parameters. When the cut-off point was set at longer than two weeks — rather than two or more — the average mental-health benefit was only half as big. It was even smaller at longer than three weeks, Jané reported.
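Jané's sensitivity point is easy to see with a toy example (the weighting issue is the same inverse-variance idea sketched above). In the hypothetical sketch below, with durations and effects invented rather than taken from the actual studies, shifting the duration threshold by one day, then one week, steadily shrinks the apparent benefit:

```python
# Toy illustration of cut-off sensitivity (all numbers invented; these
# are not the effects from the experiments in dispute).
studies = [  # (duration in days, observed effect size)
    (3, -0.20), (7, -0.10), (14, 0.25), (15, 0.08),
    (21, 0.12), (28, 0.03), (35, 0.02),
]

def mean_effect(min_days):
    """Unweighted mean effect among studies lasting at least min_days."""
    vals = [eff for days, eff in studies if days >= min_days]
    return sum(vals) / len(vals)

# "Two or more weeks" vs. "longer than two weeks" vs. "longer than three":
for cutoff in (14, 15, 22):
    print(f"studies >= {cutoff} days: mean effect = {mean_effect(cutoff):+.3f}")
# -> +0.100, +0.063, +0.025: the apparent benefit halves, then halves again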
He wondered why the After Babel crew had chosen the threshold that they had. Haidt’s source for the two-week minimum was a 2022 interview with a psychiatrist. But he makes a different recommendation in The Anxious Generation, where he writes, citing research he’d gathered, that people “need at least three weeks for their brains to reset and get past withdrawal cravings.” Selectively analyzing data after publication is risky, Jané told me: “You can torture the data so much, you end up finding the result you want. And of course, the cut point they used was the optimal cut point for their desired results.”
Asked to respond to that concern, Rausch said, “I think that’s a fair critique.” Stein said by email that analyzing the data as he did “had nothing to do with knowing ahead any results.”
And Haidt replied: “We didn’t play around with any numbers. As for do you put the cutoff at one or two weeks? Okay, fine, that was a judgment call.”
But in his opinion, people were reading too much into a rough draft. “This wasn’t intended to be a preregistered meta-analysis,” he told me. “This was, ‘We’re working through it. Look, here’s a series of posts. We’re digging into this. We’re trying to understand it.’ And so for people to criticize us for not having done everything preregistered in the first exploratory post? I don’t understand what the whole conflict is about.”
Haidt has been mounting a defense along these lines since his re-analysis first drew scores of scathing comments on social media. “We did NOT offer a meta-analysis,” he protested in early September, in response to Jané and others. But Jané said that wasn’t a valid excuse: A leading medical-research group defines a meta-analysis as “the statistical combination of results from two or more separate studies.” Haidt’s work “is, by definition, a meta-analysis,” he wrote — even if it used “obsolete and unacceptable methods.” Jané’s assessment at the time indicated that experiment length seemingly had little to no bearing on the size of the mental-health effects.
Jané also objected to a footnote in one of Haidt’s Substack posts that described the type of analysis done by Ferguson. It began by stating that he’d “carried out a ‘random-effects model meta-analysis’ that is meant to approximate the size of a single effect based on data from a collection of experiments measuring this single effect.” That is not an accurate definition, Jané said. Haidt and Rausch referred my questions about this to Stein, who defended it, saying in part that “this footnote was not a ‘definition.’” At one point, he copied two statisticians into our exchange. When I asked one of them, Dean Eckles of the Massachusetts Institute of Technology, for his thoughts, he wrote, “This statement, at minimum, is very misleading.” And Andrew Gelman, of Columbia University, said by email, “I don’t think this is a good definition of a random-effects meta-analysis.”
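The distinction at stake is a textbook one. A fixed-effect meta-analysis assumes every experiment is estimating one common effect; a random-effects model assumes the true effects themselves differ from study to study and estimates the mean and spread of that distribution, which is why describing it as approximating "a single effect" drew objections. A minimal sketch of the standard DerSimonian-Laird procedure, with invented numbers (not a re-creation of any of the disputed analyses):

```python
# Invented observed effects and standard errors; a textbook
# DerSimonian-Laird sketch, not Ferguson's or Stein's analysis.
effects = [0.50, -0.20, 0.15, 0.00, -0.30]
ses     = [0.15,  0.20, 0.10, 0.08,  0.25]

w = [1 / se**2 for se in ses]                 # fixed-effect weights
fixed = sum(wi * d for wi, d in zip(w, effects)) / sum(w)

# A fixed-effect model assumes one common true effect. A random-effects
# model assumes the true effects vary across studies; tau^2 estimates
# that between-study variance.
Q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, effects))
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)

# Random-effects weights fold tau^2 into each study's uncertainty; the
# pooled value is the MEAN of a distribution of effects, not "a single effect."
w_re = [1 / (se**2 + tau2) for se in ses]
re_mean = sum(wi * d for wi, d in zip(w_re, effects)) / sum(w_re)

print(f"tau^2 = {tau2:.3f}; fixed = {fixed:.3f}; random-effects mean = {re_mean:.3f}")
# -> tau^2 = 0.041; fixed = 0.080; random-effects mean = 0.069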
Haidt's announcement of his re-analysis rebutting Ferguson's data got more than 475,000 views on X. It also got labeled with a Community Note, a user-generated advisory: "As many statistics experts have pointed out, this analysis is incorrect. If done correctly, the analysis shows that reducing social media has no effect." (It was later toned down.) On September 1, according to emails I reviewed, Stein emailed Jané — whose criticisms were going viral, too — and blamed him for the content of the note, which linked to Jané's blog.
“I did not write the community note (nor do I have any say in whether it gets posted or not),” Jané replied, adding that he did not endorse it, either.
Stein then accused him of writing a post that was “partly an advertisement of your R coding skills for sale.” (On his website, Jané lists his services as a statistics consultant.) “Do you serious[ly] wish to personally profit from a massive misinformation campaign?” Stein wrote. “Especially one that, ultimately, may help corporate tech delay reforms that might help save kids’ lives?”
Jané, who says that his graduate-school salary is about $25,000 a year and that his earnings from consulting total $5,000, told me that he blocked Stein. Stein told me his remarks were justified, since Jané had also said he was “happy” that the community note linked to his blog.
On September 10, Haidt and Rausch updated their Substack post to incorporate some of Jané’s critiques, adding, for instance, indicators of how precise their estimates of the mental-health effects were. But they also fired off new criticisms. Ferguson had used incorrect participant numbers from a handful of studies, and according to Rausch and Haidt, he’d agreed to fix some — but not all — of them. (“Ferguson said he would correct: NO,” read a table they published.) Ferguson’s meta-analysis therefore “should not be used to argue that there’s no experimental evidence” that reducing social-media use affects mental health, Stein wrote.
But Ferguson believes that Haidt and Rausch mischaracterized their exchange, which he shared with me, and which he said took place as he was about to travel at the start of the semester. “Because of the time constraints I don’t have a chance to look back through them,” Ferguson wrote on August 31, replying to a query from Rausch about some of the studies. “When I took a quick look yesterday, they looked right. I’d say, as before, use what you think is best. It’s possible to have disagreements about this.” They didn’t ask him to further clarify, and he never said he wouldn’t correct these errors, he told me, so he feels like “they manufactured some kind of conflict that never really happened.”
Haidt said that he had not mischaracterized their correspondence. He said the table was one result of a longer exchange in which Ferguson failed to provide information that Haidt and Rausch asked for and declined to correct other alleged errors.
Ferguson has since posted a spreadsheet with fixed data and, as of October, was going through Psychology of Popular Media’s process of issuing formal notices about the corrections. Some of them were pointed out by Haidt, others were not, and they don’t change the overall findings, he told me. “If I’m wrong, then I want that to be known,” he said. “It’s about the data, it’s about the science, it’s not about me or my ego.”
But he questions whether his counterpart is abiding by the same principles. Haidt had also chastised Ferguson for using studies he believes fell outside the analysis’ scope and ignoring others that were supposedly key (since they were on Haidt’s crowd-sourced Google Doc). Ferguson said he’d simply included those that met his pre-registered criteria. “By happy coincidence,” Ferguson said, “he zeroes in on every study that doesn’t support his personal belief and has an argument for why they should be removed.”
Haidt said he was surprised to be accused of not adhering to the science, when the same could be said of Ferguson. “What are the odds,” he wrote, “that a dozen errors, randomly made, would all pull the results in a single direction?”
He told me that he and Rausch are now starting to do a meta-analysis of their own for an eventual journal article, and Stein is compiling and helping analyze data for it. I asked why they hadn’t done it sooner — before Ferguson did his, before The Anxious Generation came out. Haidt said they couldn’t have, as they’d been months behind on the book. “A true meta-analysis requires a very careful, systematic search,” he said, “and we just have been incredibly busy.”
Andrew Przybylski, a University of Oxford professor of human behavior and technology, sees Haidt and Ferguson as talking past each other. (Przybylski, who has published studies with Ferguson, has himself found little evidence linking digital technology to psychological harm and has sparred with Haidt over those findings.) “It’s a collision,” he told me, “between a knowledge-discovery process, which is necessarily messy, and a narrative-generation process, which is unencumbered by critique.”
Put simply: Haidt is “telling a story with Google Docs. Hundreds of academics who aren’t being listened to are doing science, Chris [Ferguson] among them.”
Haidt called this critique “disingenuous,” noting that Przybylski and researchers like him have been cited by media outlets from The New York Times to The Guardian. “They dominated the narrative from 2019 through 2023,” he said by email. “Their influence is still strong.”
And Haidt defended his style of developing and communicating ideas. “The standard academic way of working here is completely not up to the task of understanding what today’s technology is doing to kids,” he said, “so we have been trying to work a different way.” They “couldn’t have written the book” without the Substack and Google Docs, which he called a much-needed effort to assemble “the relevant evidence, on all sides,” in a rapidly changing field. “Having three people skim the paper, it doesn’t provide that much of a check,” he told me of traditional peer review, “whereas putting your work out and having half of stats Twitter critique it is actually providing a pretty good check on what we’re doing.”
I asked Haidt if he ever worries about putting out wrong or misleading ideas. “I say things I think are based in the data that Zach and I have found,” he replied.
Mid-discourse, a re-analysis of Ferguson’s meta-analysis — yes, another one — went up online. Johannes Thrul, an associate professor of mental health at the Johns Hopkins University, had started working on it before Haidt went to town on Substack, and in so doing unknowingly enlisted in the statistical skirmish of the season.
According to the not-yet-published preprint, when just the abstinence experiments were analyzed, those lasting less than one week resulted in “significantly worse mental-health outcomes,” and those one week or longer led to “significant improvements.” Reduction programs should last at least one week to be effective, though an ideal length may be around three, Thrul’s team reported.
Haidt took a victory lap. “Researchers and journalist[s] should stop saying that the evidence is ‘just correlational,’” he declared on X. “It is just not right to tell young people that quitting or reducing social media will not have any effect on their mental health, according to the academic literature.”
But Thrul didn’t endorse that headline. “I would have probably been a little bit more cautious in communicating what our re-analysis tells us,” he said. “To me, it gives us some indication that there are potentially benefits of reducing or curbing social media for a period of time.” Still, he noted, these possible benefits shouldn’t be overstated. “The effect sizes are small, no matter what,” he said.
Thrul's paper also divided up the experiments by length — a variable that Haidt had endorsed separating, but that Jané had not. Thrul, whose background is in addiction research, said that one week seemed like a "feasible" amount of time to outlast withdrawal symptoms. Unlike Haidt, however, he acknowledged that there is no widely accepted standard for such a period with social media: The closest existing studies he could find dealt with pornography use. "This is an evolving and relatively novel field," he told me.
Thrul said he submitted his paper to Psychology of Popular Media in July. By the time it was accepted, Haidt’s team had identified the errors in Ferguson’s data — data that Thrul had unwittingly analyzed. As of late October, he told me he’d submitted updates to reflect the corrected data, and that none of the significant results had changed.
Jané was unimpressed with Thrul’s re-analysis, too, arguing it made “many of the same mistakes” that Haidt had. On his blog, he criticized a figure in the paper for failing to indicate how precise the estimates of the mental-health effects were. The next day, Thrul uploaded a version with that information — though he said he’d informed the journal of that fix before Jané’s post.
But Jané, too, was revising. With other researchers, he created a "living meta-analysis" to show whether the latest science, analyzed correctly (in their view), proved that quitting social media reduced anxiety and depression. At first, Jané reported that the average effect size was 0.11 in the first week and shrank by about half at the four-week mark. "This should be the first site that Haidt, Rausch, and Stein refer to before they make causal claims," he wrote. His website was applauded by scientists engrossed by the drama.
But the code turned out to contain a major error, which Stein quickly identified. All the effect sizes then roughly doubled: a win for Team Anxious Generation.
“This is the thing about David. He’s eccentric and sometimes off-putting in his communication,” Haidt wrote to me. “But he is extremely careful, detail-oriented, and tenacious.” Stein expressed annoyance that Jané’s corrected graph was getting a fraction of the online attention of the original.
At the same time, Jané did not correct every mistake. One of his posts — the one questioning Haidt’s two-week cut-off — incorrectly reported some data, as Ferguson pointed out to him. When I asked Jané about it, he didn’t commit to fixing it; he said he now considered the post to be “kind of obsolete.”
All the claims and corrections made for a dizzying dispute, and Ferguson told me that he has felt unable to fully defend himself in the public sphere. Until the journal finalizes his corrections, “I’m kind of stuck where I can’t say much.”
Haidt, who is unbound by those constraints, found the exchange productive. “We’re all better off for engaging with each other in a civil and constructive way,” he wrote. And the takeaway, as he sees it, is that there’s a clear right way and wrong way to interpret the research at hand. Scholars like Ferguson “are telling parents and the public, with great confidence, that there is ‘no evidence’ of harm; they are saying that this is just another groundless moral panic,” Haidt said by email. “Don’t worry about your kids’ screen use, don’t fret if your 12-year-old daughter is spending five hours a day on Instagram and TikTok. These statements are false.”
But the other scientists in this micro-skirmish say that the studies in contention don’t answer the million-dollar question: Does social media cause mental distress? At best, they ask a much narrower one. As Jané put it: In a world where most everyone is using social media, “if you were to reduce your social-media consumption for a few weeks, do you see changes in depression and anxiety?”
That’s still an important inquiry. But given the studies’ limits, Jané admitted recently that he doubted whether it was worth analyzing them all over again. Although he’d created his meta-analysis with valiant intentions (“How I Plan to Fix This” was the title of his post), he told me a few weeks later: “I don’t think a meta-analysis is going to fix these studies.”
By mid-October, Jané had taken it down. Due to unspecified “concerns about data quality,” he wrote, his team was starting over and focusing on the basics: getting and re-calculating the raw data, finding studies, writing a manuscript. The science was going to take time.