“As humans, we can identify galaxies light years away, we can study particles smaller than an atom. But we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears.”
—President Obama, April 2, 2013
The human brain contains roughly 86 billion neurons and trillions, perhaps hundreds of trillions, of intricate interconnections among those neurons. There are hundreds, maybe thousands of different kinds of cells within the brain. And—after nearly two centuries of research—exactly zero convincing theories of how it all works.
Why is it so hard to figure out how the brain functions, and what can we do to meet the challenges?
The time to address these questions is now. The quotation above came as the president announced a projected 12-year effort known as the BRAIN Initiative; a few months earlier, Europe had announced a big step of its own, a 1.2-billion-euro effort to simulate the human brain. China, Japan, and a number of other nations are planning major investments as well. There is real reason to believe that the field is on the verge of a number of methodological breakthroughs: Soon we will be able to study the operation of the brain in unprecedented detail, yielding orders of magnitude more data than the field has ever seen before.
And that is a good thing. On virtually any account, neuroscience needs more data—a lot more data—than it has.
To begin with, we desperately need a parts list for the brain. The varied multitude of cells in the human brain have names like “pyramidal cells,” “basket cells,” and “chandelier cells,” based on their physical structures. But we don’t know exactly how many cell types there are; some, like Cajal-Retzius cells (which play a role in brain development), are quite rare. And we know neither what all these different cell types do nor why there are so many. Until we have a fuller understanding of the parts list, we can hardly expect to understand how the brain as a whole functions.
Detailed wiring diagrams are also crucial; the parts alone certainly won’t suffice. We need to know which neurons hook up to which others, and how. As Sebastian Seung and his co-authors have recently shown, even fine-grained details, like the exact locations of neural synapses on the recipient cell bodies, and between particular subtypes of cells, can be critical.
Also indispensable will be detailed information about the distributions of many individual molecules within individual neurons (and in the synaptic connections between them), governing how neurons connect to one another, store information, and convey a wide diversity of chemical messages to their neighbors.
Finally, we need to understand how the dynamic activity of neural circuits unfolds over time, in response to real-world inputs. In each new circuit that we try to unravel, we may need comprehensive maps of the detailed interactions among genes, molecules, wiring, neural activity, and behavior.
All of which is made more challenging by the intricate and difficult-to-apprehend nature of the brain itself.
The good news is that the Obama BRAIN Initiative, alongside private efforts like the Allen Institute for Brain Science and the Howard Hughes Medical Institute’s Janelia Research Campus, is poised to collect data of exactly those sorts. Allen aims to deliver a complete wiring diagram of a cubic millimeter of mouse cortex, a multistep process that currently relies on sophisticated electron microscopy and machine-learning methods. Advances in optical microscopy at Janelia are yielding large-scale dynamic maps of neural activity. Other projects are developing tools for perturbing brain circuits and watching how they respond. Eventually, one hopes, we will be able to gather similar data from humans, in noninvasive ways. (For now, most high-resolution techniques in humans are restricted to post-mortem tissue, while techniques on living brains are restricted to animals like flies, zebrafish, mice, and nonhuman primates, a strong but not infallible guide to some aspects of human brain function.)
The work will begin—but not end—with the construction of a robust infrastructure for data analysis. Soon there will be exabytes (billions of gigabytes) of data, detailing what vast numbers of neurons do, in real time, as brains process information and guide action. To deal with the data flow, neuroscience will need to take a cue from Google and Amazon, spreading data analyses across large clusters of computers, with software that allows a single investigator, or a team, to marshal armies of computers in pursuit of a particular goal. Logistically, neuroscience needs standards that allow labs to share and integrate data collected at a vast range of scales, tying together data about individual molecules with data about complex circuits containing billions of cells.
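To make the idea of “marshaling armies of computers” concrete, here is a minimal sketch of the pattern: split a per-neuron analysis across a pool of workers and collect the results. It is a toy, single-machine stand-in written in Python; the simulated data, the summary statistic, and the sizes are all hypothetical, and a real pipeline would use a cluster framework such as Spark or Dask running over shared storage.

```python
# A minimal single-machine stand-in for the "split the analysis across many
# workers" pattern described above. Real exabyte-scale pipelines would use a
# cluster framework (e.g., Spark or Dask) over shared storage; the simulated
# dataset, the summary statistic, and the sizes here are hypothetical.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def mean_firing_rate(args):
    """Compute one per-neuron summary statistic (here, a mean over time)."""
    neuron_id, trace = args
    return neuron_id, float(np.mean(trace))


def main():
    rng = np.random.default_rng(0)
    # Simulated activity traces: 10,000 "neurons" x 1,000 time points.
    traces = rng.poisson(lam=3.0, size=(10_000, 1_000))

    # Fan the per-neuron work out across worker processes, then collect results.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(mean_firing_rate, enumerate(traces), chunksize=500))

    print(f"Analyzed {len(results)} neurons; example rate: {results[0]:.2f}")


if __name__ == "__main__":
    main()
```

The point is only the shape of the computation: the same map-and-collect structure scales from a single laptop to the large clusters that exabyte-scale recordings will demand.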
But once we have all the data we can envision, there is still a major problem: How do we interpret it? A mere catalog of data is not the same as an understanding of how and why a system works.
The problem is much more difficult than it might initially appear. In the best cases, individual neurons can be assigned clearly defined “roles.” In the late 1950s and early 1960s, David Hubel and Torsten Wiesel famously discovered neurons in the visual cortex that selectively encode whether a line is vertical, horizontal, or diagonal. Edvard and May-Britt Moser and John O’Keefe won a Nobel Prize in 2014 for identifying and characterizing neurons that encode an animal’s spatial position, which may provide a neural basis for navigation. But these clear examples may be the exception rather than the rule, especially when it comes to complex processes like forming a memory or deciding how to behave.
For one thing, the connection between neural circuits and behavior can be far less straightforward than it sometimes seems. To take a simple example, if we notice in the lab that some neurons seem to be active every time a zebrafish sees a moving pattern, we might initially conclude that those neurons encode something related to visual processing. But when we take into account that the same stimulus also causes the animal to swim, it may turn out that some of the “motion-detection” neurons are actually “swimming-induction” neurons. The picture is complicated further when we realize that the swimming is modulated by other aspects of the behavioral state of the animal, controlled by still other sets of neurons.
Even when we can confidently identify which circuits are involved in a particular brain function, the brain, like a hydra, is constantly changing and adapting. Broca’s area, for example, is traditionally thought of as the seat of language, but there are well-documented cases of children’s recovering linguistic function even after the entire left hemisphere has been removed.
When we do know that some set of neurons is typically involved in some task, we can’t safely conclude that those neurons are either necessary or sufficient; the brain often has many routes to solving any one problem. The fairy tales about brain localization (in which individual chunks of brain tissue correspond directly to abstract functions like language and vision) that are taught in freshman psychology fail to capture how dynamic the actual brain is in action.
One lesson is that neural data can’t be analyzed in a vacuum. Experimentalists need to work closely with data analysts and theorists to understand what can and should be asked, and how to ask it. A second lesson is that delineating the biological basis of behavior will require a rich understanding of behavior itself. A third is that understanding the nervous system cannot be achieved by a mere catalog of correlations. Big data alone aren’t enough.
Across all of these challenges, the important missing ingredient is theory. Science is about formulating and testing hypotheses, but nobody yet has a plausible, fully articulated hypothesis about how most brain functions occur, or how the interplay of those functions yields our minds and personalities.
Theory can, of course, take many forms. To a theoretical physicist, theory might look like elegant mathematical equations that quantitatively predict the behavior of a system. To a computer scientist, theory might mean the construction of classes of algorithms that behave in ways similar to how the brain processes information. Cognitive scientists have theories of the brain that are formulated in other ways, such as the ACT-R framework invented by the cognitive scientist John Anderson, in which cognition is modeled as a series of “production rules” that use our memories to generate our physical and mental actions.
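To give a flavor of what a production-rule account looks like, here is a toy sketch in Python. It is not the actual ACT-R software or its syntax (the real architecture adds subsymbolic activations, timing, and buffer structure); the “phone-answering” rules and memory contents are invented purely for illustration.

```python
# A toy sketch of the production-rule idea behind frameworks like ACT-R:
# rules match the current contents of memory and, when they fire, update it
# or emit an action. The rules and memory contents here are invented for
# illustration; the real ACT-R architecture is far richer.

# Working memory: a simple set of facts the agent currently "knows."
memory = {"goal": "answer-phone", "phone-ringing": True, "holding-phone": False}

# Each production: (condition on memory) -> (effect on memory, plus an overt action).
productions = [
    (lambda m: m["phone-ringing"] and not m["holding-phone"],
     lambda m: m.update({"holding-phone": True}) or "pick up the phone"),
    (lambda m: m["holding-phone"] and m["goal"] == "answer-phone",
     lambda m: m.update({"goal": "converse", "phone-ringing": False}) or "say hello"),
]

# Fire matching rules until none apply.
fired = True
while fired:
    fired = False
    for condition, effect in productions:
        if condition(memory):
            print("Action:", effect(memory))
            fired = True
            break
```

Running this prints “pick up the phone” and then “say hello”: the sequence of actions emerges from the interaction between stored knowledge and a small set of condition-action rules, which is the core of the production-system idea.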
The challenge for neuroscience is to try to square high-level theories of behavior and cognition with the detailed biology and biophysics of the brain. As the philosopher Ned Block said to us in an email, “There is no way to understand the brain without theory at the psychological level that tells you what brain circuits are doing.”
By way of example, Block pointed to the role that theory played in understanding aspects of color vision. The perception of color reflects the function of at least two opponent systems, red/green and yellow/blue, as proposed by the physiologist Ewald Hering in the 19th century on the basis of behavioral experiments, and further developed by L.M. Hurvich and Dorothea Jameson in the 1950s. These ideas helped guide a search in the brain, at which point at least a partial neural basis for opponent processes was eventually discovered in the lateral geniculate nucleus. As Block put it, this example shows the importance of a “‘co-evolution’ of neuroscience and psychology, in which hypotheses are developed on the basis of both behavioral experiments and neuroscientific data.”
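For readers who want to see what “opponent” coding amounts to, here is a back-of-the-envelope sketch in Python. The weights are a common textbook simplification, not Hering’s or Hurvich and Jameson’s specific model, and not a claim about the exact computation performed in the lateral geniculate nucleus.

```python
# A back-of-the-envelope illustration of opponent coding: signals from the
# three cone types (L, M, S) are recombined into channels that signal
# red-versus-green and blue-versus-yellow differences. The weights below are
# a textbook simplification, not a model of the actual circuitry.

def opponent_channels(L, M, S):
    """Map cone responses (arbitrary units) to simple opponent signals."""
    red_green = L - M              # positive ~ "redder", negative ~ "greener"
    blue_yellow = S - (L + M) / 2  # positive ~ "bluer", negative ~ "yellower"
    luminance = L + M              # a rough brightness signal
    return red_green, blue_yellow, luminance


# A stimulus that excites L cones more than M looks reddish on this scheme.
print(opponent_channels(L=0.8, M=0.4, S=0.2))  # -> roughly (0.4, -0.4, 1.2)
```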
In the words of a terrific June 2014 report to the National Institutes of Health director, Francis Collins, a key goal of the BRAIN Initiative should be to “produce conceptual foundations for understanding the biological basis of mental processes through development of new theoretical and data analysis tools” (emphasis added). We can’t simply ask, What does this bundle of neurons do? We have to ask: How might the brain work in principle? What clues can we gather from psychology and behavior? And what kinds of experiments could we devise to test theories of brain function?
In our view, theory has been relatively neglected in neuroscience, and psychology has been mentioned too infrequently. Lip service has been paid, but more aggressive steps are necessary. For example, in the first round of 56 NIH grants (awarded at the end of September), the emphasis was primarily on the important goal of developing new techniques (such as combinations of optics and ultrasound that could yield improved ways to measure dynamic patterns of neural activity at high speed) and on the parts list (a crucial “census” of cell types). But no section specifically focused on theory, or on bridging among brain, behavior, and cognition; in the abstracts of the grants, the word “theory” was mentioned only once.
As William Newsome, a prominent Stanford neuroscientist who has championed stronger theory development, said in an email to one of us (Marcus), part of the problem is structural. NIH, the largest source of funds for scientific research in the United States, is basically set up to give big grants to experimentalists. The review process that decides which grants to award is heavily biased toward research that is only moderately risky. Building new theories is a trapeze act, and most theorists fall off. Because building new theories is risky, theorists almost never get funded, except as an appendage to experimentalists. But theorists are comparatively cheap—they don’t need millions of dollars of equipment—and even if most theories don’t pan out, the rewards for having the right one are enormous.
Figuring out how to foster theory development isn’t easy. To be successful, theorists need to have broad interdisciplinary training, bridging between the empirical nitty-gritty of neuroscience and more abstract characterizations of computation, behavior, and mental life, such as those found in psychology. Neuroscience needs to welcome mathematicians, engineers, computer scientists, cognitive psychologists, and anthropologists into the fold.
As a start, the NIH should give funds to young scientists who want to broaden their training, and should more actively promote collaborations between theorists and experimentalists. A small joint program of the NIH and the National Science Foundation, Collaborative Research in Computational Neuroscience, is budgeted at $5-million to $20-million and should be significantly expanded, potentially multiplying the payoffs for empirical brain mapping manyfold over the long term. The NSF’s new Integrative Strategies for Understanding Neural and Cognitive Systems program is another useful step. Two new institutes, in London and Paris, are also steps in the right direction, one focused largely on theory, the other on interactions between theory and experiment. But for now the balance, fieldwide, remains heavily skewed toward experiment over theory.
Of course, theoretical neuroscience already exists as an academic discipline, but arguably one that has been inwardly directed, strongly influenced by physics but less so by psychology, computation, linguistics, and evolutionary biology. We need institutional mechanisms that support young scholars, encourage them to transcend disciplinary boundaries, and enable them to integrate neuroscience with neighboring disciplines. We need a culture within neuroscience that embraces intellectual interactions rather than dwelling almost exclusively on empirical data gathered from the bottom up.
One option, among many, is for the NSF and NIH to fund small centers that bring researchers from several disciplines under the same roof, rather than spreading them across large universities, perhaps tied to support for graduate students that would allow young scientists to develop rich training in multiple disciplines.
Neuroscience has been around for roughly two centuries, but progress remains painfully slow. We still don’t know how the brain works, and our categories for analyzing things like brain injuries and mental illness range from vague to antediluvian. As the neurosurgeon Geoffrey Manley, at the University of California at San Francisco, recently pointed out at a White House-sponsored meeting, traumatic brain injuries are still sorted into just three categories: “mild,” “moderate,” and “severe”; the field still has no reliable way to predict whether someone with a severe concussion will fully recover cognitive function.
With over half a billion people around the world suffering from debilitating brain disorders, whether depression, schizophrenia, autism, Alzheimer’s disease, or traumatic brain injury, it is no exaggeration to say that significant progress in neuroscience could fundamentally alter the world. But getting there will require more than big data alone. It will require some big ideas, too.
Gary Marcus is a professor of psychology and neuroscience at New York University. Adam Marblestone is a research scientist at the Massachusetts Institute of Technology. Jeremy Freeman is a group leader at the Janelia Research Campus. Marcus and Freeman are co-editors of the forthcoming The Future of the Brain: Essays by the World’s Leading Neuroscientists (Princeton University Press).