Extra Human Function Research Paper

Human enhancement is at least as old as human civilization. People have been trying to enhance their physical and mental capabilities for thousands of years, sometimes successfully – and sometimes with inconclusive, comic and even tragic results.

Up to this point in history, however, most biomedical interventions, whether successful or not, have attempted to restore something perceived to be deficient, such as vision, hearing or mobility. Even when these interventions have tried to improve on nature – say with anabolic steroids to stimulate muscle growth or drugs such as Ritalin to sharpen focus – the results have tended to be relatively modest and incremental.

But thanks to recent scientific developments in areas such as biotechnology, information technology and nanotechnology, humanity may be on the cusp of an enhancement revolution. In the next two or three decades, people may have the option to change themselves and their children in ways that, up to now, have existed largely in the minds of science fiction writers and creators of comic book superheroes.

Both advocates for and opponents of human enhancement spin a number of possible scenarios. Some talk about what might be called “humanity plus” – people who are still recognizably human, but much smarter, stronger and healthier. Others speak of “post-humanity,” and predict that dramatic advances in genetic engineering and machine technology may ultimately allow people to become conscious machines – not recognizably human, at least on the outside.

This enhancement revolution, if and when it comes, may well be prompted by ongoing efforts to aid people with disabilities and heal the sick. Indeed, science is already making rapid progress in new restorative and therapeutic technologies that could, in theory, have implications for human enhancement.

It seems that each week or so, the headlines herald a new medical or scientific breakthrough. In the last few years, for instance, researchers have implanted artificial retinas to give blind patients partial sight. Other scientists successfully linked a paralyzed man’s brain to a computer chip, which helped restore partial movement of previously non-responsive limbs. Still others have created synthetic blood substitutes, which could soon be used in human patients.

One of the most important developments in recent years involves a new gene-splicing technique called “clustered regularly interspaced short palindromic repeats.” Known by its acronym, CRISPR, this new method greatly improves scientists’ ability to accurately and efficiently “edit” the human genome, in both embryos and adults.

To those who support human enhancement, many of whom call themselves transhumanists, technological breakthroughs like these are springboards not only to healing people but to changing and improving humanity. Up to this point, they say, humans have largely worked to control and shape their exterior environments because they were powerless to do more. But transhumanists predict that a convergence of new technologies will soon allow people to control and fundamentally change their bodies and minds. Instead of leaving a person’s physical well-being to the vagaries of nature, supporters of these technologies contend, science will allow us to take control of our species’ development, making ourselves and future generations stronger, smarter, healthier and happier.

The science that underpins transhumanist hopes is impressive, but there is no guarantee that researchers will create the means to make super-smart or super-strong people. Questions remain about the feasibility of radically changing human physiology, in part because scientists do not yet completely understand our bodies and minds. For instance, researchers still do not fully comprehend how people age or fully understand the source of human consciousness.

There also is significant philosophical, ethical and religious opposition to transhumanism. Many thinkers from different disciplines and faith traditions worry that radical changes will lead to people who are no longer either physically or psychologically human.

Even minor enhancements, critics say, may end up doing more harm than good. For instance, they contend, those with enhancements may lack empathy and compassion for those who have not chosen or cannot afford these new technologies. Indeed, they say, transhumanism could very well create an even wider gap between the haves and have-nots and lead to new kinds of exploitation or even slavery.

Given that the science is still at a somewhat early stage, there has been little public discussion about the possible impacts of human enhancement on a practical level. But a new survey by Pew Research Center suggests wariness in the U.S. public about these emerging technologies. For example, 68% of Americans say they would be “very” or “somewhat” worried about using gene editing on healthy babies to reduce the infants’ risk of serious diseases or medical conditions. And a majority of U.S. adults (66%) say they would “definitely” or “probably” not want to get a brain chip implant to improve their ability to process information.

And yet, perhaps ironically, enhancement continues to captivate the popular imagination. Many of the top-grossing films in recent years in the United States and around the world have centered on superheroes with extraordinary abilities, such as the X-Men, Captain America, Spider-Man, the Incredible Hulk and Iron Man. Such films explore the promise and pitfalls of exceeding natural human limits.

HUMAN ENHANCEMENT IN POPULAR CULTURE

Not only is enhancement unquestionably part of today’s cultural zeitgeist; questions about humanity’s quest to move beyond natural limits also go back to our earliest myths and stories. The ancient Greeks told of Prometheus, who stole fire from the gods, and Daedalus, the skilled craftsman, who made wings for himself and his son, Icarus. In the opening chapters of Genesis, the Hebrew Bible depicts a successful incident of human enhancement, when Adam and Eve ate the fruit from the tree of the knowledge of good and evil because the Serpent told them it would make them “like God.”

Of course, while Adam and Eve gained a new awareness and self-understanding, their actions also led to their expulsion from paradise and entry into a much harder world full of pain, shame and toil. This theme – that hidden dangers may lurk in something ostensibly good – runs through many literary accounts of enhancement. In Mary Shelley’s “Frankenstein” (1818), for instance, a scientist creates a new man, only to eventually die while trying to destroy his creation.

Whether these fears surrounding human enhancement are real or unfounded is a question already being debated by ethicists, scientists, theologians and others. This report looks at that debate, particularly in light of the diverse religious traditions represented in the United States. First, though, the report explains some of the scientific developments that might form the basis of an enhancement revolution.

On Feb. 25, 2014, President Barack Obama met with Army officials and engineers at the Pentagon to discuss plans to create a new super armor that would make soldiers much more dangerous and harder to kill. The president joked that “we’re building ‘Iron Man,’” but Obama’s jest contained more than a kernel of truth: The exoskeleton, called the Tactical Assault Light Operator Suit (TALOS), does look vaguely like the fictional Tony Stark’s famous Iron Man suit. The first prototypes already are being built, and if all goes as planned, American soldiers may soon be much stronger and largely impervious to bullets.

A little more than a year later and an ocean away, scientists with the United Kingdom’s National Health Service (NHS) announced plans to begin giving human subjects synthetic or artificial blood by 2017. If the NHS moves ahead with its plans, it will be the first time people have received blood created in a lab. While the ultimate aim of the effort is to stem blood shortages, especially for rare blood types, the success of synthetic blood could lay the foundation for a blood substitute engineered to carry more oxygen or better fight infections.

In April 2016, scientists from the Battelle Memorial Institute in Columbus, Ohio, revealed that they had implanted a chip in the brain of a quadriplegic man. The chip can send signals to a sleeve around the man’s arm, allowing him to pick up a glass of water, swipe a credit card and even play the video game Guitar Hero.

Around the same time, Chinese researchers announced they had attempted to genetically alter 213 embryos to make them HIV-resistant. Only four of the embryos were successfully changed, and all were ultimately destroyed. Moreover, the scientists from Guangzhou Medical University who did the work said its purpose was solely to test the feasibility of embryo gene editing, not to begin altering embryos routinely. Still, Robert Sparrow of Australia’s Monash University Centre for Human Bioethics said that while editing embryos to prevent HIV has an obvious therapeutic purpose, the experiment points well beyond therapy. “Its most plausible use, and most likely use, is the technology of human enhancement,” he said, according to the South China Morning Post.

As these examples show, many of the fantastic technologies that until recently were confined to science fiction have already arrived, at least in their early forms. “We are no longer living in a time when we can say we either want to enhance or we don’t,” says Nicholas Agar, a professor of ethics at Victoria University in Wellington, New Zealand, and author of the book “Humanity’s End: Why We Should Reject Radical Enhancement.” “We are already living in an age of enhancement.”

The road to TALOS, brain chips and synthetic blood has been a long one that has included many stops along the way. Many of these advances come from a convergence of more than one type of technology – from genetics and robotics to nanotechnology and information technology. These technologies are “intermingling and feeding on one another, and they are collectively creating a curve of change unlike anything we humans have ever seen,” journalist Joel Garreau writes in his book “Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies – and What It Means to Be Human.”

The combination of information technology and nanotechnology offers the prospect of machines that are, to quote the title of Robert Bryce’s recent book on innovation, “Smaller Faster Lighter Denser Cheaper.” And as some futurists such as Ray Kurzweil argue, these developments will occur at an accelerated rate as technologies build on each other. “An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense ‘intuitive linear’ view,” writes Kurzweil, an American computer scientist and inventor whose work has led to the development of everything from checkout scanners at supermarkets to text-reading machines for the blind. “So we won’t experience 100 years of progress in the 21st century – it will be more like 20,000 years of progress (at today’s rate).”
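
Kurzweil’s arithmetic is easy to reconstruct, with the caveat that the doubling period is his assumption rather than an established fact: if the rate of progress doubles every decade, then a century of accelerating change, measured in years of progress at today’s rate, comes to

$$\int_{0}^{100} 2^{t/10}\,dt \;=\; \frac{10}{\ln 2}\left(2^{10}-1\right) \;\approx\; 14{,}800$$

years, the same order of magnitude as his “20,000 years” figure.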

GENETIC EDITING AND ENGINEERING

In the field of biotechnology, a major milestone occurred in 1953, when American biologist James Watson and British physicist Francis Crick discovered the molecular structure of DNA – the famed double helix – that is the genetic blueprint for life. Fifty years later, in 2003, two international teams of researchers, led by American biologists Francis Collins and Craig Venter, succeeded in decoding and reading that blueprint by identifying all of the chemical base pairs that make up human DNA.

Finding the blueprint for life, and successfully decoding and reading it, has given researchers an opportunity to alter human physiology at its most fundamental level. Manipulating this genetic code – a process known as genetic engineering – could allow scientists to produce people with stronger muscles, harder bones and faster brains. Theoretically, it also could create people with gills or webbed hands and feet or even wings – and, as Garreau points out in his book, could lead to “an even greater variety of breeds of humans than there is of dogs.”

In recent years, the prospect of advanced genetic engineering has become much more real, largely due to two developments. First, inexpensive and sophisticated gene-mapping technology has given scientists an increasingly detailed understanding of the human genome.

The second important development involves the powerful new gene-editing technology known as CRISPR. While gene editing itself is not new, CRISPR offers scientists a method that is much faster, cheaper and more accurate. “It’s about 1,000 times cheaper [than existing methods],” says George Church, a geneticist at Harvard Medical School. “It could be a game changer.” CRISPR is so much more efficient and accurate than older gene-editing technology because it adapts a natural bacterial immune mechanism: a short guide RNA leads a cutting enzyme, most commonly Cas9, to a precisely matching stretch of DNA, which can then be spliced out and replaced with new genetic code.
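
For readers who think in code, the targeting logic can be caricatured in a few lines of Python. This is a deliberately toy sketch – invented sequences, a 20-letter “guide” and a bare “NGG” check standing in for the Cas9 machinery – not a depiction of real genome-editing software.

    # Toy model of CRISPR-style targeting (invented sequences, for intuition only):
    # a 20-letter guide identifies a unique site, an adjacent "NGG" motif (the PAM)
    # licenses the cut, and the matched span is replaced with new genetic code.
    def edit(genome: str, guide: str, replacement: str) -> str:
        i = genome.find(guide)
        if i == -1:
            raise ValueError("guide sequence not found in genome")
        pam = genome[i + len(guide) + 1 : i + len(guide) + 3]  # skip the "N"
        if pam != "GG":
            raise ValueError("no NGG motif next to the target; no cut is made")
        return genome[:i] + replacement + genome[i + len(guide):]

    genome = "ATGCCGTA" + "GATTACAGATTACAGATTAC" + "TGGATCGGA"  # target, then "TGG"
    print(edit(genome, "GATTACAGATTACAGATTAC", "GATCACAGATTACAGATTAC"))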

CRISPR is already dramatically expanding the realm of what is possible in the field of genetic engineering. Indeed, on June 21, 2016, the U.S. government announced that it had approved the first human trials using CRISPR, in this case to strengthen the cancer-fighting properties of the immune systems of patients suffering from melanoma and other deadly cancers. “CRISPR’s power and versatility have opened up new and wide-ranging possibilities across biology and medicine,” says Jennifer Doudna, a researcher at the University of California, Berkeley, and a co-developer of CRISPR gene editing.

According to Doudna and others, CRISPR could provide new treatments or even cures to some of today’s most feared diseases – not only cancer, but Alzheimer’s disease, Parkinson’s disease and others.

An even more intriguing possibility involves making genetic changes at the embryonic stage, also known as germline editing. The logic is simple: alter an embryo’s genome at the eight- or 16-cell stage (to, say, remove the mutation that causes Tay-Sachs disease) and that change will occur in each of the resulting person’s trillions of cells – not to mention in the cells of their descendants. When combined with researchers’ growing understanding of the genetic links to various diseases, CRISPR could conceivably help eliminate a host of maladies in people before they are born.
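
A back-of-the-envelope count shows why a single early edit reaches “trillions of cells” (the adult cell count here is a commonly cited estimate, not a figure from this report): an adult human body contains roughly $3.7 \times 10^{13}$ cells, and since

$$8 \times 2^{42} \approx 3.5 \times 10^{13},$$

only about 42 further rounds of cell doubling separate an 8-cell embryo from an adult body – and every descendant cell inherits whatever edit those eight cells carried.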

But many of the same scientists who have hailed CRISPR’s promise, including Doudna, also have warned of its potential dangers. At a National Academy of Sciences conference in Washington, D.C., in December 2015, she and about 500 researchers, ethicists and others urged the scientific community to hold off editing embryos for now, arguing that we do not yet know enough to safely make changes that can be passed down to future generations.

Those at the conference also raised another concern: the idea of using the new technologies to edit embryos for non-therapeutic purposes. Under this scenario, parents could choose a variety of options for their unborn children, including everything from cosmetic traits, such as hair or eye color, to endowing their offspring with greater intellectual or athletic ability. Some transhumanists see a huge upside to making changes at the embryonic level. “This may be the area where serious enhancement first becomes possible, because it’s easier to do many things at the embryonic stage than in adults using traditional drugs or machine implants,” says Nick Bostrom, director of the Future of Humanity Institute, a think tank at Oxford University that focuses on “big picture questions about humanity and its prospects.”

But in the minds of many philosophers, theologians and others, the idea of “designer children” veers too close to eugenics – the 19th- and early 20th-century philosophical movement to breed better people. Eugenics ultimately inspired forced sterilization laws in a number of countries (including the U.S.) and then, most notoriously, helped provide some of the intellectual framework for Nazi Germany’s murder of millions in the name of promoting racial purity.

There also may be practical obstacles. Some worry that there could be unintended consequences, in part because our understanding of the genome, while growing, is not even close to complete. Writing in Time magazine, Venter, who helped lead the first successful effort to sequence the human genome, warns that “we have little or no knowledge of how (with a few exceptions) changing the genetic code will affect development and the subtlety associated with the tremendous array of human traits.” Venter adds: “Genes and proteins rarely have a single function in the genome and we know of many cases in experimental animals where changing a ‘known function’ of a gene results in developmental surprises.”

A BETTER BRAIN?

For many transhumanists, expanding our capacities begins with the organ that most sets humans apart from other animals: the brain. Right now, cognitive enhancement largely involves drugs that were developed and are prescribed to treat certain brain-related conditions, such as Ritalin for attention deficit disorder or modafinil for narcolepsy. These and other medications have been shown in lab tests to help sharpen focus and improve memory.

But while modafinil and other drugs are now sometimes used (off label) to improve cognition, particularly among test-cramming students and overwhelmed office workers, the improvements in focus and memory are relatively modest. Moreover, many transhumanists and others predict that while new drugs (say, a specifically designed, IQ-boosting “smart pill”) or genetic engineering could result in substantially enhanced brain function, the straightest and shortest line to dramatically augmenting cognition probably involves computers and information technology.

As with biotechnology, information technology’s story is littered with important milestones and markers, such as the development of the transistor by three American scientists at Bell Labs in 1947. Transistors are the electronic signal switches that gave rise to modern computers. By shrinking the electronic components to microscopic size, researchers have been able to build ever smaller, more powerful and cheaper computers. As a result, today’s iPhone has more than 250,000 times more data storage capacity than the guidance computer installed on the Apollo 11 spacecraft that took astronauts to the moon.
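
That comparison is easy to sanity-check in a few lines. The figures below are commonly cited hardware specifications assumed for illustration – roughly 72 KB of fixed plus 4 KB of erasable memory on the Apollo Guidance Computer, against a 32 GB iPhone:

    # Back-of-the-envelope check of the iPhone vs. Apollo Guidance Computer
    # storage comparison (assumed, commonly cited figures; not from this report).
    agc_bytes = 72 * 1024 + 4 * 1024   # ~78 KB of AGC fixed + erasable memory
    iphone_bytes = 32 * 10**9          # a 32 GB iPhone
    print(iphone_bytes // agc_bytes)   # ~411,000 -- "more than 250,000 times"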

One of the reasons the iPhone is so powerful and capable is that it uses nanotechnology, which involves “the ability to see and to control individual atoms and molecules.” Nanotechnology has been used to create substances and materials found in thousands of products, including items much less complex than an iPhone, such as clothing and cosmetics.

Advances in computing and nanotechnology have already resulted in the creation of tiny computers that can interface with our brains. This development is not as far-fetched as it may sound, since both the brain and computers use electricity to operate and communicate. These early and primitive brain-machine interfaces have been used for therapeutic purposes, to help restore some mobility to those with paralysis (as in the example involving the quadriplegic man) and to give partial sight to people with certain kinds of blindness. In the future, scientists say, brain-machine interfaces will do everything from helping stroke victims regain speech and mobility to successfully bringing people out of deep comas.

Right now, most scientists working in the brain-machine-interface field say they are solely focused on healing, rather than enhancing. “I’ve talked to hundreds of people doing this research, and right now everyone is wedded to the medical stuff and won’t even talk about enhancement because they don’t want to lose their research grants,” says Daniel Faggella, a futurist who founded TechEmergence, a market research firm focusing on cognitive enhancement and the intersection of technology and psychology. But, Faggella says, the technology developed to ameliorate medical conditions will inevitably be put to other uses. “Once we have boots on the ground and the ameliorative stuff becomes more normal, people will then start to say: we can do more with this.”

Doing more inevitably will involve augmenting brain function, which has already begun in a relatively simple way. For instance, scientists have been using electrodes placed on the head to run a mild electrical current through the brain, a procedure known as transcranial direct-current stimulation (tDCS). Research shows that tDCS, which is painless, may increase brain plasticity, making it easier for neurons to fire. This, in turn, improves cognition, making it easier for test subjects to learn and retain things, from new languages to mathematics. Already there is talk of implanting a tDCS pacemaker-like device in the brain so recipients do not need to wear electrodes. A device inside someone’s head could also more accurately target the electrical current to those parts of the brain most responsive to tDCS.

According to many futurists, tDCS is akin to an early steam train or maybe even a horse-drawn carriage before the coming of jumbo jets and rockets. If, as some scientists predict, full brain-machine interface comes to pass, people may soon have chips implanted in their brains, giving them direct access to digital information. This would be like having a smartphone in one’s head, with the ability to call up mountains of data instantly and without ever having to look at a computer screen.

The next step might be machines that augment various brain functions. Once scientists complete a detailed map of exactly what different parts of our brain do, they will theoretically be able to augment each function zone by placing tiny computers in these places. For example, machines may allow us to “process” information at exponentially faster speeds or to vividly remember everything or simply to see or hear better. Augments placed in our frontal lobe could, theoretically, make us more creative, give us more (or less) empathy or make us better at mathematics or languages. (For data on whether Americans say they would want to use potential technology that involved a brain-chip implant to improve cognitive abilities, see the accompanying survey, U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities.)

Genetic engineering also offers promising possibilities, although there are possible obstacles as well. Scientists have already identified certain areas in human DNA that seem to control our cognitive functions. In theory, someone’s “smart genes” could be manipulated to work better, an idea that almost certainly has become more feasible with the recent development of CRISPR. “The potential here is really very great,” says Anders Sandberg, a neuroscientist and fellow at Oxford University’s Future of Humanity Institute. “I mean scientists are already working on … small biological robots made up of small particles of DNA that bind to certain things in the brain and change their chemical composition.

“This would allow us to do so many different things,” Sandberg adds. “The sky’s the limit.”

In spite of this optimism, some scientists maintain that it will probably be a long time before we can bioengineer a substantially smarter person. For one thing, it is unlikely there are just a few genes or even a few dozen genes that regulate intelligence. Indeed, intelligence may be dependent on the subtle dance of thousands of genes, which makes bioengineering a genius much harder.

Even if scientists find the right genes and “turn them on,” there is no guarantee that people will actually be smarter. In fact, some scientists speculate that trying to ramp up intelligence – whether by biology or machines – could overload the brain’s carrying capacity. According to Martin Dresler, an assistant professor of cognitive neuroscience at Radboud University in the Netherlands, some researchers believe that “evolution forced brains to develop toward optimal … functioning.” In other words, he says, “if there still was potential to optimize brain functioning by adding certain chemicals, nature would already have done this.” The same reasoning could also apply to machine enhancement, Dresler adds.

Even the optimistic Sandberg says that enhancing the brain could prove more difficult than some might imagine because changing biological systems can often have unforeseen impacts. “Biology is messy,” he says. “When you push in one direction, biology usually pushes back.”

THE FUTURE OF BLOOD

Given the brain’s importance, cognitive enhancement might be the holy grail of transhumanism. But many futurists say enhancement technologies will likely be used to transform the whole body, not just one part of it.

This includes efforts to manufacture synthetic blood, which to this point have been focused on therapeutic goals. But as with CRISPR and gene editing, artificial blood could ultimately be used as part of a broader effort at human enhancement. It could be engineered to clot much faster than natural human blood, for instance, preventing people from bleeding to death. Or it could be designed to continuously monitor a person’s arteries and keep them free of plaque, thus preventing a heart attack.

Synthetic white blood cells also could potentially be programmed. Indeed, like virtually any computer, these cells could receive “software updates” that would allow them to fight a variety of threats, such as a new infection or a specific kind of cancer.

Scientists already are developing and testing nanoparticles that could enter the bloodstream and deliver medicine to targeted areas. These microscopic particles are a far cry from synthetic blood, since they would be used once and for very specific tasks – such as delivering small doses of chemotherapy directly to cancer cells. However, nanoparticles could be precursors to microscopic machines that could potentially do a variety of tasks for a much longer period of time, ultimately replacing our blood.

It’s also possible that enhanced blood will be genetically engineered rather than synthetically made. “One of the biggest advantages of this approach is that you would not have to worry about your body rejecting your new blood, because it will still come from you,” says Oxford University’s Sandberg.

Regardless of how it is made, one obvious role for enhanced or “smart” blood would be to increase the amount of oxygen our hemoglobin can carry. “In principle, the way our blood stores oxygen is very limited,” Sandberg says. “So we could dramatically enhance our physical selves if we could increase the carrying capacity of hemoglobin.”

According to Sandberg and others, substantially more oxygen in the blood could have many uses beyond the obvious benefits for athletes. For example, he says, “it might prevent you from having a heart attack, since the heart doesn’t need to work as hard, or it might be that you wouldn’t have to breathe for 45 minutes.” In general, Sandberg says, this super blood “might give you a lot more energy, which would be a kind of cognitive enhancement.”

(For data on whether Americans say they would want to use potential synthetic blood substitutes to improve their own physical abilities, see the accompanying survey, U.S. Public Wary of Biomedical Technologies to ‘Enhance’ Human Abilities.)

HYPE OR PARADIGM SHIFT?

So where is all of this new and powerful technology taking humanity? The answer depends on whom you ask.

Having more energy or even more intelligence or stamina is not the end point of the enhancement project, many transhumanists say. Some futurists, such as Kurzweil, talk about the use of machines not only to dramatically increase physical and cognitive abilities but to fundamentally change the trajectory of human life and experience. For instance, Kurzweil predicts that by the 2040s, the first people will upload their brains into the cloud, “living in various virtual worlds and even avoiding aging and evading death.”

Kurzweil – who has done more than anyone to popularize the idea that our conscious selves will soon be able to be “uploaded” – has been called everything from “freaky” to “a highly sophisticated crackpot.” But in addition to being one of the world’s most successful inventors, he has – if book sales and speaking engagements are any indication – built a sizable following for his ideas.

Kurzweil is not the only one who thinks we are on the cusp of an era when human beings will be able to direct their own evolution. “I believe that we’re now seeing the beginning of a paradigm shift in engineering, the sciences and the humanities,” says Natasha Vita-More, chairwoman of the board of directors of Humanity+, an organization that promotes “the ethical use of technology to expand human capacities.”

Still, even some transhumanists who admire Kurzweil’s work do not entirely share his belief that we will soon be living entirely virtual lives. “I don’t share Ray’s view that we will be disembodied,” says Vita-More, who along with her husband, philosopher Max More, helped found the transhumanist movement in the United States. “We will always have a body, even though that body will change.”

In the future, Vita-More predicts, our bodies will be radically changed by biological and machine-based enhancements, but our fundamental sensorial life – that part of us that touches, hears and sees the world – will remain intact. However, she also envisions something she calls a whole-body prosthetic, which, along with our uploaded consciousness, will act as a backup or copy of us in case we die. “This will be a way to ensure our personal survival if something happens to our bodies,” she says.

Others, like Boston University bioethicist George Annas, believe Kurzweil is wrong about technological development and say talk of exotic enhancement is largely hype. “Based on our past experience, we know that most of these things are unlikely to happen in the next 30 or 40 years,” Annas says.

He points to many confident predictions in the last 30 or 40 years that turned out to be unfounded. “In the 1970s, we thought that by now there would be millions of people with artificial hearts,” he says. Currently, only a small number of patients have artificial hearts and the devices are used as a temporary bridge, to keep patients alive until a human heart can be found for transplant.

More recently, Annas says, “people thought the Human Genome Project would quickly lead to personalized medicine, but it hasn’t.”

Faggella, the futurist who founded TechEmergence, sees a dramatically different future and thinks the real push will be about, in essence, expanding our consciousness, both literally and figuratively. The desire to be stronger and smarter, Faggella says, will quickly give way to a quest for a new kind of happiness and fulfillment. “In the last 200 years, technology has made us like gods … and yet people today are roughly as happy as they were before,” he says. “So, I believe that becoming a super-Einstein isn’t going to make us happier and … that ultimately we’ll use enhancement to fulfill our wants and desires rather than just make ourselves more powerful.”

What exactly does that mean? Faggella can’t say for sure, but he thinks that enhancement of the mind will ultimately allow people to have experiences that are quite simply impossible with our current brains. “We’ll probably start by taking a human version of nirvana and creating it in some sort of virtual reality,” he says, adding “eventually we’ll transition to realms of bliss that we can’t conceive of at this time because we’re incapable of conceiving it. Enhancing our brains will be about making us capable.”

A TALE OF TWO HUXLEYS

To some degree, the ideas and concepts behind human enhancement can be traced to biologist and author Julian Huxley. In addition to being one of the most important scientific thinkers of the mid-20th century, Julian was the brother of Aldous Huxley, author of the famous dystopian novel “Brave New World.”

The novel is set in a future where, thanks to science, virtually no one knows violence or want. But this brave new world also is a sterile place, where people rarely feel love, where children are “decanted” in laboratories and families no longer exist, and where happiness is chemically induced. Although there is an abundance of material comforts in this fictional world, the things that people traditionally believe best define our humanity and make life worth living – love, close relationships, joy – have largely been eliminated.

In contrast with his brother Aldous, Julian Huxley was a scientific optimist who believed that new technologies would offer people amazing opportunities for self-improvement and growth, including the ability to direct our evolution as a species. No longer, he said, would a person’s physical and psychological attributes be subject to the capricious whims of nature.

In his 1957 essay “Transhumanism” (a term Julian Huxley coined), he laid out his ideas, writing that “the human species can, if it wishes, transcend itself – not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity.” Once man took hold of his biological destiny, he “would be on the threshold of a new kind of existence, as different from ours as ours is from that of Pekin[g] man,” Julian Huxley wrote, referring to the name given to the 750,000-year-old fossils of one of our prehistoric ancestors.

A COST TO SOCIETY?

But like Julian’s brother Aldous Huxley, those who oppose radical enhancement say the road to transcending humanity is paved with terrible risks and dangers, and that a society that embraces enhancement might lose much more in the bargain than it gains. “I think that the enhancement imperative, where we’re going to overcome all limitations including death, seems to me to be a kind of utopianism that we’ll have to break a lot of eggs to realize,” says Christian Brugger, a professor of moral theology at St. John Vianney Theological Seminary in Denver.

According to Brugger and other opponents of radical enhancement, those “broken eggs” might include increased social tensions – or worse – as the rich and privileged gain access to expensive new enhancement treatments long before the middle class or poor and then use these advantages to widen an already wide gap between rich and poor. “The risks here of creating greater inequalities seem to be obvious,” says Todd Daly, an associate professor of theology and ethics at Urbana Theological Seminary in Champaign, Ill. “And I’m not convinced that people who get these enhancements will want to make sure everyone else eventually gets them too, because people usually want to leverage the advantages they have.”

For some thinkers, concerns about inequality go much further than merely widening the existing gap between rich and poor. They believe that radical enhancement will threaten the very social compact that underpins liberal democracies in the United States and elsewhere. “The political equality enshrined in the Declaration of Independence rests on the empirical fact of natural human equality,” writes social philosopher Francis Fukuyama in his 2002 book “Our Posthuman Future.” He adds: “We vary greatly as individuals and by culture, but we share a common humanity.”

Brugger of St. John Vianney Theological Seminary agrees. “Right now, there is a common equality because we are all human,” he says. “But all of this changes once we start giving some people significantly new powers.”

Boston University’s Annas shares these concerns. “I think at some point it will be inevitable that enhanced people will see the unenhanced as subhuman,” he says. “[Enhanced] people would probably assume they had the right to rule us, and the rest of us might try to kill them, ending in a lot of dead and hurt people.”

Supporters of human enhancement say the goal is not to create a race of superhumans but to use technological tools to improve humanity and the human condition. Indeed, they say, it is an extension of what humans have been doing for millennia: using technology to make life better. “I don’t believe in utopias and I don’t believe in perfection,” says Vita-More, adding: “For me, enhancement is a very practical way to give us new options to make our lives better. It’s that simple.”

A good example, Vita-More says, is cognitive enhancement. “By giving people increased memory and problem-solving skills, cognitive enhancement will help us be more creative by giving us the ability to put more things together in new ways,” she says. “It will make us better problem solvers.”

Those who support human enhancement also deny that these developments will make social inequalities dramatically worse. New technologies are often socially disruptive and can have a negative impact on certain vulnerable populations, they say. But the problem of inequality is essentially, and will remain, a political one.

“The core Luddite mistake is to point to a social problem and to say that if we add new technologies the problem will get worse,” says James Hughes, executive director of the Institute for Ethics and Emerging Technologies, a pro-enhancement think tank. “But the way to cure the problem in this case is to make the world more equal, rather than banning the technology.”

Human enhancement is just as likely, or even more likely, to mitigate social inequalities than to aggravate them, says Oxford University’s Bostrom, a leader in the transhumanist movement.  “The enhancement project could allow people who have natural inequalities to be brought up to everyone else’s level,” he says.

Hughes, Bostrom and others also dispute the idea put forth by Fukuyama and Brugger that enhancement could displace the sense of common humanity that has undergirded the democratic social contract for centuries. First, they point out that the history of the modern West has been one of an ever-expanding definition of full citizenship. “The set of individuals accorded full moral status by Western societies has actually increased, to include men without property or noble descent, women and non-white peoples,” Bostrom writes. In addition, supporters of enhancement say, the notion that there will be a distinctive species of enhanced individuals who will try to enslave their unenhanced brothers and sisters might make for good science fiction, but it is not likely to happen. Instead, they say, there will be many different types of people, with different types of enhancements. “It seems much more likely that there would be a continuum of differently modified or enhanced individuals, which would overlap with the continuum of as-yet-unenhanced humans,” Bostrom writes, adding that today there are very different types of people (very tall to very short, very intelligent to intellectually disabled, etc.) who manage to live side by side as moral and legal equals.

Finally, transhumanists and other supporters say, history shows that as people gain more control over their lives, they become more empathetic, not less. “Today we have more health, more intelligence and more lifespan than we did 100 years ago, and we’re more compassionate and more empathetic today than we were then,” Hughes says, pointing to a 2011 book by Harvard University psychology professor Steven Pinker, “The Better Angels of Our Nature: Why Violence Has Declined.” The book makes the case that as human society has grown richer and more sophisticated, it also has become less violent. “The more ability we have as individuals, the better we become,” Hughes adds.

A COST TO SELF?

Critics of enhancement question whether people really will be happier if enhancement projects are allowed to come to fruition. According to these critics, philosophers have long held that true happiness does not come from enhanced physical prowess or dramatically longer life, but from good character and virtuous living. “Happiness is found in marriages, in families, in neighborhoods … in people who are willing to sacrifice and suffer for others,” Brugger says. “None of these are promised by enhancement.”

“Happiness also is found in limits,” says Agar of Victoria University. “There are things that I value and am proud of in my life, like my recent book,” he says. “But how can I value the writing of my book if I’ve been cognitively enhanced, and doing such a thing becomes much easier?”

But supporters contend that life still will be meaningful and challenging in a world where enhancement is widespread. “The things that have to do with human character and virtue and those things that make life meaningful will not change as a result of human enhancement, just like they haven’t changed as our society has changed,” says Ted Peters, a professor of systematic theology at Pacific Lutheran Theological Seminary in Berkeley, California. “As long as we are still human, these things will be important.”

Furthermore, an enhanced life will still contain challenges and limits, just different ones, says Ronald Cole-Turner, a professor of theology and ethics at Pittsburgh Theological Seminary, which is associated with the Presbyterian Church (U.S.A.). “The challenges of life will still be there, they may just be different and harder,” he says. “The goal posts will have moved further down the field, that’s all.”

TRANSHUMANISM AND FAITH TRADITIONS

Because human enhancement is still largely an issue for the future, it has not yet attracted a lot of attention in American religious communities. There is, for instance, no official teaching or statement on human enhancement or transhumanism that has come directly from any of the major churches or religious groups in the United States. However, some theologians, religious ethicists and religious leaders have started to think about the implications of human enhancement in light of their traditions’ teachings, offering a sense of how their churches or religions might respond to radical human enhancement if it became possible.

All of the Abrahamic faiths – Judaism, Christianity and Islam – share the belief that men and women have been created, to some extent, in God’s image. According to many theologians, the idea that human beings in certain ways mirror God makes some, but not all, religious denominations within this broad set of connected traditions wary of using new technologies to enhance or change people, rather than heal or restore them.

The Roman Catholic Church, through its large network of educational and other institutions, already has begun formulating an argument against enhancement, based in part on the idea that God’s plan for humanity includes limits and that life’s limits are the very forces that create virtuous, wise and ultimately happy people. “Courage, fidelity, fortitude, generosity, hope, moderation, perseverance, are all cultivated in response to limitations of circumstance and nature,” says John Haldane, a Catholic philosopher who teaches at the University of St. Andrews in Scotland.

Catholics actively support medical and technological advances that can restore someone to health, says Brugger. “But the dividing line for the church is the line between therapy and enhancement.”

Concerns about crossing that line already have been expressed by Catholic-affiliated organizations. In 2013, for instance, the church-affiliated International Science and Life Congress met in Madrid and issued a declaration that warned that “new human species, artificially manipulated” would create “a real danger to human life as we know it.”

Conservative evangelical Protestant churches also are likely to be wary of treatments or technologies that enhance, rather than heal, people, says Daly of Urbana Theological Seminary. “I think most [evangelical] churches will warn against this,” he says. “I think a lot of evangelical leaders and pastors will see this as unwise and will call on people to avoid it.” Indeed, Albert Mohler, one of the intellectual leaders of evangelical Christianity’s largest U.S. denomination, the Southern Baptist Convention, already has, calling it “a new form of eugenic ideology.”

According to Daly and others, evangelicals’ opposition to enhancement would be based in part on the notion that man should not “play God.” According to Daly, “when we attempt to be something more than human, are we running the risk of trying to become, in some ways, like God, as did Adam and Eve?” He adds, “This is an important issue for Christians that, I think, will help drive the debate for us.”

Opposition also would be likely from the Church of Jesus Christ of Latter-day Saints, which teaches that the body is sacred and thus must not be altered. While small enhancements that do not overtly change the body might be acceptable to Mormon leaders, more significant enhancements would probably be “seen as a problem by the church,” says Steven Peck, a bioethicist at Brigham Young University in Provo, Utah.

The Hindu tradition probably would approach human enhancement as a potentially dangerous development as well, although for different reasons than Christian churches, says Deepak Sarma, a professor of South Asian religions and philosophy at Case Western Reserve University in Cleveland. Enhancement is troubling, he says, because it could be used to alleviate suffering, which is necessary to work off bad karma (debt from bad deeds and intents committed during a person’s past lives). Viewed in this light, Sarma says, Hindus could see enhancement as keeping someone from cleansing themselves of these misdeeds from their past lives.

In Islam, according to Sherine Hamdy, an associate professor of anthropology at Brown University, human enhancement would be viewed with concern by some scholars and leaders and embraced by others. Supporters might see new enhancements as a way to help the Muslim world catch up with the West or “at least not get left further behind,” she says. Others would oppose enhancements out of a desire “not to change what God has created.”

Other churches and religious traditions, however, probably would not be opposed or even divided on the issue, scholars say. For instance, mainline Protestant denominations, such as the Presbyterian, Episcopalian or Methodist churches, are unlikely to attempt to prevent their members from taking advantage of new enhancements, says Cole-Turner of Pittsburgh Theological Seminary, which serves a student body made up mostly of mainline Protestants. “I see no clear expression of opposition coming from any of the [mainline] denominations over this, because they will not view it as threatening,” he says. “What you might see instead are efforts to assure fair distribution of these benefits, so that we can mitigate any injustices or inequalities that might be caused by this.”

According to Lutheran theologian Peters, many mainline churches will view enhancement positively because they will see aspects of it as attempts to improve human well-being and alleviate suffering. “I think they will see much of this for what it is: an effort to take advantage of these new technologies to help improve human life,” he says.

Similarly, Buddhists would largely accept and even embrace human enhancement because it could help them become better Buddhists, says Hughes, who is an advocate for transhumanism as well as a Buddhist and a former Buddhist monk. Enhancement that extends life and makes people more intelligent “would be seen as good because you’d have more time to work on enlightenment and … you could be more effective in helping others,” he says.

Jewish leaders and thinkers from all of the faith’s major movements probably would favor developments that improve cognitive ability or physical strength. “Most Jewish bioethicists have no qualms about enhancement, and they see it as an extension of the command [in Jewish law] to ‘improve the world,’” says Hava Tirosh-Samuelson, the director of Jewish studies at Arizona State University. “So long as the improvement alleviates or prevents suffering, it is inherently good and … they tend to endorse it.”

In spite of intense disagreements about the utility and morality of trying to “improve” humanity, many thinkers on both sides of the debate share the belief that if just some of the dreams of today’s transhumanists are realized, human society will change and change significantly. These changes, if they occur, will upend some social norms and possibly religious norms as well. And they will force churches and many other institutions (both religious and secular) to adjust to a new reality. For the first time in human history, the biggest material changes in our society may not be occurring outside of ourselves, in the fields, factories and universities that have shaped human civilization, but inside our bodies – in our brains and muscles and arteries, and even in our DNA.

Abstract

Neuroscientists have become used to a number of “facts” about the human brain: It has 100 billion neurons and 10- to 50-fold more glial cells; it is larger than expected for its body among primates and mammals in general, and therefore the most cognitively able; it consumes an outstanding 20% of the total body energy budget despite representing only 2% of body mass because of an increased metabolic need of its neurons; and it is endowed with an overdeveloped cerebral cortex, the largest relative to brain size. These facts led to the widespread notion that the human brain is literally extraordinary: an outlier among mammalian brains, defying evolutionary rules that apply to other species, with a uniqueness seemingly necessary to justify the superior cognitive abilities of humans over mammals with even larger brains. These facts, with deep implications for neurophysiology and evolutionary biology, are not grounded on solid evidence or sound assumptions, however. Our recent development of a method that allows rapid and reliable quantification of the numbers of cells that compose the whole brain has provided a means to verify these facts. Here, I review this recent evidence and argue that, with 86 billion neurons and just as many nonneuronal cells, the human brain is a scaled-up primate brain in its cellular composition and metabolic cost, with a relatively enlarged cerebral cortex that does not have a relatively larger number of brain neurons yet is remarkable in its cognitive abilities and metabolism simply because of its extremely large number of neurons.

If the basis for cognition lies in the brain, how can it be that the self-designated most cognitively able of animals—us, of course—is not the one endowed with the largest brain? The logic behind the paradox is simple: Because brains are made of neurons, it seems reasonable to expect larger brains to be made of larger numbers of neurons; if neurons are the computational units of the brain, then larger brains, made of larger numbers of neurons, should have larger computational abilities than smaller brains. By this logic, humans should not rank even an honorable second in cognitive abilities among animals: at about 1.5 kg, the human brain is two- to threefold smaller than the elephant brain and four- to sixfold smaller than the brains of several cetaceans (1, 2). Nevertheless, we are so convinced of our primacy that we carry it explicitly in the name given by Linnaeus to the mammalian order to which we belong—Primata, meaning “first rank,” and we are seemingly the only animal species concerned with developing entire research programs to study itself.

Humans also do not rank first, or even close to first, in relative brain size (expressed as a percentage of body mass), in absolute size of the cerebral cortex, or in gyrification (3). At best, we rank first in the relative size of the cerebral cortex expressed as a percentage of brain mass, but not by far. Although the human cerebral cortex is the largest among mammals in its relative size, at 75.5% (4), 75.7% (5), or even 84.0% (6) of the entire brain mass or volume, other animals, primate and nonprimate, are not far behind: The cerebral cortex represents 73.0% of the entire brain mass in the chimpanzee (7), 74.5% in the horse, and 73.4% in the short-finned whale (3).

The incongruity between our extraordinary cognitive abilities and our not-that-extraordinary brain size has been the major driving factor behind the idea that the human brain is an outlier, an exception to the rules that have applied to the evolution of all other animals and brains. A largely accepted alternative explanation for our cognitive superiority over other mammals has been our extraordinary brain size compared with our body size, that is, our large encephalization quotient (8). Compared with the trend for brain mass to increase together with body mass across mammalian species in a fashion that can be described mathematically by a power law (9), the human species appears to be an outlier, with a brain that is about sevenfold larger than expected from its body mass compared with mammals as a whole (10), or threefold larger than expected compared with other primates (2), although how we came to be that way has not been well accounted for in the literature.
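
Written out, the encephalization quotient compares observed brain mass with the mass predicted by the cross-species power law. The exponent below is the commonly used Jerison-style value and is an assumption here, since the text specifies only “a power law”:

$$M_{\text{brain}} = c\,M_{\text{body}}^{\,b}, \qquad EQ = \frac{M_{\text{brain}}}{c\,M_{\text{body}}^{\,b}}, \qquad b \approx 0.75,$$

so the “sevenfold larger than expected” figure quoted above corresponds to $EQ \approx 7$ for humans against the all-mammal fit.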

Why should a larger-than-expected brain bring about larger cognitive abilities? That notion is based on the idea that an “excess brain mass,” relative to the brain mass necessary to operate the body, would endow the behavior of more encephalized animals with more complexity and flexibility (11). The most encephalized species should also be the most cognitively able, and that species, finally, was our own.

However, the notion that higher encephalization correlates with improved cognitive abilities has recently been disputed in favor of absolute numbers of cortical neurons and connections (12), or simply absolute brain size (13). If encephalization were the main determinant of cognitive abilities, small-brained animals with very large encephalization quotients, such as capuchin monkeys, should be more cognitively able than large-brained but less encephalized animals, such as the gorilla (2). However, the former animals with a smaller brain are outranked by the latter in cognitive performance (13).

It remains possible that the source of incongruence between our cognitive abilities and brain size is an unwarranted comparison of species across orders. Such comparisons are based on the notion, implicit in most comparative studies to date, that different brains are just scaled-up or scaled-down versions of a common basic plan, such that larger brains always have more neurons than smaller brains and two brains of a similar size always have comparable numbers of neurons. However, this notion is in disagreement with the observation that animals of similar brain size but belonging to different mammalian orders, such as the cow and the chimpanzee (both at about 400 g of brain mass), or the rhesus monkey and the capybara (at 70–80 g of brain mass), may have strikingly different cognitive abilities and behavioral repertoires. Thus, either the logic that larger brains always have more neurons is flawed or the number of neurons is not the most important determinant of cognitive abilities. The appealing alternative view that total connectivity, gauged from the total number of synapses in the brain, should be a direct determinant of brain processing capabilities runs into the same difficulty. Although this possibility remains to be examined systematically, the few pieces of evidence available in the literature suggest that synaptic density is constant across species (14–17). If that is indeed the case, the total numbers of brain synapses would be simply proportional to brain size and the differences in cognitive abilities between brains of a similar size would, again, be left unaccounted for.
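
In symbols, the difficulty is immediate: if synaptic density $\rho$ is constant across species, then the total synapse count is $S = \rho V$, directly proportional to brain volume $V$, and two brains of similar size carry similar numbers of synapses no matter how different their cognitive abilities.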

On the other hand, it is possible that the relationship between brain size and number of brain neurons is determined by rules that have varied in evolution, and visual examination of brain sizes in the mammalian radiation does suggest that large brains appeared several times independently in most of the mammalian orders (Fig. 1). In this scenario of independent evolution of large brains in different mammalian orders, not all mammalian brains are necessarily built as larger or smaller versions of the same plan, with proportionately larger or smaller numbers of neurons. This scenario leaves room for similarly sized brains across orders, such as the cow and the chimpanzee brains, to contain very different numbers of neurons, just as a very large cetacean brain might contain fewer neurons than a gorilla brain. In that case, size comparisons between the human brain and nonprimate brains, larger or smaller, might simply be inadequate and uninformative, and our view of the human brain as an outlier, an extraordinary oddity, may have been based on the mistaken assumption that all brains are made the same.

Fig. 1. Large brains appear several times in the mammalian radiation. Example species are illustrated for each major mammalian group. The mammalian radiation is based on the findings of Murphy et al. (18) and Kaas (19). Brain images are from the University of Wisconsin and Michigan State Comparative Mammalian Brain Collections (www.brainmuseum.org).

Here, I will explore the different relationships that apply across mammalian orders between brain structure size and numbers of neuronal cells (i.e., their order- and structure-specific neuronal scaling rules); the shared relationships across orders between brain structure mass and numbers of nonneuronal cells and nonneuronal cell density (i.e., their shared nonneuronal scaling rules); the concerted scaling across mammalian brains of numbers of neurons in the cerebral cortex and cerebellum, despite the increase in relative size of the former in larger brains; the constraints imposed by the primate neuronal scaling rules on cortical connectivity; the relationship between brain metabolism and number of neurons; and, finally, how humans compare with other mammals in these aspects, and what that recent evidence implies about human brain evolution.

Not All Brains Are Made the Same: Neuronal Scaling Rules

Testing the possibility that large brains have evolved as different functions of their numbers of neurons across mammalian orders became possible when we determined the numbers of cells that compose the brain of over 30 species belonging to three mammalian orders (20). These studies were made possible by the development of the isotropic fractionator, an unbiased nonstereological method created in our laboratory that provides cell counts based on suspensions of free nuclei derived from tissue homogenates from whole brains divided into anatomically defined regions (21).

Applying the isotropic fractionator, we found that the proportionality between brain mass and number of brain neurons (i.e., the neuronal scaling rule for the brains of a group of animals) is different across brain structures and mammalian orders (reviewed in 20) (Fig. 2). In rodents, variations in brain size outpace variations in the number of brain neurons: Rodent brains vary in mass as a power function of the number of brain neurons raised to a large exponent of 1.5 (22, 23) (Fig. 2, Upper Left). In primates and insectivores, in contrast, brain size increases linearly as a function of its number of neurons, or as a power function with an exponent of ∼1.0 (24–27) (Fig. 2, Upper Left). This means that a 10-fold increase in the number of neurons in a rodent brain results in a 35-fold larger brain, whereas in a primate or insectivore, the same increase results in a brain that is only 10- or 11-fold larger (28). Different neuronal scaling rules also apply separately to the cerebral cortex, cerebellum, and rest of the brain across mammalian orders (Figs. 2, Upper and 3A). This happens as the rate of variation in neuronal density with increasing structure size differs across brain structures and mammalian orders (Fig. 3B), indicating that average neuronal size varies rapidly with numbers of neurons in some and slowly or not at all in others (20). For instance, the cerebral cortex grows across rodent species as a power function of its number of neurons with a large exponent of 1.7 (23), which means that a 10-fold increase in the number of cortical neurons in a rodent leads to a 50-fold increase in the size of the cerebral cortex. In insectivores, the exponent is 1.6, such that a 10-fold increase in the number of cortical neurons leads to a 40-fold larger cortex. In primates, in contrast, the cerebral cortex and cerebellum vary in size as almost linear functions of their numbers of neurons (24, 26), which means that a 10-fold increase in the number of neurons in a primate cerebral cortex or cerebellum leads to a practically similar 10-fold increase in structure size, a scaling mechanism that is much more economical than in rodents and allows for a much larger number of neurons to be concentrated in a primate brain than in a rodent brain of similar size (Fig. 3A).
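
The practical consequence of these exponents can be checked with a few lines of arithmetic. A minimal sketch, assuming the power-law form M = k·N^b quoted above (the constant k cancels out of fold-change comparisons):

```python
# Fold-change in structure mass for a given fold-change in neuron number,
# under a power law M = k * N**b (k cancels in the ratio).
def mass_fold_change(neuron_fold: float, b: float) -> float:
    return neuron_fold ** b

for label, b in [("rodent whole brain", 1.5),
                 ("primate/insectivore whole brain", 1.0),
                 ("rodent cerebral cortex", 1.7),
                 ("insectivore cerebral cortex", 1.6)]:
    print(f"{label} (b = {b}): 10x neurons -> {mass_fold_change(10, b):.0f}x mass")
# Rodent whole brain: ~32x with b = 1.5 (the ~35x quoted in the text reflects
# a fitted exponent slightly above 1.5); rodent cortex: ~50x; insectivore
# cortex: ~40x; primate brain: ~10x, as stated.
```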

Fig. 2. Comparison of allometric exponents for total brain mass, cerebral cortex mass, cerebellar mass, and the rest of the brain mass as a function of numbers of neurons (Upper) or nonneuronal cells (Lower). Exponents, given at the base of the radiation of each individual group (Glires, Primata/Scandentia, and Eulipotyphla), are illustrated by the intensity of the shading. Data are from studies by Herculano-Houzel and her colleagues (22–27); exponents are from a study by Herculano-Houzel (20).

Fig. 3. Shared nonneuronal scaling rules and structure- and order-specific neuronal scaling rules for mammalian brains. Each point represents the average values for one species (insectivores, blue; rodents, green; primates, red; Scandentia, orange). Arrows point to human data points, circles represent the cerebral cortex, squares represent the cerebellum, and triangles represent the rest of the brain (excluding the olfactory bulb). (A) Clade- and structure-specific scaling of brain structure mass as a function of numbers of neurons. Allometric exponents: cerebral cortex: 1.699 (Glires), 1.598 (insectivores), 1.087 or linear (primates); cerebellum: 1.305 (Glires), 1.028 or linear (insectivores), 0.976 or linear (primates); rest of the brain: 1.568 (Glires), 1.297 (insectivores), 1.198 (or 1.4 when corrected for phylogenetic relatedness in the dataset, primates). (B) Neuronal cell densities scale differently across structures and orders but are always larger in primates than in Glires. Allometric exponents: cerebral cortex: −0.424 (Glires), −0.569 (insectivores), −0.168 (primates); cerebellum: −0.271 (Glires), not significant (insectivores and primates); rest of the brain: −0.467 (Glires), not significant (insectivores), −0.220 (primates). (C) Mass of the cerebral cortex, cerebellum, and rest of the brain varies as a similar function of their respective numbers of nonneuronal cells. Allometric exponents: cerebral cortex: 1.132 (Glires), 1.143 (insectivores), 1.036 (primates); cerebellum: 1.002 (Glires), 1.094 (insectivores), 0.873 (primates); rest of the brain: 1.073 (Glires), 0.926 (insectivores), 1.065 (primates). (D) Average density of nonneuronal cells in each structure does not vary systematically with structure mass across species. Power functions are not plotted so as not to obscure the data points. Allometric exponents are from a study by Herculano-Houzel (20); data are from studies by Herculano-Houzel and her colleagues (22–27).

Shared Scaling Rules: Nonneuronal Cells

In contrast to the structure- and order-specific neuronal scaling rules, the numerical relationship between brain structure mass and the respective number of nonneuronal cells seems similar across all structures and species analyzed so far, spanning about 90 million years of evolution (Figs. 2, Lower and 3C): The larger a structure is, the more nonneuronal cells it has, in a nearly linear manner, such that nonneuronal cell density does not vary systematically with structure size (Fig. 3D). This implies that glial and endothelial cells have not been free to vary in size as mammalian brains evolve, a finding suggesting that the functions of these cells must be tightly regulated, allowing very little room for changes in evolution (20).

Shared Scaling Rules: Cerebral Cortex and Cerebellum

Larger brains possess larger cerebral cortices and cerebella but with a slightly faster increase in the size of the former compared with the latter, such that over five orders of magnitude, larger brains possess relatively larger cerebral cortices, whereas the relative size of the cerebellum fails to increase with brain size (7). If the size of these structures were similar functions of their numbers of neurons, relatively larger cerebral cortices should hold increasingly larger percentages of brain neurons across species. Based on this implicit assumption, the discrepancy in the scaling of relative cerebral cortical and cerebellar size in larger brains has been used as an argument favoring the functional importance of relative neocortex expansion in brain function and evolution (3, 29, 30).

Strikingly, we found that the increase in relative size of the cerebral cortex in larger brains does not reflect a relatively larger number of cortical neurons compared with the whole brain, or with the cerebellum. Larger cortices do have larger numbers of neurons, of course (Fig. 3A); however, and in contrast to the increasing volumetric preponderance of the cerebral cortex in larger mammalian brains, numbers of neurons in the cerebral cortex increase coordinately and linearly with numbers of neurons in the cerebellum across mammalian species of different orders (Fig. 4A), regardless of how much the cerebral cortex comes to dominate brain size (Fig. 4B). This coordinated scaling happens with a relatively stable numerical preponderance of about four neurons in the cerebellum to every neuron in the cerebral cortex, even though these structures change in size following different cellular scaling rules across rodents, primates, and Eulipotyphla (31) (insectivores; Fig. 4A). This is illustrated by the finding that in most mammalian species examined so far, including humans, the cerebral cortex contains about 20–25% of all brain neurons, regardless of its relative size [which can reach 82% of the brain in humans (31)]. Thus, for a variation in brain size of five orders of magnitude, the ratio between numbers of cerebral cortical and cerebellar neurons varies relatively little and does not correlate with brain size. This is a strong argument against neocorticalization (as far as numbers of neurons are concerned) and in favor of a coordinated increase in numbers of neurons across the cortex and cerebellum, related to the behavioral and cognitive (not only sensorimotor) functions that corticocerebellar circuits mediate, as brain size increased on multiple, independent occasions in evolution. The coordinated addition of neurons to the cerebral cortex and cerebellum thus argues for coordinated corticocerebellar function and a joint evolution of the processing abilities of the two structures (32–34), a view also supported by the concerted increase in size of the prefrontal cerebral cortex, prefrontal inputs to the corticopontine system, and prefrontal-projecting cerebellar lobules in primates (33, 34).

The issue then becomes accounting for how the cerebral cortex increases in size faster than the cerebellum as both gain neurons coordinately. As examined next, this differential scaling is probably related to how connectivity through the underlying white matter scales in the two structures: the cerebral white matter carries massive long-range connections across cortical areas, both within and across the hemispheres, that are essential for the operation of associative networks (35), whereas the cerebellar white matter is mostly composed of centrifugal and centripetal connections, with associative connections largely restricted to the gray matter of the cerebellum (36). As a result, the cerebral subcortical white matter gains volume faster than the cerebellar white matter in larger brains (36, 37), because overall neuronal size (including dendrites and axonal arborizations) increases faster in the cerebral cortex than in the cerebellum as both gain neurons coordinately.
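
A quick consistency check of this coordination, assuming the linear cross-species relation of slope ∼4.2 reported in Fig. 4A and the cortical count of roughly 16 billion neurons reported for humans in ref. 25 (both taken as given here):

```python
# Cerebellar neurons predicted from cortical neurons under the linear
# cross-species relation of Fig. 4A (slope ~4.2; intercept neglected).
def predicted_cerebellar_neurons(cortical_neurons: float, slope: float = 4.2) -> float:
    return slope * cortical_neurons

human_cortex = 16e9  # ~16 billion cortical neurons (25)
print(f"{predicted_cerebellar_neurons(human_cortex) / 1e9:.0f} billion")  # ~67 billion
# The measured human cerebellum holds ~69 billion neurons (25): the human
# brain falls on the same line as the other mammals.
```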

Fig. 4. Coordinated scaling of the number of neurons in the cerebral cortex and cerebellum of mammals. (A) Number of neurons in the cerebellum covaries with the number of neurons in the cerebral cortex across all species in a way that can be described as a linear function of slope 4.2 (P < 0.0001, r² = 0.995). (B) Increased relative cortical mass does not reflect an increased relative number of brain neurons. Each point represents the average values for one species (insectivores, blue; rodents, green; primates, red; Scandentia, orange). Circles represent relative mass and relative number of brain neurons in the cerebral cortex; squares represent relative values for the cerebellum. All Spearman correlation P values are >0.2. Data are from studies by Herculano-Houzel and her colleagues (22–27). h, human data points.

Cerebral Cortex Expansion, Gyrification, and Connectivity

Even if expanding without gaining relatively more of the total number of brain neurons, the mammalian cerebral cortex does vary in size over five orders of magnitude, albeit as different functions of its number of neurons across mammalian orders (20). Cortical expansion is commonly envisioned as occurring laterally, through the increase of the number of progenitor cells in the subventricular zone and the consequent addition of radial columns containing a constant number of neurons across species (38). A number of models of cortical expansion in evolution assume such a uniform distribution of neurons across species, based on the initial findings of Rockel et al. (39) of a constant number of ∼147,000 neurons beneath 1 mm² of cortical surface of various mammalian species. A second common assumption in evolutionary models of cortical expansion is that a constant fraction of cortical neurons sends axons into the white matter, that is, cortical connectivity does not scale with brain size (37, 40, 41), although some models predict a decrease in cortical connectivity through the white matter in larger cortices (42–45).

Contrary to the expectation of a uniform number of neurons beneath a given cortical surface across species (39), cortical expansion in primates occurs with at least a threefold variation in these numbers across species (46). Moreover, cortical connectivity through the white matter (i.e., the fraction of gray matter neurons that sends or receives an axon through the white matter) indeed decreases as the cortex gains neurons (47). Larger primate cortices increase in size proportionally to the number, N, of neurons in the gray matter, of which a decreasing fraction (proportional to N^0.841) sends axons into the white matter. Given that the average axonal length in the primate white matter increases with N^0.242, and given our inference that the average axonal diameter does not change appreciably with N (47), we predict that the volume of the white matter should increase with N^1.114, which is close to the scaling exponent obtained experimentally (47). The expansion of both the gray and white matter of the brains of primates thus occurs with a decreasing connectivity fraction and a largely invariant average axonal diameter in the white matter, which might also explain the increasing gyrification of larger cortices through the increasing tension of axons coursing in the white matter (reviewed in 48).
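
These exponents can be turned into a small numerical sketch of how long-range connectivity dilutes as the primate cortex gains neurons; the prefactors are arbitrary, so only relative values are meaningful:

```python
# Fraction of gray-matter neurons connected through the white matter,
# using the exponents quoted in the text (prefactors arbitrary):
# connected neurons ~ N**0.841, so the connected *fraction* ~ N**-0.159.
def relative_connected_fraction(n: float, n_ref: float = 1e9) -> float:
    return (n / n_ref) ** -0.159

for fold in (1, 10, 100):
    n = fold * 1e9  # illustrative baseline of 1 billion gray-matter neurons
    print(f"{fold:>3}x neurons -> connected fraction {relative_connected_fraction(n):.2f} of baseline")
# 10x more neurons -> ~0.69 of the baseline fraction; 100x -> ~0.48. The
# white matter still grows (~N**1.114), but connectivity becomes sparser.
```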

A decrease in long-range connectivity, favoring local connectivity, in larger primate brains is expected from the nearly linear increase in cortical size as the brain gains neurons, given that, all things being equal (including connectivity), cortical volume should increase with its number of neurons raised to the power of 4/3. A decrease in connectivity in larger primate brains is compatible with the view that the cerebral cortex displays among its neurons the connectivity properties of a small-world network, that is, a network in which distance between nodes (neurons) is small, with mostly local connectivity and only a relatively small number of long-range connections (49). Evidence that the cortex is connected and functions as a small-world network at the neuronal level has been found recently (50, 51), even though the cerebral cortex may be densely connected at the level of functional areas (52). There is converging evidence that the cerebral cortex also scales as a small-world network at the neuronal level, growing through the addition of nodes that are densely interconnected locally (through horizontal connections in the gray matter) but only sparsely interconnected globally, through long fibers (in the white matter), which still guarantees fast global communication (43, 53–55). A decrease in neuronal connectivity is indeed an expected feature of growing small-world networks (56).
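
The small-world property itself is easy to demonstrate on a toy graph. A sketch using the standard Watts–Strogatz generator from the networkx library (a generic model, not one fitted to cortical data): rewiring just a few percent of local edges into long-range links collapses the average path length while local clustering stays high.

```python
import networkx as nx

n, k = 1000, 10  # 1,000 nodes, each wired to its 10 nearest neighbors
for p in (0.0, 0.05):  # p = fraction of edges rewired into long-range links
    G = nx.watts_strogatz_graph(n, k, p, seed=1)
    print(f"p = {p}: mean path length = {nx.average_shortest_path_length(G):.1f}, "
          f"clustering = {nx.average_clustering(G):.2f}")
# With p = 0 (purely local wiring), paths are long (~50 hops); with only 5%
# long-range edges, the mean path length drops to single digits while
# clustering remains high: fast global communication from mostly local wiring.
```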

Human Brain as a Scaled-Up Primate Brain

Despite common remarks in the literature that the human brain contains 100 billion neurons and 10- to 50-fold more glial cells (e.g., 57–59), no references are given to support these statements; to the best of my knowledge, they are none other than ballpark estimates (60). Comparing the human brain with other mammalian brains thus required first estimating the total numbers of neuronal and nonneuronal cells that compose these brains, which we did a few years ago (25). Remarkably, at an average of 86 billion neurons and 85 billion nonneuronal cells (25), the human brain has just as many neurons as would be expected of a generic primate brain of its size and the same overall 1:1 nonneuronal/neuronal ratio as other primates (26). Broken down into the cerebral cortex, cerebellum, and rest of the brain, the neuronal scaling rules that apply to primate brains also apply to the human brain (25) (Fig. 3 A and C, arrows). Neuronal densities in the cerebral cortex and cerebellum also fit the expected values in humans as in other primate species (Fig. 3B), and the ratio between nonneuronal and neuronal cells in the whole human brain of 1:1 (not 10:1, as commonly reported) is similar to that of other primates (25). The number of neurons in the gray matter alone of the human cerebral cortex, as well as the size of the subcortical white matter and the number of nonneuronal cells that it contains, also conforms to the rules that apply to the other primates analyzed (47). Most importantly, even though the relative expansion of the human cortex is frequently equated with brain evolution, which would have reached its crowning achievement in us (61), the human brain has the ratio of cerebellar to cerebral cortical neurons predicted from other mammals, primate and nonprimate alike (Fig. 4A). Therefore, the observed compliance of the human brain with the same neuronal scaling rules that apply to nonhuman primates [including great apes (62)] makes the human brain simply a scaled-up primate brain: As regards its number of neurons, our brain cannot be considered extraordinary in the sense of being an outlier.

Human Advantage

Observing that the human brain is a scaled-up primate brain in its number of neuronal and nonneuronal cells is not to say that the human brain is not at an advantage compared with other mammals. What needs to be considered is that the human cognitive advantage over other animals may reside simply in the total number of brain neurons (28, 63), and this may be the consequence of humans being primates and, among these, the species with the largest brain (64). Because of the different proportionality between brain size and number of brain neurons between primates and rodents, a primate brain contains more neurons than a similarly sized rodent brain (20). For instance, the human brain has about sevenfold more neurons than the 12 billion neurons that a hypothetical rodent brain of 1.5 kg would be expected to have, according to the neuronal scaling rules that apply to rodent brains (22, 23, 28). Moreover, the primate advantage in numbers of brain neurons compared with a similarly sized rodent brain becomes increasingly larger with increasing brain size. Although direct measurements of numbers of neurons are not yet available for whole elephant and whale brains, one can speculate on how those numbers might differ depending on the particular neuronal scaling rules that apply. Hypothetically, if cetacean brains scaled similarly to primate brains [which is unlikely, given their steep decrease in neuronal density with increasing brain size (1)], a whale brain of 3.65 kg would be predicted to have a whopping 212 billion neurons. In contrast, if cetacean brains scaled similarly to rodent brains [which is a more likely scenario, given the very low neuronal densities in cetacean and elephant brains (1)], that same brain would hold only about 21 billion neurons, which is fewer than the 28 and 33 billion neurons that we have predicted for the chimpanzee and gorilla brains, respectively (28, 62).
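
The quoted predictions follow directly from the two scaling regimes. A sketch with illustrative coefficients back-computed from the figures given in the text (the published fits in refs. 22–26 differ slightly); masses are in grams:

```python
# Predicted neuron numbers under the two neuronal scaling regimes discussed
# in the text. Coefficients are illustrative, anchored to the values quoted
# there rather than taken from the published fits.
RODENT_EXPONENT = 1.5
RODENT_K = 1500 / (12e9 ** RODENT_EXPONENT)  # anchor: 1.5 kg rodent-rule brain ~ 12e9 neurons
PRIMATE_G_PER_NEURON = 1500 / 86e9           # anchor: ~1.5 kg human brain ~ 86e9 neurons

def neurons_rodent_rule(brain_mass_g: float) -> float:
    return (brain_mass_g / RODENT_K) ** (1 / RODENT_EXPONENT)

def neurons_primate_rule(brain_mass_g: float) -> float:
    return brain_mass_g / PRIMATE_G_PER_NEURON

whale_brain_g = 3650  # 3.65 kg cetacean brain
print(f"primate-like whale: {neurons_primate_rule(whale_brain_g) / 1e9:.0f} billion")  # ~209
print(f"rodent-like whale:  {neurons_rodent_rule(whale_brain_g) / 1e9:.0f} billion")   # ~22
# Close to the ~212 and ~21 billion quoted in the text; the small differences
# come from rounding in the anchor values used here.
```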

Compared with other primates, the human brain is therefore not exceptional in its number of neurons, nor should it be considered an evolutionary outlier. If absolute brain size is the best predictor of cognitive abilities in a primate (13), and absolute brain size is proportional to number of neurons across primates (24, 26), our superior cognitive abilities might be accounted for simply by the total number of neurons in our brain, which, based on the similar scaling of neuronal densities in rodents, elephants, and cetaceans, we predict to be the largest of any animal on Earth (28).

Scaling of Glia/Neuron Ratios and Metabolism

Although neurons are generally considered the most important cell type for the generation of cognition, the role of glial cells in brain physiology is increasingly recognized (65). One parameter traditionally considered a functionally relevant indicator of the neuron/glia relationship is the ratio between numbers of glial and neuronal cells in brain tissue (the G/N ratio). The G/N ratio was long thought to increase uniformly with brain size, accompanying uniformly larger neurons (66, 67). Instead, as could be expected from the uniform nonneuronal scaling rules but structure- and order-specific neuronal scaling rules, we found that the nonneuronal/neuronal ratio (which serves as an approximation of the maximal G/N ratio) does not increase homogeneously with increasing brain size or increasing size of brain structures, as originally thought (Fig. 5A). However, the G/N ratio increases in a strikingly homogeneous manner with decreasing neuronal density across brain structures in all mammalian species examined so far, which indicates that the G/N ratio does indeed accompany average neuronal size (reviewed in 20) (Fig. 5B). The finding that glial cells are not nearly as numerous in the human brain as once believed is therefore highly significant: It shows that the human brain, like that of every other mammal observed so far, obeys the same uniform scaling relationship between the G/N ratio and neuronal density (64). Such a universal relationship between G/N ratios and neuronal size, conserved across brain structures and species over 90 million years of evolution, suggests that this ratio reflects a functionally fundamental and evolutionarily conserved aspect of brain morphology (20).

Fig. 5. G/N ratio scales differently across structures and orders with structure mass but scales homogeneously with neuronal density. Each point represents the average other cell/neuron ratio (which approximates the G/N ratio) and structure mass (A) or neuronal density (B) in the cerebral cortex (circles), cerebellum (squares), or rest of brain (triangles) of a species. Notice that, in contrast to the scattered distribution across species and structures in A, data points are aligned across species and structures in B, suggesting that it is smaller neuronal densities (i.e., larger average neuronal cell size), rather than larger structure mass, that are accompanied by a larger G/N ratio. Data are from studies by Herculano-Houzel and her colleagues (22–27).

The increased G/N ratio with increased neuronal size is traditionally believed to reflect an increased metabolic need of larger neurons (68). Once numbers of neurons composing different rodent and primate brains were available, it became possible to estimate how the average metabolic cost per neuron scales with brain size and neuronal density. Contrary to expectations, dividing total glucose use per minute in the cerebral cortex or whole brain (69) by the number of brain neurons revealed a remarkably constant average glucose use per neuron across the mouse, rat, squirrel, monkey, baboon, and human, with no significant relationship to neuronal density and, therefore, to average neuronal size (70). This is in contrast to the decreasing average metabolic cost of other cell types in mammalian bodies with increasing cell size (71–73), with the single possible exception of muscle fibers (74). The higher levels of expression of genes related to metabolism in human brains compared with chimpanzee and monkey brains (75, 76) might therefore be related not to an actual increase in metabolism per cell but to the maintenance of average neuronal metabolism in the face of decreasing metabolism in other cell types in the body.

That the average energetic cost per neuron does not scale with average neuronal cell size has important physiological implications. First, considering the obligatory increased cost related to a larger surface area (68), the evolution of neurons with a constant average energetic cost regardless of their total cell size implies that the relationship between larger neuronal size and a larger G/N ratio must not be related to increased metabolic needs, as usually assumed. Instead, we have proposed that this relationship ensues simply from the invasion during early development of a parenchyma composed mostly of neurons of varying sizes (in different brain structures and species) by glial cells of relatively constant size across structures and species (70). Second, the constant average energetic cost per neuron across species implies that larger neurons must compensate for the obligatory increased metabolic cost related to repolarizing the increased surface area of the cell membrane. This compensation could be implemented by a decreased number of synapses and/or decreased rates of excitatory synaptic transmission (69). Synaptic homeostasis and elimination of excess synapses [e.g., during sleep (77)], the bases of synaptic plasticity, might thus be necessary consequences of a tradeoff imposed by the need to constrain neuronal energetic expenditure (70).

Another consequence of a seemingly constant metabolic cost per neuron across species is that the total metabolic cost of rodent and primate brains, and of the human brain, is a simple, linear function of their total number of neurons (70) (Fig. 6), regardless of average neuronal size, absolute brain size, or relative brain size compared with the body. At an average rate of 6 kcal/d per billion neurons (70), the average human brain, with 86 billion neurons, costs about 516 kcal/d. That this represents an enormous 25% of the total body energetic cost is simply a result of the “economical” neuronal scaling rules that apply to primates in comparison to rodents, and probably to other mammals in general: For a similar brain size, more neurons will be found in a primate brain than in possibly any other mammalian brain (28, 63). It is intriguing to consider, therefore, that our remarkable cognitive abilities, at a remarkable relative energetic cost, might be mostly the result of a very large number of neurons put together in a not extraordinary fashion but, instead, according to the same evolutionary scaling rules that apply to other primates.
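
The arithmetic behind these figures is straightforward; a minimal check, assuming a typical adult energy budget of ∼2,000 kcal/d for the percentage (the text's 25% corresponds to roughly this intake):

```python
KCAL_PER_DAY_PER_BILLION_NEURONS = 6  # average neuronal cost reported in (70)
HUMAN_NEURONS_BILLION = 86

brain_cost = KCAL_PER_DAY_PER_BILLION_NEURONS * HUMAN_NEURONS_BILLION
print(f"{brain_cost} kcal/d")  # 516 kcal/d, as in the text

body_budget = 2000  # assumed typical adult intake, kcal/d
print(f"{100 * brain_cost / body_budget:.0f}% of the body budget")  # ~26%
```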
