
The Case for Radically Enhancing Humanity

Genetically engineering a smarter population to fend off existential catastrophe isn’t the worst idea, but it has some caveats.

Slate



Photo illustration by Slate. Images by Orla/iStock; Irina_Iglina/iStock.

Our species has made Earth its home for about 2,000 centuries, but there are strong reasons for believing that the current century is the most dangerous. The question is whether the threat level today will continue to grow, stay the same, or shrink. Some observers of human progress are hopeful about the third possibility. They believe that if humanity survives the next century or so, the risk of existential disaster will decline, perhaps to an all-time low. On this “bottleneck” hypothesis, the present era is a uniquely perilous passage that humanity must squeeze through, and the danger arises, in part, from a mismatch between the value rationality of our ends and the instrumental rationality of our means. That is, humanity is acquiring the capacity to construct, dismantle, and rearrange the physical world in unprecedented ways, yet we may lack the morality and foresight to ensure future human flourishing.

It’s a pressing concern for scholars of existential risk. Martin Rees, co-founder of the Centre for the Study of Existential Risk at Cambridge, has argued that “our choices and actions could ensure the perpetual future of life (not just on Earth, but perhaps far beyond it, too). Or in contrast, through malign intent, or through misadventure, twenty-first century technology could jeopardize life’s potential, foreclosing its human and posthuman future.” He adds that “what happens here on Earth, in this century, could conceivably make the difference between a near eternity filled with ever more complex and subtle forms of life and one filled with nothing but base matter.”

Given these stakes, some scholars maintain that taking direct, targeted action to prevent existential catastrophe is the optimal way to maximize our chances of achieving the “OK” outcome for the future of humanity. In other words, we ought to start thinking (and fast) about existential-risk-mitigation macrostrategies—large-scale strategies that aim directly at minimizing specific existential risks.

While proposals like cognitive or moral enhancements to help create a less doomsday-prone population are quite speculative (and on occasion, a little trollish), they’re beginning to gain actual ground in the field of existential-risk study. I would argue that our predicament in the 21st century is sufficiently dire that we should consider a wide range of possibilities, including ones that may initially appear sci-fi but could, if they work, have a significant positive impact on our collective adventure into the shadows and mists of things unknown.

But what are the actual prospects for proposals like cognitive enhancement? Cognitive enhancement is any process or entity that augments the core capacities of the information-processing machine located between our ears. There are two general versions of cognitive enhancements: conventional and radical. The former is in widespread use today: examples include caffeine, fish oil, mindfulness meditation, even education—which essentially provides better mental software to run on the “wetware” of our brains. The result is not just a greater capacity for intellection but changes to the central nervous system itself—e.g., learning to read permanently alters the way the brain processes language.

Radical cognitive enhancements are those that would produce much more significant changes in cognition, and many are still in the research phase. One such intervention that has gained the attention of existential-risk scholars is iterated embryo selection. This process involves collecting embryonic stem cells from donor embryos, then making the stem cells differentiate into sperm and ovum (egg) cells. When a sperm and ovum combine during fertilization, the result is a single cell with a full set of genes, called the zygote. Scientists could then select the zygotes with the most desirable genomes and discard the rest. The selected zygotes then mature into embryos, from which embryonic stem cells can be extracted and the process repeated. If we understand the genetic basis of intelligence sufficiently well, we could specify selection criteria that optimize for general intelligence. The result would be rapid increases in IQ, a kind of eugenics but without the profoundly immoral consequences of violating people’s autonomy. According to a paper by philosophers Nick Bostrom and Carl Shulman, selecting one embryo out of 10, creating 10 more out of the one selected, and repeating the process 10 times could result in IQ gains of up to 130 points—a promising method for creating superbrainy offspring in a relatively short period of time.
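
To get a feel for how that arithmetic compounds, here is a minimal Monte Carlo sketch in Python. The purely additive genetic model, the within-batch standard deviation of roughly 7.5 IQ points, and the absence of diminishing returns are simplifying assumptions made for illustration; they are not figures taken from the Bostrom and Shulman paper.

```python
import random
import statistics

# Illustrative Monte Carlo sketch of iterated embryo selection.
# Assumptions (for illustration only): IQ-relevant genetic value is purely
# additive, and the 10 zygotes derived from one selected embryo vary around
# its value with a standard deviation of roughly 7.5 IQ points.

WITHIN_BATCH_SD = 7.5  # assumed genetic spread among sibling zygotes (IQ points)
BATCH_SIZE = 10        # zygotes produced per round ("one embryo out of 10")
ROUNDS = 10            # number of selection rounds

def one_run() -> float:
    """Cumulative genetic IQ gain after ROUNDS rounds of pick-the-best selection."""
    value = 0.0  # genetic value relative to the starting embryo
    for _ in range(ROUNDS):
        batch = [random.gauss(value, WITHIN_BATCH_SD) for _ in range(BATCH_SIZE)]
        value = max(batch)  # keep the zygote with the highest genetic value
    return value

gains = [one_run() for _ in range(20_000)]
print(f"mean gain after {ROUNDS} rounds: {statistics.mean(gains):.1f} IQ points")
# Under these assumptions the mean gain comes out around 115 points, in the
# same ballpark as the paper's "up to 130 points"; the exact figure depends
# on the assumed variance and on ignoring diminishing returns across rounds.
```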

Some philosophers argue that cognitive enhancements could have “a wide range of risk-reducing potential,” as Bostrom puts it, leading him to conclude that “a strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.” At first glance, this seems right: Surely being smarter would make us less likely to do something dumb like destroy ourselves? But let’s take a closer look at how this tactic might influence the probability of existential disaster.

For cognitive enhancements to mitigate the risk of agential terror—the risk that humanity will get snuffed out by some of its own members acting with malicious intent to harm others—such enhancements would have to interfere with some aspect of the agent’s motivations or intentions. There are reasons for thinking that they could do precisely this. Consider apocalyptic terrorists first. Individuals of this sort are inspired by Manichaean belief systems according to which those outside one’s religious clique—the infidels, the reprobates, the damned—are the unholy enemies of all that is good in the universe. This harsh division of humanity into two distinct groups is facilitated in part by a failure to understand where others are coming from, to see the world from another person’s perspective. Cognitive enhancements could theoretically counteract this failure by enabling people to gain greater knowledge of other cultures, political persuasions, religious worldviews, and so on. In fact, this line of reasoning leads the bioethicist John Harris to defend the use of cognitive enhancements for moral purposes. As he writes, “I believe that education, both formal and informal, and cognitive enhancement are the most promising means of moral enhancement that are so far foreseeable.” The reason, he claims, is that the aversion that some people have toward other belief communities, other races, sexualities, and so on is not a “brute” reaction in the way that one’s fearful reaction to snakes or spiders might be. By correcting the fallacious inferences of extremists, cognitive enhancements could promote more rationality, open-mindedness, and religious moderation.


More generally speaking, Steven Pinker’s “Escalator of Reason” hypothesis states that the observed decline in global violence since the second half of the 20th century has been driven by rising average IQs in many regions of the world, a phenomenon called the “Flynn effect.” The most important concept here is that of “abstract reasoning,” which Pinker identifies as being “highly correlated” with IQ. In his words, “abstraction from the concrete particulars of immediate experience … is precisely the skill that must be exercised to take the perspectives of others and expand the circle of moral consideration.” It follows that insofar as cognitive enhancements can extend the intellectual gains of the Flynn effect, they could produce morally superior individuals and therefore a morally superior world.

Thus, one might suppose that cognitive enhancements could also mitigate the threat of idiosyncratic actors, many of whom suffer from a marked lack of empathy. If only such individuals were more intelligent, if only they had higher IQs, perhaps they would be less likely to engage in homicidal acts—acts that weapons of total destruction such as nuclear weapons, biotechnology, nanotechnology, or artificial intelligence could soon scale up into omnicidal ones.

But many idiosyncratic actors actually exhibit above-average intelligence and have been fairly well educated. One encounters similar problems with respect to the potential threat of misguided ethicists and ecoterrorists: It is not, or at least not obviously, a lack of psychometric intelligence, abstract reasoning, or veridical beliefs that makes these agents risky. Ted Kaczynski is a Harvard-educated mathematician who wrote about the perils of modern megatechnics eloquently enough to influence people like Bill Joy, the co-founder of Sun Microsystems and author of the influential neo-Luddite essay “Why the Future Doesn’t Need Us,” published in Wired. However ghastly his crimes were, the Unabomber was not lacking IQ points. Complicating the situation even more is the fact that empirical science unambiguously affirms that the globe is warming and the biosphere wilting due to human activity. Our species really has been a monstrously destructive force in the world—perhaps the most destructive since cyanobacteria flooded the atmosphere with oxygen about 2.3 billion years ago. Thus, the problem with ecoterrorists like the Gaia Liberation Front isn’t (generally speaking) that they harbor “false beliefs” about reality. Quite the opposite: Their ideologies are often grounded in solid scientific evidence. Nor are they unable to “set aside immediate experience” or “detach oneself from a parochial vantage point,” as Pinker puts it.

If anything, cognitive enhancements could intensify the threat of actors with malicious intent by giving them greater knowledge of how to kill more people, along with a keener awareness of the ubiquity of human suffering and of anthropogenic environmental destruction. Another major concern: Cognitive enhancements would likely increase the rate of technological development, thereby shortening the time between the present and the moment when large numbers of people could have access to a doomsday button. So cognitive enhancements appear to be a mixed bag as a person-engineering approach to mitigating agential terror.

What about the effect of cognitive enhancements on agential error—the risk that humans eradicate themselves by mistake? In that case, it appears that cognitive enhancements could provide a partial solution. For example, consider that higher IQs are positively correlated with a range of desirable outcomes, such as better health, less morbidity, and a lower probability of premature death. One explanation for this correlation is that more intelligent people are less prone to making the sort of cognitive mistakes that can compromise one’s health or put one’s life in danger. As one study articulates this hypothesis, “Both chronic diseases and accidents incubate, erupt, and do damage largely because of cognitive error, because both require many efforts to prevent what does not yet exist (disease or injury) and to limit damage not yet incurred (disability and death) if one is already ill or injured.”

Insofar as this argument is sound, the positive effects of cognitive enhancements on agential error may be especially pronounced as the world becomes more socially, politically, and technologically complex. Although cognitive enhancements could worsen the threat posed by some types of terror agents, the evidence—albeit indirect—suggests that a population of cognitively enhanced cyborgs would be less susceptible to accidents, mistakes, and errors, and therefore less likely to inadvertently self-destruct in the presence of weapons of total destruction.


More broadly, it seems plausible to say that a smarter overall population would increase humanity’s ability to solve a wide range of global problems. Consider Bostrom’s calculation that a 1 percent gain in “all-round cognitive performance … would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited … [it] would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists.” Although I noted above that accelerating the pace of science could have disadvantages, it might also put humanity in a better position to neutralize a number of existential risks. For example, superior knowledge about supervolcanoes, infectious diseases, asteroids, comets, climate change, biodiversity loss, particle physics, geoengineering, emerging technologies, and agential risks could lead to improved responses to these threats.
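
The arithmetic behind that equivalence is simple enough to check directly. The sketch below makes the simplifying assumption (mine, for illustration) that aggregate research output scales roughly linearly with the number of scientists times their average productivity.

```python
# Back-of-the-envelope check of Bostrom's scientist calculation, assuming
# (for illustration) that research output scales linearly with headcount
# times average individual productivity.

scientists = 10_000_000   # roughly "the 10 million scientists in the world"
performance_gain = 0.01   # a 1 percent all-round cognitive performance boost

extra_scientist_equivalents = scientists * performance_gain
print(f"Equivalent to adding about {extra_scientist_equivalents:,.0f} new scientists")
# -> Equivalent to adding about 100,000 new scientists
```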

Radical enhancements could also expand our cognitive space, potentially allowing us to acquire knowledge about dangerous phenomena in the universe that unenhanced humans could never know about, even in principle. In other words, there could be any number of existential risks looming in the cosmic shadows to which we, stuck in our Platonic cave, are cognitively closed. Perhaps we are in great danger right now, but we could only know this if we understood some theory T, and understanding T requires us to grasp a concept C that falls outside our cognitive space. Only after we recognize a risk can we invent strategies for avoiding it.

On the other hand, the whole strategy of “direct work” to keep existential risks at bay has its detractors. According to philosopher Nick Beckstead, there could be “a much broader class of actions which may affect humanity’s long-term potential”—a class that includes “very broad, general, and indirect approaches to shaping the far future for the better, rather than thinking about very specific risks and responses.”

Beckstead’s concept is closely related to the idea of path dependence in the social sciences, according to which past events can constrain, in crucial and long-lasting ways, the space of future development. For example, the adoption of the QWERTY keyboard layout in the late 1800s, which was designed to keep typists from typing so fast that they jammed the mechanical typewriters of the time, set technology on a path that would be exceptionally difficult to deviate from today. (Just imagine billions of people having to relearn how to type on a new keyboard layout.) Similarly, there could be decisions that humanity makes today that have long-term consequences for how civilization will evolve in the coming centuries. Perhaps lifting people out of poverty in the developing world will have cascading effects that yield a new generation of philosophers and scientists who devise novel methods for reducing existential risks. Although solving global poverty wouldn’t directly reduce existential risks, it could change the configuration of subsequent societies in a way that puts humanity in a better position to survive and thrive. The same can be said about any number of possible actions: There could be a vast array of microstrategies that are capable of changing our world trajectory in subtle but critical ways. Suffice it to say that people who care about promoting education, morality, science, democracy, humanitarianism, and wisdom could very well have profound positive effects on future civilization.

And of course, there could also be some combination of macrostrategies (compounded with microstrategies) that produces net positive results. Perhaps the use of advanced cognitive enhancements by our spacefaring descendants could avert the risk of extraterrestrial militarization, which, as political scientist Daniel Deudney has argued, could increase as a result of another popular future humanity-saving option: space colonization. Or maybe differential technological development controlled by a smarter, more existential risk–aware population could lead to the successful creation of a human-friendly superintelligence whose value system aligns with ours or is evolutionarily constrained by some metavalue that conduces to human well-being. It could very well be a configuration of these strategies, implemented in parallel, that produces the right synergistic effect to squeeze us through the bottleneck.

Phil Torres is an author and scholar whose work focuses on the existential threats to humanity and civilization. His latest book is Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, from which this article was adapted.


This post originally appeared on Slate and was published September 18, 2018. This article is republished here with permission.
