Just a few years ago Yuval Noah Harari was an obscure Israeli historian with a knack for playing with big ideas. Then he wrote Sapiens, a sweeping, cross-disciplinary account of human history that landed on the bestseller list and remains there four years later. Like Jared Diamond’s Guns, Germs and Steel, Sapiens proposed a dazzling historical synthesis, and Harari’s own quirky pronouncements—“modern industrial agriculture might well be the greatest crime in history”— made for compulsive reading. The book also won him a slew of high-profile admirers, including Barack Obama, Bill Gates, and Mark Zuckerberg.
In his new book, 21 Lessons for the 21st Century, Harari offers a grab bag of prognostications on everything from new technology to politics and religion. Although he’s become a darling of Silicon Valley, Harari is openly critical of how Facebook and other tech companies exploit our personal data, and he worries that online interactions are replacing actual face-to-face encounters. Much of the book speculates on the revolutionary impact of artificial intelligence. If computer algorithms can know you better than you know yourself, is there any room left for free will? And where does that leave our politics?
Harari is a rapid-fire conversationalist who seems to have an opinion about everything. He’s remarkably self-assured and clearly enjoys the role of provocateur. We began by agreeing that something feels very different about this moment in history. We are on the precipice of a revolution that will change humanity for either our everlasting benefit or destruction—it’s not clear which. “For the first time in history,” Harari said, “we have absolutely no idea how the world will look in 30 years.”
What’s different about this moment in history?
What’s different is the pace of technological change, especially the twin revolutions of artificial intelligence and bioengineering. They make it possible to hack human beings and other organisms, and then re-engineer them and create new life forms.
How far can this technology go in changing who we are?
Very far. Beyond our imagination. It can change our imagination, too. If your imagination is too limited to think of new possibilities, we can just improve it. For billions of years, all of life was confined to the organic realm. It didn’t matter if you were an amoeba, a Tyrannosaurus rex, a coconut, a cucumber, or a Homo sapiens. You were made of organic compounds and subject to the laws of organic biochemistry. But now we’re about to break out of this limited organic realm and start combining the organic with inorganic bots to create cyborgs.
What worries you about the new cyborgs?
Experiments are already under way to augment the human immune system with an inorganic, bionic system. Millions of tiny nanorobots and sensors monitor what’s happening inside your body. They could discover the beginning of cancer, or some infectious disease, and fight against these dangers for your health. The system can monitor not just what goes wrong. It can monitor your moods, your emotions, your thoughts. That means an external system can get to know you much better than you know yourself. You go to therapy for years to get in touch with your emotions, but this system, whether it belongs to Google or Amazon or the government, can monitor your emotions in ways that neither you nor your therapist can approach in any way.
Are you saying computer algorithms can break down personal data we may not even be aware of?
Yes. Fear, anger, love, or any other human emotion is in the end just a biochemical process. In the same way you can diagnose flu, you can diagnose anger. You might ask somebody, “Why are you angry? What are you angry about?” And they will say, “I’m not angry about anything, what do you want?” But this external system doesn’t need to ask you. It can monitor your heart, your brain, your blood pressure. It can have a scale of anger and it can know you are now a 6.8 on a scale of 1 to 10. Combining this with enormous amounts of data collected on you 24 hours a day can provide the best healthcare in history. It can also be the foundation of the worst dictatorial regimes in history.
You write, “Who owns the data owns the future.” What do you mean?
Previously in history the most important resource was land. Then it was machines. Now data is the most important resource. Politics is becoming the struggle to control data, and the future belongs to those who monopolize the data. One of the biggest political questions of our era is, How do you regulate the ownership of data?
What’s the big fear here?
It means an external system can know you better than you know yourself. It can predict your choices and decisions. It can manipulate your emotions, and it can sell you anything, whether a product or a politician.
So why not just give ourselves over to the computers? Let them make decisions for us. Maybe we’ll be happier.
In some cases this would be a very good idea. It starts with decisions like who drives the car. At present, more than 1 million people worldwide are killed each year in traffic accidents. That’s more than the number of people who die from war and crime and terrorism put together. And almost all traffic accidents are caused by humans making bad decisions. If we switch and let the algorithms, the computers, drive the vehicles, it won’t reduce traffic accidents to zero, but it could eliminate maybe 90 percent of them and save hundreds of thousands of lives.
So when is giving authority to algorithms a bad idea?
When you start giving algorithms the authority to decide what to study, where to live, who to marry, who to vote for. The algorithm makes recommendations and it’s up to you to decide whether to follow the recommendation or not. In many cases people will follow the recommendations because they realize from experience the algorithms make better choices. The recommendations may never be perfect, but they don’t have to be. They just have to be better, on average, than human beings. That’s not impossible because human beings very often make terrible mistakes, even in the most important decisions of their lives. This is not a future scenario. Already we give algorithms authority to decide which movies to see and which books to buy. But the more you trust the algorithm, the more you lose the ability to make decisions yourself. After a couple of years of following the recommendations of Google Maps, you no longer have a gut instinct of where to go. You no longer know your city. So even though theoretically you still have authority, in practice this authority has been shifting to the algorithm.
The fear we usually hear about artificial intelligence is it will allow the robots to take over. They will gain autonomy and become our masters. But you’re saying something quite different. You’re saying we will give in to the machines.
Right. We will shift authority to them. In most science-fiction movies, the robots rebel and try to kill the humans. And the humans must fight back and destroy the robots. This is a very comforting myth. It tells humans that nobody can do a better job than you. If you rely on the robots, it will end in disaster. The far more frightening scenario is that the robots will make better decisions than us. Then the question is, “What is human life all about?” For thousands of years we have constructed this idea of human life as a drama of decision-making. Life is a road with many intersections, and every couple of days or months or years, you need to make decisions.
This is what our ethical systems are based on. This is to some degree what religion is based on.
Exactly. You make good decisions, you go to heaven. You make bad decisions, you go to hell. And this is everything from Shakespeare plays to silly Hollywood comedies. Whom should they marry? Should I go to war or make peace? What happens to our notion of humanity if Hamlet just takes out his smartphone and asks Siri what to do?
How’s this technological shift affecting political systems? The dominant political ideology, liberal democracy, sure seems under threat right now.
What’s happened is we now lack a good story to understand where we are headed. Even if some of us are still committed to the ideology of liberal democracy, it has lost its power of explanation. In the 1990s and 2000s, it made some forceful prophecies about where we are heading. Liberal democracy and free-market capitalism were going to spread around the world, and all the countries would become like the United States or Denmark. But now the liberal story no longer has a clear vision for the future. Nobody has a clear vision for the future. Nobody on either the right or the left has any meaningful vision for how humankind and the world will look in 2050. Even those who believe in the liberal story as an ethical system can no longer pretend its prophecies are true. The world doesn’t behave as the liberal story expected it to behave. Especially those who believe in liberal democracy don’t understand what the hell is happening.
I don’t understand what the hell is happening. Liberal democracy seems to be the recipe for success, and yet it doesn’t seem to be working out that way for a whole bunch of countries.
Yes. As more autocratic leaders take control, a lot of democratic countries are struggling. But it should be emphasized that the world in the last 30 years, under the hegemony of liberal democracy and liberalism, has been in the best shape ever, at least for human beings. The world was never so prosperous and peaceful as in the last 30 years. So to a large extent the liberal system has delivered. But people are losing faith in it. And one of the main reasons is that the rise of the new technologies means that more people are being left behind. And it’s not just the technologies but the new scientific insights. The liberal story is based on the ideal and the notion of free will, that the free will of individual humans is the ultimate source of authority in the world. But science is now telling us there is no such thing as free will. It’s a myth.
There’s a lot of debate about that. A lot of scientists and philosophers are not willing to give up that idea of free will.
As private individuals, maybe. But if you look at the scientific journals, articles, and experiments, science doesn’t even understand the meaning of free will. We don’t know of any free processes in nature. We know only two types of processes in nature. We know of deterministic processes and random processes. And the combination of the two gives us probabilistic processes. But randomness isn’t freedom and probability isn’t freedom. People certainly have a will. People certainly have desires. People certainly make choices all the time. But we are not free to choose our desires. We are not free to choose what to will. Our desires are shaped by both nature and culture in ways that are beyond the understanding and control of individuals.
Are you saying this loss of individual control is feeding the breakdown of democracy?
Well, two things are happening at the same time. First of all, we’ve gained the ability to hack human beings. If you believe in free will, you will say this is impossible. Nobody can know me better than I know myself. Nobody can predict my choices or manipulate my desires, because they are a reflection of my free will, of my free spirit. But if you believe that, you become the easiest person to hack and to manipulate.
Are you talking about the political groups that have used Facebook to sway political opinion?
That was just the tip of the iceberg. Or a wake-up call. But, yes, what Cambridge Analytica and all these companies and bots did was hack humans. They got to know your existing hatreds and fears and biases. And once you know the biases and existing weaknesses of a particular person, you can work on that.
Fear and hate are primal emotions. Are you saying if you can tap primal emotions, you can control the political discourse?
Yes. If you want to destroy the ability to have a meaningful public conversation, you discover the fears and hatreds of people and magnify them. And you do it on an individual basis. You can’t just show the same story to everybody because different people have different weaknesses. So maybe they discover you have a built-in bias against immigrants. They will show you a fake news story about an immigrant gang raping local women. They know you will believe it because you already have this bias. But your neighbor, she has a very different bias. She’s in favor of immigration. But her bias is she thinks everybody who opposes immigration is a fascist, racist idiot. So they show her a different story about a gang of right-wing fanatics who kill immigrants. And she will believe this as easily as you believed the story about the immigrants raping local women. It plays to her existing fears and hatreds and weaknesses. Now, if you think people make decisions out of completely free will, you will say this is impossible. But then you are very easy prey to such manipulation.
You said two things were feeding the breakdown of democracy. What’s the second thing?
The second thing is that the future is leaving more people behind. Much of the resentment in the world today, especially in the U.S., is not about present-day difficulties. It’s about people looking to the future and realizing the future doesn’t need them. The center of so many political and social struggles in the 20th century was exploitation—the elites exploiting the working class. But in the 21st century, the big fear is not that the elites will exploit us. The big fear is that the elites won’t need us. We are becoming irrelevant. To a large extent, this fear is justified. Many people will become irrelevant to the economy, to the political system. So they are trying to use their political power before it’s too late.
If your crystal ball is accurate, what can we do?
We can definitely regulate the new technologies. We can make sure they are used for good and not evil purposes. We need to make sure the big-data algorithms are serving us, the individual people, and not just serving the interests of the corporation or government. At present we see more AI systems that are used to monitor individuals in the service of governments and corporations. But the technology itself can be used to monitor the corporations and the governments in the service of the individuals. Most of the efforts give the government or corporations the ability to monitor us. But there is no technical problem in reversing the direction of the surveillance.
We could also just simply unplug.
I would definitely recommend that everybody unplug for at least an hour or two every day and for longer periods during the year. I completely unplug and practice two hours of meditation every day. Every year I go for longer retreats of 30 or 60 days of complete disconnection from all phones, computers, devices. I’m a happier and calmer person because of that; I have greater peace of mind. It also helps me in my job of seeing the world better. Because what really comes between you and the world, makes everything so blurred and hard to understand, are your own weaknesses, your own pre-existing biases and fears. If you don’t know these biases, don’t know your fears and hatreds and cravings, it will be extremely difficult to understand the world. If your mind is all over the place and you have a hard time focusing it for any length of time, you will never be able to go deep into any question. Meditation helps me to focus my mind. It helps me to get to know my weaknesses and biases.
Are you an optimist or pessimist about where we’re headed?
I will summarize my view of the world in three simple statements. Things are better than ever before. Things are still quite bad. Things can get much worse. This adds up to a somewhat optimistic view because if you realize things are better than before, this means we can make them even better. We are not stuck in the same miserable position for all of history. There are things we can do to improve the situation. But there is nothing inevitable about it. I’m not a believer that science and technology will inevitably create a better world. Science and technology guarantee only one thing. And this thing is power. Humankind is going to become more powerful. But what to do with this power? Here we have all kinds of options. If you look back in history, sometimes people use power very wisely, and very often they misuse their power. One of the most important forces in human history is human stupidity. We should never underestimate human stupidity. When you combine the limitless resource of human stupidity with amazing new powers that humankind will gain in the 21st century, this can be a recipe for disaster.
Steve Paulson is the executive producer of Wisconsin Public Radio’s nationally syndicated show To the Best of Our Knowledge. He’s the author of Atoms and Eden: Conversations on Religion and Science.