A lineup of the usual suspects.
You’ve probably heard the news: AI is going to take your job. Wait, no: It’s going to create a new job for you. AI is going to kill us all! Wait, no, it’s not. AI is already totally smarter than us at, like, all the smart things. But that probably doesn’t matter? Neural networks. Machine learning. Deep learning. OMG. WTF. HELP.
Take a deep breath. Let’s consider for a second what we talk about when we talk about AI. Because I’m not sure many of us really know—or, at the very least, we’re not talking/arguing/worrying about the same things.
“You are right to be confused,” says Harvard computer scientist Leslie Valiant, because the terms artificial intelligence and machine learning “are suddenly being used interchangeably in the popular press.” Even Trevor Darrell, a leading artificial-intelligence researcher at UC Berkeley who’s also part of a DARPA-funded project on (wait for it) “explainable AI,” admits that “there is no precise distinction—they overlap greatly.”
After surveying half a dozen leading experts, I was able to triangulate a reasonable answer as to what the hell these words actually mean:
Artificial intelligence (AI) is the general label for a field of study—specifically, the study of whatever might answer the question “What is required for a machine to exhibit intelligence?”
If that doesn’t sound very satisfying, the experts don’t disagree. “At this point, AI is an aspirational term reflecting a goal,” Darrell says. What he means is that “AI” isn’t, technically speaking, a thing. It’s not in your phone. It isn’t going to eat the world or do anything to your job. It’s not even an “it” at all: It’s just a suitcase word enclosing a foggy constellation of “things”—plural—that do have real definitions and edges to them. All those other terms you hear tossed around—machine learning, deep learning, neural networks, what have you—are much more precise names for the various scientific, mathematical, and engineering methods that people employ within the field of AI.
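To see how un-mysterious those narrower names are, consider what the simplest kind of machine learning actually does. Here’s a minimal sketch in Python (standard library only; the data points and learning rate are invented for illustration): a program that “learns” the line hidden in a few example points by repeatedly nudging two numbers to shrink its error.

```python
# A toy example of "machine learning": fit the line y = w*x + b to
# example data by gradient descent. All numbers here are made up
# for illustration, not taken from any real system.

data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # (x, y) pairs

w, b = 0.0, 0.0       # the two parameters the program will "learn"
learning_rate = 0.01  # assumed step size, small enough to converge here

for step in range(5000):
    # Gradient of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge both parameters downhill, shrinking the error slightly.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned: y = {w:.2f}x + {b:.2f}")  # prints roughly y = 2.02x + 1.00
```

That, scaled up, is the whole trick: deep learning and neural networks nudge millions of parameters instead of two, but it’s arithmetic all the way down.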
But what’s so terrible about using the phrase “artificial intelligence” to enclose all that confusing detail—especially for all of us non-PhDs? The words “artificial” and “intelligence” sound soothingly commonsensical when put together. But in practice, the phrase has an uncanny almost-meaning that sucks adjacent ideas and images into its orbit and spaghettifies them. When Google CEO Sundar Pichai says the company is now “AI-first,” does that mean he’s “summoning the demon,” as Elon Musk has described AI? Is he hinting at an automation-über-alles rebooting of capitalism itself, with universal basic income as the fig leaf? Or is he just talking about incrementally improving the consumer-facing products and services we’re already familiar with—whose version of “artificial intelligence” falls somewhere between “be as smart as a puppy” and “some subset of a cockroach’s brain”?
This may all seem like semantic hair-splitting. But George Orwell was on to something when he famously wrote that “the slovenliness of our language makes it easier for us to have foolish thoughts.” Using the same science-fictional shorthand to describe anything from self-driving cars to slightly better ad targeting surely inhibits basic comprehension. But its potential for seeding both magical thinking and abject confusion about real economic, social, and political changes also seems bottomless. If the experts don’t really know what they talk about when they talk about AI, is it any wonder that you and I don’t, either?
It therefore might be worth our while to apply a bit more effort when referring to “AI”-ish subjects. At the very least, we might want to avoid the word “intelligence” when talking about software, because nobody really knows what it means. For example, Google DeepMind’s Go-playing system AlphaGo was “smart” enough to beat the world’s best human players—but if you try to get it to generalize what it “learned” about Go to any other domain, you’ll find it’s dumber than a houseplant. Even Alan Turing, the genius who mathematically defined what a computer is, considered the question of defining intelligence too hard; his eponymous Turing test dodges it, essentially saying “intelligence is as intelligence does.”
So what should we call “AI,” if not that? Orwell suggests that the cure for words that cloud our thinking is better words: simpler ones, crisper ones. Some commentators suggest merely using “software”; personally, I think “automation” does the trick. Instead of letting a science-fictional phrase prime our minds with visions of inchoate software-spirits possessed of strange powers and inscrutable intentions, choosing our words more deliberately might let us grasp the technologies around us more clearly.