
The Father of the Emoticon Chases His Great White Whale

Dr. Scott Fahlman invented a playful keyboard shortcut that is now used more than six billion times a day. But he hopes to be remembered for something a bit more substantial than a smiley face.

Narratively



Illustrations by Thomas Howes

Every year on September 19, Dr. Scott Fahlman passes out smiley face cookies. He stands outside Pittsburgh’s Carnegie Mellon University where he’s been a computer science professor for nearly forty years, and plays campus celebrity. On the same date in 1982, Fahlman invented the smiley-face emoticon. Now, his brainchild, which he simply calls “the smiley,” gets its own university-sponsored birthday party, complete with a table of cookies inscribed with :-) in chocolate frosting.

“Smiley was not available for comment at the time of this writing,” reads a CMU press release, “but has been seen around campus wearing a party hat and seems to be enjoying the limelight. <:-)”

The school graciously embraces having such a lighthearted and fun event associated with its serious, world-class computer science department.

“It’s an occasion for 64,000 selfies with me and J. Random Student,” says Fahlman, sixty-seven. He signs autographs, helps give out cookies and sell smiley novelty T-shirts while wearing one himself. As could only be expected, he accepts everyone’s picture requests, poses — and smiles.

Few inventions are so universally popular as the emoticon — an estimated six billion are sent every day — or as prescient — when the emoticon was created in 1982, the world’s best-selling computer was the month-old Commodore 64, so named for its then cutting-edge sixty-four kilobytes of RAM. But its birth was less an epiphanic moment than an office joke.

Fahlman invented the smiley when his CMU colleagues were having trouble recognizing sarcasm on an electronic bulletin board. The boards were a precursor to today’s Internet forums and included “flame wars,” heated debates between users. The need for a “joke marker” arose after a series of posts speculating about various things that could happen in a free-falling elevator. Would a pigeon in the elevator keep flying? Would a lit candle go out? What would a puddle of mercury do? It was “tech-nerd humor,” explains Fahlman. But the whole thing went off the rails when some users misinterpreted the messages as real elevator safety warnings. Back-up tape recovered in 2002 shows the original conversation thread:

17-Sep-82 10:58    Neil Swartz at CMU-750R      Elevator posts

Apparently there has been some confusion about elevators and such.  After talking to Rudy, I have discovered that there is no mercury spill in any of the Wean hall elevators.  Many people seem to have taken the notice about the physics department seriously…

Days of discussion followed about which character on a standard English keyboard is the funniest. “%,” “*,” and the now-famous “#” were all contenders, as well as “&,” since, according to Keith Wright, then a research programmer, “‘&’ looks … like a jolly fat man in convulsions of laughter.” A noted artificial intelligence researcher, Fahlman ended the debate. Staring at his keyboard, he hit on the idea of combining characters, making use of a horizontal line of text to build a picture you’d have to look at sideways. He tossed off the message containing the first emoticon in ten minutes. He didn’t proofread it, just posted it. The smiley — and its frowny face sibling, always given short shrift — were born at exactly 11:44 am:

19-Sep-82 11:44 Scott E Fahlman  :-)
From: Scott E Fahlman

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use:

:-(

Fahlman became the “father of the emoticon.” He was a relatively new computer science professor, then thirty-four years old. He never thought his little joke would take the world by storm.


“Sometimes I know how Dr. Frankenstein felt,” he said in a recent talk he gave on creating an “international brand.” The comparison might be true — if the monster weren’t so underwhelming to this Frankenstein.

“I say much funnier and more interesting things [than] that in my average message,” he says about the day he typed out the smiley. “It was nothing special.”

Throughout his career as an AI researcher, Fahlman has worked toward revolutionizing human-computer interaction, but perhaps his most lasting contribution is inadvertently creating a pint-sized staple of digital communication. Fahlman has spent forty years trying to achieve what he once did in ten minutes: invent something useful and enduring to humanity. His smallest idea might be his biggest, but he has so many other big ideas.

***

Fahlman became a computer scientist the day John Glenn orbited the earth: February 20, 1962. Fahlman once toyed with the idea of being an astronaut himself, but on that day when the entire country was excited for Glenn, Fahlman realized something.

“He’s cargo,” Fahlman says. “The computer’s flying.”

He wrote his first computer program when he went to MIT for college three years later, and wound up studying in the AI lab there for twelve years. After finishing his bachelor’s degree in electrical engineering and computer science, he spent four years on his master’s thesis. He could have placed a bow on that research and turned it in as his dissertation, but instead worked another four years before earning his doctorate. He had started building an AI system called NETL. Upon completion, it would solve the problem of how to represent human knowledge to a machine.

He was pulling all-nighters. Chunks of his hair were falling out. But it was also the most creative time in his life, the work laying the foundation for his entire career.

***

Fahlman isn’t one to mince words about his research. He speaks authoritatively, and enjoys elaborately explaining things. But there’s also an impish quality about him, befitting the man who invented the emoticon. He has searching eyes, a gnomish beard, and a perpetual smirk as if he knows something nobody else in the room does. He loves to speak in analogies, hypotheticals, and conceptual metaphors — “examples” he calls all of them — such that talking to him can feel like playing a riddle game.

Sitting in his corner office at CMU, I ask him why the English language is so difficult for a machine to understand. He tells me the story of the Three Little Pigs. Gearing up for it, he pauses, reclines slightly in his desk chair, and grins as if I’ve taken some bait.

“Now, the wolf decides to deceive the pig,” Fahlman emphasizes. “Well, this is something that philosophers and psychologists and linguists have grappled with for a long time: What does it mean to deceive someone?”

His computer is behind him, the screen saver cycling through travel photos he’s taken. In another life, he might have been a photographer. Computer science journals line his bookshelves.

“So, the wolf takes this action, and if it works, the pig believes the wolf has gone away, opens the door, and — pork chops for dinner. If it doesn’t work, the pig says, ‘Oh, I’ve heard that one before, he’s still out there,’ [and] doesn’t open the door.”

The fairy tale turns out to be an example of the role multiple viewpoints play in our understanding of reality — something that is extremely challenging for machines to grasp. To understand the story of the Three Little Pigs, one must comprehend what each of the characters knows. For a machine to replicate this understanding, it needs to keep each character’s knowledge in its own set: one to represent the pig’s knowledge of the world, another for the wolf’s, along with a representation of the mental state the wolf is trying to get the pig into — the deception. These contexts can be layered infinitely, and Fahlman is building a system that contains them.
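The layering is easier to see in a toy sketch. The code below is not Scone or anything Fahlman has built; it is a minimal, hypothetical Python illustration of nested belief contexts, where the wolf’s goal (a pig who believes the wolf has gone away) sits in its own context, separate from what is actually true.

# Toy sketch of per-character belief contexts (hypothetical, not Scone).
# Each context holds facts for one viewpoint and can nest inside another,
# so a system can represent what the wolf wants the pig to believe.

class Context:
    def __init__(self, owner, parent=None):
        self.owner = owner        # whose viewpoint this is
        self.parent = parent      # enclosing viewpoint, if any
        self.beliefs = set()      # facts taken as true in this context

    def add(self, fact):
        self.beliefs.add(fact)

    def holds(self, fact):
        # A fact holds here if stated here or inherited from the parent context.
        return fact in self.beliefs or (self.parent is not None and self.parent.holds(fact))

world = Context("narrator")
world.add("the wolf is at the door")

pig_view = Context("pig", parent=world)        # what the pig believes
wolf_view = Context("wolf", parent=world)      # what the wolf believes
wolf_goal = Context("pig", parent=wolf_view)   # the belief the wolf wants the pig to hold

wolf_goal.add("the wolf has gone away")        # the deception, as a nested context

print(world.holds("the wolf has gone away"))      # False: not true in reality
print(wolf_goal.holds("the wolf has gone away"))  # True: the target mental state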

***

In many ways, fame is fun for a computer scientist. “It’s been a ride,” Fahlman says about watching the emoticon take off. He recently had a boardroom named after him at Pittsburgh’s swanky new Hotel Monaco — though the hotel’s management didn’t tell him beforehand; he heard about it from friends at the dog park. Fahlman has been featured in The New York Times, Wired, Buzzfeed and ABC News, among others, and participated in a live segment on NPR’s “Talk of the Nation.” He even appeared in the Italian men’s edition of Vogue, L’Uomo.

“I mean, I’m a really fashionable guy, right?” he asks while wearing a denim button-up, jeans, and brown Dockers loafers. (The picture in L’Uomo Vogue shows Fahlman in a bomber jacket with frozen mist rising around him. It was bitterly cold the day of the shoot and Fahlman told the photographer he thought the mist would look cool.)

But there are things his fame won’t give him. He never tried to copyright the smiley, so he hasn’t made any money from it — not even when the Pittsburgh-based restaurant chain Eat’n Park trademarked a now regionally famous “smiley cookie” five years after the emoticon’s invention. Save for CMU’s public relations efforts, the smiley hasn’t exactly helped Fahlman professionally.

Scott Fahlman poses for a photo in 1984. Photo courtesy of Scott Fahlman.

“Obviously, when you’re up for promotion,” Fahlman says, “[the university is] absolutely not looking at that.”

Fahlman also readily admits that he did not invent the emoticon all on his own. There’s evidence that both the idea, and the thing itself, existed prior to his office antics. A century ago, copy-setters at a satirical magazine jokingly created typographical “studies in passion and emotions” designed to look like faces. Fahlman only credits himself for the smiley’s “turn your head to one side principle” and its particular colon-hyphen-parenthesis combination of characters.

Like all viral ideas, the smiley arose from a lucky confluence of time and place. Fahlman isn’t even convinced that it was unique to the computer nerd culture at Carnegie Mellon. It might’ve cropped up at Stanford or MIT.

“I think that could’ve happened anywhere,” he says.

Still, though Fahlman may not have originated the emoticon, it took hold with him and became a global phenomenon. He was its vessel. In a way, it chose him.

“It’s a fun thing,” Fahlman says. “It’s not like I’m famous for having [driven] into a school bus full of children or something. [The smiley is] not a bad thing to be famous for. Not what I would’ve chosen, but so be it.

“What’s amusing is that [after] a forty-year career working on AI — I could solve AI and I know what the first line of my obit would be.”

***

For Fahlman, it always comes back to “the problem.” He uses the phrase constantly. Focusing on the problem, putting a dent in the problem, solving the problem. What is the problem? It’s not a single thing, but the questions posed by AI as a whole. How does one represent knowledge? What is understanding? How do people learn? What do we mean when we say something? These are all parts of the problem. All by way of trying to replicate human intelligence in a machine: AI’s ultimate goal.

Among other things, Fahlman has spent the last decade trying to finish the project he started as a graduate student at MIT, building a knowledge base called Scone. Fahlman finds acronyms cumbersome, but explains that “Scone” stands for Scalable Ontology Engine, or Scott’s Ontology Engine “depending on what mood [he’s] in.”

At its core, Scone is an effort to teach a computer the knowledge humans intuitively understand. The way to do this is to create a body of knowledge by painstakingly entering into a computer a series of statements about the world. Fahlman’s favorite example is “Clyde the Elephant.” To represent to Scone what an elephant is, a programmer enters: “an elephant is an animal…” “an elephant has four legs…” “an elephant is gray…” Now the computer understands the concept of “elephant,” such that when any new individual elephant is mentioned, it can look to its own knowledge and draw inferences about it, much like humans can. When Clyde is added to Scone, he looks like this:

(new-indv {Clyde} {elephant})

The computer checks Clyde for consistency with information already in the knowledge base. Knowing that Clyde is an elephant, it can infer that Clyde is gray, is an animal, and has four legs, making the kind of higher-level conceptual connections people take for granted. Because of its potential to reason through different alternatives, which could improve human-robot cooperation on complex tasks, research on Scone is currently funded by the Office of Naval Research. The Defense Advanced Research Projects Agency (DARPA), Cisco Systems, and Google have also supported his work.
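A rough sketch of that inference, written in ordinary Python rather than Scone’s own Lisp-like notation; the class and names here are invented for illustration and are not Scone’s actual implementation.

# Hypothetical sketch of type inheritance, not Scone's real code:
# properties attached to a type flow down to every individual of that type.

class KB:
    def __init__(self):
        self.parents = {}      # type -> parent type ("elephant" is-a "animal")
        self.properties = {}   # type -> set of properties
        self.individuals = {}  # individual -> its type

    def new_type(self, name, parent=None):
        self.parents[name] = parent
        self.properties.setdefault(name, set())

    def add_property(self, type_name, prop):
        self.properties[type_name].add(prop)

    def new_indv(self, name, type_name):
        self.individuals[name] = type_name

    def infer(self, individual):
        # Walk up the type hierarchy, collecting everything the individual inherits.
        facts, t = set(), self.individuals[individual]
        while t is not None:
            facts |= self.properties.get(t, set())
            facts.add(f"is-a {t}")
            t = self.parents.get(t)
        return facts

kb = KB()
kb.new_type("animal")
kb.new_type("elephant", parent="animal")
kb.add_property("elephant", "is gray")
kb.add_property("elephant", "has four legs")

kb.new_indv("Clyde", "elephant")   # loosely analogous to (new-indv {Clyde} {elephant})
print(kb.infer("Clyde"))
# -> {'is gray', 'has four legs', 'is-a elephant', 'is-a animal'}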

How a knowledge base such as Scone differs from other AI technology is simple, but profound: it strives for broad human understanding. In recent years, AI has been successful at producing systems with the opposite goal: a more narrow, but superhuman intelligence. Deep Blue, the chess-playing computer that outmatched reigning world chess champion Garry Kasparov in 1997, and Watson, the supercomputer on Jeopardy! that beat two champions in 2011, are examples of these machines. Arguably the most ubiquitous AI is a search engine that can browse and index the entire Internet and return the most relevant results per request.

What’s passing for human intelligence in computers most of the time is advanced computation, manipulation, and matching of symbols with no understanding of them. Siri? It’s “a parlor trick,” Fahlman says.

He laments that AI as a field has largely moved away from its original goal. Since the mid-1980s, the information explosion, combined with exponential growth in computing power, has generated huge commercial interest in developing specialized AI technologies. An influx of money shifted research focus away from creating a general intelligence and toward specialized statistical learning that industry could use.


The shift has been staggeringly profitable. Google, by virtue of its search engine algorithms, has become one of the biggest commercial successes of the last fifteen years — in any market, not just the technology sector. “Big data” — systems that specialize in data management and analysis — is estimated to be worth more than one hundred billion dollars and growing at almost ten percent each year. Why would the AI field return to building a general intelligence now?

“Just a few of us stubborn old guys and a few of our idealistic young students are still trying to work on it,” Fahlman says. He won’t abide parlor tricks; he’s a purist.

Early in his career, Fahlman considered leaving academia and making a million dollars. One plan was to leave, cash in on the AI boom of which he was wary, but then come back. The money would’ve allowed him to return to research and “be able to do this right.”

“Everyone craves funding that’s patient,” he says. But ultimately he thought running a company would have led to other problems — maintaining profitability, corporate backbiting — too large a distraction from the problem.

In the midst of vast wealth creation, Fahlman is still driven by the idea of a grand mystery. “The goal of understanding and replicating human-like intelligence remains as one of the great intellectual challenges of mankind,” he wrote in a journal article, “one of the last great mysteries.” It’s the same problem Fahlman’s been drawn to over the better part of the last five decades.

“I stubbornly and fanatically want to get at the heart of the monster,” he says.

While all CMU faculty websites have a CV and links to academic papers, Fahlman’s homepage also includes his musings about the emoticon — an essay titled “Smiley Lore :-)” — and a favorite quotations page featuring Friedrich Nietzsche, Abraham Lincoln, Alan Turing, George Orwell, Miss Piggy and himself, twice. He also keeps a blog called Knowledge Nuggets with writings on AI and Scone for a lay audience. His informal writing style and use of amusing examples have apparently gotten him in trouble in academia. “Most of my peer referees HATE this,” he wrote to me in an email, “and that probably explains why I’ve published fewer papers.”

But much of what he does is evangelizing. In Fahlman’s view, people need to be talking about his vision of AI, lest we all forget the field’s original goal.

“If you really solve the problem, it would be an intelligence,” he stresses.

The stakes are high for him. Anything short of a full intelligence and we’ve not only strayed from AI’s core mission, but we’ve also missed out on an opportunity to understand ourselves. Fahlman believes that to successfully replicate human intelligence requires a comprehensive understanding of the way that intelligence functions. If a replica truly works, there can be no “carpet you’re sweeping stuff under,” as with existing AI. Thus, his method is not solely concerned with how to produce the best facsimile of the human mind: it’s also about gaining broader insight into it.

“If we can build [an intelligence], that’s a pretty good way of saying we’ve understood what the hell’s going on when [a] kid’s reading a story,” Fahlman says. “I want to get the human-like part. I want to get the part that’s missing.”

***

This September, the emoticon turns thirty-three. Fahlman’s funding for Scone runs out three months later, and after that, his future is uncertain. He could retire, but he doesn’t know if he will. He has plans, a multi-point checklist. He wants Scone to be widely accessible. In the works is a Scone tutorial book so everyone can learn to use it. In his vision, if someone wants to write an adventure game, she should be able to pull Scone off the shelf, run it, and have it understand everything she tells it: elves carry bows and arrows, dwarfs carry axes, but a bow and arrow can kill you from farther away. Eventually, Scone could complement all kinds of systems: medical databases, help desks, corporate files. It should also be able to understand us when we talk to it in English — another AI challenge, called natural language processing, that has vexed researchers for sixty years. Fahlman casually adds that he has to solve natural language, too.

“When you reach a certain age you have to start counting backwards,” he says, explaining that a person can’t take on five-year-long projects forever. “Because then it would take another three or four for me to get famous, and then, am I too old to enjoy it?”

He might not know what to do with it, the fame, in the end. The point is to keep building, to prove he’s on the right track — and always has been. Others will jump in. All he needs to do is solve the problem enough.

Rachel Wilkinson is a writer living in Pittsburgh and the nonfiction editor of Twelfth House Journal. She recently completed an MFA at the University of Pittsburgh.




This post originally appeared on Narratively and was published July 7, 2015. This article is republished here with permission.

Visit narratively.com to discover more articles about ordinary people with extraordinary stories.
