How to Survive the Robot Takeover

AI is coming for your job — and only a humanities degree can save you.

I’d called Norman to discuss the recent backlash against the digital revolution, fueled by everything from social-media filter bubbles and Russian election hacking to dark predictions that artificial intelligence could destroy the world. In addition to writing influential books, including “The Design of Everyday Things,” he pioneered “human-centered design” and brought it to his work at Apple Computer and Hewlett-Packard, among other organizations. Who better than Norman, the menschy grandfather of humanistic design, to make sense of the digital world’s ever-growing list of maladies?

The core issue, he says, is that companies often lead with the technology, ignoring the humans it’s supposed to serve. The resulting products “make people do things they’re bad at. They’re bad at repetitive tasks. They’re bad at paying attention when nothing is happening.”

Norman’s work centers on user experience, not so much technology’s broad social effects. But his principle of putting people first applies to all the ways technology meets human needs — or doesn’t.

He cites passenger monitoring of self-driving cars, an important but monotonous task that becomes hazardous when the supervising passenger’s attention drifts from the road — a factor in last year’s fatal crash of a partly autonomous Tesla.

The Tesla tragedy symbolized the danger, in an increasingly technologized world, of miscalibrating human responsibilities. In this case, a person was asked to perform a task a machine can arguably do better. In other situations, human involvement is essential. How best to integrate people and machines in pursuit of any given goal is emerging as the crucial question as AI rapidly expands into every level of industry and society.

About half of all the activities that the world’s workforce is currently paid to do could be automated with existing technologies, according to a McKinsey Global Institute report released earlier this year. A survey of machine-learning experts conducted by scholars from Oxford and Yale concluded there’s a 50-50 chance that machines will “accomplish every task better and more cheaply than human workers” within the next 45 years.

Better. More cheaply. Every task.

And this isn’t just about robots conquering assembly lines. One of the study’s more startling predictions: By 2049, AI will write its first New York Times bestseller. The World Economic Forum turned the study into a video titled “This is when robots will overtake humans.”

One solace for the human race: Predictions about technology’s future have a terrible track record. The popular conversation about AI is driven more by hype than scientific reality. No serious computer scientist believes humans are about to be replaced wholesale. The real question right now is whether we’re building machines that treat us like humans, and accentuate our unique strengths as a species, or ones that punish us when we don’t act like machines. This is a challenge that can’t be met with a simple app. It’s more complicated and interesting — just like us humans.

 

The media is one area where we see this process playing out. “I hadn’t even thought of the term ‘humanistic technology,’” says Joey Marburger, director of product for the Washington Post, when I called him about it. “But this is what I do. My job and my team’s job is to produce and design humanistic technology for news.”

In that capacity, he has played a key role in the paper’s renaissance. When Jeff Bezos bought the company in 2013, Marburger had been working there for three years, developing products for an organization with scant resources and little tolerance for risk. “If we had to do something like redesign our iPad app, it would take like eight months to really get consensus on it. And it had to succeed because if it didn’t work, it would be a big impact on the business. … We were basically strapped to product mediocrity.”

With his deep pockets and customer-first philosophy, Bezos turned this around. Marburger’s formerly cautious team became a hard-charging R&D shop, free to try out wild new ideas.

“Jeff came here and it was, ‘Why are we not experimenting more? Yes, hire more journalists, we need that.’ But his laser focus was like, ‘Joey, we’re going to build products.’”

From the start, the human factor was central. In classic Amazon style, even the language changed, with the word “users” jettisoned in favor of “readers.” “As soon as you say ‘user,’ it takes the human piece out of it,” says Marburger.

In the final month before the presidential election, the product team conducted an experiment. They created a Facebook Messenger bot called Feels that allowed readers to track their emotional response to the election and compare it to that of others. Each day for 30 days, the bot asked the same simple question: “How is the election making you feel today?” Participants could choose one of five emojis ranging from happy to angry. They could also elaborate on their choice by typing a sentence or two.

Every morning participants would receive a message with a graph of all the reactions registered by Feels the previous day. After the election was over, each was sent a personalized report tracking their own emotional journey. Thousands of people wound up participating, a small number for a digital newspaper that reaches some 100 million visitors per month. But those who joined engaged deeply: participation grew steadily, one-third of participants used the bot every day, and the vast majority stayed with Feels to the end.
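The mechanics of such a bot are simple enough to sketch. The Python below is a hypothetical reconstruction, not the Post’s code: it records each reader’s daily emoji pick and tallies one day’s responses into the percentage breakdown sent back the next morning. The article names only the endpoint moods, happy and angry, so the middle three labels here are guesses.

```python
from collections import Counter
from datetime import date

# Feels offered five choices from happy to angry; only the endpoints are
# named in the article, so the middle three labels are assumptions.
MOODS = ["happy", "hopeful", "neutral", "worried", "angry"]

# responses[day][participant_id] = the mood that reader picked that day
responses: dict[date, dict[str, str]] = {}

def record_response(day: date, participant_id: str, mood: str) -> None:
    """Store one reader's answer to 'How is the election making you feel today?'"""
    if mood not in MOODS:
        raise ValueError(f"unknown mood: {mood}")
    responses.setdefault(day, {})[participant_id] = mood

def daily_summary(day: date) -> dict[str, float]:
    """Tally one day's picks into the percentage breakdown shared the next morning."""
    counts = Counter(responses.get(day, {}).values())
    total = sum(counts.values()) or 1  # avoid division by zero on quiet days
    return {mood: 100.0 * counts[mood] / total for mood in MOODS}

record_response(date(2016, 10, 20), "reader-123", "angry")
print(daily_summary(date(2016, 10, 20)))  # angry: 100.0, all others: 0.0
```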

“The emotional component of it is what made it the largest bot we’ve experimented with,” Marburger says. “We had a human-curation piece on our side [to select reader comments that would be shared with all participants], and we were asking a really human question.” In an election that often felt dehumanizing, it was a genuine human connection.

Meanwhile, the Post has also been using AI for a very different human-centered purpose: to liberate reporters from mindless work. In 2016 the Post created a “robot reporter” with the byline “Heliograf.” In its first year on the job, it produced 850 articles on topics ranging from election results to sports. It sounds like something out of a sci-fi movie — a robot scribbling into a notebook on the sidelines of a high school football game — but the reality is more mundane: Computers turn wire-service reports and other machine-readable data into basic stories.
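“Basic stories” here means template journalism: structured data slotted into prewritten sentence patterns, with simple rules choosing between variants. The Post hasn’t published Heliograf’s code, so the Python below is only a minimal sketch of the general technique, with invented field names and templates.

```python
# A minimal, illustrative sketch of template-driven story generation, the
# general technique behind systems like Heliograf. Field names and templates
# are invented for the example.
def election_blurb(result: dict) -> str:
    """Turn one machine-readable race result into a sentence of copy."""
    margin = abs(result["winner_pct"] - result["loser_pct"])
    verb = "narrowly defeated" if margin < 5 else "defeated"
    return (
        f"{result['winner']} ({result['winner_party']}) {verb} "
        f"{result['loser']} ({result['loser_party']}) in {result['race']}, "
        f"{result['winner_pct']:.1f} percent to {result['loser_pct']:.1f} percent."
    )

print(election_blurb({
    "race": "the 10th District",
    "winner": "Jane Doe", "winner_party": "D", "winner_pct": 52.3,
    "loser": "John Roe", "loser_party": "R", "loser_pct": 47.7,
}))
# Jane Doe (D) narrowly defeated John Roe (R) in the 10th District,
# 52.3 percent to 47.7 percent.
```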

The Associated Press has been using AI to churn out business coverage of financial earnings, as well as some sports stories. Errors have dropped while productivity has soared (machines don’t take coffee breaks). Six months after the project’s launch, the output of these business stories had increased more than tenfold, from 300 companies covered to 4,000, according to the AP’s Francesco Marconi. And there’s another benefit: Turning these relatively rote kinds of stories over to AI allows human reporters to focus on more substantial work. As a neophyte reporter for the Post’s business section back in the 1990s, I was sometimes assigned to write bone-dry stories about mortgage-rate fluctuations, a dreary task that I can attest is a perfect assignment for robots.

With both Heliograf and Feels, AI was brought in to redefine the work that newspapers have always done, both internally (reporting stories) and externally (making an emotional connection with readers). In one case, a narrow category of human work was shifted to machines; in the other, AI was used to do something new. Experimentation is the path to survival, and the Post and the AP are not alone in this work. The New York Times and other establishment news outlets have product innovation units analogous to Marburger’s, as do new media outlets such as BuzzFeed and Vox.

 

The machine-human debate is often framed as a zero-sum game, especially in headline-grabbing studies like the Oxford-Yale one predicting that machines will triumph over humans in the workplace. It’s worth noting that the study was based on a survey of people working in the AI field, who have an incentive to believe in the potential of machines to do almost anything. Which job did they forecast would be the very last to be automated? AI researcher, of course.

Others paint a more nuanced picture. James Manyika, director of the McKinsey Global Institute and an expert on AI and employment, says his research indicates that the jobs most likely to be handed completely over to machines are those that are easiest to automate, especially in “the middle pay-range” where replacing humans can save the most money. He cites bookkeeping and truck driving as leading examples of these highly-likely-to-be-automated jobs.

The far more significant shift, however, will be in the nature of the work people do. All jobs include a variety of tasks, and it’s likely that AI will be able to do some but not all of them. In one McKinsey study, after breaking jobs down into their constituent tasks, Manyika and his colleagues found that about 50 percent of current jobs are about 30 percent automatable — that is, almost a third of the work now performed within those jobs could be handled by machines. Partial, task-specific automation of jobs will be the most common scenario in the coming decades, Manyika predicts. “This idea that jobs will change is the bigger phenomenon. All jobs have the potential to be at least partly automatable.”

As the machines get smarter, the range of tasks that can be automated will expand. “Machines aren’t simply following carefully codified instructions provided by human programmers,” write Andrew McAfee and Erik Brynjolfsson in their book “Machine, Platform, Crowd,” “they’re learning how to solve problems on their own.” As a result, “companies need to rethink the balance between minds and machines.” This includes jobs that require higher-level thinking formerly unique to humans.

In an interview, I asked McAfee, principal research scientist at MIT’s Sloan School of Management, how this rebalancing would work. “In most domains, I think the new balance is going to be giving more consequential work to the machines and less to the minds,” he said. An example is medical diagnostics, at which machines turn out to be unusually good. According to McAfee, this is an area in which people need to get over their squeamishness about trusting AI to solve significant human problems. “Most of us would feel that an automated diagnosis is inferior to what we get from this Marcus Welby stereotype of a compassionate, caring, incredibly good doctor that we have. It’s an unreasonable stereotype, but we walk around with it. And the thought of turning that over to some cold piece of technology is uncomfortable for a lot of people.”

This doesn’t mean humans will no longer be needed in medicine. Instead, we’ll see a new division of tasks. “The AI’s job will be to diagnose the patient,” McAfee and Brynjolfsson write. “The person’s job will be to understand and communicate the diagnosis, and to coach the patient through treatment.” Similar shifts will happen in many other kinds of work, as human-machine collaboration becomes the norm.

The Post tried another human-machine experiment during the election: a daily “flash briefing” on the campaign, offered as audio through Amazon’s Alexa devices. When the briefing launched in March 2016, it was read aloud by a robotic text-to-speech voice, and in its first month it drew listens in the tens of thousands. In May, when the team replaced the robot with the voice of a human political reporter, the number of listens rose by a factor of ten. By October it had reached hundreds of thousands of listens a month.

Marburger doesn’t know how much the human voice contributed to this rise. Alexa was selling well at the time, adding new potential listeners every week, and the election was a huge draw across all media. Still, his hunch is that the voice switch, which made for a far better listening experience, was partly behind the success of the briefings.

He thinks more humanistic thinking is needed throughout the media space. “I feel like we’ve lost sight of the human in the middle of it. … On the highest level, the more and more that tech is integrated into our lives — your phone is basically an extension of your body — the more it matters.”

 

Marburger traces his worldview back to his education. He went to Purdue to study computer science, but after the first year concluded it was not what he wanted to do with his life. He created his own major combining creative and technical writing, and for the latter studied human-computer interaction. He also took journalism classes and “got the bug” for the news business.

“Technology tends to dehumanize,” writes Norman in his 1992 book, “Turn Signals Are the Facial Expressions of Automobiles.” “This is not a necessary part of technology, but it relentlessly encroaches upon us unless we exert caution. Technologists tend to create what technology makes possible without full regard for the impact on human society. Moreover, technologists are experts at the mechanics of their technology, but often are ignorant of and sometimes even disinterested in social concerns.”

One antidote is the kind of hybrid education Marburger embodies, unusual at a time when pure STEM is all the rage. With AI on the rise and abundant data showing technology is where the jobs are, young people are shying away from the humanities. Over the last decade, the share of college students majoring in engineering and science has risen relative to all other fields, while the humanities have seen a decline. Some prominent Silicon Valley figures have encouraged this trend, arguing that a liberal arts education leaves students without the skills crucial to success in a tech-driven age.

Now, though, a wave of thinkers and theorists is arguing that the broad, critical thinking taught by the humanities is exactly what society — and the tech and media industries, in particular — is crying out for.

At Stanford University, “fuzzies” is a nickname for humanities and social science majors, while engineering and hard sciences majors are called “techies,” explains Scott Hartley in his new book, “The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital World.” Hartley’s work as a venture capitalist has convinced him that “the timeless questions of the liberal arts, and their insights into human needs and desires, have become essential requirements in the development of our technological instruments.”

In fact, more than a few people have succeeded in digital innovation not in spite of their humanities backgrounds, but because they had a unique perspective on what technology could do. Hartley’s book offers dozens of examples, including Stewart Butterfield, a philosophy major who started the popular messaging app Slack, and Katrina Lake, who majored in economics and founded the fashion retailer Stitch Fix.

If Hartley’s right, the liberal-arts humanists he writes about represent the next wave of technology innovators. “The greatest threat to humanity is not robots trying to take over,” he told me. “It’s humans forgetting how to be human.”

“What I’m terrified about is everybody believes STEM education is the answer,” says John Seely Brown, the revered technology thinker who ran the Xerox PARC research center and co-wrote, with Ann Pendleton-Jullian, “Design Unbound: Designing for Emergence in a White Water World.” “No, STEM education is the problem.” In a world dominated by digital networks, everyone is navigating a constantly shifting information environment, a challenge Brown likens to whitewater rafting. What’s needed, he told me, is “a new alloy” of skills that fuses the arts and the humanities, on the one hand, with science and technology on the other.

“We’re trying to create the da Vincis of tomorrow,” Brown says. “Da Vinci knew how to operate in both spheres.”

In the media business, this means bringing the two sides together in what is ultimately the most human of crafts: storytelling. Even as Marburger’s product development team has made the workings of the Post’s digital platform more humanistic, the paper’s editorial staff, energized by editor Marty Baron, has ramped up its storytelling in the service of hard-hitting journalism. Each side of the effort complements and supports the other. Remove either — bots or reportage — and the whole collapses.

At a television industry conference last year, FX Networks CEO John Landgraf dismissed the voguish idea that technology will soon drive story. “There’s a whole thing going on right now with Netflix and others in Silicon Valley saying algorithms are going to rule all and make decisions and I say, ‘Posh.’”

The magic happens, Landgraf continued, when creative people get together in a room. “You listen really, really carefully on a human level to what somebody tells you, and you sit and you dialogue and you think about stories and you watch film and television — and you pick the best stories and the best people you possibly can.” Given humanity’s long record of constructing spectacular stories, it’s remarkable this even needs saying. But as Landgraf has suggested, this is a genuine power struggle: “I want the humans to be able to hold their own against the emerging strength of the machines.”

For the moment, storytelling is one field where people remain firmly in the driver’s seat. Indeed, this might be a case where the “rebalancing” between technology and humans happens in the opposite direction: we blaze new trails in storytelling, sparked here and there by data from our AI amanuenses.

In their book, McAfee and Brynjolfsson recount one abject failure of machine learning. An advanced computer was “fed on Jane Austen novels,” then asked to write its own fiction. The result was drivel. “To your eldest say when I tried to be at the first of the praise, and all this has been so careless in riding to Mr. Crawford,” the algorithm sputtered, “but have you deserved far to scarcely be before, and I am sure I have no high word, ma’am, I am sure we did not know that the music is satisfied with Mr. Bertram’s mind.”
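The failure is easy to reproduce in miniature. A character-level language model learns which characters tend to follow which in a training corpus, then samples from those statistics to produce new text. The sketch below uses a simple Markov chain rather than whatever model McAfee and Brynjolfsson’s source used (the book doesn’t specify, and the filename here is a placeholder), but it breaks down the same way: fluent at the level of phrases, nonsense at the level of meaning.

```python
import random
from collections import defaultdict

def train(text: str, order: int = 4) -> dict:
    """Count which character follows each `order`-character context in the text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model: dict, seed: str, length: int = 300) -> str:
    """Sample one character at a time from the learned statistics."""
    order = len(seed)  # the seed must be as long as the training context
    out = seed
    for _ in range(length):
        continuations = model.get(out[-order:])
        if not continuations:  # dead end: this context never appeared in training
            break
        out += random.choice(continuations)
    return out

# Train on any plain-text Austen novel (e.g. from Project Gutenberg) and the
# output is locally plausible but globally incoherent, much like the passage
# quoted above.
with open("pride_and_prejudice.txt", encoding="utf-8") as f:
    model = train(f.read())
print(generate(model, seed="To b"))
```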

Given that AI is expected to have that New York Times bestseller by 2049, maybe it’s time to bring in the humans.