Artificial Intelligence: The Debate

August 4, 2017 | By Sasha Ganeles, Planner

Mark Zuckerberg and Elon Musk are both considered visionaries, yet they are often pitted against each other as competitors. Both work closely with artificial intelligence, making them well qualified to weigh in on the subject. But last week, a barbed exchange between the two men sparked the nerdiest gossip mill ever and set off a large-scale debate over whether AI is a technology to fear and control, or a key part of our success moving forward.

The debate took on new meaning (one could almost hear the chorus of ‘I told you so’) when two Facebook AI bots started chatting in a newly devised language. As many people joked about it being the sign that Skynet was becoming a reality, the debate intensified over whether or not AI is to be trusted. Is AI our friend or foe?

The Case for AI

Team Captain: Mark Zuckerberg

Zuckerberg has historically been pro-AI, believing it’s generally a positive advancement for society, and will continue to be. He’s such a believer, in fact, that he’s built his own personal AI for his home (which you can see in this hilarious video).

While the public’s initial reaction to the new language the two Facebook bots developed was quite alarmist, the real reason for shutting them down was neither necessity nor fear. Facebook said it shut the bots down because they weren’t executing their assignment: learning how to communicate with humans. The bots were learning from each other, and so began chatting in a shorthand. They weren’t dangerous, just off-task.

The AI that fascinates sci-fi writers and concerns the public is one that surpasses human abilities. Dubbed ‘Superintelligence,’ this type of intelligence is defined as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ But according to the Fellows of the American Association for Artificial Intelligence, Superintelligence won’t be developed for at least another quarter century, if ever. One noted, “We can write single-purpose programs that can compete with humans… but the world is not neatly compartmentalized into single-problem questions.” And Superintelligent or not, there’s no reason for intelligent machines to want to dominate humanity: the desire to dominate is a product of natural evolution, not something we would build into a machine.

Yes, AI will have an impact on jobs, and yes, the applications to weapon systems are concerning, but the benefits outweigh potential consequences. Scientists believe it’s possible that AI systems could collaborate with people to create a symbiotic Superintelligence. Whether it’s fantastical fashion, transforming medicine, or helping to prevent car accidents, AI has already proven itself a great collaborative tool for humans.

In the US, the technology giants of Silicon Valley have pledged to work together to make sure that any AI tools they develop are safe. One preventative measure keeps AI agents isolated from their environment. And in 2015, leading researchers signed an open letter calling for a ban on autonomous weapons powered by AI. Human beings remain firmly in control of AI, and its potential to transform our lives for the better is far more likely than any hypothetical doomsday scenario.

The Case Against AI

Team Captain: Elon Musk

Shoot-for-the-moon Musk doesn’t try to tell the world that AI shouldn’t be created — it is in the spirit of science and progress, after all — but he does urge bodies of power to closely monitor and regulate it. He’s even admitted that he invests in AI companies to keep an eye on the rate of progress of humanity’s “biggest existential threat.” No amount of good intentions can prevent one from creating something evil by accident, he says. And he’s not alone in his distrust and warnings of AI — he has King of the Nerds Stephen Hawking in his corner.

Team Musk cautions that rushing to embrace AI could mean sacrificing humanity to machine-learning overlords. “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” Musk famously lamented. While Roombas aren’t running wild yet, there is a tangible negative impact of AI we can all see — that it threatens to take away jobs.

The AI revolution is unlike other labor upheavals that replace certain jobs with others (e.g. paper and typewriters replaced by computers). Experts predict that the revolution will indeed affect jobs (mostly low-paying ones) on a massive scale, but the more troubling effect is that the companies developing and adopting AI technology will profit disproportionately. Apple and Uber, for example, would be virtually unstoppable business powerhouses should they no longer need a human labor force. Coupled, these phenomena create a dangerous binary: incredible wealth and power concentrated in few hands, and enormous numbers of people without jobs.

Like Frankenstein’s monster, the primary concern is about controlling the technology we create and, clearly, avoiding apocalyptic chaos. Even though the behavior of the two Facebook bots was not malicious, it did raise the unsettling prospect of bots branching out from what they’ve been taught and thinking for themselves. In reality, what’s to stop bots from anticipating being shut down for communicating in a non-human language, and instead devising a new code that merely appears to satisfy their human listeners?

This is not just a hypothetical. As machine learning becomes more and more popular, we are slowly losing our ability to understand just how these new intelligences work. An autonomous car company recently released a car that learned to drive without following a single explicit programming instruction; instead, it taught itself through a learning algorithm and exposure to human drivers. How can developers understand and limit the choices machines make if they have no idea how a machine reaches its decisions in the first place?

And while there are limited American regulations in place to uphold the integrity of the technology, there are no such safeguards constraining other powerful actors, such as China. There is mounting concern that the Chinese government could use AI to process vast amounts of data and control its population in new and more pervasive ways.

We cannot let the purr of progress lull us into complacency. AI can have a tremendously productive effect on our society, but we should be aware of the implications of our choices before we go full-steam ahead for progress’s sake.


Whether you believe AI is our future or our downfall, there’s no denying that many complicated ethical questions remain to be answered. People are uneasy about the lack of humanity in AI technology, but the real threat isn’t the technology itself; it’s the people and businesses behind it. The companies that create these technologies must be held responsible for their impact, yet many act as if good intentions are enough to absolve them of the consequences. For now, let’s hope that AI doesn’t overtake us while we’re distracted by the war of the nerds.