COMMENTARY: Nothing to see here, just tech gurus debating the end of mankind

Tesla CEO Elon Musk talks about the development of the world's biggest lithium-ion battery in Adelaide, Australia, Friday, July 7, 2017. Ben Macmahon/AAP Image via AP

When two of the biggest names in technology begin sending passive-aggressive tweets about each other, it’s worth a chuckle. When the issue at stake could potentially involve the survival of our species, it’s worth noticing.

The gentlemen in question are Elon Musk, of Tesla, SpaceX and a smattering of other high-tech ventures, and Mark Zuckerberg, creator of Facebook. The issue at hand is the creation of true artificial intelligence (AI) — a machine-based form of intelligence that could rival, and surpass, mankind.

Some, with their minds filled with visions of Cylons, Terminators and the Borg, consider this inherently dangerous. Zuckerberg, for his part, recently dismissed such talk as alarmist, even irresponsible.

“I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible,” he said during a Facebook Live interview.


Musk, who has been outspoken in his warnings about the possible danger of AI, was having none of that. “I’ve talked to Mark about this,” he tweeted. “His understanding of the subject is limited.” Ouch.

WATCH: Musk issues warning on artificial intelligence; call for regulations

The issue is not, per se, whether artificial intelligence is good or bad. Some artificial intelligence is around us already — computers are programmed to predict our needs and meet them with minimal human oversight. Sometimes it’s as simple as a video or music streaming service learning what your tastes are and then serving up options likely to suit you; other applications involve incredibly cutting-edge research, where millions of datasets are organized and sorted by computer with only the most relevant examples singled out for human review. Use of this kind of artificial intelligence is uncontroversial, is already a reality and is something we’ll be seeing more of as time goes on. Musk’s own companies will benefit from it. This isn’t the problem.


The problem Musk (and others) worry about is this: what happens when an AI can think for itself outside the confines of narrowly written software directives, and take action in line with its own self-determined priorities? Perhaps most alarmingly, what happens when a computer becomes better at writing new programming than we are?

Theorists refer to that moment as a technological singularity. Programming is something computers should be better at than we are, really: they have perfect memory, the ability to compare countless lines of code in real time and flawless mathematical precision. If human beings are able to develop an artificial intelligence that's as good as we are at coding, the thought is that the computer, with its inherent intellectual advantages, would be able to rapidly outpace us from that moment on.


How rapidly? We have no idea, but potentially, so rapidly that it would surpass any previously held notion of what a computer would be capable of doing, and perhaps in the virtual blink of an eye. We take years to develop new operating systems and coding languages. Imagine what would happen if an AI could do so in seconds, every few seconds, forever. Every generation of computer code, instead of taking years to roll out, could take minutes. There could be multiple technological revolutions and world-changing breakthroughs over the course of a lunch, and the growth would be exponential, as each improved AI went out and created an even more improved AI.


We don’t know if any of this will happen. But we also don’t know what would happen if it did. We don’t know what the upper limit of such an AI could be, or even if there would be an upper limit. We can’t even begin to imagine what such an AI would value or how it would act. Would it be benign, a friendly partner for humanity? Would it go full Skynet, straight out of the Terminator universe, and decide that mankind is a rival for the Earth’s resources, hijacking our own nuclear weapons to use against us? Either extreme is possible, as well as everything in between.

That’s the problem here: not only do we not know what might happen, we have no idea what reasonable assumptions might look like. There’s simply no way to responsibly forecast how an AI would or could behave. That’s why I come down on the Musk side of this equation: when dealing with something of unknowable power and unknowable intent — not just unknown, but unknowable — some precautions don’t seem unreasonable.


Still, Zuckerberg has a point, too. AI could have a hugely positive impact on our lives. So by all means, let’s pursue it. But let’s make sure a guy like Musk is overseeing the project, ensuring we tread carefully as we move into this exciting new era of technology. We needn’t fear AI, but we shouldn’t just stumble blindly into it, either. Creating a new order of life is literally wielding godlike powers. A bit of caution and forethought isn’t too much to ask for.

Matt Gurney is host of The Morning Show on Toronto’s Talk Radio AM640 and a columnist for Global News.
