"Many of us have tried to educate [Elon] about real vs. imaginary dangers of AI ..."

When I turned on my computer this morning, my news feed was overran with stories about Elon Musk tell
ARCHITECHT
"Many of us have tried to educate [Elon] about real vs. imaginary dangers of AI ..."
By ARCHITECHT • Issue #113 • View online
When I turned on my computer this morning, my news feed was overrun with stories about Elon Musk telling the National Governors Association that artificial intelligence is an existential threat to mankind and that we need more regulation in place to keep it in check. The former claim was at best hypothetical when Musk first started talking about it in 2014, and it remains so today. Heck, even Oxford professor Nick Bostrom, who I think we can partially credit for planting this idea in Musk's head, seems to be dialing back the idea of imminent destruction via AI.
So I was happy to read this piece in WIRED on Monday afternoon, quoting a few AI experts who take issue with Musk's comments. The money quote comes from Pedro Domingos, a respected machine learning researcher and professor at the University of Washington:
“Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.”
The folks quoted in that story point out, fairly, that focusing on these existential-threat scenarios distracts from real, shorter-term scenarios, such as job losses and mass automation of our economy. 
But let’s not ignore the risk of scaring people—especially people, like governors, with lawmaking power—into calling for overregulation of AI. As I’ve written multiple times (including here), regulation is necessary and should be a requirement before certain applications of AI make their way outside the lab. But overregulation, or bad regulation, can stifle innovation, rear its ugly head decades later when the technology has already moved on, and become extremely costly to comply with and enforce (see, for example, international data privacy laws).
I think these things are especially true with AI, which holds so much promise alongside its potential negative side effects. The space is also moving so fast, at least in research settings, that it's difficult to imagine how one would know where to draw regulatory lines or when they've been crossed. Even OpenAI, the nonprofit AI research group that Musk funds, is doing some remarkable work that might give people pause even if it is done in the open.
In the United States, a patchwork of different state laws governing AI in different ways would also be very problematic for a technology that isn’t easily confined by geographic borders.
And certainly Tesla, arguably Musk’s flagship company, will benefit greatly from advances in AI. As would, technically, the people driving Tesla vehicles and sharing the road with them. Smart regulation to foster innovation and safety in autonomous vehicles is a good thing. Dumb regulation spawned from fear of killer robots is not. 

Sponsor: Cloudera
Highlights from the latest ARCHITECHT Show
Artificial intelligence
Sponsor: DigitalOcean
Cloud and infrastructure
Sponsor: CircleCI
All things data
Sponsor: Bonsai

ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.

Check out the Architecht site at https://architecht.io
