"Many of us have tried to educate [Elon] about real vs. imaginary dangers of AI ..."

When I turned on my computer this morning, my news feed was overran with stories about Elon Musk tell
ARCHITECHT
"Many of us have tried to educate [Elon] about real vs. imaginary dangers of AI ..."
By ARCHITECHT • Issue #113
When I turned on my computer this morning, my news feed was overrun with stories about Elon Musk telling the National Governors Association that artificial intelligence is an existential threat to mankind and that we need more regulation in place to keep it in check. The former claim was at best hypothetical when Musk first started talking about it in 2014, and it remains so today. Heck, even Oxford professor Nick Bostrom, who I think we can partially credit for planting this idea in Musk’s head, seems to be dialing back the idea of imminent destruction via AI.
So I was happy to read this piece in WIRED on Monday afternoon, quoting a few AI experts who take issue with Musk’s comments. The money quote comes from Pedro Domingos, a respected machine learning researcher and professor at the University of Washington:
“Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.”
The folks quoted in that story point out, fairly, that focusing on these existential-threat scenarios distracts from real, shorter-term scenarios, such as job losses and mass automation of our economy. 
But let’s not ignore the risk of scaring people—especially people, like governors, with lawmaking power—into calling for overregulation of AI. As I’ve written multiple times (including here), regulation is necessary and should be a requirement before certain applications of AI make their way outside the lab. But overregulation, or bad regulation, can stifle innovation, rear its ugly head decades later when the technology has already moved on, and become extremely costly to comply with and enforce (see, for example, international data privacy laws).
I think these things are especially true with AI, which has so much promise to offset its potentially negative side effects. The space is also moving so fast, at least in research settings, that it’s difficult to imagine how one would know where to draw regulatory lines or when they’ve been crossed. Even OpenAI, the nonprofit AI research group that Musk funds, is doing some remarkable work that might give people pause even though it is done in the open.
In the United States, a patchwork of different state laws governing AI in different ways would also be very problematic for a technology that isn’t easily confined by geographic borders.
And certainly Tesla, arguably Musk’s flagship company, will benefit greatly from advances in AI. As would, technically, the people driving Tesla vehicles and sharing the road with them. Smart regulation to foster innovation and safety in autonomous vehicles is a good thing. Dumb regulation spawned from fear of killer robots is not. 

Sponsor: Cloudera
Highlights from the latest ARCHITECHT Show
Highlights from the latest episode of the ARCHITECHT Show podcast, where Buoyant co-founder and CEO William Morgan discusses his company’s flagship linkerd “service mesh” tool and the lessons he learned from scaling Twitter.
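As a conceptual aside (my illustration, not anything from the episode or from Buoyant): a service mesh like linkerd pulls concerns such as service discovery, load balancing and retries out of every application and into a shared proxy layer. Here’s a toy Python sketch of the kind of per-request logic that gets absorbed; the registry and retry policy are hypothetical stand-ins:

    # Toy sketch (not linkerd code) of per-request concerns a service mesh
    # absorbs: service discovery, naive load balancing, and retries.
    import random
    import urllib.request

    # In a real mesh this registry comes from a discovery system
    # (DNS, ZooKeeper, Kubernetes); here it's hard-coded for illustration.
    REGISTRY = {"users": ["10.0.0.5:8080", "10.0.0.6:8080"]}

    def call_service(name, path, retries=2):
        """Pick an instance at random and retry on failure."""
        last_err = None
        for _ in range(retries + 1):
            host = random.choice(REGISTRY[name])  # naive load balancing
            try:
                with urllib.request.urlopen("http://%s%s" % (host, path), timeout=1.0) as resp:
                    return resp.read()
            except OSError as err:
                last_err = err  # try again, possibly against another instance
        raise RuntimeError("all retries to %s failed" % name)

The point of the mesh is that none of this logic lives in your application code anymore; the proxy handles it uniformly for every service.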
Artificial intelligence
They range from chipmakers to a mobile phone for kids. It’s easy to see why Amazon is excited to seed the ground with startups that want to turn the Alexa assistant into something much more than a speaker.
According to Crunchbase, investors have pumped $3.6 billion into AI companies so far this year, although more than $1.4 billion of that went to driverless-car startup Argo and a Chinese company called SenseTime.
This is a good take on both sides of an interesting debate. On the one hand, letting this evolution play out could help optimize AI systems. On the other hand, solving the black-box problem is already challenging enough.
This is exactly what it sounds like, although it’s focused on IBM’s TrueNorth chips. There are a lot more efforts underway inside universities, startups and large companies.
Essentially, the argument goes, by promising not to store and analyze certain user interactions, Apple doesn’t generate enough data to do AI right. It makes sense—I wrote a similar take back in 2014—but advances in AI hardware and algorithms should help narrow this gap to some degree. The tough question for consumers will be determining how much intelligence and privacy they really want or need.
This is a good podcast interview with Parc (formerly Xerox Parc) CEO Tolga Kurtoglu. That company—and probably most of us—wants AI that can have two-way conversations with users so they trust its decisions.
The headline of this blog post (and book excerpt) from Keras creator Francois Chollet pretty much speaks for itself. This is a really good primer on the subject.
Google announced Facets, which is an open source tool to help engineers understand their datasets, as part of its PAIR initiative last week. Here’s a blog post with more details.
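If you want to kick the tires, here’s a minimal sketch of the Overview half of the tool, assuming the facets-overview Python package from the PAIR-code/facets repo (the DataFrames and column names are made up for illustration):

    # Minimal sketch: compute Facets Overview statistics for two dataset splits.
    # Assumes the facets-overview package (from PAIR-code/facets) is installed.
    import base64
    import pandas as pd
    from facets_overview.generic_feature_statistics_generator import \
        GenericFeatureStatisticsGenerator

    train_df = pd.DataFrame({"age": [22, 38, 26], "sex": ["male", "female", "male"]})
    test_df = pd.DataFrame({"age": [35, 54], "sex": ["male", "female"]})

    # Per-feature summary statistics (counts, missing values, distributions)
    # computed for each named split.
    proto = GenericFeatureStatisticsGenerator().ProtoFromDataFrames(
        [{"name": "train", "table": train_df},
         {"name": "test", "table": test_df}])

    # The base64-encoded proto is what the facets-overview web component consumes.
    protostr = base64.b64encode(proto.SerializeToString()).decode("utf-8")

From there, the stats get handed to the visualization component in a notebook or web page; comparing train and test splits side by side is the quickest way to spot skew between them.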
It seems plausible that AI could help judge the quality of chatbot conversations. But at this point I think the goalposts have moved for the Turing test, because it’s easy enough to believe a machine is a dumb human on Twitter.
This is really high-level, but makes some good points around applications. Overlooked in talk about national security and cybersecurity are benefits such as removing the friction from everyday citizen interactions.
Sponsor: DigitalOcean
Cloud and infrastructure
Rumor has it that AWS and VMware are working on hybrid cloud software that would help bridge the gap between the public cloud and private data centers. This isn’t surprising, because (1) I’m pretty certain AWS and VMware have talked about this before; (2) VMware has announced similar partnerships with other cloud companies, including Google and Salesforce, in the past; and (3) Microsoft and Google recently announced their own hybrid cloud plans/partnerships. But if this isn’t based around open standards (Google and Nutanix partnered around Kubernetes, for example) any AWS-VMware project is going to pose lock-in questions.
… it dropped a questionable term around patent infringement lawsuits from its Terms of Service, which is what the headline links to. AWS also rolled out new GPU instances, which, for what it’s worth, are not the new high-end data center GPUs Nvidia announced earlier this year.
New mainframes aren’t usually too interesting, and IBM takes a lot of flak for over-marketing its products. But IBM says the new System Z can process 12 billion encrypted transactions per day, and it’s building out a distributed network of them to handle financial transactions.
More reporting on the fast-moving plans from Google, IBM and others around commercial quantum computers. They, too, would likely be delivered via the cloud and provide big advances in data security. For more on that, check out this story from IEEE Spectrum, as well.
The why is pretty obvious, but the how is a little more complicated. Here’s some info on that part of it.
If you’re into network architectures, there’s a lot of good stuff in this interview with Google Fellow Amin Vahdat. It’s also notable from a cloud-provider POV, as Google loves playing up its network as a competitive advantage.
Speaking of Google, here’s a case study of how Silicon Therapeutics does R&D using Google’s cloud. We’ve seen similar case studies from AWS and Microsoft in the past, but they’re always a good reminder of how fast and cheap work that used to require a supercomputer has become.
Site reliability engineering is becoming a very big deal, which, like all things in tech, means there’s competition to hire qualified people. LinkedIn is just coming out and explaining what it wants and what its interview process entails.
Here’s a little post from James Governor at Redmonk about how quickly software goes from the next big thing to falling out of favor to legacy. This can be good for developers in the short term, but bad for institutional knowledge.
Sponsor: CircleCI
All things data
Another well-funded security startup, this one focused on providing analysts with data and information that it calls the Security Knowledge Graph. 
Another Redmonk post, this one from Stephen O'Grady. It’s a smart analysis of why companies like MongoDB and Cloudera not only move to the cloud, but also expand into new capabilities that make them the focal point of application development or data management.
There’s no evidence this bug, which existed in lots of implementations, was exploited, but it’s noteworthy given how prevalent Kerberos is, especially in Hadoop and big data technologies.
We’ve seen this kind of story a lot recently, for obvious reasons. Policy aside, it’s sad to think we could pull back on data collection just as the tools to analyze it are really coming into their own.
This research project is a good example of how to collect data in unconventional ways, and it could be super-valuable if it were productized and actually caught on. I still get lost every time I have to find something in the Venetian in Las Vegas.
This research from IBM on a system called Foresight reminds me, in theory, of what we’ve seen from companies like Ayasdi and BeyondCore (which Salesforce bought in 2016). Machines analyze the data, then analysts can dig into things that might be interesting or useful.
Sponsor: Bonsai
ARCHITECHT
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io