ARCHITECHT Daily: Can black-box AI outrun the reality of lawyers, regulators and real-life?

By ARCHITECHT • Issue #61
The thing with computers is that they’re not actually intelligent. I think most everyone can agree there’s not a robot in existence that, even equipped with the most advanced AI models around, could navigate a day in the real world—at least not anytime in the foreseeable future. The walking, the driving, the judgments, the interactions, the reading of people, the unwritten rules, the actual rules, the times when it’s probably OK to bend the rules … our brains and our bodies do a lot.
I say all of this to introduce two thought-provoking items from last week:
On the latter, I disagree with the notion that our computers understand the world. But I agree that they’re able to identify patterns and connections beyond the scale of what humans can presently do, at least in an even remotely comparable timeframe. And I think it’s fair to suggest that the next big push in computer science is to try cracking the black boxes of these models so that we can get a better sense of why they do what they do.
Better understanding AI models could actually prove very useful to scientists, but let’s focus on the economy. As the Techonomy post argues, while AI might replace some jobs outright, there are many places where it will just serve as a very powerful tool. The better people understand machines, the better we’ll be able to work alongside them and use them to augment our judgment, and the better we’ll be able to apply AI effectively to new areas and even integrate it into the social fabric.
I would also argue that AI left as a black box runs the risk of hitting a combined political, legal and regulatory wall at some point in the not-too-distant future—a wall that could fundamentally affect its continued adoption and advancement. The buck has to stop somewhere, and it’s not going to be at the algorithm. Whether we’re talking about cars, credit scores, medical diagnoses, business decisions or even just Amazon Alexa, people and organizations are going to be held responsible when something goes wrong.
Mitigating that risk involves understanding what’s going on inside your AI system so you know when and how to use it safely and effectively. And so you know how to answer when someone wielding a congressional summons or a civil complaint comes calling. Relying on machines’ “intelligence” might be sufficient for voice search and chatbots, but I can’t imagine it will suffice when lives and, frankly, money are on the line.
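To make that a little more concrete, here’s a rough illustration of what peeking inside a model can look like in practice. It’s my own sketch, not something from the pieces above: it uses scikit-learn’s permutation importance on a stand-in dataset and model to see which inputs a trained classifier actually leans on when making predictions.

# A minimal sketch of one way to probe a "black box" model: permutation
# feature importance with scikit-learn. The dataset and model below are
# stand-ins chosen for illustration, not anything referenced above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a reasonably opaque model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")

It won’t tell you why a model made any particular call, but it’s the kind of basic accounting you’d want in hand before a regulator or a plaintiff’s lawyer comes calling.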

Sponsor: Cloudera
Artificial intelligence
VentureBeat has a 3-part series profiling startups participating in Nvidia’s Inception competition. If you’re interested in finding out what 14 of them are up to, check out these posts:
Cloud and infrastructure
Sponsor: Marshal.io
All things data
ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io
Carefully curated by ARCHITECHT with Revue.