ARCHITECHT
ARCHITECHT Daily: If you read only one thing about AI today ...
By ARCHITECHT • Issue #89
… make it this analysis of Google DeepMind’s AlphaGo system by Andrej Karpathy of OpenAI. (Bonus: it’s remarkably easy to read for a piece about deep learning models.) (Also, a disclaimer: OpenAI does research, too, and Karpathy acknowledges a bias.)
The latter aside, I think it’s a fair assessment of some of the shortcomings of AlphaGo as a centerpiece for discussions around how far AI has advanced. As he notes, it’s still a very narrowly applicable system that can play Go and, presumably, very little else. In a recent ARCHITECHT Show podcast appearance, guest Bradford Cross predicted AlphaGo could suffer the same fate as IBM’s Watson—lots of attention and hype, and then a storm of criticism when it can’t deliver in the real world.
However, as Karpathy also explains, Google is not IBM and AlphaGo is not Watson:
“AlphaGo does not generalize to any problem outside of Go, but the people and the underlying neural network components do, and do so much more effectively than in the days of old AI where each demonstration needed repositories of specialized, explicit code.”
So even if AlphaGo itself was a research experiment that turned into a PR opportunity, it does suggest that DeepMind has the people and technologies at its disposal to tackle even bigger applications without always reinventing the wheel. Certainly, DeepMind is working on lots of other research projects, including its famous game-learning systems, that might be even more interesting than AlphaGo.
To the point about Watson, though: DeepMind has also been attempting to take its technologies into the real world, specifically with the National Grid and National Health Service in the United Kingdom. The latter has gotten DeepMind into some hot water over how it went about gathering patient data and what it planned to use that data for. As I've noted here several times, legal and regulatory issues have a way of complicating real-world applications, for better and for worse, so at some point Google and DeepMind will need to prove they can master that game as well.

Sponsor: Cloudera
Artificial intelligence
I couldn’t resist leaving the original clickbait headline. Actually, though, the good news is that there are few if any pure AI stocks out there, so you’re not really investing in an AI company. Google, Apple, Amazon, Facebook, Microsoft …
It’s not perfect, but it doesn’t need to be. Mitsubishi wants to apply it to cars, elevators and other things it builds where, presumably, Step 1 is figuring out who’s speaking to the system.
On a related note … They achieved about 71 percent accuracy with an approach that doesn’t involve training data. I can envision something like this being useful in closed captioning for live events.
arxiv.org  •  Share
Some of these ideas seem pretty widely accepted among others in the AI space, but at this point it’s tough to see Numenta cracking the code. 
Antivirus programs actually seem like a good use case for deep learning. Here are some insights into how Sophos is applying it, and why.
It’s called OneBM, and the company’s researchers claim it can hang with top Kaggle competitors in terms of producing accurate predictions.
arxiv.org  •  Share
Sponsor: DigitalOcean
Cloud and infrastructure
Somewhere, data center engineers at Google, Amazon and Microsoft are smiling. Of course, if something did happen in their data centers (and it has), BA would have to answer questions about that.
This is a good writeup on how the CNCF is trying to bring order and some semblance of uniformity to Kubernetes, but is shying away from trying to create actual standards.
GitHub published a survey of open source software contributors, which has some interesting results. There's some good data about questionable behavior and desires, as well as views on the quality of open source software; security is the one area where respondents say open source clearly has an edge.
If you’re not dogfooding, you’re not trying. I think I used to be a part of the World Community Grid, but I wasn’t sure it was still up and running given, among other things, all the free resources cloud providers offer to scientists.
Researchers at Carnegie Mellon developed a machine learning system for automatically tuning databases for maximum performance. So far, it works about as well as having a DBA do it.
Media partner: GeekWire
All things data
Called SystemML, it’s essentially a high-level language and compiler. If a job is too intensive to run on the user’s machine, SystemML will offload it to a Spark cluster. On a related note: Apache still needs money!
More than 5 petabytes, actually. “The expert says he discovered 4,487 instances of HDFS-based servers available via public IP addresses and without authentication, which in total exposed over 5,120 TB of data.” Secure your systems, people!
Speaking of Hadoop and security (actually, the post above is really user error, not Hadoop error), here’s Hadoop’s creator talking about Apache Spot and how companies are using the various pieces of the Hadoop stack.
I tend to agree with this analysis about whether we should apply big data, or AI for that matter, to human resources. Relying too much on data actually seems like it could result in discrimination.
hbr.org  •  Share
ARCHITECHT
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io