… make it this analysis of Google DeepMind’s AlphaGo system by Andrej Karpathy of OpenAI. (Bonus: it’s remarkably easy to read for a piece about deep learning models.) (Also, a disclaimer: OpenAI does research, too, and Karpathy acknowledges a bias.)
The latter aside, I think it’s a fair assessment of some of the shortcomings of AlphaGo as a centerpiece for discussions around how far AI has advanced. As he notes, it’s still a very narrowly applicable system that can play Go and, presumably, very little else. In a recent appearance on the ARCHITECHT Show podcast, Bradford Cross predicted AlphaGo could suffer the same fate as IBM’s Watson—lots of attention and hype, and then a storm of criticism when it can’t deliver in the real world.
However, as Karpathy also explains, Google is not IBM and AlphaGo is not Watson:
“AlphaGo does not generalize to any problem outside of Go, but the people and the underlying neural network components do, and do so much more effectively than in the days of old AI where each demonstration needed repositories of specialized, explicit code.”
So even if AlphaGo itself was a research experiment that turned into a PR opportunity, it does suggest that DeepMind has the people and technologies at its disposal to tackle even bigger applications without always reinventing the wheel. Certainly, DeepMind is working on lots of other research projects, including its famous game-learning systems, that might be even more interesting than AlphaGo.
To the point about Watson, though: DeepMind has also been attempting to take its technologies into the real world, specifically with the National Grid and National Health Service in the United Kingdom. The latter has gotten DeepMind into some hot water over how it went about gathering patient data and what it planned to use that data for. As I’ve noted here several times, legal and regulatory issues have a way of complicating real-world applications—for better and for worse—so at some point Google and DeepMind will need to prove they can master that game, as well.
I couldn’t resist leaving the original clickbait headline. Actually, though, the good news is that there are few if any pure AI stocks out there, so you’re not really investing in an AI company. Google, Apple, Amazon, Facebook, Microsoft …
GitHub published a survey of open source software contributors, which has some interesting results. There’s some good data about questionable behavior and desires, and also about views on the quality of OSS. Security is the one area where respondents say open source clearly has an edge.
If you’re not dogfooding, you’re not trying. I think I used to be a part of the World Community Grid, but I wasn’t sure it was still up and running given, among other things, all the free resources cloud providers offer to scientists.
Called SystemML, it’s essentially a high-level language and compiler. If a job is too intensive to run on the user’s machine, SystemML will offload it to a Spark cluster. On a related note: Apache still needs money!
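For a sense of what that high-level language looks like: DML, SystemML’s R-like scripting language, lets you write linear algebra declaratively while the compiler decides where to run it. Here’s an illustrative sketch (not from the post; variable names and the least-squares example are my own):

```
# Illustrative DML (SystemML's R-like language): least-squares fit.
# The SystemML optimizer decides whether to execute this in memory
# on the user's machine or to offload it to a Spark cluster.
X = read($X)                          # feature matrix
y = read($y)                          # labels
w = solve(t(X) %*% X, t(X) %*% y)    # normal equations
write(w, $out)
```

The point is that the same script runs unchanged at laptop scale or cluster scale; the compiler, not the user, makes the placement decision.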
More than 5 petabytes, actually. “The expert says he discovered 4,487 instances of HDFS-based servers available via public IP addresses and without authentication, which in total exposed over 5,120 TB of data.” Secure your systems, people!
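What makes this kind of scan trivial is HDFS’s WebHDFS REST interface: on an unsecured cluster, the NameNode will serve directory listings to anyone who can reach its web port, no credentials required. A minimal sketch of the URL such a probe would hit (the hostname is hypothetical; 50070 is the classic default NameNode web port):

```python
def webhdfs_list_url(host: str, port: int = 50070, path: str = "/") -> str:
    """Build the WebHDFS URL that lists a directory on an HDFS cluster.

    On a cluster with no authentication configured, a plain HTTP GET to
    this URL returns a JSON directory listing to anyone on the network.
    """
    return f"http://{host}:{port}/webhdfs/v1{path}?op=LISTSTATUS"

# Hypothetical host for illustration; fetch with e.g. requests.get(url, timeout=5)
url = webhdfs_list_url("namenode.example.com")
```

Which is exactly why a public IP plus default settings equals exposed data.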
Speaking of Hadoop and security (actually, the post above is really user error, not Hadoop error), here’s Hadoop’s creator talking about Apache Spot and how companies are using the various pieces of the Hadoop stack.
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris.
Check out the Architecht site at https://architecht.io