ARCHITECHT Daily: AI is all about your data. And maybe your processor.

By ARCHITECHT • Issue #54 • View online
If you recall the early days of big data, there was a lot of talk about quantity versus quality. On the one hand, you had Peter Norvig and Google talking about the unreasonable effectiveness of data—which makes a lot of sense at Google’s scale, especially given the limited types of data and scope of things it was trying to analyze at the time. On the other hand, you had pragmatists reiterating the old mantra of “garbage in, garbage out.”
Untold millions of MapReduce jobs later, I think most people can agree that both things are true. It’s true for any sort of data science or predictive analytics process, and it’s especially true as more people and organizations start experimenting with artificial intelligence, and deep learning specifically.
I came across three completely different types of content—a blog post, a research paper, and a Quora answer—this week that help drive this point home. Enjoy:
Best practices for applying deep learning to novel applications: This is a really informative paper, from a U.S. Navy researcher, explaining to novices how to get started with deep learning.
What are the best sources to study machine learning and artificial intelligence?: The top answer from Kaggle co-founder Ben Hamner is great. Among his lessons: “Good problems to start with have several criteria … [including] Data is readily available that’s well-suited to addressing the problem (otherwise the bulk of your time will go here).”
Is your data holding you back? This is a really good overview on data gaps from Silicon Valley Data Science—essentially, the process of figuring out what data you have, what it’s good for, and what needs to be done to make it fit for the purpose you actually have in mind.
Oh, and it turns out that when you’re doing deep learning, having the right processors in place can also make a very big difference. Google says it is getting remarkable performance and efficiency improvements from its custom-built Tensor Processing Units. In fact, its TPUs are so efficient—and so widely used, like every time someone does a voice search on their phone—that they’ve saved the company from having to build additional data centers.
If you’re wondering why Intel is investing so many resources into AI, look no further than those TPUs at Google. Beyond wanting to own on-device processing for things like computer vision, I think Intel is also banking on the possibility that GPUs might not be the long-term answer for mainstream AI workloads (even though they are today and, by the way, IBM’s cloud now offers the newest, most powerful Nvidia GPUs as a service). If Google’s TPUs are 15-30 times faster than GPUs and CPUs, and 30-80 times more efficient, you can bet other cloud providers, web companies, and large enterprises doing AI are going to want that type of performance for themselves.

If you enjoy the newsletter, please help spread the word via Twitter, or however else you see fit.
If you’re interested in sponsoring, please drop me a line.
Check out the ARCHITECHT site
Artificial intelligence
Cloud and infrastructure
All things data
ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.