First things first
Even during my recent break from this newsletter, when I was actively avoiding my RSS feed, I couldn't escape the public debate about the limits of artificial intelligence. Or, more specifically, about the limits of deep learning -- the AI technique most commonly used and understood today for tasks such as computer vision.
And the debate rages on, even in mainstream media outlets that wouldn't have touched deep learning with a 10-foot pole just a few years ago. Case in point:
It's easy to understand why this debate is taking place in such public forums: AI is an incredibly popular topic today (even outside of tech circles), and a contrarian viewpoint will always draw attention.
What's harder to understand is why we have to have this debate at all. Deep learning has proven remarkably useful for tasks involving machine perception and pattern recognition, but it's only now finding its way into production systems in companies outside of Google, Facebook and the like. Among its myriad uses, deep learning has the potential to greatly improve aspects of our health care system, and to help make driverless cars a reality.
So, naturally, industry and academia are going to milk deep learning for all it's worth. And they should. I don't think you can seriously argue that we're staring down another AI winter when deep learning has already had such a large economic and societal impact (just think about all those Google Home and Amazon Echo commercials you've seen during the Super Bowl and the Olympics), and when it's poised to make its mark in areas touching trillions of dollars and could literally help save lives.
Deep learning might present a limited view of what's possible with AI, but it's also an economically viable technique. Moreover, research into and attention paid to deep learning doesn't mean there can't be research into other areas pushing for more general AI, or AI that really can learn.
If I had to predict, I would guess that deep learning research and application will continue pretty heavily for the next few years and eventually become commonplace. Simultaneously, research will be happening in other areas of AI and, when those techniques mature and prove themselves applicable outside the lab, they will be the new AI stars. Deep learning will be yesterday's news, but by that point it will be so integral to certain applications and industries that it won't soon be replaced.
(This will continue to be good news for Nvidia, which, by the way, had another huge spike in data center revenue. Here's a good analysis of its data center prospects from The Next Platform.)
Hadoop is already yesterday's news
I think I have such a strong reaction to these deep-learning-isn't-magic-so-let's-dis-it arguments because we've watched the same thing play out with Hadoop over the past several years now. It's rare to hear anything new or interesting come out of the Hadoop space anymore, but for companies that found a use for it, Hadoop is still very important.
Take, for example, these two recent blog posts about Hadoop usage from LinkedIn and Oath (née Yahoo):
- Dynamometer: Scale testing HDFS on minimal hardware with maximum fidelity (LinkedIn)
- Success at Apache: A newbie's narrative (Oath)
But, more importantly, Hadoop -- which took its fair share of flak for being slow, cumbersome and generally not perfect -- helped advance the discussions about big data and data science. It also begat Apache Spark and other technologies that have had a significant impact on analytics and data processing. One could argue that Hadoop, flawed as it is, actually helped fuel the excitement over deep learning, which is finally delivering on some of the magical insights that big data promised a decade ago.
It's also worth looking at what's going on with companies like Hortonworks, Cloudera and MapR, all of which originally pitched themselves as enterprise Hadoop distributions -- and none of which presently do. They talk about data platforms, machine learning, IoT and pretty much everything but Hadoop. They're also earning more money and, in the case of Hortonworks, making headway toward profitability.
My main point is that things evolve and improve, even if they were overhyped or over-promised in the beginning. Hadoop wasn't perfect 10 years ago and deep learning isn't perfect today, but in the long run they'll both generate a lot of economic impact and inspire bigger and better things.