First things first

We might be at an inflection point in the field of artificial intelligence, where applied AI heads off in one direction and research goes in another. Not because the two areas are inherently distinct, but rather because there's a public relations situation happening and something has to give. Research is conflated with products, products are conflated with stage demonstrations, and it seems like fewer and fewer people have a real grasp on what's what.

There were a number of articles this week (including from a special AI edition of Bloomberg Businessweek) that highlight what I'm talking about. I would sum up the situation like this:

  • Certain deep learning techniques (CNNs, LSTMs, RNNs and, arguably, reinforcement learning) have become well understood and quite reliable. They're often the technologies behind mainstream applications of AI, including computer vision, natural language processing, speech recognition and other types of pattern matching. So when we use digital assistants, or hear about AI analyzing medical images, or about banks using AI for fraud detection, one of these tools is probably behind it.
  • But deep learning is limited in its application because training deep learning models often requires so much data and compute power. (OpenAI recently published a blog post showing what looks like a strong correlation between major advances and increased computing power, but there has been plenty of pushback on this premise.) It's also limited because what it's really good at is pattern-matching, which has many obvious benefits -- just look at some of the applications linked to below -- but falls short of anything resembling artificial general intelligence (aka the Holy Grail).
  • So, as the folks behind deep learning get famous (and sometimes argue over who actually deserves credit), there's a growing chorus from other corners of the AI research community saying, "Hold up! This stuff is useful, but let's please not conflate it with this world-changing AI that everyone seems to be talking about all the time. Any sort of actual intelligence would at least need to understand cause and effect, rather than just be able to identify faces or decipher the words I'm speaking."

In a sense, this is a higher-profile discussion of the same thing I've been writing about recently regarding all the attention and money that governments are pouring into AI right now. What happens when we focus so much attention on a term -- artificial intelligence -- is that we lose sight of what's right before our eyes. What are the pros, cons, risks, and real-world killer applications of deep learning? How can we (the royal we, not just Google, Apple and Facebook) use these technologies to actually improve our governments and businesses?

There's definitely a place for AI research and discussions about how to deal with ever-smarter computing systems, but it's probably not in the same breath as discussions about what's possible and truly usable today. We can give deep learning its due without automatically taking the discussion into the realm of "what if ..." and complete economic transformation. As with most things in life, there's a gap between applied AI and research that's real, important, and worth acknowledging whenever we get tempted to ramp up the hyperbole.

On a related note, the Chinese city of Tianjin is setting up a $16 billion fund to spur adoption of AI in the region. These types of investments have a questionable track record, but the focus here on concrete areas such as robotics, hardware and software, and on upgrading existing industries rather than transforming them, suggests it could actually pay off.

Read and share this issue online here.

Sponsor: MongoDB

The ARCHITECHT Show podcast

AI and machine learning

Sponsor: Replicated

Cloud and infrastructure

Data and analytics