First things first

I think it's safe to say the artificial intelligence field got ahead of itself, although the people, companies and institutions engaged in it aren't entirely to blame. They were aided by a media landscape eager to report on the next big thing (and incentivized to focus on controversy for the sake of clicks) and, whether you think they're right or wrong, by lots of sensational comments from folks like Elon Musk. The result is a field that appears at odds with itself, complete with no end of infighting and divergent views about how far along it really is.

I plan to write a little more about this next week in an attempt to flesh out my feelings on where I think AI can be super-useful and where I think it's being overhyped, but if I had to boil it all down, it might be this: AI was foisted upon the public and enterprises as the greatest thing since sliced bread, with very little mainstream attention to how it works and what it can realistically do. The result has been lots of opinions, ungodly investment and, I would argue, not nearly enough time to really think through the best way to approach AI, from consumer applications to how research is conducted and funded.

These three items from today touch on these ideas in different ways:

  • Facebook's Yann LeCun explains why it lets researchers split their time with academia (Business Insider): The role of corporate research labs like those at Facebook and Google has generated a fair amount of debate in AI circles, where critics often decry them as the antithesis of what's great about academic research. But I think LeCun makes some really good points here about the benefits of commercial labs, particularly when their lead researchers also maintain academic roles. He points to his early-career experience at AT&T Bell Labs, but also to other fields, like law and medicine, where practitioners often teach. And, honestly, I think it's generally a good idea for people to be able to view their field from multiple angles. Thinking about a subject not just academically, but also commercially and practically, can open one's eyes to new ideas or concerns, or give new perspective on what research is best carried out where. Also, companies like Facebook and Google have lots of money for salaries, and even more data, and who can really blame experts for wanting to take advantage of that?
  • The real payoff from artificial intelligence is still a decade off (Foreign Policy): An interesting analysis suggesting that we've probably gotten ahead of ourselves in expecting an immediate AI-powered economic revolution. The author points to research and history in predicting that the major benefits from AI might not be fully realized until 2030. It's a well-reasoned argument, and pretty accurate in its assessment of where the AI field is today. Not only are the technologies not fully mature yet, but we're really just getting started with potentially important applications in areas beyond consumer web features (photo search is great, but not world-changing). And even once those applications are discovered and developed, cultural norms and infrastructure will have to come along for the ride. So, yeah, it could take a while.
  • Intel sold $1 billion of artificial intelligence chips in 2017 (Reuters): I actually linked to an Intel blog post yesterday where the company referenced this stat, but I didn't have time to write anything. However, it's worth noting that Intel is talking about Xeon processors, not the specialized AI chips that are all the rage right now, and into which it has made significant investments itself (Intel claims its Nervana Neural Network Processor will ship in 2019). Also, Nvidia's data center business hit a $2 billion run rate in 2017, powered largely, I'd guess, by machine learning workloads, and shows no signs of slowing down. Furthermore, Intel's $1 billion is only an estimate that, according to the Reuters article, "was derived from customers that told Intel they were buying chips for artificial intelligence and from calculations of how much of a customer’s data center is dedicated to such work." But, all that being said, if Intel's number is accurate, it's pretty impressive considering many people have already written Intel off in AI. Doing a billion dollars on a general-purpose chip could be a good omen, especially if its AI chips deliver markedly better performance.
Read and share this issue online here.

AI and machine learning

Cloud and infrastructure

Data and analytics