ARCHITECHT
The fight for AI chip dominance just gets more interesting
By ARCHITECHT • Issue #169
If you’ve been reading this newsletter over the past several months, you’ve definitely noticed an uptick in activity around artificial intelligence hardware – mostly involving Nvidia, Intel, Google, and a growing handful of startups and university researchers competing to build the best chip architectures for deep learning workloads. (I wrote about it again just last week, in fact.) What’s most interesting about this is that it’s a two-front battle – training AI models in the data center, and performing the actual AI task (aka inference) on end-user devices or application servers – with no clear favorites but huge potential revenue.
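To make that two-front framing concrete, here's a toy Python sketch (entirely illustrative; the data, model, and JSON export are my own stand-ins, not any vendor's stack). The expensive iterative training loop is what runs on data-center accelerators; inference is just a cheap forward pass over exported weights, which is why it can live on a phone or an application server:

```python
# Toy illustration of the two-front split: training is a compute-heavy
# loop run once on big hardware; inference only needs the trained weights.
import json
import numpy as np

# --- Front 1: training in the data center ------------------------------
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # synthetic features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
for _ in range(500):                       # plain gradient descent
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

# Export the trained weights -- the only artifact that ships to devices.
model_blob = json.dumps({"weights": w.tolist()})

# --- Front 2: inference on an end-user device or app server ------------
def predict(blob, features):
    """A forward pass: no training loop, just the exported weights."""
    weights = np.array(json.loads(blob)["weights"])
    return float(np.array(features) @ weights)

print(predict(model_blob, [1.0, 2.0, 3.0]))  # ~ 2*1 - 1*2 + 0.5*3 = 1.5
```

The point of the split: the chips competing for the top half of that sketch (training) face very different constraints than the ones competing for the bottom half (inference), which is why there's room for multiple winners.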
Some might argue that Nvidia is a clear favorite to continue dominating the training side of things as it has since deep learning really hit in 2013, but that’s by no means a sure thing. I explained some of my thinking around this in the link above (and also here, from August), but I also want to share a particularly insightful take on the subject by Google’s Urs Hölzle at the Structure conference last week. Essentially, he said, while Nvidia lucked into its GPUs performing well on AI workloads, new chips like Google’s Tensor Processing Units are custom-built for those workloads. 
We haven’t even scratched the surface of AI adoption. If non-GPU architectures can run workloads faster, more efficiently, and at lower cost than GPUs, there’s every reason to believe they will ultimately come to dominate those workloads. This is doubly true if those chips are being offered up by cloud providers like Google at a low cost, or simply being used by cloud providers to power their own services and model training.
But enough from me. Here are three good stories from today on this same subject, particularly the first one, about the Chinese government’s desire to have a Chinese company develop the world’s best AI chip:
In non-AI news, the big three cloud providers have been busy the past few days rolling out new regions and price cuts. Price-cutting on raw instances has slowed down quite a bit now that cloud computing is generally accepted as the new norm for how companies buy IT, but I think it still provides two big benefits for providers:
  • Helping a provider claim its platform is less of a lock-in risk, because the costs of trying it or leaving it are lower.
  • Helping a provider win new types of workloads (including AI) by cutting prices on specific types of instances.
Both of which happened at Azure and Google, respectively:

Sponsor: Cloud Native Computing Foundation
Artificial intelligence
New podcast episodes every week!
Cloud and infrastructure

ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.

Check out the Architecht site at https://architecht.io

Carefully curated by ARCHITECHT with Revue. If you were forwarded this newsletter and you like it, you can subscribe here. If you don't want these updates anymore, please unsubscribe here.