ARCHITECHT Daily: How long will the good times last for Nvidia?

By ARCHITECHT • Issue #73
It must feel pretty good to be Nvidia co-founder and CEO Jensen Huang right now. Ever since deep learning caught fire a few years ago, really kicking off the current machine learning and artificial intelligence revolution, Nvidia has been a critical part of the conversation. After all, GPUs were a big part of the reason deep learning actually worked this time around, and they’re still the engine that powers the vast majority of AI models.
So it’s no surprise that when Nvidia announced its first-quarter earnings on Tuesday, it brought news of a 48 percent year-over-year uptick in revenue—including a nearly 3x uptick in revenue for its Datacenter GPU business. And among the early announcements coming out of its GPU Technology Conference this week, we’ve seen a promise to train 100,000 developers on deep learning this year; a GPU-powered video platform for cities, spanning from camera to the cloud; and partnerships, like this one with H2O, to help bring GPU-powered deep learning to large enterprises not named Google or Facebook.
(On a related note, there are a lot of high-profile conferences happening this week: Nvidia GTC, Microsoft Build, Dell EMC World, OSCON and OpenStack Summit. Did I miss any?)
The company has been on a two- or three-year mission to make machine learning as popular as possible (not that it needed much help), and to position its GPUs as the default hardware platform for doing it. It’s doing pretty well, too: literally (I’m pretty certain) every popular deep learning library and framework is built to run on Nvidia GPUs with its CUDA programming model.
You can’t blame Nvidia for doubling down on machine learning and trying to stake its claim as the only processor game in town. Because the good times are not guaranteed to last, especially if Nvidia gets lazy on the innovation front or eases up on the marketing. Lurking in the shadows (if that’s possible) is Intel, which would love to see machine learning workloads run on its line of CPUs, FPGAs and other next-generation gear.
In fact, there’s evidence to suggest FPGAs might actually be a viable alternative to GPUs for AI workloads in the data center. And just this week, Intel led an $8 million investment in FPGA software startup Falcon Computing. Two other FPGA-based startups, Flex Logix Technologies and Edico Genome, also raised venture capital (the latter, as noted yesterday, from Dell).
That’s not to mention Google’s decision to build its own AI chips, called Tensor Processing Units, which it claims are largely superior to GPUs for its purposes. This is important: cloud providers like Google, AWS and Microsoft are major purchasers of data center hardware, and are all investing heavily in AI. If those companies, or even two of them, aren’t buying GPUs in bulk, then Nvidia is leaving a lot of money on the table.
Outside the data center, there is a lot of investment in specialized, low-power chips optimized to run AI workloads right on devices. In other cases, including with the new Caffe 2 deep learning framework recently announced by Facebook, models can run (even if they can’t be trained) on existing hardware such as smartphone processors and the Raspberry Pi.
Would I love to be Nvidia right now? Absolutely. But I’d also spend a lot of time thinking about my next moves to get out in front of the competition, or at least to make sure it remains the stuff of research labs and niche deployments.

Sponsor: Cloudera
Artificial intelligence
As with most things, the nature of tweets complicates deep learning, as well. Twitter explains how its Cortex team overcame issues such as feature sparsity to apply AI to optimizing our timelines.
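Twitter’s own solution is in the linked post, but one common generic trick for sparse, high-cardinality features (user IDs, hashtags and the like) is feature hashing — folding arbitrary feature strings into a fixed-width vector with no vocabulary table. A minimal sketch, with made-up feature names and dimensions:

```python
import hashlib
import numpy as np

def hash_features(tokens, dim=16):
    # Each feature string is hashed to a bucket in a fixed-size vector,
    # so millions of rare features still fit in `dim` slots (collisions
    # are tolerated as noise).
    vec = np.zeros(dim)
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

v = hash_features(["user:42", "hashtag:ai", "lang:en"])
print(v.sum())  # 3.0 — three active features in a fixed 16-dim vector
```

This is a standard workaround (the “hashing trick”), not necessarily what Cortex does.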
Translating text from one language to another isn’t something new for deep learning, but Facebook’s approach is. Using a CNN helps it process language in parallel instead of word-by-word like traditional RNN approaches.
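The parallelism argument can be sketched with a toy numpy example (shapes and random weights here are made up for illustration, not Facebook’s actual model): every convolution window over the sequence is independent and could run at once on a GPU, while each RNN step must wait for the previous hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim = 6, 4
x = rng.standard_normal((seq_len, dim))  # embedded tokens

# CNN-style: a width-3 filter scored at every position; the positions
# are independent of one another, so they parallelize trivially.
w = rng.standard_normal((3, dim))
conv_out = np.array([np.sum(w * x[i:i + 3]) for i in range(seq_len - 2)])

# RNN-style: each step depends on the previous hidden state, so the
# positions must be processed strictly one after another.
wh, wx = rng.standard_normal((dim, dim)), rng.standard_normal((dim, dim))
h = np.zeros(dim)
for t in range(seq_len):
    h = np.tanh(h @ wh + x[t] @ wx)

print(conv_out.shape, h.shape)  # (4,) (4,)
```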
They’re trying to create chips that can function at room temperature, which would be a big deal compared with the absolute zero required for most current approaches.
Generative adversarial networks (or generative artificial networks, in this post) work by creating synthetic data in order to train another network. This is good for accuracy, but could also be a boon for privacy.
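The adversarial setup can be sketched in a toy 1-D example — a generator learning to mimic samples from N(3, 1) while a discriminator learns to tell real from fake. Real GANs are deep networks trained with backprop; the finite-difference gradients and all parameters below are simplifications for illustration, and a toy like this may not converge cleanly.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def d(x, p):                      # discriminator: P(x is real)
    return sigmoid(p[0] * x + p[1])

def g(z, p):                      # generator: shapes noise into samples
    return p[0] * z + p[1]

def d_loss(dp, gp, real, z):      # discriminator wants real→1, fake→0
    fake = g(z, gp)
    return (-np.mean(np.log(d(real, dp) + 1e-8))
            - np.mean(np.log(1 - d(fake, dp) + 1e-8)))

def g_loss(gp, dp, z):            # generator wants its fakes called real
    return -np.mean(np.log(d(g(z, gp), dp) + 1e-8))

def grad(f, p, *args, eps=1e-4):  # finite-difference gradient (toy only)
    out = np.zeros_like(p)
    for i in range(len(p)):
        hi, lo = p.copy(), p.copy()
        hi[i] += eps
        lo[i] -= eps
        out[i] = (f(hi, *args) - f(lo, *args)) / (2 * eps)
    return out

dp, gp = np.zeros(2), np.array([1.0, 0.0])
for _ in range(2000):
    real = rng.normal(3.0, 1.0, 64)  # the data we'd like to keep private
    z = rng.standard_normal(64)
    dp -= 0.05 * grad(d_loss, dp, gp, real, z)
    gp -= 0.05 * grad(g_loss, gp, dp, z)

# Synthetic samples from the trained generator can stand in for real data.
print(np.mean(g(rng.standard_normal(1000), gp)))
```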
Social Capital’s Chamath Palihapitiya called IBM Watson “a joke” in a video interview with CNBC, suggesting it’s more marketing than substance. He might find some supporters among AI circles.
Apparently, the new “new Silicon Valley” is eastern Canada, or so the litany of news stories about AI investment in Toronto and Montreal would have us believe. It actually could turn out to be true—but, oh, those winters!
I’m not sure how fair it is to criticize Uber for its relationship with Pittsburgh and CMU, but its new AI division in Toronto will have more infrastructure in place and probably more competition.
Listen to the ARCHITECHT Show podcast. New episodes every Thursday!
Cloud and infrastructure
I find this relationship fascinating. A suggestion that might make sense over the long term is Kubernetes on bare metal as the core, surrounded by strategically and technically valuable OpenStack components/projects.
If you want more tech powerhouses to be located in Europe, funding them is a good way to do it. And it seems like MariaDB, which claims MySQL creator Monty Widenius as CTO, is in a good position to capitalize.
Basically, it wants to flag bad configuration at levels above what cloud providers handle. Like all those exposed MongoDB instances a few months ago.
No exact numbers, but almost half of all VMs using Google’s VM-migration tool seems like a decent number. On the one hand, it’s counter-intuitive given the spike in Linux use on Azure. On the other hand, we’re talking about legacy apps and lots of those are on Windows.
Maybe it depends what you’re building, but enterprise IT hardware seems to have been particularly hard-hit with layoffs recently.
You have to give credit to the Yahoo engineering team for continuing to build and open source, but at some point the shine is completely off that rose, right? The next Hadoop isn’t walking through the door anytime soon.
James Governor of Redmonk argues that engineers who can communicate well and lead movements are worth their weight in gold. I would tend to agree, because even in Silicon Valley, excitement helps tech thrive.
This is a convincing argument that CPU monitoring needs to be more granular, distinguishing between actual use and cores that are stalled but not technically idle.
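The underlying idea is instructions-per-cycle (IPC): a core can be 100 percent “utilized” in top while mostly waiting on memory. A minimal sketch — the counter values are made up, and the 1.0 threshold is a rough rule of thumb, not a standard:

```python
def classify(instructions, cycles, ipc_threshold=1.0):
    # Low IPC means the core spent most cycles stalled (e.g. waiting on
    # memory), even though utilization tools report it as fully busy.
    ipc = instructions / cycles
    label = "likely stalled" if ipc < ipc_threshold else "doing useful work"
    return ipc, label

print(classify(9_000_000, 3_000_000))  # high IPC: genuinely busy
print(classify(600_000, 3_000_000))    # low IPC: busy-but-stalled
```

On Linux, the real counters come from hardware performance monitoring, e.g. via `perf stat`, which reports instructions and cycles directly.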
Media partner: GeekWire
All things data
It could be a good way to create a virtuous cycle for customers, who want to access and analyze data in the same place. But selling consumer data is not a particularly good look in this age.
Get it while it lasts! Transparency is not the name of the game in Washington, D.C., at the moment.
Consider the source—Domino sells a cloud-based data science platform—but maybe consider the argument that laptops, or even an in-house server, don’t have enough juice to run the most accurate models.
Uber probably isn’t going anywhere, which means it’s going to keep pushing big data projects to new limits. This feature might not be huge, but it’s a good example of how Uber works with communities to get stuff done.
Did you enjoy this issue?
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at
Carefully curated by ARCHITECHT with Revue. If you were forwarded this newsletter and you like it, you can subscribe here. If you don't want these updates anymore, please unsubscribe here.