First things first

I'm keeping this brief today because the issue is so long as is, but here are two big things worth noting on two very different subjects:

Docker does multi-cloud, multi-OS Kubernetes

For a couple of years, the big question was how Docker was ever going to make enough money to justify its valuation, and it seems like a big first step toward answering that question is embracing Kubernetes. Here are its two big announcements from this week's DockerCon event:

You could argue that Windows support isn't a huge deal given the noticeable shift toward Linux even among Microsoft Azure users, but Docker's approach to managing Kubernetes clusters from Docker EE could be a bigger deal. Basically, it's giving users a single place to see what's deployed where (on-prem, AKS, EKS or GKE) and push updates, and to migrate applications from one cloud to another.

Docker almost certainly isn't (and won't be) the only company offering this capability, but it's in a pretty good position to capitalize on the opportunity given how well it knows how to manage, well, Docker environments.

Clarifai got hacked while working on Project Maven

Project Maven, as you might recall, is the Pentagon project that spurred a near revolt inside Google. Clarifai is a computer vision startup and, actually, one of the first to launch and one of the last to remain independent during the big buying spree a few years back. I don't have any issue with Clarifai -- a startup without, I assume, the financial freedom to turn down government contracts -- doing the work, but something seems odd about this whole issue.

Here is a WIRED story explaining how the company might have tried to cover it up.

And here is the Clarifai blog post disputing parts of that story.

It actually doesn't seem that the hack was anything serious -- Clarifai calls it an isolated and untargeted bot on a single research server -- but the incident underscores the importance of security with regard to artificial intelligence. Whether it's adversarial perturbations or some other kind of attack, there's really no room for error when we're dealing with life-and-death situations. Even beyond military applications, security protocols at companies doing computer vision will need to be especially robust, because a lot of those applications tend to involve health care, factories, and other areas with lots of money and lives on the line.

Also, losing your code or your customers' (or the Pentagon's) data would obviously be bad.

So what if AI is a bubble: Keep investing

This is a really good post from the founder of a Canadian AI startup about why we shouldn't worry so much about whether we're in an AI bubble, but the final two paragraphs really sum it up:

While the great expectations for the internet’s early days far surpassed what was possible at the time, the promises made in the late 1990s eventually came true. It may have taken 20 years, and may have been a commercial failure, but today, my mother literally buys kitty litter online at the click of a button.

Unlike many companies that fell into oblivion when the dot-com bubble popped, the companies that have stuck around all took a long-term attitude. Those in the AI business need to remember this.

Please read the whole thing (linked to above), and maybe even check out what the author's company, Dessa, is up to. Or I can tell you: it helps companies build their own AI systems.


Sponsor: Neo4j

AI and machine learning

Sponsor: MongoDB

Cloud and infrastructure

Sponsor: Replicated

Data and analytics