ARCHITECHT Daily: Eric Brewer knows a lot about open source and cloud infrastructure

By ARCHITECHT • Issue #75
I’m going to take the opportunity up top today to direct readers to my writeup of the latest ARCHITECHT Show podcast, which features Google VP of infrastructure (and CAP theorem deviser) Eric Brewer. He provides a treasure trove of insights into areas such as open source, containers, the cloud computing market (ranging from competition to the economics of buying RAM) and the rationale for custom machine-learning processors. 
It’s a really good and wide-ranging discussion, and you should at least read the highlights linked above, if not listen to the whole thing. If you’re interested in the thinking behind Google projects and products such as Kubernetes and Spanner, you’ll definitely want to listen.
Also, just a note as we approach the 20th ARCHITECHT Show episode this week. I’ve been fortunate to have a lot of great guests on the ARCHITECHT Show since starting it in January, many of whom are at least partially responsible for creating some of today’s hot technologies and companies.
Guests have included founders and engineering leaders from companies such as Google, Kaggle, GitHub, Instagram, DigitalOcean, DataDog, Honeycomb, Bonsai, Segment, Sapho, Metamarkets, Joyent, and Backblaze.
If you haven’t been listening, you can find highlights and listening options for all the ARCHITECHT Show podcasts here. If you have been listening, thanks (and feel free to tell your friends)!

Sponsor: Cloudera
Artificial intelligence
This argument is interesting but flawed, IMHO. People will still do plenty of gruntwork while learning, in school, etc. And, depending on the industry, of course, much of it is a colossal waste of energy.
qz.com
I love that this story had enough legs to warrant a separate interview and a response from IBM that includes the line, “Does any serious person consider saving lives, enhancing customer service and driving business innovation a joke?”
The a16z podcast tackles quantum computing, including the notion that, like Moore’s Law, quantum computing will happen not just because of technology, but also because of economic forces and sheer will.
This is a nice explanation not just of how Duolingo uses text-to-speech to train language models, but also why it prefers the approach over human voices. 
Without ever using the term “reinforcement learning”! But, really, video games work so well as training grounds because AI systems can learn what worked and what did not.
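If you want to see that learn-from-what-worked loop in code, here’s a toy sketch: a tabular Q-learning agent playing a made-up mini-game. The game, the hyperparameters and the whole setup are illustrative only; real game-playing systems use deep networks and far richer state.

```python
# Toy Q-learning loop: the agent tries actions, scores them by reward,
# and gradually prefers the actions that worked. Illustrative only --
# not how any particular lab trains game-playing agents.
import random
from collections import defaultdict

class CoinGame:
    """Tiny 'video game': walk right to reach the coin at position 4."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4
        reward = 1.0 if done else -0.01  # small penalty for every wasted step
        return self.pos, reward, done

env = CoinGame()
q = defaultdict(lambda: [0.0, 0.0])  # Q-values per state, one per action
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state, done = env.reset(), False
    while not done:
        # Explore occasionally; otherwise pick the action that worked before
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q[state][a])
        next_state, reward, done = env.step(action)
        # Nudge the estimate of "what worked" toward the observed outcome
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print({s: [round(v, 2) for v in q[s]] for s in sorted(q)})
```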
I’m still baffled at the idea of an AI research arm within Salesforce, but it’s apparently doing good work. An accurate method for summarizing documents would be hugely valuable, including for Salesforce!
metamind.io
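For readers who haven’t touched summarization, the sketch below is a toy extractive summarizer (it just picks existing sentences by word frequency) to make the task concrete. Salesforce’s research is abstractive, meaning it generates new sentences, which is a much harder problem; this code has nothing to do with their model.

```python
# Toy extractive summarizer: rank sentences by how many frequent words
# they contain, then return the top few in their original order.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return ' '.join(s for s in sentences if s in ranked)

doc = ("Cloud providers keep cutting prices. Prices matter less than "
       "egress fees. Egress fees surprise many teams. Many teams move "
       "data between clouds anyway.")
print(summarize(doc))
```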
Its system still isn’t great at getting the right answers, but it’s better than previous approaches, and it also attempts to show its work by providing a rationale for how it came to a particular answer.
arxiv.org
This is a different approach and problem space from the research above, but the core goal is the same, if you ask me. More intelligent AI means models that can reason across disciplines; here, answering questions about what’s happening in an image.
arxiv.org
There’s a lot to learn, and a lot of redundant energy to be saved, by publishing when things didn’t go as planned. Especially in a space like AI, where there’s a lot of parallel activity happening.
arxiv.org
Cloud and infrastructure
Apple’s data centers are large and often use sustainable energy. Apple appears to be building its own servers in the U.S. (possibly to avoid supply-chain tampering). Everything else is still a mystery, even as Google opens up more.
Cloud computing is still not free, and may actually be more confusing than ever in terms of what you’re actually paying for. This is some solid advice for trying to manage that.
Meanwhile, kind of separately from the whole AI discussion, supercomputers still do lots of computationally heavy work and justify lots of government investment. They’re not training digital assistants.
gcn.com
Media partner: GeekWire
All things data
Lattice is based on Stanford’s DeepDive project, includes Hadoop co-creator Mike Cafarella as a co-founder, and uses ML to add structure to images, text and other “dark data.” Apple has a lot of that type of data and will only add more; imagine making it more useful to consumers and/or advertisers.
In a nutshell, the company wants to automate, where it can, repetitive tasks in the data science workflow. Airbnb’s advice also speaks to the value in what Lattice Data (above) was working on: “[Automated machine learning] is cheap to try once you have already composed your training data.”
medium.com
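That “cheap to try” point is easy to demonstrate with off-the-shelf tooling. Below is a generic scikit-learn hyperparameter sweep on a toy dataset; it’s not Airbnb’s internal system, just an illustration of how mechanical the search step is once labeled training data exists.

```python
# Minimal "automated ML" sketch: once training data exists, sweeping
# models and hyperparameters is largely mechanical. Generic scikit-learn,
# not Airbnb's internal tooling.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [None, 5, 10]},
    cv=5,
)
search.fit(X_train, y_train)  # the only real "work" here is waiting

print("best params:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```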
Informative post if you’re still managing a vanilla Apache environment, but the list of companies doing that must be stagnant, at best. Cloud and enterprise distros have come a long way.
Listen to the ARCHITECHT Show podcast. New episodes every Thursday!
ARCHITECHT
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io