ARCHITECHT Daily: LinkedIn's Open19 initiative is open source hardware for everybody else

By ARCHITECHT • Issue #82
LinkedIn officially launched the Open19 Foundation on Tuesday, formalizing and expanding what had previously been an internal attempt to standardize data center infrastructure around a single rack design. LinkedIn’s partners in launching Open19 are Flex, GE Digital, HPE and Vapor IO, although the foundation also includes about two dozen other members from across the data center ecosystem.
The most obvious (two-part) question when you first hear about Open19—and one that I admit I asked publicly more than once—is, “Why does it exist? Why can’t LinkedIn just join the Open Compute Project and use its designs?” The answer, according to the three people I spoke with about Open19—all of whom were also intimately involved with OCP from its early days—is threefold: 
  1. OCP is more about sharing designs and best practices than it is about setting a standard.
  2. OCP’s designs aren’t particularly useful to smaller-scale purchasers. This has to do with buying power, as well as the fact that OCP designs favor a 21-inch-wide rack, rather than the standard 19-inch width.
  3. OCP, while popular, made some hardware manufacturers nervous at the prospect of having to give up their IP or competitive edge.
Those three people, by the way, were Yuval Bachar, Open19 president and principal engineer for global infrastructure architecture and strategy at LinkedIn; Curt Belusar, senior director of hyperscale engineering for the service provider business at HPE; and Cole Crawford, founder and CEO of Vapor IO.
Basically, Bachar explained, Open19 defines how the racks are built, how power and networking are done, and the form factor(s) of servers, but what’s inside those servers is up to the manufacturer. For manufacturers, this means they can take advantage of a low-cost, standard rack design, but can still try to distinguish themselves at the level where actual computing is done. Buyers get simplicity: a vast reduction in cables and the ability to easily swap in new components.
I think the buying-power aspect of Open19 is also interesting, in the sense that smaller data center operators can’t achieve the same economies of scale as a company like Facebook when it comes to buying custom gear from ODMs. If enough manufacturers and buyers rally around Open19, that should create a market where buyers, including LinkedIn, have plenty of viable off-the-shelf options to choose from.
Hypothetically, having a standard could also let smaller buyers “piggyback” on bigger buyers’ orders so everyone can get lower prices. Crawford noted that because OCP designs are so extensible, and large buyers often include their own custom IP, it’s effectively impossible for anybody else to leverage those economies of scale. However, because the Open19 design is a standard, LinkedIn could, hypothetically, order 2,000 cages from an ODM and resell 200 to a smaller buyer at cost. They should all be exactly the same, and everyone should pay a lower price per unit.
If it seems like Open19 is making less of a splash than OCP did when it launched in 2011, that’s probably because custom hardware and open source designs aren’t particularly novel at this point. We’ve also had six years to watch the cloud computing market mature, creating a barbell effect when it comes to infrastructure acquisition. On the one hand, you have mega web companies and cloud providers building their own stuff or using OCP designs, and on the other you have companies moving more and more workloads into those clouds.
Presumably, Open19 adopters will be part of that shrinking middle—companies like LinkedIn, GE, and perhaps telcos, banks and large retailers—that are big enough to feel the pain of data center complexity and bespoke architectures, and also too big or too regulated to move entirely to the cloud.
Crawford, whose company focuses on managing edge computing deployments, thinks having a standard rack design will also make it easier for companies to manage large edge deployments. Many people predict edge computing is the next big thing, necessary for low-latency processing and data access for connected devices and machine learning tasks. A small, energy-efficient design is ideal for small real estate footprints at edge locations, and standard power and networking specs simplify the job for technicians who have to service the racks.
“Amazon has 4 data centers in America … We have to think about 40,000 data centers,” he said.
It’s also worth noting that there’s no real reason that OCP and Open19 couldn’t work together, or that manufacturers couldn’t apply certain OCP design elements to gear built to the Open19 specification. In fact, HPE is a member of both organizations, and LinkedIn parent company Microsoft is an active OCP member.
At this point, however, Bachar said LinkedIn and Microsoft are operating distinctly when it comes to data centers: “We’re continuing to develop our own data centers with our own technology.”
A note for the rest of this issue: There are quite a few links to academic artificial intelligence papers below. I blame it on the recent deadline for submissions to the NIPS conference (which, as I noted yesterday, appeared to strain the supply of cloud-provider GPUs). I tried to pick the ones I think could have the biggest impact; I probably missed some.

Sponsor: Cloudera
Artificial intelligence
Stop me when this starts sounding familiar. I have to wonder at what point we stop beating this dead horse and apply those AlphaGo resources to something else. For some perspective on human-vs.-machine competitions, check out this great writeup from Backchannel on the epic chess matches between Garry Kasparov and IBM Deep Blue.
Some really smart folks, including Spark creator Matei Zaharia, think machine learning tools need to be easier. The DAWN project is spending 5 years working on an end-to-end machine learning platform.
Merlon’s first order of business will be tackling money laundering by analyzing users and behavior. Listen to the ARCHITECHT Show podcast this week (it publishes on Thursday) to get the lowdown from founder and CEO Bradford Cross on the company and the state of AI.
The CDL (Creative Destruction Lab) is an incubator program with a focus on machine learning and some top-notch advisors. Now, it also has a quantum machine learning initiative that gives accepted companies access to a D-Wave quantum computer.
Qualcomm gets overlooked in discussions about AI hardware, but it has a huge device footprint and is investing in neuromorphic designs and deep learning.
I read a press release about a kinda creepy, borderline civil-liberties-infringing facial recognition product today. It’s not a bad idea for someone to draw a line between surveillance cameras and Big Brother.
If there’s a cooler project name than Algorithmic Warfare Cross Functional Team, I don’t know what it is. Its initial focus is to “provide vision algorithms for object detection, classification, and alerts,” before moving on to more advanced applications.
This is one of those applications that’s probably good for the economy and for retail operations trying to figure out how to market themselves, but that sounds a little creepy to everyone else.
This is a really good take on where we’re at in terms of mainstream AI adoption. Essentially, people want to play around with whatever data they can find, and renting cloud GPUs is still way too expensive for most.
Another great blog post, this one doing a great job explaining why data matters so much in AI, and how to go about working with it. Cleaning, pre-training and dealing with real-world (read “non-academic”) datasets are art forms worth learning early.
Speaking of quality data, meet Scale, a startup that just launched and lets developers outsource tasks like labeling, categorization and transcription to humans. Seems like a more-automated version of CrowdFlower.
In general, I’m burned out on stories about Alexa at the moment, but this provides some creative options for how someone might go about wreaking havoc on nearby devices.
There are lots of good uses for AI in journalism, as this puff piece points out. As long as news outlets resist the urge to use it for bolstering our filter bubbles, I’m all for it.
This blog post from Ayasdi does a good job explaining its unsupervised-learning value proposition in a world of exploding data. The company raised a lot of money a few years ago, and seems to have good software for pattern detection in large, complex datasets.
I linked to this paper last week, about an AI system that explores video games rather than just seeking explicit rewards. Here’s a nice writeup of the research.
The whole point of neuromorphic chips is that they can recognize patterns using low power and, ideally, smaller datasets. This one was trained to make music and doesn’t perform half bad.
Essentially, they’ve developed a question-answering system that learns to ask follow-up questions in order to find the right answer.
Here’s research out of NYU, with Yann LeCun as one of the authors. I’ve seen his name on a few papers recently (I believe), usually focused on systems that can predict what will happen next in a video. Consequences are an important thing to understand.
IBM has built its TrueNorth neuromorphic chips, but most AI workloads still run on GPUs. IBM is trying to bridge that gap. Frankly, this is probably a better long-term bet than Watson.
Google’s DeepMind division just released a large dataset including 400 different actions and 400 clips of each action. Now it’s working on applying models trained on that data to new datasets and applications.
Sponsor: DigitalOcean
Cloud and infrastructure
Large companies, especially, love being able to charge the right teams for usage on shared infrastructure. This is a good feature for AWS to add to its storage service.
Speaking of AWS … This is a really interesting hire for AWS. It’s slowly acquiring a lot of open source (and Sun Microsystems) experience. A cynic might suggest it’s trying to match Google in employing the creators of important things.
Essentially, this argument goes that one-stop databases tend to be OK at a lot of things, so people are better off using best-of-breed options. However, even Mike Stonebraker, who’s quoted here as being against polyglot databases, is now working on this project …
When you count The New York Times, Airbnb, Spotify, Pinterest and lots of other big names among your customers, you’re probably doing something right in the content-delivery and web performance game.
This is just a link to the report’s infographic, but reading even this is like reading the business case for containers and CI/CD. Key words include agile, new products, consistent, etc.
Heap Analytics explains how it was able to get its server footprint in check (a 10x usage improvement) by fixing a performance issue in Postgres. (By the way, Heap sells tools for doing this, too.) It kind of reminds me of how Segment saved $1 million by troubleshooting its DynamoDB setup, among other things.
Google’s Customer Reliability Engineering team has been publishing posts all year about best practices. This one does a good job of explaining how to prioritize risks and communicate what matters.
Scriptflask is one of many approaches Netflix is taking to help its teams manage and work with the company’s large number of microservices. Anyone going down the microservices path should heed advice from companies like Netflix.
Speaking of microservices, this important piece of that stack, which lets container runtimes take advantage of networking plugins, is now part of the Cloud Native Computing Foundation.
Yahoo is still a pretty big web property, but I sense it’s losing its luster in terms of influence. Nonetheless, it’s still open sourcing stuff, some of which might be worth looking at.
SETI@Home was more or less my introduction to distributed computing back in 2003, so I have a soft spot for it. Here’s a writeup on how it came to be and what the project looks like now.
Media partner: GeekWire
All things data
I’m not certain that deep learning is moving the needle too much for companies like MapR at the moment, but this is part of the evolution of companies from selling Hadoop to selling machine learning, IoT and more.
Machine learning is one big trend for big data companies, and cloud services are the other. A lot of them held out for various reasons, but now everyone has a cloud play.
The project, called Measures for Justice, seeks to shed light on injustices in the U.S. court system. If we’re going to use data for good, which is still a popular thing to talk about, we actually need the right data.
In part because it makes it more difficult to gather data. If people think their data can be used against them by insurers, they’ll be less likely to share it.
Listen to the ARCHITECHT Show podcast. New episodes every Thursday!
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at