Google's new People+AI Research effort is exactly what AI needs

ARCHITECHT
By ARCHITECHT • Issue #109
On Monday, Google announced an effort called PAIR (People + AI Research) that’s focused on studying and improving the ways that people, from researchers to consumers, interact with artificial intelligence. Whether or not PAIR itself is the answer to what ails AI, the project hits upon some major issues that need to be solved before any of the pie-in-the-sky predictions about AI can come true. These range from interpreting the results of deep learning models (which could be critical for applications in regulated industries) to optimizing the consumer user experience (which, obviously, will affect the growth of consumer AI products).
You can get some more details about the project in the Google links above, but also via posts on CNBC and WIRED:
However, I think one of the most insightful pieces I’ve read on this general topic comes from the Google Design blog, which on Sunday published a really good piece on human-centered machine learning systems. It’s presented as a 7-point list, the first of which is so obvious we shouldn’t even need to talk about it anymore (but, of course, we do): Don’t expect machine learning to figure out what problems to solve. Basically, the advice is that anyone serious about building a good product should organically come across a real problem that needs solving, rather than starting with AI as the solution and then finding a problem (real or not) that AI could technically solve.
The rest of the post offers good advice on everything from figuring out whether AI is actually necessary to solve the problem (those Gmail popups reminding you to include an attachment actually use heuristics) to figuring out how users interact with, and expect to interact with, your product. Finding the right balance for handling false negatives and false positives is also important, and varies by use case and user.
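To make that last point concrete, here is a minimal sketch (all scores and labels are made up for illustration) of how moving a classifier's decision threshold trades false positives against false negatives — the knob a product team actually turns when deciding which kind of error hurts their users more:

```python
# Hypothetical model confidence scores and ground-truth labels (1 = positive).
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

def confusion_at(threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# A strict threshold avoids false alarms but misses real cases;
# a lenient one catches everything at the cost of noise.
for t in (0.25, 0.50, 0.75):
    fp, fn = confusion_at(t)
    print(f"threshold={t:.2f}  false_positives={fp}  false_negatives={fn}")
```

A spam filter might pick the strict end of that dial (a lost real email is costly), while a fraud-alert system might pick the lenient end — which is exactly why the post says the balance varies by use case and user.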
It’s really easy to think about how this type of advice applies to consumer AI products, but it’s equally applicable to enterprise AI. For every application where automated pattern recognition on huge datasets is the answer, there’s another application where a human actually must interact with the software. And you don’t have to venture too deep into the history of IT to find instances where good-enough technologies with great UX won out over best-in-class technologies that missed the boat on UX.
Experts lately (and rightly) have been focusing a lot of energy on highlighting the importance of finding the right data on which to train AI applications, but UX should not be far behind on the list of things to think about. Today’s newsletter, for example, is full of all sorts of great ideas about applying AI everywhere from agriculture to home automation, and full of research projects in fields ranging from health care to robotics where it’s really easy to think about how it might make its way into the commercial sector. However, especially today, the path between great ideas (even technically possible ideas) and successful products almost certainly goes through UX.
On a related note: RIP Jawbone and, soon enough it seems, the idea of a mainstream fitness tracker market. 

Sponsor: Cloudera
Check out the latest ARCHITECHT AI podcasts
In this episode of the ARCHITECHT AI and Robot Show, Signe Brewster speaks with Drone Racing League founder and CEO Nicholas Horbaczewski about the fast-growing sport he created, where pilots race drones at speeds up to 85 miles per hour. Among other things, Horbaczewski discusses the sport’s evolution from idea to nationally televised event watched by 33 million people last year; how drone racing is pushing the industry to create better components for mainstream drones; and why he thinks the first iterations of drone package delivery and other commercial applications will actually be piloted by people. 
Highlights from Episode 3, in which Demisto co-founder Rishi Bhargava explains how the company is using a chatbot to make life better for the professionals who spend their days neck-deep in security alerts, putting out fires both real and imagined.
Sponsor: Linux Foundation
Artificial intelligence
Most of the money from the Ethics and Governance of Artificial Intelligence Fund is going to Harvard and MIT, and $1.7 million is being split across 7 other organizations that are trying to ensure advances in AI don’t come at the expense of civil rights or a fair society.
This is a good overview of what Fujitsu is about to start selling for AI workloads, and of the challenges it and other companies face in this space; namely, that Nvidia all but owns the market for machine learning workloads right now.
The idea isn’t bad, but the UX had better be great, and Amazon, Google or Apple (who own the end devices and the AI talent) had better not decide they want to actually own the smart home. Target market appears to be McMansion owners.
Possibly more notable than the results here (there has been previous research into ECG data) is that Andrew Ng, who has been talking a lot about health care lately, worked on this study. 
I am all for Las Vegas’s move to become a smarter and more innovative city, but this is strange. Predicting accidents hours before they happen seems pretty much like predicting where traffic will be heavy, which does not take too much data or intelligence.
This is a classic use case for machine learning, where there’s just too much data and too small or complex of patterns for people to recognize. You would not want to be on the receiving end of a false positive, though.
Paying people to label images in their spare time is a good approach to gathering training data for driverless cars, and for AI in general. Then you just have to hope they’re doing it right.
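On that "hope they're doing it right" point, a common guard (sketched here with hypothetical labels; the aggregation rule is my illustration, not anything from the article) is to collect several crowd labels per image, keep the majority vote, and flag images the annotators disagree on for expert review:

```python
from collections import Counter

def aggregate(labels_for_image, min_agreement=2/3):
    """Majority-vote one image's crowd labels.

    Returns (consensus_label, needs_review), where needs_review is True
    when the winning label got less than min_agreement of the votes.
    """
    counts = Counter(labels_for_image)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels_for_image) < min_agreement

print(aggregate(["car", "car", "truck"]))      # clear majority
print(aggregate(["car", "truck", "cyclist"]))  # no consensus, send to review
```

The same idea scales up with per-annotator reliability weights, but even this simple vote catches a lot of careless labeling.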
Depending on the use case, AI and big data have had mixed results in the farming world. Here’s a map of companies working in various areas of agriculture, and how much money they’ve raised.
The system, called Max-AI, uses deep learning to classify and sort the various types of plastics that make their way across its conveyor belts. Its manufacturers say it will soon do a lot more.
Put me on the record as being against computers actually writing stories, at least more than they have to. Plus, it’s insulting to think local media can be handled by machines. But picking up the slack that reporters can’t get to, and helping write slugs or find images, are great uses.
I recently linked to SparkCognition’s latest funding round, which included Boeing as a strategic investor. Here’s some more on how the former is being applied in aviation, manufacturing and, hopefully, defense and national security.
Not to belabor this point, but solving this will be a very big deal. This article highlights some of the compelling forces (e.g., new EU rules on algorithms) and how researchers at various institutions, including Uber, are trying to figure it out. (Find links to other stories in this Science package here.)
In the case of the research, we’re talking about humanoid avatars in a virtual environment, but one could imagine transferring this type of capability to robots.
A bill in the works by Washington’s Maria Cantwell would require Congress to study how automation will affect the workforce and would define AI for purposes of federal law. China is already working on a 30-year AI roadmap, so any work by the U.S. and other countries would probably be a good idea.
Here’s a WIRED story on research that was published last month, in which the companies built a reinforcement learning system that takes advice from humans rather than just trusting its own solutions.
Edmonton officially joined the pantheon of Canadian AI hubs when DeepMind announced its new lab there last week, but the University of Alberta has long been a leader in AI research.
Sponsor: DigitalOcean
Cloud and infrastructure
We’ve been hearing about this for years, and now the hybrid/on-prem version of Azure is available to order from several partners. It is supposed to ship in September. I am very intrigued to see what kind of effect this has on the idea of how private/hybrid clouds should be built, and whether it has a material effect on Microsoft’s cloud business.
Oh, yes, Google got into the hybrid cloud business recently via its Nutanix partnership. Here’s a how-to on how to set up a Nutanix+Google environment. Also, assuming there’s nothing stopping Nutanix from also partnering with Microsoft and AWS, any advantage this gives Google could be short-lived.
There was a time when every startup just used MongoDB anyhow. Now, the company is giving select ones access to its Atlas managed database service. The technology has, by all accounts, improved, and Atlas can run on multiple clouds, so it’s probably worth a look.
It’s a noble cause, but technically the resources are volunteer computers and smartphones via the World Community Grid project. Given the computing power now available in consumer devices, I’m surprised we haven’t seen a resurgence in this type of project (aside from Pied Piper’s new internet on Silicon Valley).
This analysis is as much about the economics of cloud computing as it is about the economics of AI. It’s worth looking at, as is the discussion of it on Hacker News, where some Google employees get into the details of how GPUs are actually offered and priced.
Cloud databases are so plentiful right now, it’s hard to get a sense of what’s what. This is a decent review of what Google’s Cloud Spanner is and is not, largely by comparing it to other offerings from Google and other cloud providers.
Technically, the database was on a publicly available AWS instance, but obviously the blame lies with WWE. When will companies learn?
It’s the result of a patent-infringement lawsuit brought by Cisco. Arista is a big hit among web companies and currently has almost 10 percent market share, so any long-term delays or significant redesigns could become a major problem for the company and its customers.
This seems premature, but what do I know about predicting stocks? What I do know is that Intel has a lot of money and is very cognizant of the challenges it faces. If you’re selling, you’re betting it can’t capitalize on its advantages over smaller competitors.
Listen: It’s hard to get too excited over a new server lineup in 2017, but expressly going after next-gen workloads seems like a good way to launch one.
The new method of creating “cache hierarchies” could help multiple applications run better on new chips. It seems like there’s some promise here for cloud providers, too, as they try to get more efficient and target ever smaller workloads.
Sponsor: CircleCI
All things data
It’s funny how the further away I move from organized sports, the bigger a business they become. Hudl is all about analyzing video from games or practices, and now it’s trying to hire a bunch of machine learning engineers and step up its computer vision game.
Unravel appears to have hit upon a big market opportunity, which is managing performance across the usual-suspect big data systems whether they’re running locally or in the cloud. Most companies will offer their own monitoring/management tools, but having a single view is usually nice.
That’s the takeaway from this research paper on graph-database performance versus other options, including Postgres. I would argue the seemingly solid adoption of Neo4j, and the steady stream of graph databases being created by web companies, suggest they do serve a real purpose.
This is a pretty good endorsement of a startup called BuildingBlocks, whose cloud service helps governments bring their data together and actually perform useful analytics on it.
This is just a great example of sourcing a variety of datasets and using them for the public good. The really cool thing is the results of this could help families going forward, too.
If you’re into possibly useful analysis on the state of big data systems for processing data across different geographic locations, this is for you. I have to assume new data center and edge architectures will help solve this problem as much as new processing frameworks will.
Sponsor: Bonsai
Did you enjoy this issue?
ARCHITECHT
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io
Carefully curated by ARCHITECHT with Revue. If you were forwarded this newsletter and you like it, you can subscribe here. If you don't want these updates anymore, please unsubscribe here.