ArchiTECHt Daily: Risk and regulation are where the rubber meets the road for AI

By ARCHITECHT • Issue #49
Yesterday, I linked a Harvard Business Review story discussing the trade-off that AI companies need to make between perfection and good-enough. It’s a good point, and one that struck me while reading a couple of health care press releases—one about a partnership between Philips and an AI startup called PathAI around breast cancer diagnosis, and another about a partnership between Samsung and an AI startup called MedyMatch around real-time stroke-and-brain-bleed diagnosis. 
There are a ton of research papers about using deep learning to analyze medical images and diagnose conditions, but that hypothetical “this-is-very-promising” feeling morphed into something resembling unease at the prospect of this stuff actually being applied in the wild. That was especially true with the stroke scenario, where decisions must be made in real time in order to prevent brain damage. It also did not escape me that the partnership between Samsung and MedyMatch is “pending regulatory approval” of the latter’s technology.
I’m assuming that regulators want to make sure the algorithms are accurate, which is a good thing to ensure. But per the HBR authors, where do they draw that line? Seventy-five percent? Ninety percent? One hundred percent? How this compares to the accuracy of humans and other technologies on the same task also seems relevant.
What worries me more than accuracy, though, are the externalities. What happens if, down the line, doctors start relying too much on the AI and either forget what they’ve learned or never really amass the kind of knowledge learned only through experience? Nick Carr wrote a whole book about this topic, and countless others have weighed in on it, but we’re now at the point where automation is really kicking into high gear—including in professions such as health care, law and banking.
It’s also worth considering the level of blowback that will follow any high-profile mistakes or other issues caused by AI in a field like medicine. Take driverless cars as an example. Humans crash cars ALL THE TIME—I saw three separate accidents involving at least eight cars while driving my kid home from skating lessons last night—but it’s major news when an autonomous driving system does the same. Regulators and politicians need to do some serious thinking about what level of perfection we expect from AI systems—in automobiles and elsewhere—and where we’re willing to place the blame when something goes wrong.
The big question isn’t whether AI and automated systems will be better than humans at certain things—they will—but, rather, how logically and fairly human systems will react when something goes wrong. If the risk of adopting AI is too high for either consumers or companies, then uptake might be slower than we’d all like to see.
I couldn’t find a good place to fit this news about targeted phishing attacks on prominent GitHub developers, so I’m just including it up here. There are many scary things about the methods and the actual malware, but the scariest part might be the Dimnie trojan’s roots in espionage. Depending on who’s behind the attacks, the particular GitHub repos might not be the ultimate target, but rather the internal networks and IP of the large companies for which those developers work.

If you enjoy the newsletter, please help spread the word via Twitter, or however else you see fit.
If you’re interested in sponsoring the newsletter and Architecht Show podcast, please drop me a line.
You can get all Architecht content direct via Feedly here:
What's new on Architecht
My interview with Twistlock CEO Ben Bernstein about how the declarative, minimalistic nature of containers and microservices provides a better way to identify a security breach.
Around the web: Artificial intelligence
Aside from being an informative look into how and why Instacart is using deep learning to predict shopping lists, this post also uses emojis to explain everything. And this image:
Source: Instacart
Headquartered in Toronto and advised by Google’s Geoff Hinton, this could be a pretty big deal. Canada is smart to try and capitalize on AI and be a major player in the next technology revolution.
In this talk, the company’s AI director suggests the notion of pairing AI with prior knowledge bases such as Freebase. Apple was slow to embrace AI, but the payoff could be huge if it’s the first one to pull this off successfully.
Marcus, who briefly headed Uber’s AI lab, suggests that short-term corporate thinking could stifle AI progress. On the other hand, the promise of corporate dollars is arguably what’s going to fund a lot of AI research.
I like this headline, and wholeheartedly agree with the sentiment. Hire that chief data officer and let him or her make the AI decisions.
Stratechery’s Ben Thompson opines on AI, including its application as maybe the ultimate tool. That brings with it some very promising, and very scary, possibilities.
The headline links to a blog post about Facebook’s recent paper on using GPUs to do nearest-neighbor queries at massive scale. This one is about the internal Facebook AI academy that WIRED covered on Tuesday.
First-order concerns should be standards around privacy and discrimination.
Around the web: All things data
This is promising, but the GPU angle could also be a tough sell in a competitive analytics space where most vendors don’t require a GPU purchase. But as the backend for a cloud service, speed is all that matters.
Frankly, I can’t believe this isn’t more commonplace by now. For more on some of the data science, check out my interview with Zymergen CTO Aaron Kimball. 
Over the past few years, there have been quite a few startups promising to connect users with the people and topics they care about. Slack actually has the data to do it and a platform on which people want to be.
I consider this a win for data, as the GSA took a look and realized it could automate answers to a lot of repeat questions. Also, Amazon’s call center service looks even smarter now, too.
Around the web: Cloud and infrastructure
This actually isn’t entirely uncommon. “Digital transformation” is a total buzzword, but it’s real, and folks in charge of IT are pushing containers in order to get fast, flexible and portable across clouds.
This is an intriguing question, given the combined data center, and just data, engineering chops inside the two companies. Porting LinkedIn to Azure would be a big deal, though.
It’s smart for Ford to build out capacity, but I wonder if it will also build out edge locations for tasks that require low latency.
If it’s not Super Micro, it will be someone else. White-box vendors and custom manufacturers are the future for webscale buyers. Look at the “Others” row in IDC’s server numbers.
A look at some high-profile NoSQL breaches over the past few years, including the recent MongoDB ones. MongoDB and security researchers implore you to turn on the security features.
Soasta was early in using cloud resources to pound stress-test applications, for what it’s worth. It’s probably a good fit inside Akamai.
If certifications really are valuable, this could be a good one to have. It’s popular in enterprises and, according to the job data I linked to yesterday, Cloud Foundry skills will get you paid.
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at