ArchiTECHt Daily: At the nexus of cloud, wearables and AI

By ARCHITECHT • Issue #15
Four years ago, most of the world (even in Silicon Valley) hadn’t heard of deep learning and had no idea of the artificial intelligence revolution it would come to drive. Today, it’s powering a not-insignificant percentage of our consumer experiences and might even be a savior for the wearable market.
At least, that’s my takeaway after reading yesterday about the AI (specifically, natural language processing) capabilities Google has built into Android Wear 2.0. It’s not even that I think smartwatches and “smart messaging” are particularly exciting (like a true curmudgeon, I still prefer analog watches and manually typed text messages), but the direction in which we’re heading is. If we’re running NLP models on devices as small as smartwatches today, imagine what our smartphones, home automation hubs and, let’s not forget, civic and scientific sensors will be capable of in the not-too-distant future.
Another story from yesterday that got a lot less press than Android Wear, but highlights what I’m talking about, is the new computer vision division of police-department outfitter Taser. It acquired a startup called Dextro (which I covered back in 2014, if you want some background) and the computer vision team from wearable provider Misfit in order to form a new division called Axon AI. While Dextro focused on making video searchable, and Taser is talking a lot about video analysis, it’s not difficult to imagine a future in which police body cameras and even consumer wearables are able to do advanced computer vision in real time—with or without a cloud connection.
Last week, a startup called xnor.ai announced $2.6 million in funding to do just that.
Of course, wearable technologies are really just a subsection of the Internet of Things, which is also largely powered by deep learning (at least if devices aim to do anything useful). And IoT is driving investment in edge computing (aka fog computing) to ensure no device is ever without low-latency access to extra processing power or storage capacity. 
Finally, on a related note, I recently covered research into a framework that can let groups of consumer devices actually train deep learning models. Other scientists suggest smartphones could benefit from advances in quantum computing. IBM is apparently showing off promising results from its brain-inspired TrueNorth chip. 
Basically, our stuff is getting really smart, really fast, and in many cases we won’t even need to worry about having an internet connection to use it.

What's new on ArchiTECHt
Sam Lambert of GitHub
Around the web: Artificial intelligence
Silicon on the left, gallium on the right.
Around the web: Cloud and infrastructure
Around the web: All things data
Around the web: Security

ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.

Check out the Architecht site at https://architecht.io

Carefully curated by ARCHITECHT with Revue.