From the inane to the profound in artificial intelligence

By ARCHITECHT • Issue #147
There were two big happenings in the world of artificial intelligence on Wednesday, but most people paid a lot less attention to the one that might really matter. What got most of the attention was Google’s rollout of a bunch of new AI-powered phones and devices.
I never really gave much thought to smartphone launches until AI became one of their dominant themes. Now that I’m paying some attention, I can’t help but feel like Google and Apple have found themselves a hammer with computer vision and speech recognition, and, accordingly, they now see a lot of things that need to be nailed. I’m pretty certain nobody was dying for the Google Clips camera (which sits on your counter and automatically captures “endearing, heartwarming moments”) or the Pixel 2’s music-identification service. And now that they’re here, I’m pretty sure most people will still find a way to live without them.
My take on the hubbub over the iPhone 8’s new AI features pretty much sums up how I feel about the Pixel 2: these are useful devices, and it’s amazing how far phones have come in the last decade, but from an AI perspective they’re kind of underwhelming.
That being said, the real-time translation feature of the Google Pixel (ear) Buds does sound very cool in theory. I’m just not sure how frequently I’ll come across someone else who happens to be wearing a pair and does not speak English and with whom I absolutely need to speak. Take out the first dependency, though, and you have something really meaningful.
And all of that being said, no one has ever really come to me for opinions on gadgets, so I’m not sure why I’m wasting anybody’s time with them ;-) 
DeepMind’s new ethics society
As for the news that might actually be more important, Google division DeepMind announced its new Ethics & Society unit on Wednesday. Per DeepMind:
This new unit will help us explore and understand the real-world impacts of AI. It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all. 
I say it might be important because, as with all things fusing technology and ethics, the devil is in the details. In this case, the details are how much money, resources and time DeepMind and its “fellows” are willing to put into this initiative, and how independent its outcomes will be from the profit motives of DeepMind and Google. For more on that, I think Natasha Lomas at TechCrunch offered some fair criticisms of the effort and valid questions for DeepMind to answer.
But conflicts of interest aside, I’m also concerned about the efficacy of think tanks and other institutions that rely too heavily on academicians and research types to set their direction. This is mostly because (and I’m certain I’ve talked about this in some earlier issue of this newsletter) ever since the advent of “big data” several years ago, we’ve had really smart people talking about many of the same ethical issues that arise with AI. But for all the justified concerns over algorithmic bias, privacy and filter bubbles, it seems to me that governments and companies (at least in the United States) have been pretty slow to act.
Why another ethics committee, even one managed by DeepMind, would change anything is a mystery to me.
What might be really useful—especially now that AI is not just a hypothetical, but a real thing in the wild (and in our pockets) and advancing every day—is some more significant dialogue among a broader range of folks, from CEOs to blue-collar workers, and from politicians to academics. I don’t think any group is equipped to seriously tackle the issue of AI on its own, but a concerted effort might result in some real progress or at least open people’s eyes to different points of view.
More new AI chips are on the way
Remember yesterday, when I linked to the story about Nvidia open sourcing its work on AI accelerators for embedded devices? Or last week, when Intel announced its work on a brain-inspired neuromorphic processor? Well, they’re doing it in part because there’s a ton of research underway on building low-power chips that function more like a brain—and a lot of money to be made by cracking that code commercially.
If you can look past the hyperbole, here are two more promising projects in that space:

Sponsor: Bonsai
Artificial intelligence
Listen to the ARCHITECHT Show and ARCHITECHT AI Show podcasts! https://architecht.io/architecht-show/home
Cloud and infrastructure
If you enjoy the newsletter, please help spread the word via Twitter, or however else you see fit.
If you’re interested in sponsoring the newsletter and/or the ARCHITECHT Show podcast, please drop me a line.
Big data and data science
ARCHITECHT

ARCHITECHT delivers the most interesting news and information about the business impacts of cloud computing, artificial intelligence, and other trends reshaping enterprise IT. Curated by Derrick Harris.

Check out the Architecht site at https://architecht.io
