First things first
I apologize that this is much later than usual, and also rather long. However, there are a lot of interesting news items, so please do expand the email (I assume your client cuts it off) and peruse the whole issue. For what it's worth, I might soon return to the Monday-Thursday publishing schedule I followed for much of last year, which would mean more frequent but smaller issues. If you have an opinion on that, please let me know.
Moving on to the substance, I'll assume most readers have been paying attention to the latest in protests against tech companies working with government agencies, but if not, here are some stories that will catch you up:
- Microsoft employees protest work with ICE, as tech industry mobilizes over immigration (New York Times)
- Shareholders, ACLU urge Amazon to stop selling facial recognition software to police ahead of event at tech giant’s HQ (GeekWire)
I'm going to keep my thoughts brief because (1) I don't want to wade too deeply into the politics of it all and (2) these are highly nuanced issues that are best analyzed in about 10,000 words rather than a few hundred. But here's what I will say:
- The people protesting Microsoft and Amazon almost certainly were emboldened by Google employees' recent victory in getting that company to cease its artificial intelligence work with the Pentagon related to drone warfare. However, as I noted last week, while there are easy victories when technology could literally be used to kill somebody, there's also a whole lot of gray area. Had Microsoft actually been working with U.S. Immigration and Customs Enforcement on separating families, I do believe it would have tried to end that contract, or at least apologized profusely. But short of these extreme cases, I think the competition among cloud providers is so intense -- and government contracts can be so lucrative -- that most will find reasons to continue working with government agencies (and also private companies), even on some ethically questionable stuff.
- None of this is about AI per se, but AI definitely plays a role. Mostly, it's because advances in computer vision, speech recognition and other areas have been so great that they've changed the nature of IT. No one would have guessed 15 years ago that computer vision APIs would be a thing, much less that Google or Amazon would be working with the government on computer vision systems. Or even that any company also offering servers and email services would be part of that kind of work. But that's where we are today: governments go to their usual contractors to build planes, but they go to Google to make them smarter. The issue is that tech employees, by and large, have more leverage and a different worldview than their counterparts in the defense industry.
- The AWS case is particularly intriguing, because it seems destined to be a losing cause for the anti-facial-recognition protesters. Why? Because AWS can probably point to a lot of good uses that could come of law enforcement using its technologies, and because AWS -- like all platform providers -- doesn't want to get into the habit of policing its platform. From a business (and arguably legal) point of view, the safest bet still seems to be taking money from paying customers and claiming you don't keep track of what they're all doing. And then, when something really heinous gets called to your attention, you can act accordingly. But however strong the argument for potential civil rights abuse with facial recognition tools, that just doesn't seem like it's going to cross the threshold for AWS.
On a related note, Mike Loukides at O'Reilly has a thoughtful blog post about how startups, especially, can work against "ethical fade" within their companies.
AI for health care images is heating up
Probably because there was a big computer vision conference this week, I came across a slew of blog posts about research into using deep learning to analyze medical images. This has been a ripe research area for years, but it's always good to be reminded of how much improvement there has been and how far there is to go in some areas. Honestly, though, I think deep learning will soon be the first line of diagnosis in many cases, with doctors providing second opinions and, obviously, directing treatment and doing other doctorly things.
Here are a few of those research stories: