First things first

I said I was going to experiment with a different publication schedule. So here's a mid-week edition.

A report titled "The Malicious Use of Artificial Intelligence", written by a bunch of AI researchers from different organizations, received a lot of press this week for its predictions about how AI could be used to commit cybercrime and curtail civil rights. It's long (99 pages, but here's one of many summaries) and speculative, but its general premise is difficult to argue with: even without superintelligence, AI could be used to do a whole lot of bad in the world.

Used in this capacity, AI's true strength in real-world applications remains the speed and scale with which it lets humans carry out otherwise time-consuming tasks. It won't take super-human machines to wreak total havoc on our digital world, just sufficiently smart and determined people with sufficiently smart software. White hats will really need to stay on their toes, or risk playing a game of catch-up that will be difficult to win.

What's really fascinating, though, is to see this group of contributors (which includes representatives from OpenAI and the EFF) calling for research into whether AI needs more government regulation and less openness in terms of what's published and how it's licensed. Those are areas definitely worth considering, but it's hard to see them gaining much traction in certain circles. It's also hard to figure out how we'd enforce rules around AI publication and information when even the NSA can't keep information within its walls.

One could also argue that trying to cordon off the "official" research will lead to different pockets of AI work -- some of which will happen beyond the gaze of the powers that be. There's enough information already available that anyone inclined to go rogue has a pretty good starting point. Perhaps maximum openness is the best way to ensure everyone's working from the same baseline.

At any rate, it's a thought-provoking report, and it raises some issues that deserve immediate attention from policymakers.

Elon Musk leaves OpenAI

I call this the "coincidental Elon Musk edition" because Elon was one of the first (and loudest) voices warning about the perils of AI, albeit perils far more speculative than what's in the report above. Musk also co-founded OpenAI -- which participated in said report -- in the name of, well, open AI research to help ensure the ethical and safe development of AI. And, this week, OpenAI announced that Musk is leaving his position on its board because, "As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon."

It's a question I believe I've asked several times over the past few years, as Musk's warnings about AI grew more dire and attracted more attention: how does he square those warnings with the businesses he runs? If he's going to run companies that rely on AI and promise to transform our world, then fear-mongering is not a great way to get people excited about those technologies. (Unless, of course, Musk can convince the world that he's the one who's figured out how to do AI safely.)

Tesla gets cryptojacked via Kubernetes

And completing the Elon Musk trifecta is this week's news that Tesla is among the growing number of companies to have its resources jacked to mine cryptocurrency. Specifically, hackers were able to access an unsecured Kubernetes management console, which gave them access to the team's Amazon Web Services credentials. They used those credentials to install and run mining software on Tesla's AWS cluster, and they also had access to an AWS S3 bucket containing potentially sensitive data.
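
The first link in that chain -- an administrative console reachable without credentials -- is the easiest one to check for. Here's a minimal sketch (assuming Python with the requests library; the console URL is a hypothetical stand-in) that simply probes whether an endpoint answers an unauthenticated request:

```python
# Sketch: check whether a Kubernetes management endpoint answers
# unauthenticated requests. The URL below is a hypothetical example.
import requests

CONSOLE_URL = "https://k8s-console.example.com/api/v1/namespaces"  # hypothetical

resp = requests.get(CONSOLE_URL, timeout=10)

if resp.status_code == 200:
    print("WARNING: endpoint served data without any credentials")
elif resp.status_code in (401, 403):
    print("Endpoint requires authentication (expected)")
else:
    print(f"Unexpected response: {resp.status_code}")
```

It's a crude test, but an unauthenticated 200 on an endpoint like that is exactly the kind of open door the attackers reportedly walked through.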

This is not technically an issue with either Kubernetes or AWS, unless you were to argue that they should enforce security by design. Nonetheless, AWS appears to be taking the unsecured S3 issue seriously, having made its Trusted Advisor check for publicly accessible buckets free to all customers. (Chef also announced a new version of its InSpec product that helps guard against this.) Tesla is far, far from the first company to have data exposed because it left its cloud resources unsecured.
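
You can also run that kind of audit yourself. Below is a minimal sketch (assuming Python with boto3 and AWS credentials already configured) that flags buckets whose ACLs grant access to everyone; note it only looks at ACLs, not bucket policies:

```python
# Sketch: flag S3 buckets whose ACLs grant access to everyone.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
    ]
    if public_grants:
        perms = ", ".join(g["Permission"] for g in public_grants)
        print(f"PUBLIC: {name} ({perms})")
```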

If we tie this story back into the research report on malicious uses of AI, you can begin to see where the next battleground might be for cloud providers. The ones that can best secure customers' data and resources, by identifying bugs, security holes and malicious activity before they can do damage, will be in a very good position. AI will likely be an important tool for pulling this off, and eventually could be one of the things they most need to protect against.
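
To make that concrete: detecting cryptomining on someone else's dime often comes down to spotting anomalous resource usage. Here's a toy sketch (assuming Python with NumPy and scikit-learn; the workload numbers are invented for illustration) of flagging unusual CPU-utilization samples with an isolation forest:

```python
# Toy sketch: flag anomalous CPU-utilization samples with an isolation forest.
# The workload numbers below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Normal workload: CPU utilization hovering around 30% with modest variance.
normal = rng.normal(loc=30, scale=5, size=(500, 1))

# A cryptominer typically pins CPUs near 100%.
suspicious = rng.normal(loc=98, scale=1, size=(5, 1))

samples = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(samples)  # -1 marks an outlier

print(f"{np.sum(flags == -1)} samples flagged as anomalous")
```

Real systems would look at far richer signals than a single CPU metric, but the basic idea -- learn what normal looks like, then flag what doesn't -- is the same.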


AI and machine learning

A word from our sponsors

Cloud and infrastructure

Data and analytics