First things first

I've been writing quite a bit lately about the tendency for governments and business "thought leaders" to ascribe almost mythical powers to artificial intelligence, so I was happy to come across an article that put a name to this practice: AI solutionism. I'm cutting liberally to make it quotable (the ellipses stand in for several paragraphs), but this is the argument in a nutshell:

This is the en vogue philosophy that, given enough data, machine learning algorithms can solve all of humanity’s problems.

...

If policymakers start deploying neural networks left right and centre, they should not assume that AI will instantly make our government institutions more agile or efficient. Simply adding a neural network to a democracy does not mean it will be instantaneously more inclusive, fair or personalised.

...

Instead of painting an unrealistic picture of the superpowers afforded by AI robots, we should take a step back and separate AI’s actual technological capabilities from magic. Machine learning is neither magical fairy dust, nor the solution to everything.

That also happens to be a good lens through which to look at claims of AI revolutionizing other industries -- such as media and journalism, in the case of this TechCrunch article. The author suggests a bunch of ways AI could help the media industry better target readers and create better content, but there are bigger issues at play here than recommendation engines (even ones that actively try not to create filter bubbles) or kind-of-new types of content. In fact, many of the author's suggestions -- including, to name just two, data journalism and identifying bias in coverage -- are already very much in practice today and actually go back many years.

As it happens, I'm an adviser to an AI startup, AI Reverie, that was founded by a New York Times data scientist. You can also listen to my podcast interview with Conde Nast CTO Ed Cudahy, in which he explains how his company's fashion magazines use computer vision to identify certain brands of purses and other items. Beyond mere plugs, my point here is that people know this stuff and have been working at it for a while. Keywords, SEO, ideal publishing times, what's trending, visualizations, computer vision -- newsrooms big and small experiment with all of it, to varying degrees and with varying levels of success.

AI can augment some of these processes and make them faster and easier, or make certain data analysis richer, but it can't save media. (I have lots of opinions about what can, but that's a topic for another day and another outlet.) Today, and for the foreseeable future, AI, and machine learning more broadly, are tools. Aimed at the right problems and implemented intelligently, they can make things better, faster, more accurate, more scalable, what have you. But the world's best torque wrench can't fix a car that's rusted to the chassis, any more than AI can fix decades' worth of issues that have affected entire institutions to their cores.

This is especially true when the problems an industry or an individual company is facing aren't even technical to begin with. As hard as it might try, for example, Facebook can't AI its way out of its current mess. It can filter some fake news and make facial recognition amazing across a huge segment of the world's population, but in the end it will need to confront its business model and ethos to actually make a change.

Read and share this issue online here.

The ARCHITECHT Show podcast

AI and machine learning

Cloud and infrastructure

Sponsor: Replicated

Data and analytics