First things first

My lord, do I have mixed emotions -- and a lot of them -- about the state of artificial intelligence. I'll start on a topic that I've tried to explain several times recently, including here:

It seems like we're at an inflection point, where companies interested in making real money off of AI technology need to decide between chasing the next big thing and doubling down on the techniques we already have in production.

When it comes to doubling down on existing technologies (deep learning, specifically), it's heartening to see that the world's new fastest supercomputer -- Summit, built by IBM and deployed at Oak Ridge National Laboratory -- was designed to excel at deep learning workloads. Here's an excerpt from that post, focused on its deep learning capabilities:

[T]he GPUs alone will provide 215 peak petaflops at double precision. Also, since each V100 also delivers 125 teraflops of mixed precision, Tensor Core operations, the system’s peak rating for deep learning performance is something on the order of 3.3 exaflops.

Those exaflops are not just theoretical either. According to ORNL director Thomas Zacharia, even before the machine was fully built, researchers had run a comparative genomics code at 1.88 exaflops using the Tensor Core capability of the GPUs. The application was rummaging through genomes looking for patterns indicative of certain conditions. “This is the first time anyone has broken the exascale barrier,” noted Zacharia.

Of course, Summit will also support the standard array of science codes the DOE is most interested in, especially those having to do with things like fusion energy, alternative energy sources, material science, climate studies, computational chemistry, and cosmology. But since this is [an] open science system available to all sorts of research that frankly has nothing to do with energy, Summit will also be used for healthcare applications in areas such as drug discovery, cancer studies, addiction, and research into other types of diseases. In fact, at the press conference announcing the system’s launch, Zacharia expressed his desire for Oak Ridge to be “the CERN for healthcare data analytics.”
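If you want a rough sense of where those peak numbers come from, the arithmetic is simple enough to sketch out. To be clear, the node and GPU counts and the per-GPU throughput figures below are my own assumptions, pulled from public Summit and V100 spec sheets rather than from the excerpt itself:

```python
# Back-of-the-envelope check of Summit's quoted peak numbers.
# Node/GPU counts and per-GPU figures are assumptions from public
# Summit and V100 spec sheets, not from the excerpt above.
NODES = 4_608                  # Summit compute nodes (public figure)
GPUS_PER_NODE = 6              # NVIDIA V100s per node
FP64_TFLOPS_PER_GPU = 7.8      # double-precision peak per V100 (approx.)
TENSOR_TFLOPS_PER_GPU = 125    # mixed-precision Tensor Core peak per V100

total_gpus = NODES * GPUS_PER_NODE  # 27,648 GPUs

# Convert teraflops to petaflops and exaflops
fp64_petaflops = total_gpus * FP64_TFLOPS_PER_GPU / 1_000
tensor_exaflops = total_gpus * TENSOR_TFLOPS_PER_GPU / 1_000_000

print(f"GPUs: {total_gpus:,}")
print(f"Double-precision peak (GPUs alone): ~{fp64_petaflops:.0f} petaflops")
print(f"Tensor Core mixed-precision peak:   ~{tensor_exaflops:.1f} exaflops")
```

The straight multiplication lands a touch above the quoted figures (roughly 216 petaflops and 3.5 exaflops versus 215 and "on the order of 3.3"), but it's the same ballpark, which is all a theoretical peak rating really tells you anyway.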

To me, that's the pinnacle of thinking about AI right now: "What do we have available to us, and how can we take advantage of it to solve big problems?"

Less impressive is the ongoing feud between Elon Musk, Mark Zuckerberg and a host of people only tangentially involved in real AI research over whether we need to be concerned about artificial superintelligence. As important as this issue actually is, the public and non-scientific nature of it does a disservice to the work being done by researchers. And as compelling as this New York Times feature on the billionaire feud is, I think the most important parts -- the opinions expressed by Oren Etzioni (of the Allen Institute for AI) and Rodney Brooks (of MIT) -- will probably be the most overlooked:

[Sam] Harris warned that because the world was in an arms race toward A.I., researchers may not have the time needed to ensure superintelligence is built in a safe way.

“This is something you have made up,” Mr. Brooks responded. He implied that Mr. Harris’s argument was based on unscientific reasoning. It couldn’t be proven right or wrong — a real insult among scientists.

“I would take this personally, if it actually made sense,” Mr. Harris said.

A moderator finally ended the tussle and asked for questions from the audience. Mr. Etzioni, the head of the Allen Institute, took the microphone. “I am not going to grandstand,” he said. But urged on by Mr. Brooks, he walked onto the stage and laid into Mr. Harris for three minutes, saying that today’s A.I. systems are so limited, spending so much time worrying about superintelligence just doesn’t make sense.

We have real problems to solve, in society and also in business, and even limited AI could help us solve some of them. Yeah, we need to keep an eye toward the future and plan for unplanned contingencies. But we also need to look at the world as it is, the world we live in every day, and ask how we can improve it using what we have available today. People are concerned about another AI winter coming (see the link in the AI section below), and I fear this type of ideological debate, rather than a debate about making meaningful progress on existing problems, will only hasten its arrival.

What to make of Google's new AI principles

And then there is the topic of Google's recent decision not to help build AI-powered weapons for the military. From an ethical and certainly from a moral standpoint, this is probably the right decision to make. But I think Google's new AI principles overlook some pretty basic concerns about the technology it is already building.

The blog post talks a lot about privacy and surveillance and harm, but technology doesn't need to expressly target these things in order to do some serious damage. From fake news to data gathering to questions over how the web has created filter bubbles and splintered society, there's a lot of gray area between doing no harm and explicitly doing harm. And as we've seen already (this is an issue with platforms, overall), even neutral AI technologies can be used by law enforcement or other institutions in ways that some people find troubling. (Amazon, by the way, doesn't seem ready to take an ethical stand here.)

So the questions here become: how high is the bar to which Google will hold itself, and does that involve monitoring usage of its various cloud APIs so nobody uses them for unethical or criminal ends? And how deeply will Google look into its own products in order to determine how "beneficial" they really are, or how "harmful" they really are not?

It's commendable for Google to take a stand, but in the end, you have to think that capitalism will win out. Google might not work directly with governments on building weapons or spying on citizens, but it's still going to work with the military on other stuff, it's not going to actively track everybody using its cloud platform, and its own efforts to commercialize AI are probably going to be given the benefit of the doubt on the beneficial-vs-harmful scale.

If I had to tie this into my thoughts on the debate over superintelligence, the connection would be all that gray area. It's so easy to get sucked into these debates over what's possible and what's not, or what's right and what's wrong, and to miss all the reality happening in the middle. AI really is a big opportunity, but it could end up being a big missed opportunity if all our attention is focused on edge cases and ideology.

Read and share this issue online here.

Sponsor: MongoDB

AI and machine learning

Sponsor: Neo4j

Cloud and infrastructure

Sponsor: Replicated

Data and analytics