First things first


Let me say up front that I remain convinced that artificial intelligence will automate away fewer jobs than many people think, at least in the next decade. There are some obvious exceptions but, for the most part, I see AI very much as a technology for augmenting human work.

In part, this is because while some things do seem to change overnight -- the phonebook and the residential landline all but vanished -- the processes remain. Our methods are different, but people still talk to each other and look up businesses and phone numbers. Driverless cars are a thing, but there are a lot of institutional, infrastructural and economic hurdles to overcome before steering wheels are a relic.

Companies considering how they can take advantage of AI or how it will affect their businesses, and policymakers contemplating how to regulate AI or prepare society for its effects, would be wise to consider the phases of its impact rather than fixating on what they think will happen further down the road. And as they make those decisions, they should speak with employees, citizens and stakeholders across the board to ensure that any decisions are actually in the best interests of everyone involved.

This goes double for technology companies, which have great capacity for innovation but don't always consider the ramifications of what they're building. Basically, I'm suggesting that we all need to talk more -- and more circumspectly -- about how AI can and should change our lives, and then we all need to act accordingly.

And this is all a long way of saying there were a lot of good pieces published in the past few days talking about these issues. So read what some people much smarter than me have to say about it:

Is Kafka getting some competition?

Away from policy and onto actual technology, I read this week about a new company called Streamlio that I thought was worth pointing out, especially for anybody building data infrastructure. It's building a real-time data platform consisting of Apache Heron (built at Twitter) for stream processing, as well as Apache Pulsar and BookKeeper (two technologies built at Yahoo) for messaging and storage. Streamlio is talking an awful lot about the performance benefits of Pulsar over Kafka, which suggests it sees Apache Kafka (and, likely, Confluent) as its biggest hurdle to mass adoption.
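If you're wondering where a performance and operability edge over Kafka could even come from, one commonly cited architectural difference is storage: Kafka keeps an entire partition's log on a fixed set of brokers, while Pulsar (via BookKeeper) chops the log into segments that can land on any storage node. Here's a toy Python sketch of that contrast -- the class and method names are my own illustrative assumptions, not real Kafka or Pulsar APIs:

```python
# Toy model (NOT real Kafka/Pulsar code) contrasting partition-bound
# logs with segment-based storage. Names are illustrative assumptions.

class KafkaStylePartition:
    """The whole partition lives on one set of replicas; rebalancing
    onto new brokers means re-copying every message in the log."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.log = []

    def append(self, msg):
        self.log.append(msg)

    def move_to(self, new_replicas):
        copied = len(self.log)  # entire log must be shipped over the wire
        self.replicas = new_replicas
        return copied


class PulsarStyleTopic:
    """The log is split into fixed-size segments (BookKeeper 'ledgers')
    spread across storage nodes; adding a node only affects where NEW
    segments are written, so no existing data moves."""
    def __init__(self, segment_size=3):
        self.segment_size = segment_size
        self.segments = []          # list of (node, messages) pairs
        self.nodes = ["bookie-1"]   # hypothetical storage-node names

    def append(self, msg):
        # Roll over to a new segment on the next node when the
        # current one is full (or none exists yet).
        if not self.segments or len(self.segments[-1][1]) >= self.segment_size:
            node = self.nodes[len(self.segments) % len(self.nodes)]
            self.segments.append((node, []))
        self.segments[-1][1].append(msg)

    def add_node(self, node):
        self.nodes.append(node)  # zero bytes copied on expansion
        return 0
```

Run ten messages through each: moving the Kafka-style partition re-copies all ten, while adding a node to the segment-based topic copies nothing. It's a cartoon, obviously, but it's the shape of the argument Streamlio is making.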

I would love to get a better sense of how folks neck-deep in the world of big data and streaming data are thinking about the systems that underpin their applications. To me, it seems that Kafka is mature and only now catching on among mainstream users, which makes for an uphill climb for a company/platform like Streamlio. I often assume, perhaps incorrectly, that if something is going to replace an embedded incumbent technology -- or win against it for new workloads like IoT or edge computing -- it needs to be something fundamentally newer and better (or perhaps just easier).

Perhaps Streamlio is all of those things. Or perhaps the technology that is those things will come from a cloud provider, or from a newer webscale company without the large legacy footprint of companies like Yahoo, Twitter or LinkedIn. At any rate, you can read more about Streamlio and what it's up to here:

Derrick Harris

AI and machine learning

Cloud and infrastructure

Data and analytics