You can read the article and make your own judgments (and I’d love to hear why I’m wrong), but my basic beef is this: The intelligence part of AI is not just the interface, but also (primarily?) the backend. And most of the backend systems in this particular piece don’t seem very intelligent. In fact, they seem to be doing things we’ve been able to do for years via traditional data analytics and scripts.
That’s all fine and dandy, and companies are free to use them to automate certain aspects of their business processes—but presenting relatively dumb software as AI obscures the fact that researchers are making serious progress on systems that could actually be much more intelligent in roles such as customer service and text analysis. (See, for example, this piece on what Maluuba, now part of Microsoft, is working on.) Focusing on technologies with limited capabilities gives workers and policymakers a false sense of what will eventually be possible. This could result in ill-formed opinions about actual risk, and short-sighted policy decisions.
On the other hand, focusing on the wrong technologies doesn’t help employers accurately gauge how and when they might optimize their operations with AI. Whether that’s ultimately better or worse for employees remains to be seen—there’s an argument for both—but it would be good for everyone to get a real sense of what’s coming.
A handful of observations from the NYT article:
Hotel search is pretty much a solved problem, right?
Identifying correlations (say, that users of App X spend more, or prefer a certain type of hotel) has been possible at least since the advent of Hadoop. See, for example, this from 2011.
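To make the point concrete, here's a minimal sketch (with entirely hypothetical user data) of the kind of correlation check that plain scripts have handled for years, no "AI" required:

```python
# Does usage of App X correlate with higher hotel spend?
# Hypothetical records; no ML framework needed, just arithmetic.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (uses_app_x, hotel_spend_usd) -- made-up numbers for illustration
records = [(1, 320), (1, 280), (0, 150), (0, 170), (1, 400), (0, 120)]
app_usage = [r[0] for r in records]
spend = [r[1] for r in records]

r = pearson(app_usage, spend)
print(f"correlation between App X usage and hotel spend: {r:.2f}")
```

A strong positive coefficient here is exactly the sort of finding a basic analytics job surfaces; calling it intelligence is the stretch the article makes.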
“Shall” is not a vague term in legal documents. It has a very specific meaning: the party does this, or the party has breached the contract or broken the law.
The real breakthrough in email automation is not suggesting the right reply, but rather answering specific questions that aren’t binary or don’t lend themselves to a website link for more info.
I’ve been on the receiving end of an x.ai digital assistant for scheduling a meeting. It worked well enough (with maybe one extraneous email) but I actually felt a sense of unease in suspecting it was a bot and not knowing how personable my responses should be.
The best guests and biggest news in cloud computing, artificial intelligence and software engineering.
This discussion, here viewed through the lens of predictive policing, is even more important in the AI era. There are times when correlation is enough (e.g., recommending a song I might like) and times when it might violate civil rights.
And data, too, as a point of control over consumers. Cornering the market on data hasn’t been an historical basis for antitrust regulation. Algorithms colluding without (blatant) human intervention is another issue.
At least, if this new partnership with Indiana University is any indication. Better, faster sensors and calculations are great, especially for informing human decision-makers about what’s up in a given situation.
TL;DR: It’s more work than you might expect on the configuration side, but much less work once deployed. At this point, it’s a tradeoff that users need to think about when considering the serverless approach.
This post might not even cover them all, but it’s a good start. From Red Hat to Cloudera to Kubernetes, there’s still a lot we need to figure out in order to optimize the open source and commercial aspects of software.
I found this article both fascinating and puzzling. It’s a good look at the way the movie industry has relied on tape drives for archiving, but (unless I’m missing something obvious) I can’t figure out why disk or cloud storage isn’t an option.
There’s a geopolitical angle to this, but also an architectural one. Specifically: When it comes to deploying infrastructure for edge computing, companies that know a lot about data centers (e.g., Google, Amazon and Microsoft) have a built-in advantage.
This is cool research. Basically, they show how to maximize both cost savings and availability with spot/preemptible instances by building a model that accounts for factors such as price variability and the risk of losing the server.
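As a rough illustration of that tradeoff (this is my own toy scoring function, not the paper's model, and the market names, prices, and preemption rates are invented), you can rank spot markets by expected cost per useful compute-hour once preemption overhead is factored in:

```python
# Toy model: a cheap-but-flaky spot market can be worse than a slightly
# pricier, more stable one, because each preemption wastes restart time.

def cost_per_useful_hour(price, preemption_prob, restart_overhead=0.25):
    """Hypothetical score: fraction of each hour lost to restarts grows
    with preemption probability; divide price by what's left."""
    useful_fraction = max(1e-6, 1.0 - preemption_prob * restart_overhead)
    return price / useful_fraction

# (market, hourly price in USD, hourly preemption probability) -- made up
markets = [
    ("us-east-1a", 0.12, 0.30),
    ("us-east-1b", 0.10, 0.60),
    ("us-west-2a", 0.15, 0.05),
]

best = min(markets, key=lambda m: cost_per_useful_hour(m[1], m[2]))
print("cheapest effective market:", best[0])
```

The real research folds in far more (price histories, bid strategies, checkpointing costs), but the core idea is this kind of expected-value comparison rather than picking the lowest sticker price.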
This is a nice chart encapsulating how a collection of companies are using data, and where they’re seeing results. Cost-cutting has been most effective so far, which is not surprising considering it is low-hanging fruit.
This is a good (if brief) interview with Cloudera co-founder Mike Olson. And here’s a longer take on the IPO from Michael Coté. The gist of both, as in most such situations, is that only time will tell whether the company is a success.
This article gives a fairly detailed look at the pros of the Look, without going into the cons. Privacy is probably Nos. 1, 2 and 3 on most people’s list of concerns, followed by lack of necessity and the mundanity of a world where we all look like what’s hot on Amazon.
And scraping 40,000 Tinder images and sharing them publicly, via Kaggle, might be that limit. For better or worse (but mostly better), scraping and privacy continue to be limiting factors in data sharing.
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris.
Check out the Architecht site at https://architecht.io