ARCHITECHT
ArchiTECHt Daily: How to build an AI startup with legs
By ARCHITECHT • Issue #41
So, Y Combinator announced in a blog post on Sunday that it’s going to select a group of artificial intelligence startups as part of the upcoming batch of companies in its popular accelerator/seed funding program. If you’re inclined to see bubbles whenever a space gets really frothy with investment, this is as good a sign as any to start preparing for a pop.
However, YC is doing at least one particularly smart thing with its call for AI startups: it’s looking for startups targeting vertical industries, rather than AI startups in general. If there’s something seemingly everyone agrees on right now, it’s that the most successful AI companies will be those building vertical products rather than horizontal platforms or general-purpose AI software. Most companies have neither the expertise to manage any sort of AI system, nor, frankly, the desire to learn an entirely new field of computer science.
Several years ago, everyone was supposed to invest in big data. A few years ago, they were all supposed to hire data scientists to manage those systems and build applications. How many companies outside of Silicon Valley and Wall Street actually did that? I would venture to guess it was very few, relatively speaking.
So here we are with AI and there is, understandably, a fair amount of skepticism from the companies that actually have to buy what Silicon Valley is building with unbridled optimism. 
However, the good news for everybody is that there’s plenty of good AI advice already out there for startups and software companies that are willing to listen. Most of it starts with the aforementioned advice to build a targeted application. I would take things even further and suggest having at least one founder who has actually worked deeply in the industry you’re trying to target—AI right now is going to be best at optimizing specific tasks and workflows for day-to-day practitioners rather than revolutionizing an entire industry. 
Lots of folks are also offering up advice on how to think about generating and analyzing data in AI startups, and figuring out which approaches to use. We barely scratched the surface of tried-and-true machine learning approaches before diving headlong into neural networks; it’s possible that “AI” might not actually be the best fit right now. Further, it’s possible that no matter what approach you take, focusing sales and marketing around a product’s “intelligence” will be a recipe for disaster if it can’t live up to someone’s preconceived (and possibly lofty) expectations of what that means.
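To make the “try tried-and-true ML first” point concrete: a classical method you can write in a few lines often sets a surprisingly strong baseline that any “AI” product claim should first have to clear. Here’s a minimal sketch of one such method, nearest-centroid classification, in plain Python. The data, feature names, and labels are entirely made up for illustration.

```python
# A deliberately simple baseline classifier: nearest centroid.
# The point isn't that this beats a neural network; it's that a
# few lines of classical ML give you a bar to measure against
# before committing to something far more complex.

def fit_centroids(X, y):
    """Compute the mean feature vector (centroid) for each class."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        acc = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            acc[j] += v
        counts[yi] = counts.get(yi, 0) + 1
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def predict(centroids, xi):
    """Assign the class whose centroid is closest in squared distance."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda c: dist(centroids[c], xi))

# Toy "churn" data: [monthly_usage_hours, support_tickets]
X = [[10.0, 0.0], [12.0, 1.0], [1.0, 5.0], [2.0, 4.0]]
y = ["stays", "stays", "churns", "churns"]

model = fit_centroids(X, y)
print(predict(model, [11.0, 0.0]))  # a heavy, quiet user -> "stays"
print(predict(model, [1.5, 6.0]))   # a light user filing tickets -> "churns"
```

If a neural network can’t meaningfully outperform something like this on your actual problem, the “intelligence” framing is probably doing more marketing work than technical work.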
With that in mind, here’s the best advice I’ve seen this week about starting companies that do AI:

Sponsor: Datos IO
Around the web: Artificial intelligence
Baidu is investing like crazy in AI across nearly every area in which it operates. Read this article, then listen to my podcast with Baidu’s Andrew Ng for more insights on why AI is so hot—and so promising—in China.
The first of two profiles here from a special series in the Financial Times last week. Two notes: (1) You probably have to be really smart or have launched 5 years ago to take this approach, and (2) DeepMind’s recent privacy snafu is a great case study in why industry expertise matters for products.
www.ft.com
This FT profile is a good look at robotics startup Preferred Networks, and doubles as a nice window into the nascent AI and general startup scene in Japan right now.
www.ft.com
The best thing in the world for Amazon would be an Echo, and an Alexa assistant, that can rival Google Home in search and actually understand context.
They’re not just algorithmic, either. Certain approaches to deep learning can be too data- and computation-intensive for a lot of use cases, too.
I cut my teeth in tech journalism writing about grid computing. Now someone has devised a volunteer grid, a la SETI@home, for deep learning workloads. This could be a useful approach for training models.
arxiv.org
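The core coordination step in any such volunteer grid is easy to sketch: workers compute gradients on their own data shards, and a coordinator averages them and applies the update, SETI@home-style but with model updates instead of signal chunks. Everything below (the names, the toy linear model) is my own illustration, not the paper’s actual protocol.

```python
# Toy sketch of one volunteer-grid training step: each "volunteer"
# computes a gradient for a shared model on its local data shard;
# a coordinator averages the gradients and takes one SGD step.
# Model: fit w in y = w * x by minimizing mean squared error.

def local_gradient(w, shard):
    """Gradient of mean squared error on one volunteer's shard."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def coordinator_step(w, shards, lr=0.05):
    """Average the volunteers' gradients, then take one gradient step."""
    grads = [local_gradient(w, s) for s in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Three volunteers, each holding a shard of (x, y) pairs with y = 3x.
shards = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
    [(0.5, 1.5), (4.0, 12.0)],
]

w = 0.0
for _ in range(200):
    w = coordinator_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

The hard parts a real system has to solve, and this sketch ignores, are stragglers, untrusted volunteers, and the bandwidth cost of shipping gradients for large models.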
Around the web: Cloud and infrastructure
AI won’t necessarily kill per-user licensing or annual licensing, but it’s a possibility worth considering. I assume workflows and buying models for apps will evolve more slowly than those for infrastructure.
A convincing argument from an Accenture Cloud exec on why the cloud is the best place for your legacy apps, too. Basically, moving them is going to save you money and open the opportunity to evolve them.
China seems like a great place for IBM to make a lot of money in cloud. Seriously. Its biggest competition in the mainland might be Alibaba rather than Amazon.
fortune.com
I have a couple issues with this, including that AWS is going to keep growing even if its overall market share decreases thanks to competition from Microsoft and Google. Cloud computing is just getting started.
fortune.com
Speaking of Google Cloud, here’s a take on its evolution from a platform obsessed with engineering and its own way of doing things into a platform that can relate to the customers it desperately wants to win.
redmonk.com
Lots of potentially useful info here on where Google’s data centers are, how much they cost, how many servers they house, etc. 
This will happen soon enough as AWS, Google and Microsoft ramp up capabilities of their “serverless” services. People once questioned whether you could do HPC on the cloud at all, but that has been a popular use case for years now.
Quantum computing vendor D-Wave Systems has made quite a few exciting announcements over the past few weeks. Here, University of Texas professor Scott Aaronson—and self-professed D-Wave skeptic—provides some counterpoints to D-Wave’s claims.
Sponsor: Marshal.io
Around the web: All things data
If you can ignore that the subject of this story, Blackboard, used pure Apache Hadoop rather than a commercial distro—and that the story was probably pitched by Snowflake Computing—there’s a good lesson in why a lot of people are down on Hadoop right now.
Making database queries faster by learning from previous queries is an idea so obvious it makes you wonder why it hasn’t been applied more broadly. FWIW, one of the researchers behind this project is Hadoop co-creator Michael Cafarella.
arxiv.org
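At its crudest, the “learn from previous queries” idea can be approximated as a result cache keyed on a normalized form of the query text — a far cry from what the researchers are actually doing with learned optimizers, but it shows the basic shape of why history helps. All the names below are invented for illustration.

```python
# Crude illustration of using query history for speed: cache results
# keyed by a normalized form of the SQL text, so repeated (or
# trivially reworded) queries skip execution entirely. Real learned
# systems predict plans and cardinalities rather than caching results;
# this only demonstrates the "past queries inform future ones" shape.

def normalize(sql):
    """Canonicalize whitespace and case so near-identical queries match."""
    return " ".join(sql.lower().split())

class CachingExecutor:
    def __init__(self, run_query):
        self.run_query = run_query   # the "slow" underlying engine
        self.cache = {}
        self.hits = 0

    def execute(self, sql):
        key = normalize(sql)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        result = self.run_query(sql)
        self.cache[key] = result
        return result

# Stand-in engine: pretend each call is expensive.
calls = []
def slow_engine(sql):
    calls.append(sql)
    return [("row1",), ("row2",)]

db = CachingExecutor(slow_engine)
db.execute("SELECT * FROM users")
db.execute("select *   from USERS")   # normalizes to the same key
print(len(calls), db.hits)  # engine ran once; second query was a hit
```

The learned-optimizer version replaces the exact-match cache with a model that generalizes from past queries to new ones, which is exactly why it’s a more interesting research problem.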
The most interesting news, analysis, blog posts and research in cloud computing, artificial intelligence and software engineering. Delivered daily to your inbox. Curated by Derrick Harris. Check out the Architecht site at https://architecht.io