The time has come. Following years of mounting interest in a type of artificial intelligence (AI) called deep learning, the biggest public cloud infrastructure provider, Amazon Web Services (AWS), today announced its first Amazon AI services that make use of deep learning.

Deep learning generally involves training artificial neural networks on lots of data, such as photos, and then getting them to make inferences about new data. One of AWS' top competitors, Google Cloud Platform, introduced its Cloud Machine Learning service, which can handle deep learning, earlier this year. In China, the Alibaba public cloud has the DT PAI service available for AI workloads. Additionally, startups such as Clarifai offer cloud-based deep learning services.

But AWS isn’t providing a basic general-purpose AI service — not today, anyway.

There is the new Rekognition image recognition service — presumably drawing on the talent and technology from deep learning startup Orbeus, whose team Amazon hired in the past year.
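
For a sense of what that looks like in practice, here is a minimal sketch of how a developer might call Rekognition's label detection from Python with the boto3 SDK; the file name and parameter values are placeholders, not code from Amazon's announcement:

```python
# Rough illustration, not Amazon sample code: label detection with
# Rekognition via boto3. The image file name is a placeholder.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as image_file:
    response = rekognition.detect_labels(
        Image={"Bytes": image_file.read()},  # send raw image bytes
        MaxLabels=10,
        MinConfidence=75,
    )

# Print each detected label with its confidence score.
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```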


There is also the new Polly text-to-speech (TTS) service, which supports 47 voices and 24 languages. It’s free to process up to 5 million characters a month, and after that it costs $0.000004 per character, AWS chief evangelist Jeff Barr wrote in a blog post.
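
A Polly call from Python with boto3 might look roughly like the sketch below; the voice ID and output file name are illustrative, not taken from Barr's post:

```python
# Illustrative sketch of synthesizing speech with Polly via boto3;
# the voice ID and output file name are placeholders.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio comes back as a stream; write it out to an MP3 file.
with open("hello.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```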

But the most significant announcement today is the launch of Amazon Lex. It’s effectively the technology underlying Alexa, Amazon’s voice-activated virtual assistant. Alexa is the basis of the Amazon Echo line of smart speakers, which have taken off — one recent report said Amazon has sold more than 5 million of them. Lex provides deep learning-powered automatic speech recognition and natural-language understanding.

“This will allow you to build all kinds of conversational applications,” AWS chief executive Andy Jassy said during today’s keynote at AWS’ re:Invent user conference in Las Vegas. “You’ll submit either a piece of text or a piece of audio. You’ll specify a response [to the input], and then it’ll return that response.”
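
In code, that request-and-response flow for the text case might look roughly like the following boto3 sketch; the bot name, alias, and user ID are hypothetical:

```python
# Rough sketch of sending a line of text to a Lex bot via the runtime API;
# botName, botAlias, and userId are hypothetical placeholders.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="OrderFlowers",   # hypothetical bot
    botAlias="prod",
    userId="demo-user-123",
    inputText="I'd like to order a dozen roses",
)

# Lex returns the recognized intent, any extracted slots, and a reply.
print(response.get("intentName"), response.get("slots"))
print(response.get("message"))
```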

Lex will be able to power chatbots on apps like Facebook Messenger and, eventually, Slack. It will also let developers tap data in existing repositories; there are connectors to Salesforce, Microsoft Dynamics, Marketo, Zendesk, Quickbooks, Twilio, and HubSpot, Jassy said. And it can trigger functions built with the AWS Lambda event-driven computing service.
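
As an illustration of that Lambda hook, a fulfillment function a Lex bot could invoke might be sketched as follows; the intent handling and reply text are invented for illustration, not drawn from AWS documentation:

```python
# Sketch of a Lambda fulfillment function a Lex bot could invoke;
# the business logic and reply text are made up for illustration.
def lambda_handler(event, context):
    intent = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    # In a real bot, this is where you'd hit Salesforce, Zendesk, etc.
    reply = f"Got it. Handling '{intent}' with slots {slots}."

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": reply},
        }
    }
```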

Chatbots are among the trendiest technologies of 2016. Facebook and Microsoft have also unveiled platforms for building chatbots this year, and now the biggest cloud provider is catching up.

The service is available in preview now in just one of AWS' data center regions, US East (Northern Virginia). It's free to make up to 10,000 text requests and 5,000 speech requests per month, and after that it costs $4 per thousand speech requests and 75 cents per thousand text requests, Barr wrote in a blog post.

To an extent, AWS leaked its own news. During an onstage interview at the AWS Public Sector Summit in Washington, D.C., in June, Jassy said that AWS would unveil a deep learning service "in the next few months." Earlier this month, The Information reported on AWS' plan to launch a deep learning service, and last week Fortune reported that AWS would expose more powerful AI tools to developers.

AWS has been laying the groundwork for this. In May Amazon open-sourced the DSSTNE deep learning framework, and earlier this month AWS announced that its deep learning framework of choice would be MXNet, not DSSTNE.
