For the past five years, Amazon has held a big conference in Las Vegas for customers of its Amazon Web Services (AWS) cloud division. Software industry people and existing customers have always paid attention. But AWS has become such a financial juggernaut for Amazon that this year the event is a bigger deal in the wider world of business. That makes this a really big week for AWS, perhaps the biggest ever.
Of course, AWS came prepared — it’s been planning this event for months. Executives announce one piece of news after another onstage, just like at Google I/O or Microsoft Build. It can leave one overwhelmed, particularly if one doesn’t know the portfolio of AWS services backward and forward.
[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']Here’s a rundown of the announcements coming out of re:Invent this year.
New instances, FPGAs, and GPUs for EC2
AWS announced the launch of new virtual machine (VM) instances that developers can rent by the hour to run their applications. This is the most predictable announcement of the 2016 re:Invent conference, as AWS tends to unveil new VM types at this event.
Rather than just releasing new VM instances backed by graphics processing units (GPUs), AWS will expose “elastic GPUs for EC2,” a way for people to attach GPU resources to their existing VM instances, AWS chief executive Andy Jassy said. There are also new F1-branded VM instances accelerated with field-programmable gate arrays (FPGAs). The AWS service lets you write code, package it up as an image, and then run it as custom logic on the FPGAs.
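For a sense of what launching one of the new instances looks like, here’s a minimal sketch using the boto3 Python SDK; the AMI ID is a placeholder for an FPGA developer image you would build or subscribe to:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one FPGA-accelerated F1 instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder FPGA developer AMI
    InstanceType="f1.2xlarge",        # one of the new F1 instance types
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```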
A DigitalOcean killer
AWS announced Amazon Lightsail, a new way for developers to quickly and easily get access to low-cost virtual private servers (VPS).
They don’t need to worry about provisioning storage, security groups, or identity and access management (IAM) when they want to just get a box to run a simple application. They can just use Amazon Lightsail now.
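Here’s roughly what that looks like with the boto3 Python SDK; the blueprint and bundle IDs below are illustrative, as the real options come from the service’s get_blueprints and get_bundles calls:

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Spin up a single low-cost VPS from an OS image and a size bundle.
lightsail.create_instances(
    instanceNames=["my-simple-app"],
    availabilityZone="us-east-1a",
    blueprintId="ubuntu_16_04",  # assumed OS blueprint ID
    bundleId="nano_1_0",         # assumed smallest bundle ID
)
```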
AWS’ first AI services, including Lex, the conversational app framework behind Alexa
The time has come. Following years of mounting interest in a type of artificial intelligence (AI) called deep learning, AWS today announced its first Amazon AI services that make use of deep learning.
[aditude-amp id="medium1" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']
There is the new Rekognition image recognition service — presumably drawing on the talent and technology from deep learning startup Orbeus, whose team Amazon hired in the past year.
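Calling the service from Python is a single request once an image is in S3; in this boto3 sketch, the bucket and key are placeholders:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect up to 10 labels in an image stored in S3.
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photos/dog.jpg"}},
    MaxLabels=10,
)
for label in result["Labels"]:
    print(label["Name"], label["Confidence"])
```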
There is also the new Polly text-to-speech (TTS) service, which supports 47 voices and 24 languages. It’s free to process up to 5 million characters a month, and after that it costs $0.000004 per character, AWS chief evangelist Jeff Barr wrote in a blog post.
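That works out to $4 per million characters after the free tier. A minimal boto3 sketch, using one of the documented voices:

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

text = "Hello from re:Invent."
response = polly.synthesize_speech(
    Text=text,
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of the 47 available voices
)

# The audio comes back as a binary stream.
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())

# Past the 5M-character free tier: $0.000004 per character.
print(f"Marginal cost: ${len(text) * 0.000004:.6f}")
```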
But the most significant announcement today is the launch of Amazon Lex. It’s effectively the technology underlying Alexa, Amazon’s voice-activated virtual assistant. Alexa is the basis of the Amazon Echo line of smart speakers, which have taken off — one recent report said Amazon has sold more than 5 million of them. Lex provides deep learning-powered automatic speech recognition and natural-language understanding.
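Developers talk to a published bot one utterance at a time; in this boto3 sketch, the bot name and alias are hypothetical, since those are defined when the bot is built:

```python
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

# Send one turn of a conversation; Lex returns the recognized intent
# and the bot's next prompt.
reply = lex.post_text(
    botName="OrderFlowers",   # hypothetical bot
    botAlias="prod",          # hypothetical alias
    userId="demo-user-42",
    inputText="I would like to order some roses",
)
print(reply["intentName"], reply.get("message"))
```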
[aditude-amp id="medium2" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']
Athena service for querying data in S3
AWS announced the launch of Amazon Athena, a new tool for running queries on data that’s stored in AWS’ widely used S3 cloud storage service.
People can use the standard Structured Query Language (SQL) with the service and don’t need to worry about setting up the infrastructure for it.
AWS doesn’t believe Athena will overlap with the querying tools available through its Elastic MapReduce (EMR) service and its Redshift data warehousing service, Jassy said during today’s keynote.
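The working model is simple: define a table over files already sitting in S3, then submit plain SQL. A boto3 sketch, assuming the table exists and using placeholder bucket names:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Run a standard SQL query against data in S3; results are written
# to the S3 output location below.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
print(query["QueryExecutionId"])
```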
[aditude-amp id="medium3" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']
PostgreSQL support in the Aurora database engine
AWS today announced that its Aurora managed cloud database engine now supports the popular open-source PostgreSQL relational database.
Amazon already lets developers store and retrieve data with PostgreSQL through its Relational Database Service (RDS), which added PostgreSQL support in 2013, and developers have always been able to install open-source PostgreSQL on AWS computing and storage infrastructure themselves. It will also be possible to migrate PostgreSQL databases from RDS to Aurora.
PostgreSQL support has been the top request from Aurora customers, Jassy said today.
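Because Aurora speaks the PostgreSQL wire protocol, existing drivers should work unchanged. A minimal sketch with psycopg2 and a placeholder cluster endpoint:

```python
import psycopg2  # standard PostgreSQL driver

# Connect to an Aurora PostgreSQL cluster endpoint (placeholder values).
conn = psycopg2.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="appdb",
    user="admin",
    password="********",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```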
[aditude-amp id="medium4" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']
The 100PB Snowmobile truck and 100TB Snowball Edge boxes to efficiently move data to the cloud
AWS also unveiled the next generation of its Snowball boxes, which customers can use to move lots of data to AWS.
The first version was announced at last year’s re:Invent with a capacity of 50TB. Then, in April of this year, AWS showed off an 80TB version. This time around, AWS decided to add computing resources, which opens up new possibilities for companies that need a Snowball.
Multiple Snowball Edge boxes — each of which has a 100TB capacity and a color touchscreen — can divvy up databases with sharding, for one thing, Jassy said, and they can sync data to S3, too. But also, they can run new AWS software called AWS Greengrass, which effectively brings the serverless event-driven computing model of AWS Lambda outside of AWS and onto other kinds of devices, including embedded devices.
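AWS hasn’t detailed Greengrass’s programming model yet, but the pitch is running ordinary Lambda-style handlers on-device. A hypothetical Python sketch, with an invented event shape for a local sensor reading:

```python
# A Lambda-style handler of the kind Greengrass is meant to run locally.
# The event shape is hypothetical: a temperature reading delivered by
# on-premises hardware rather than an AWS-side trigger.
def handler(event, context):
    reading = event.get("temperature_c")
    if reading is not None and reading > 80:
        return {"action": "shutdown", "reason": f"overheat at {reading}C"}
    return {"action": "none"}
```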
[aditude-amp id="medium5" targeting='{"env":"staging","page_type":"article","post_id":2119058,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"business,cloud,dev,","session":"D"}']
Unit testing and debugging services, and a personal health dashboard
During the second keynote, AWS announced the launch of a new service called CodeBuild, which is meant to automatically compile developers’ code and then run unit tests on it.
AWS will charge by the minute and scale the service in and out automatically based on the needs of the workload, said Amazon vice president and chief technology officer Werner Vogels. The service can also be customized.
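Once a project’s source location and build environment are configured, kicking off a build is a single API call, as in this boto3 sketch with a placeholder project name:

```python
import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

# Start a build for a preconfigured CodeBuild project.
build = codebuild.start_build(projectName="my-service-ci")
print(build["build"]["id"], build["build"]["buildStatus"])
```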
AWS is also introducing a service called X-Ray, which is meant to help developers debug their code. The service will surface performance bottlenecks, identify which services are causing issues, and show the “impact of issues for users,” Vogels said. Additionally, Vogels introduced AWS OpsWorks for Chef Automate, a fully managed version of a Chef server for automating the management of infrastructure.
And AWS is giving customers a new Personal Health Dashboard that complements the existing Service Health Dashboard. “As the name indicates, this dashboard gives you a personalized view into the performance and availability of the AWS services that you are using, along with alerts that are automatically triggered by changes in the health of the services. It is designed to be the single source of truth with respect to your cloud resource, and should give you more visibility into any issues that might affect you,” AWS chief evangelist Jeff Barr wrote in a blog post. A new AWS Health application programming interface (API) — available to customers that subscribe to AWS’ business or enterprise support services — provides programmatic access to this information.
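A sketch of what polling the Health API looks like via boto3; again, this assumes an account on a business or enterprise support plan:

```python
import boto3

health = boto3.client("health", region_name="us-east-1")

# List currently open events affecting your account's resources.
events = health.describe_events(filter={"eventStatusCodes": ["open"]})
for event in events["events"]:
    print(event["service"], event["eventTypeCode"], event["region"])
```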
DDoS mitigation services
AWS said it has turned on distributed denial of service (DDoS) attack mitigation technology, called AWS Shield Standard, for all of its customers, free of charge. It “protects you from 96 percent of the most common attacks today, including SYN/ACK floods, Reflection attacks, and HTTP slow reads,” AWS chief evangelist Jeff Barr wrote in a blog post.
To help customers prevent more sophisticated attacks, AWS is also introducing a premium tier called AWS Shield Advanced. It lets customers call in a special support team that’s available 24 hours a day and seven days a week. And customers can get notifications about attacks.
When AWS detects attacks, “we will work together with DDoS protection teams to create the right level of protection using WAF [web application firewall]. We will also keep an eye on cost, making sure you don’t incur any additional cost by using our service,” Amazon vice president and chief technology officer Werner Vogels said.
A mobile analytics tool
AWS also launched Amazon Pinpoint, a mobile analytics service. Pinpoint will help developers understand the behaviors of people using mobile apps. It lets developers send push notifications and then track their impact, Vogels said during today’s keynote.
It integrates with AWS’ existing Mobile Hub service, and it supports both iOS (Swift and Objective C) and Android apps, with optional campaign analytics and A/B testing.
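A rough boto3 sketch of sending a push through the API; the application ID and device token are placeholders, and the exact request shape here is an assumption rather than documented usage:

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# Send one push notification to a single Android device token.
pinpoint.send_messages(
    ApplicationId="0123456789abcdef0123456789abcdef",  # placeholder app ID
    MessageRequest={
        "Addresses": {"device-token-here": {"ChannelType": "GCM"}},
        "MessageConfiguration": {
            "GCMMessage": {"Title": "Sale ends today", "Body": "Tap to see deals"}
        },
    },
)
```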
AWS Glue, for automated data integration
AWS launched AWS Glue, a tool for automatically running jobs for cleaning up data from multiple sources and getting it all ready for analysis in other tools, like business intelligence (BI) software.
This type of work is typically known as extract-transform-load, or ETL. Companies including Informatica and Talend offer software for it. Now AWS has a cloud service for it.
It’s been possible to do ETL work on AWS infrastructure with services like EMR (Elastic MapReduce), and the other big public clouds have Hadoop-based tools for this sort of thing, too. But AWS Glue will make it easier. And with the help of JDBC connectors, it will be able to reach data in on-premises systems.
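Glue’s own API isn’t public yet, but the three stages it is meant to automate look like this in plain Python, with made-up file names standing in for real sources:

```python
import csv
import json

# Extract: read raw rows from a source export (a CSV stands in for a JDBC source).
with open("orders_raw.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Transform: normalize names and types, drop incomplete records.
clean = [
    {"order_id": int(r["id"]), "total_usd": float(r["total"])}
    for r in rows
    if r.get("id") and r.get("total")
]

# Load: write analysis-ready records for a downstream BI tool.
with open("orders_clean.json", "w") as f:
    json.dump(clean, f)
```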
Several enhancements to the Lambda event-driven computing service
AWS said it has added support for the C# programming language in its Lambda event-driven computing service. It also talked about a new capability called Lambda@Edge, which makes it possible to run Lambda functions at edge locations on its CloudFront content delivery network (CDN), where customers store media content around the world.
Also new: AWS Step Functions, which will allow developers to build full applications in the form of functions that are hooked up together. A visual editor makes it easy to connect multiple functions, Vogels explained onstage.
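Under the hood, a state machine is a JSON definition in the Amazon States Language. A minimal boto3 sketch with a placeholder IAM role ARN:

```python
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# A trivial two-state machine: a Pass state feeding a Succeed state.
definition = {
    "StartAt": "SayHello",
    "States": {
        "SayHello": {"Type": "Pass", "Result": "Hello", "Next": "Done"},
        "Done": {"Type": "Succeed"},
    },
}

sfn.create_state_machine(
    name="hello-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)
```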
Altogether this is a big refresh to Lambda, which AWS first introduced at re:Invent two years ago. At the time it was viewed as a revolutionary concept because it let developers do complex things without having to set up and manage the underlying computing and storage infrastructure.
AWS Batch
AWS released a preview of AWS Batch, a service for automating the deployment of batch processing jobs.
In the past decade or so, people have relied on the open-source Hadoop big data software to do batch processing, and AWS and the other public clouds have come up with managed versions of Hadoop and additional services geared toward batch and streaming workloads. Now AWS is trying to more directly meet the needs of developers who want to process lots of data automatically, even if it doesn’t happen instantly.
And it’s designed to work with containers as opposed to the more traditional virtual machines (VMs). Customers can provide the exact container images they need to run on top of the AWS EC2 computing infrastructure. (Shell scripts and Linux executables are also supported, and it will be possible to run Lambda functions in the future.) On top of that, the service can take advantage of cheaper EC2 capacity, specifically instances available on the spot market. But customers can specify the types of instances they’d like, as well as minimum and maximum compute resources.
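Submitting work then boils down to naming a queue and a job definition that points at a container image, as in this boto3 sketch with placeholder names:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit a containerized job to a preconfigured queue; the job
# definition (created separately) names the container image to run.
job = batch.submit_job(
    jobName="nightly-report",
    jobQueue="default-queue",
    jobDefinition="report-generator:1",
)
print(job["jobId"])
```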
Open-source software for building your own container scheduler
AWS has open-sourced new software called Blox that lets developers create custom schedulers for use inside AWS’ EC2 Container Service (ECS). The first two components of the Blox software, a “reference scheduler” and a service for capturing data on clusters that can then be queried, are available now on GitHub under an Apache 2.0 license.
With this move, AWS is making its container deployment service easier to tinker with, giving developers an alternative to existing third-party schedulers, such as Google-backed Kubernetes, Mesosphere’s Mesos, or Docker’s own Swarm.