
AWS officially launches P2 GPU-backed instances

Matt Wood, general manager of product strategy for Amazon Web Services (AWS), takes the stage at the AWS Santa Clara Summit in July 2016.

Image Credit: Screenshot

Public cloud market leader Amazon Web Services (AWS) today officially announced the availability of new P2 virtual machine instances that feature graphics processing units (GPUs).

VentureBeat reported earlier this month that AWS was testing the instances ahead of an impending launch. Now that’s happened.

[aditude-amp id="flyingcarpet" targeting='{"env":"staging","page_type":"article","post_id":2068536,"post_type":"story","post_chan":"none","tags":null,"ai":false,"category":"none","all_categories":"big-data,business,cloud,dev,","session":"A"}']

“These instances were designed to chew through tough, large-scale machine learning, deep learning, computational fluid dynamics (CFD), seismic analysis, molecular modeling, genomics, and computational finance workloads,” AWS chief evangelist Jeff Barr wrote in a blog post. Deep learning, a trendy type of artificial intelligence, often involves using GPU-backed servers to train neural nets on lots of data so they can make inferences about new data, and AWS is now providing more powerful infrastructure for that kind of work.

The infrastructure helps bring AWS closer to competitors like Microsoft Azure and IBM SoftLayer when it comes to offering powerful GPU resources in the cloud, so people don’t need to worry about maintaining them on premises. (Microsoft’s Azure N-Series GPU instances are currently in preview.)


The instances use up to eight Nvidia Tesla K80 GPUs, each of which contains two Nvidia GK210 GPUs. “Each GPU provides 12GB of memory (accessible via 240 GB/second of memory bandwidth), and 2,496 parallel processing cores,” Barr wrote.

There are three sizes for the P2: p2.xlarge (1 GPU, 4 vCPUs, 61 GiB of RAM), p2.8xlarge (8 GPUs, 32 vCPUs, 488 GiB of RAM), and p2.16xlarge (16 GPUs, 64 vCPUs, and 732 GiB of RAM). The instances are available now in AWS’ US East (Northern Virginia), US West (Oregon), and Europe (Ireland) data center regions. (Azure’s N-Series instances are available with no more than four GPUs.)
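For readers who want to try the new sizes, launching one works like any other EC2 instance type, just with a P2 instance type string. Below is a minimal sketch using the boto3 SDK; the AMI ID and key pair name are placeholders, not values from the announcement.

```python
# Minimal sketch: launching a single-GPU P2 instance with boto3.
# The AMI ID and key pair name below are placeholders, not real values.
import boto3

# One of the three launch regions named above (US East, Northern Virginia).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: any AMI you have access to
    InstanceType="p2.xlarge",    # 1 GPU, 4 vCPUs, 61 GiB of RAM
    KeyName="my-key-pair",       # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The same call with `InstanceType="p2.8xlarge"` or `"p2.16xlarge"` requests the larger sizes, subject to the account's instance limits in that region.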

Also today, AWS announced the introduction of a deep learning Amazon Machine Image (AMI) that can be used to launch VM instances on AWS. The AMI comes with the Caffe, MXNet, TensorFlow, Theano, and Torch open-source deep learning frameworks installed, so customers don’t need to worry about getting them running.
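Once an instance is launched from the AMI, a quick sanity check is to confirm that the preinstalled frameworks can see the GPUs. A rough sketch follows, run from a Python session on the instance; the exact framework versions shipped on the AMI may differ.

```python
# Rough check that preinstalled frameworks see the instance's GPUs.
# Assumes an interactive session on a P2 instance; AMI versions may vary.
import mxnet as mx
from tensorflow.python.client import device_lib

# MXNet: allocate a small array directly on GPU 0 and pull it back to the host.
print(mx.nd.ones((2, 3), ctx=mx.gpu(0)).asnumpy())

# TensorFlow: list local devices and keep only the GPUs.
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print(gpus)
```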

Update on September 30: Added information about Microsoft’s Azure GPU instances.
