
Amazon cloud outage takes down Reddit, Airbnb, Flipboard, Coursera, & more

Image Credit: Reddit


Amazon’s EC2 cloud infrastructure has suffered yet another outage, partially taking down web services including Reddit, Airbnb, Flipboard, GetGlue, Coursera, and more.


Amazon’s Service Health Dashboard, which shows the status of its cloud services, notes the following about EC2 in Northern Virginia:

10:38 AM PDT We are currently investigating degraded performance for a small number of EBS volumes in a single Availability Zone in the US-EAST-1 Region.

11:11 AM PDT We can confirm degraded performance for a small number of EBS volumes in a single Availability Zone in the US-EAST-1 Region. Instances using affected EBS volumes will also experience degraded performance.

11:26 AM PDT We are currently experiencing degraded performance for EBS volumes in a single Availability Zone in the US-EAST-1 Region. New launches for EBS backed instances are failing and instances using affected EBS volumes will experience degraded performance.

12:32 PM PDT We are working on recovering the impacted EBS volumes in a single Availability Zone in the US-EAST-1 Region.

1:02 PM PDT We continue to work to resolve the issue affecting EBS volumes in a single availability zone in the US-EAST-1 region. The AWS Management Console for EC2 indicates which availability zone is impaired.

EC2 instances and EBS volumes outside of this availability zone are operating normally. Customers can launch replacement instances in the unaffected availability zones but may experience elevated launch latencies or receive ResourceLimitExceeded errors on their API calls, which are being issued to manage load on the system during recovery. Customers receiving this error can retry failed requests.
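In practical terms, Amazon’s guidance above boils down to a retry loop: launch the replacement instance in an unaffected Availability Zone and back off briefly whenever the API answers with a ResourceLimitExceeded error. Below is a minimal sketch of that pattern using the boto3 Python SDK; the AMI ID, instance type, and zone are placeholders, and the exponential backoff is just one reasonable way to space out retries, not Amazon’s prescribed method.

import time

import boto3
from botocore.exceptions import ClientError

# Hypothetical client for the region named in Amazon's updates (us-east-1).
ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_replacement(ami_id, zone, max_attempts=5):
    """Launch a replacement instance in an unaffected Availability Zone,
    retrying with exponential backoff on ResourceLimitExceeded throttling."""
    delay = 2  # seconds before the first retry
    for attempt in range(1, max_attempts + 1):
        try:
            return ec2.run_instances(
                ImageId=ami_id,            # placeholder AMI
                InstanceType="m1.small",   # placeholder instance type
                MinCount=1,
                MaxCount=1,
                Placement={"AvailabilityZone": zone},
            )
        except ClientError as err:
            if (err.response["Error"]["Code"] == "ResourceLimitExceeded"
                    and attempt < max_attempts):
                time.sleep(delay)  # back off, then retry as Amazon suggests
                delay *= 2
            else:
                raise

# Example: retry the launch in a zone the EC2 console shows as healthy.
# launch_replacement("ami-12345678", "us-east-1b")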

Amazon famously suffered a major outage back in June, which took down popular services including Netflix, Instagram, Pinterest, and Heroku. That outage was blamed on a strong thunderstorm that knocked out power at Amazon’s Northern Virginia data center. Before that, Amazon also suffered large EC2 outages in April and August of 2011.


Several services took to Twitter to blame their issues on Amazon’s outage, while others simply apologized for being down.


Update at 3:16 p.m. PT: A member of the hacker collective Anonymous has claimed responsibility for attacking EC2, but we find the claims suspect, and Amazon has denied that an attack occurred.
