Amazon Web Services outages

Amazon Web Services outages

Year | Month and date (if available) | Event type | Details
2011 | April 21 | Outage (partial, availability zone) | At 12:47 AM PDT on April 21, an invalid traffic shift made prior to a network upgrade caused EBS nodes to lose connectivity to one another within an availability zone of the US-East-1 region. Although the errors were eventually localized to a single availability zone, the resulting connectivity problems impacted EBS volumes and EC2 instances in multiple availability zones and caused issues for customers until full recovery at 3:00 PM PDT on April 24.[1][2]
2011 | August 7 | Outage | Power was lost in the EU West region (Ireland), causing disruption and outage; per AWS, the "service disruption began at 10:41 AM PDT on August 7th".[3] This event was distinct from, but occurred around the same time as, the US outage described in the next entry. Due to follow-up issues, full restoration of services such as EBS and RDS took on the order of days.[4]
2011 | August 8 | Outage | EC2 went down around 10:25 PM Eastern in Amazon's US East region. The outage lasted roughly 30 minutes but took down the websites and services of many major Amazon cloud customers, including Netflix, Reddit, and Foursquare. The issue occurred in the networks that connect the Availability Zones to the Internet and was primarily caused by a software bug in a router.[5]
2012 | June 29 | Service disruption (availability zone) | A major disruption occurred to the EC2, EBS, and RDS services in a single availability zone, due to a large-scale electrical storm that swept through the Northern Virginia area.[6]
2012 | October 22 | Outage | A major outage occurred, due to a latent memory leak bug in an operational data collection agent, affecting many sites such as Reddit, Foursquare, and Pinterest.[7]
2012 | December 24 | Outage | AWS suffered an outage, causing websites and services such as Netflix streaming video to be unavailable for customers in the Northeastern United States.[8][9]
2013 | September 13 | Outage (Availability Zone) | The AWS US-East-1 region experienced network connectivity issues affecting instances in a single Availability Zone, along with increased error rates and latencies for the EBS APIs and increased error rates for EBS-backed instance launches.[10]
2014 | November 26 | Service disruption | The Amazon CloudFront DNS service went down for about two hours, starting at 7:15 PM EST; it was back up just after 9 PM. Some websites and cloud services were knocked offline as the content delivery network failed to fulfill DNS requests during the outage. A relatively minor incident, but notable because it involved the world's biggest and longest-running cloud.[11]
2015 | September 20 | Outage (DynamoDB, Availability Zone) | The Amazon DynamoDB service experienced an outage in an availability zone of the us-east-1 (North Virginia) region, due to a power outage and inadequate failover procedures. The outage, which occurred on a Sunday morning, lasted about five hours (with some residual impact until Monday) and affected a number of related Amazon services, including Simple Queue Service, EC2 Auto Scaling, Amazon CloudWatch, and the online AWS console.[12] A number of customers were negatively affected, including Netflix, which nonetheless recovered quickly because of its strong disaster recovery procedures.[13]
2016 | June 5 | Outage | AWS Sydney experienced an outage for several hours after severe thunderstorms in the region caused a power outage at the data centers.[14][15][16]
2017 | February 28 | Outage | Amazon experienced an outage of S3 in us-east-1.[17] There were also related outages for other services in us-east-1, including CloudFormation, Auto Scaling, Elastic MapReduce, Simple Email Service, and Simple Workflow Service. A number of websites and services using S3, such as Medium, Slack, Imgur, and Trello, were affected. AWS's own status dashboard initially failed to reflect the outage properly because of a dependency on S3.[18][19][20] On March 2, AWS revealed that the outage had been caused by an incorrect parameter passed in by an authorized employee while running an established playbook, which ended up removing more servers than the employee intended.[21]
2018 | March 2 | Service degradation | Starting at 6:25 AM PST, Direct Connect experienced connectivity issues related to a power outage in the US-East-1 region, causing service interruptions for customers trying to reach their EC2 instances. The issue was fully resolved by 10:26 AM PST.[22]
2018 | May 31 | Outage (Availability Zone) | Beginning at 2:52 PM PDT, a small percentage of EC2 servers lost power in a single Availability Zone in the US-EAST-1 region. This resulted in some impaired EC2 instances and degraded performance for some EBS volumes in the affected Availability Zone. Power was restored at 3:22 PM PDT.[23]
2019 | August 23 | Outage | A number of EC2 servers in the Tokyo region shut down because of overheating at 12:36 PM local time, caused by a failure in the data center control and cooling system.[24]
2019 | August 31 | Outage and data loss | A US-EAST-1 data center suffered a power failure at 4:33 AM local time, and the backup generators failed at 6 AM. According to AWS, this affected 7.5 percent of the EC2 instances in one of the ten data centers in one of the six Availability Zones in US-EAST-1. After power was restored, however, a number of EBS volumes, which store the filesystems of EC2 cloud servers, were permanently unrecoverable. This caused downtime for companies such as Reddit.[25][26][27]
2019 | October 22-23 | Service degradation (DDoS) | AWS sustained a distributed denial-of-service attack that caused intermittent DNS resolution errors for its Route 53 DNS service from 10:30 AM PST to 6:30 PM PST.[28]
2020 | November 25 | Outage | Beginning at 9:52 AM PST, the Kinesis Data Streams API became impaired in the US-EAST-1 region, preventing customers from reading or writing data.[29]

References


  1. "Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region". aws.amazon.com. 2011-04-29. Retrieved 2018-11-13.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  2. "Amazon EC2 Goes Dark In Morning Cloud Outage". CRN. 2011-04-21. Retrieved 2018-11-13.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  3. "Summary of the Amazon EC2, Amazon EBS, and Amazon RDS Service Event in the EU West Region". AWS. 2011-08-16. Retrieved 2021-03-19.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  4. "Amazon admits lightning didn't strike its Dublin Data center, but a series of errors did". The Sociable. 2011-08-16. Retrieved 2021-03-19.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  5. "Amazon Offers Explanations, Apologies For Dual Cloud Outages". CRN. 2011-08-16. Retrieved 2018-11-13.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  6. "Summary of the AWS Service Event in the US East Region". Aws.amazon.com. 2012-07-02. Retrieved 2018-08-29.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  7. "Summary of the October 22, 2012 AWS Service Event in the US-East Region". aws.amazon.com. 2012-10-22. Retrieved 2013-07-17.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  8. "Summary of the December 24, 2012 Amazon ELB Service Event in the US-East Region". aws.amazon.com. 2012-12-24. Retrieved 2018-08-30.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  9. Bishop, Bryan (24 December 2012). "Netflix streaming down on some devices due to Amazon issues". The Verge. Retrieved 5 February 2013.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  10. "Uh oh. Amazon U.S. East is in trouble again". Gigaom. 2013-09-13. Retrieved 2018-11-13.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  11. "AWS CloudFront wobbles at worst possible time". The Register. November 27, 2014. Retrieved November 12, 2018.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  12. "Summary of the Amazon DynamoDB Service Disruption and Related Impacts in the US-East Region". Amazon Web Services. September 21, 2015. Retrieved December 5, 2016.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  13. Template:Cite magazine
  14. "Summary of the AWS Service Event in the Sydney Region". aws.amazon.com. 2016-06-05. Retrieved 2018-08-29.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  15. Chirgwin, Richard (June 5, 2016). "AWS endures extended outage in Australia. Heavy clouds take out clouds". The Register. Retrieved December 5, 2016.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  16. Coyne, Allie (June 9, 2016). "Failure in power redundancy triggered AWS Sydney outage. Failure in power redundancy triggered AWS Sydney outage". Retrieved December 4, 2016.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  17. "Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region". 2017-02-28. Retrieved 2018-08-30.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  18. Etherington, Darrell (February 28, 2017). "Amazon AWS S3 outage is breaking things for a lot of websites and apps". TechCrunch. Retrieved February 28, 2017.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  19. "Amazon's cloud service has outage, disrupting sites". USA Today. February 28, 2017. Retrieved February 28, 2017.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  20. Condon, Stephanie (February 28, 2017). "AWS investigating S3 problem at major data center location. AWS is investigating a problem with S3 storage in its US-East region, its oldest data center, which has impacted several businesses". ZDNet. Retrieved February 28, 2017.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  21. Novet, JOrdan (March 2, 2017). "AWS apologizes for February 28 outage, takes steps to prevent similar events". VentureBeat. Retrieved March 2, 2017.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  22. "Hundreds of Enterprise Services Reportedly Hit by AWS Outage". The Register. 2018-03-05. Retrieved 2019-02-01.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  23. "AWS outage killed some cloudy servers, recovery time is uncertain". The Register. 2018-06-01. Retrieved 2018-11-13.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  24. "Summary of the Amazon EC2 and Amazon EBS Service Event in the Tokyo (AP-NORTHEAST-1) Region". AWS. Retrieved 2020-12-02.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  25. "Amazon AWS Outage". WhizLabs. 2019-09-23. Retrieved 2020-12-02.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  26. "Amazon AWS Outage Shows Data in the Cloud is Not Always Safe". BleepingComputer. 2019-09-05. Retrieved 2020-12-02.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  27. "AWS EBS Volumes Aren't Safe from Failure, Backup to S3". CloudSavvyIT. 2020-07-27. Retrieved 2020-12-02.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  28. "AWS hit by DDoS attack dragging half of web down". CRN. 2019-10-23. Retrieved 2020-12-02.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
  29. "Prolonged AWS outage has taken down a big chunk of the internet, recovery may take 'a few hours'". The Verge. 25 November 2020. Retrieved 26 November 2020.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
