Experiencing failures in alert creation or update for Classic alerts – 12/18 – Resolved

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

Final Update: Wednesday, 18 December 2019 17:26 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 12/18, 16:40 UTC. Our logs show the incident started on 12/17, 17:00 UTC. During the 23 hours and 40 minutes it took to resolve the issue, 90% of customers in South Central US may have received failure notifications when performing service management operations - such as create, update, delete, and read - for classic metric alerts hosted in this region.
  • Root Cause: Engineers determined that a recent configuration change caused a backend service responsible for processing service requests to become unhealthy, preventing requests from completing.
  • Mitigation: Engineers performed a change to the service configuration to mitigate the issue.
  • Incident Timeline: 23 hours & 40 minutes - 12/17, 17:00 UTC through 12/18, 16:40 UTC
We understand that customers rely on Azure Monitor as a critical service and apologize for any impact this incident caused.


Initial Update: Wednesday, 18 December 2019 16:05 UTC

We are aware of issues within Classic Alerts and are actively investigating. Some customers in South Central US may experience failures when creating or updating alerts.
  • Workaround: None
  • Next Update: Before 12/18 18:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience.

