This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.
Final Update: Thursday, 15 October 2020 19:49 UTC
We've confirmed that all systems are back to normal with no customer impact as of 10/15, 19:40 UTC. Our logs show the incident started on 10/15, 18:00 UTC, and that during the one hour and forty minutes it took to resolve the issue, 1.1% of customers in the East US region experienced alert failures, delayed alerts, and query failures.
- Root Cause: A back-end component became unresponsive under unexpectedly heavy load. The component self-healed.
- Incident Timeline: 1 Hour & 40 minutes - 10/15, 18:00 UTC through 10/15, 19:40 UTC
-Jack Cantwell
Initial Update: Thursday, 15 October 2020 19:17 UTC
We are aware of issues within Log Search Alerts and Log Analytics query and are actively investigating. Some customers may experience delayed or failed alerts as well as errors while querying Log Analytics data in the Azure portal.
- Next Update: Before 10/15 20:30 UTC
-Jack Cantwell