This post has been republished via RSS; it originally appeared at: Microsoft Tech Community - Latest Blogs - .
Final Update: Thursday, 16 December 2021 06:01 UTC
We've confirmed that all systems are back to normal with no customer impact as of 12/16, 05:50 UTC. Our logs show the incident started on 12/16, 01:32 UTC, and that during the 4 hours & 18 minutes it took to resolve the issue, some customers using Azure Log Analytics and/or Sentinel may have experienced ingestion latency, data gaps, and incorrect alert activation in multiple regions (West US, West US2, South East Asia, East US, East US2, Japan East, West Europe, Central US).
Root Cause: The failure was due to an issue in one of our dependent services.
Incident Timeline: 4 Hours & 18 minutes - 12/16, 01:32 UTC through 12/16, 05:50 UTC
We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.
-Srikanth
Initial Update: Thursday, 16 December 2021 05:16 UTC
We are aware of issues within Log Analytics and are actively investigating. Some customers may experience ingestion latency, data gaps, and incorrect alert activation in multiple regions.
- Workaround: None
- Next Update: Before 12/16 08:30 UTC
-Anmol