This post has been republished via RSS; it originally appeared at: Azure Database Support Blog articles.
Today, we worked on a service request where our customer reported high latency while running a single query. I would like to share with you the lessons learned from that case.
The first thing we saw is that our customer was running the following query.
Running it, we saw that the query took 400 seconds to complete. Why?
- Opening the XML of the execution plan, or the properties of the SELECT operator, we could see that several statistics had been updated during the execution of the query.
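As a sketch of how to review this without opening each plan by hand (the LIKE filter text is a placeholder you would replace with a fragment of the slow query), the statistics the optimizer used, their last update time, and their modification counts are exposed in the OptimizerStatsUsage element of cached plans on recent versions of SQL Server and Azure SQL Database:

```sql
-- Read statistics usage recorded inside cached execution plans
-- (OptimizerStatsUsage element; available in recent SQL Server builds).
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
    st.text                                   AS query_text,
    si.value('@Statistics', 'nvarchar(200)')  AS stats_name,
    si.value('@Table', 'nvarchar(200)')       AS table_name,
    si.value('@LastUpdate', 'datetime2')      AS last_update,
    si.value('@ModificationCount', 'bigint')  AS modification_count,
    si.value('@SamplingPercent', 'float')     AS sampling_percent
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY qp.query_plan.nodes('//OptimizerStatsUsage/StatisticsInfo') AS x(si)
WHERE st.text LIKE '%<fragment of the slow query>%';
```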
Checking the number of rows, the number of statistics/indexes, the sampled-statistics configuration, MAXDOP, SLO, buffer, Cardinality Estimation (SQL Server) - SQL Server | Microsoft Learn, etc. of the database, we found that almost all of the time was spent updating the statistics. We could also see other waits reported in sys.dm_os_wait_stats (Transact-SQL) - SQL Server | Microsoft Learn, such as PAGEIOLATCH and WAIT_ON_SYNC_STATISTICS_REFRESH.
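A quick way to check for these waits is to query sys.dm_os_wait_stats directly; note that the counters are cumulative since the last restart (or since the wait statistics were cleared):

```sql
-- Snapshot waits related to synchronous statistics refresh and data-page I/O.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WAIT_ON_SYNC_STATISTICS_REFRESH'
   OR wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
```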
We have several options to mitigate this: use NORECOMPUTE on the statistics if we have our own maintenance plan, use the incremental statistics option if the table is partitioned, enable asynchronous auto update statistics, etc. But the right choice depends on the cardinality estimates, the maintenance window for updating statistics, the sample size to use, and so on.
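For illustration (the database, table, and statistics names below are placeholders), two of these mitigations look like this:

```sql
-- 1) Let queries compile with the existing statistics while the refresh
--    happens in the background (asynchronous auto update statistics).
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- 2) Exclude a statistic from automatic recompute because our own
--    maintenance plan takes care of refreshing it.
UPDATE STATISTICS dbo.YourTable YourStatsName WITH FULLSCAN, NORECOMPUTE;
```

Keep in mind that NORECOMPUTE shifts the responsibility for keeping that statistic fresh entirely onto your maintenance job.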
Also, I would like to share that when the Auto Update Statistics database option is enabled, SQL Server checks whether the statistics are outdated every time it compiles or executes a query, and updates them if needed. This database option is enabled by default.
How does SQL Server know whether the statistics are outdated? Based on the number of changes performed by INSERT, UPDATE, DELETE and MERGE commands: SQL Server counts how many times the statistics columns were modified.
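This modification counter is visible per statistic through sys.dm_db_stats_properties (the table name below is a placeholder):

```sql
-- Per-statistic row counts, last update time, and the number of
-- modifications to the leading statistics column since the last update.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.YourTable');
```

Comparing modification_counter against the thresholds below tells you which statistics are close to triggering an automatic update.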
There are different scenarios in which SQL Server will update the statistics; you can find more information in Statistics - SQL Server | Microsoft Learn:
- If the table is empty, SQL Server marks the statistics as outdated when you add data.
- When a table has fewer than 500 rows, SQL Server marks the statistics as outdated after every 500 changes.
- Prior to SQL Server 2016, or on SQL Server 2016 with database compatibility level < 130, the statistics are marked as outdated after 500 + (20% of the total number of rows in the table) changes.
- Starting with SQL Server 2016 and compatibility level 130, the statistics update threshold on large tables becomes dynamic and depends on the size of the table: it is the lower of 500 + 20% of the rows and SQRT(1000 * rows). The more rows the table has, the lower the threshold as a percentage of the table. On large tables with billions of rows, changing just a small percentage of the total number of rows will trigger an update.
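To make the difference concrete, this small query compares the legacy threshold with the dynamic SQRT(1000 * rows) threshold for a few table sizes (the effective threshold under compatibility level 130+ is the smaller of the two):

```sql
-- Legacy threshold (500 + 20% of rows) vs. dynamic threshold (SQRT(1000 * rows)).
SELECT n                   AS table_rows,
       500 + 0.20 * n      AS legacy_threshold,
       SQRT(1000.0 * n)    AS dynamic_threshold
FROM (VALUES (10000), (1000000), (100000000), (1000000000)) AS t(n);
```

For a billion-row table, the dynamic threshold is roughly one million changes (about 0.1% of the table) instead of the two hundred million changes the legacy formula would require.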
Also, I would like to mention that when using partitioning, SQL Server does not maintain individual statistics at the partition level by default. The statistics histogram always has at most 200 steps, regardless of whether the data is partitioned.
Starting with SQL Server 2014, we have a feature called incremental statistics, which allows us to create per-partition statistics. When you enable it, SQL Server starts to track the number of changes to the statistics columns at the partition level and marks the statistics on that partition as outdated. Subsequent statistics updates then refresh the statistics on the individual partition rather than on the entire table. This behavior is enabled with the STATISTICS_INCREMENTAL index option and the INCREMENTAL statistics option, respectively.
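As a sketch (database, table, statistics, index names, and the partition number are placeholders), these options are enabled like this:

```sql
-- Make auto-created statistics incremental by default on partitioned tables.
ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON (INCREMENTAL = ON);

-- Create a per-partition statistic explicitly.
CREATE STATISTICS YourStatsName ON dbo.YourPartitionedTable (YourColumn)
WITH INCREMENTAL = ON;

-- Rebuild an index so its statistics are maintained per partition.
ALTER INDEX YourIndexName ON dbo.YourPartitionedTable
REBUILD WITH (STATISTICS_INCREMENTAL = ON);

-- Refresh statistics for a single partition only.
UPDATE STATISTICS dbo.YourPartitionedTable (YourStatsName)
WITH RESAMPLE ON PARTITIONS (3);
```

With this in place, modifying rows in one partition only triggers a refresh of that partition's statistics, which can dramatically reduce the kind of update time our customer was seeing.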