Learnings from the Trenches – Azure SQL Database Best Practice Performance recommendations

This post has been republished via RSS; it originally appeared at: Azure Database Support Blog articles.


This article will guide you through a series of recommendations to improve performance on your database. In addition, it will refer to various public documents that can complement the actions described in this article.

Many assume Azure SQL Database will always deliver optimal performance out of the box; however, that's not always the case. Azure SQL Database requires end users to update statistics and find missing indexes with a frequency that matches how much data manipulation happens in their tables. The fewer logical reads your queries perform, the less resource consumption your Azure SQL Database requires. Updating statistics and finding missing indexes, among other database performance tuning methods, are recommended mainly to help you decrease the number of logical reads and improve overall database performance.


I am a firm believer that there is no SQL database unable to keep up with a given workload, only a poorly designed maintenance plan. On average, 80%–90% of the performance-related support cases I have been involved in were created because of a non-existent maintenance plan, or even a lack of awareness that such a plan is needed.

Azure SQL Database's unit of measure is the DTU (Database Transaction Unit), which is composed of CPU, Log IO, and Data IO. The closer any of these DTU components gets to 100% consumption, the more issues your database will experience. This also applies if your database is on a vCore service tier, as the same internal metrics are used.

There are at least two ways to approach this situation. The recommendation is for your maintenance routine to include the action plan below, which covers updating statistics, finding missing indexes, and frequent monitoring. Alternatively, you can increase the service tier to add more hardware capability to your database while you work to decrease the workload footprint, at least until you bring the situation under control. Bear in mind that the maintenance tasks themselves will increase DTU consumption, so it is suggested to increase the service tier while you complete the maintenance routine, or to perform it at a time of day (or night) when your application doesn't require the horsepower.

The article below goes into detail on how to identify performance issues if you notice degradation in your database's performance.
Monitor performance using DMVs - Azure SQL Database & SQL Managed Instance

Moreover, to monitor the database for high DTU components usage (CPU, LogIO, DataIO) you can use the following DMVs:

This query returns 5-minute averages of resource usage for your database(s), so you can see whether there was an extended period of high resource usage, as well as the size of the database over time.


-- Run against the master database
SELECT * FROM sys.resource_stats
WHERE database_name = 'your_database_name'

Unlike the query above, this one only provides information about resource usage for the past hour, and it must be run directly against the database. It provides a more granular view of the resource percentages used, broken into 15-second intervals.


-- Run against the database with the performance issue
SELECT * FROM sys.dm_db_resource_stats
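As a quick check, you can also aggregate those 15-second samples to see the average and peak percentage for each DTU component over the retained window. This is a minimal sketch using columns that exist in sys.dm_db_resource_stats:

```sql
-- Average and peak DTU component usage over the retained window (~1 hour)
-- Run against the database with the performance issue
SELECT
    AVG(avg_cpu_percent)       AS avg_cpu,
    MAX(avg_cpu_percent)       AS max_cpu,
    AVG(avg_data_io_percent)   AS avg_data_io,
    MAX(avg_data_io_percent)   AS max_data_io,
    AVG(avg_log_write_percent) AS avg_log_write,
    MAX(avg_log_write_percent) AS max_log_write
FROM sys.dm_db_resource_stats;
```

If any of these peaks sits near 100% for sustained periods, that component is the bottleneck to investigate first.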



Recommended Maintenance Plan

1.- Update the data-distribution statistics on all your tables, with a 100% sampling rate (FULLSCAN). This data is used by the SQL query optimizer to select an execution plan for each query, and it's vital that statistics are up to date to get effective execution plans.

We recommend updating statistics on a regular basis, based on how often data changes or is inserted. It may be necessary to update them daily, every other day, or weekly.
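For a single table, the command looks like this (the table name here is a hypothetical example):

```sql
-- Update statistics for one table with a full scan of its data
UPDATE STATISTICS [dbo].[SalesOrders] WITH FULLSCAN;
```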

Connect to your database and run the following to update all stats in the database in one step.


-- Update ALL statistics WITH FULLSCAN
-- This builds an UPDATE STATISTICS command for every table in your database.
-- Uncomment the EXEC sp_executesql line to actually update stats instead of just printing the commands.

DECLARE @tableSchema NVARCHAR(128)
DECLARE @tableName NVARCHAR(128)
DECLARE @Statement NVARCHAR(300)

DECLARE updatestats CURSOR FOR
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'

OPEN updatestats

FETCH NEXT FROM updatestats INTO @tableSchema, @tableName

WHILE @@FETCH_STATUS = 0
BEGIN
   SET @Statement = 'UPDATE STATISTICS [' + @tableSchema + '].[' + @tableName + '] WITH FULLSCAN'

   PRINT @Statement -- comment this print statement out whenever you are ready to execute the command below

   --EXEC sp_executesql @Statement -- remove the comment at the beginning of this line to run the commands

   FETCH NEXT FROM updatestats INTO @tableSchema, @tableName
END

CLOSE updatestats
DEALLOCATE updatestats







2.- Detecting missing indexes.

After several hours of normal activity, check to see if the SQL engine reports any missing indexes. Proper indexing drastically reduces the time and read IOPS needed to query data.

The following query returns information from the DMVs (Dynamic Management Views) related to Missing indexes, which report the list of indexes that the Query Optimizer points out as necessary in order to improve the current average workloads.

The column "improvement_measure" is an indication of the estimated improvement that will occur if the index is created. It is a unitless number and its meaning is relative to the same number for the other indexes suggested in the list.

This measure is a combination of columns avg_total_user_cost, avg_user_impact, user_seeks, and user_scans in sys.dm_db_missing_index_group_stats. This query should be executed against the user database that is being analyzed.

---------------------- Missing Indexes -----------------------------------------------

DECLARE @runtime datetime
SET @runtime = GETDATE()

SELECT CONVERT (varchar, @runtime, 126) AS runtime, mig.index_group_handle, mid.index_handle,
    CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) AS improvement_measure,
    'CREATE INDEX missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
    + ' ON ' + mid.statement
    + ' (' + ISNULL (mid.equality_columns,'')
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
    + ISNULL (mid.inequality_columns, '')
    + ')'
    + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
    migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE CONVERT (decimal (28,1), migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans)) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC



The create_index_statement column contains CREATE INDEX statements that you can execute directly to create the suggested indexes. The list is ordered by improvement_measure from highest to lowest impact. Normally we create the group of new indexes listed as having the most impact relative to the others. You can change the "missing_index…" names to names of your own choosing.
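For example, a generated statement can be renamed to follow your naming convention before you run it (the table, column, and index names here are hypothetical):

```sql
-- Instead of the generated name (e.g. missing_index_12_34), use a descriptive one
CREATE INDEX IX_Orders_CustomerId
ON [dbo].[Orders] ([CustomerId])
INCLUDE ([OrderDate], [TotalDue]);
```

Keeping key columns (equality first, then inequality) in the index key and the remaining suggested columns in INCLUDE matches the shape of the statements the DMV generates.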


In addition, you can leverage the SQL Database Advisor in the Azure portal, which lets you review and implement recommendations for existing databases that can improve performance for current workloads.

SQL Database Advisor using the Azure portal


SQL Database Advisor

To get recommendations, a database needs about a day of usage, and there needs to be some consistent activity. The SQL Database Advisor can optimize for consistent query patterns more easily than for random, spotty bursts of activity. If recommendations are not available, the Performance recommendations page should display a message explaining why.

3.- Monitor the database behavior and top consumer queries/procedures.

You can leverage some system views to review the list of queries and/or procedures that have recently consumed the most resources (duration, CPU, reads, writes, etc.).

This query should be executed against the user database that is being analyzed.

-----------------Top 10 consumer queries-------------------------------------

SELECT TOP 10 query_stats.query_hash AS "Query Hash",
    SUM(query_stats.total_elapsed_time) / SUM(query_stats.execution_count) / 1000 AS "Avg Duration_ms",
    SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) / 1000 AS "Avg CPU_ms",
    SUM(query_stats.total_logical_reads) / SUM(query_stats.execution_count) AS "Avg Logical Reads",
    SUM(query_stats.total_logical_writes) / SUM(query_stats.execution_count) AS "Avg Logical Writes",
    SUM(query_stats.total_physical_reads) / SUM(query_stats.execution_count) AS "Avg Physical Reads",
    SUM(query_stats.execution_count) AS "Execution Count",
    MIN(query_stats.statement_text) AS "Statement Text"
FROM
    (SELECT QS.*,
     SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
     ((CASE QS.statement_end_offset
         WHEN -1 THEN DATALENGTH(ST.text)
         ELSE QS.statement_end_offset END
             - QS.statement_start_offset)/2) + 1) AS statement_text
     FROM sys.dm_exec_query_stats AS QS
     CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
GROUP BY query_stats.query_hash
ORDER BY "Avg CPU_ms" DESC




3a.- You can leverage the Query Store (QDS) (recommended)

Query Store is a very powerful SQL engine feature designed to deliver insight into query performance by keeping track of different plans and execution statistics for the queries executed on the server.

It is a database-scoped persistent store of query workload history collecting query text, query plan, and runtime stats in the user's DB for later analysis.

Query Store captures not only the query plans but also the execution statistics. It captures this information by time window, allowing you to perform historical analysis even after SQL Server is restarted.

Monitoring Performance by Using the Query Store

Best Practices with the Query Store
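Query Store is enabled by default on Azure SQL Database; if you are working with a SQL Server instance where it may be off, you can enable it and verify its state with T-SQL like the following:

```sql
-- Enable Query Store on the current database (already on by default in Azure SQL Database)
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Verify its current state
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;
```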

You have several ways of accessing the information.

--By directly querying the QDS views, using T-SQL:

--Top consumers (ordered by CPU by default) with SQL text and other useful info.
--You can order by a different metric and narrow the query down to the last x hours or days, or a specific timeframe.

SELECT TOP 10
    qt.query_sql_text,
    rs.count_executions as [Execution count],
    rs.avg_duration/1000 as [Avg_Duration(ms)],
    rs.max_duration/1000 as [Max_Duration(ms)],
    rs.avg_cpu_time/1000 as [Avg_CPU_time(ms)],
    rs.max_cpu_time/1000 as [Max_CPU_time(ms)],
    rs.avg_logical_io_reads as [Avg_Logical_Reads],
    rs.max_logical_io_reads as [Max_Logical_Reads],
    qt.query_text_id, p.plan_id, GETUTCDATE() AS CurrentUTCTime,
    rs.last_execution_time
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
--WHERE rs.last_execution_time between '2/17/2016 13:00' and '2/17/2016 14:30'
--WHERE rs.last_execution_time > DATEADD(day, -7, GETUTCDATE())
ORDER BY rs.avg_cpu_time DESC;
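Once you identify a query that regressed because of a plan change, Query Store also lets you force the known-good plan. The IDs below are hypothetical; take the real query_id and plan_id from the Query Store views:

```sql
-- Force plan 7 for query 42, as reported by the Query Store views
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- To stop forcing the plan later:
--EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```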


--By using the GUI in SQL Server Management Studio:


We always recommend using the latest version of SSMS; you can download and install the latest version of SQL Server Management Studio.


-You can use Query Performance Insight in the Azure Portal to review DTU consumption over time, and what queries are running and consuming those resources.

Azure SQL Database Query Performance Insight

4.- The remaining alternative is to scale the database up to a higher Service Tier / Performance Level, effectively assigning more DTUs (processing power) to it.

You can also scale up for peak usage times, or for a specific batch or process that you know needs more resources, and later scale back down when it's finished, should you have a specific budget to comply with.
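A scale operation can be scripted with T-SQL. This is a minimal sketch assuming the DTU purchasing model; the database name and target service objectives are hypothetical examples:

```sql
-- Run against the master database: scale MyAppDb up to Standard S3 (100 DTUs)
ALTER DATABASE [MyAppDb] MODIFY (SERVICE_OBJECTIVE = 'S3');

-- Later, scale back down to S1 once the heavy batch has finished
--ALTER DATABASE [MyAppDb] MODIFY (SERVICE_OBJECTIVE = 'S1');
```

Note that a scale operation is asynchronous and may briefly drop connections when it completes, so schedule it outside critical activity windows where possible.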

SQL Database Pricing

Furthermore, please visit the documentation below, which not only explains the above suggestions in detail but also provides more information on how to identify possible bottlenecks in query performance, use batching when inserting data, and tune your applications and databases.

Detectable types of query performance bottlenecks in Azure SQL Database
Types of query performance issues in Azure SQL Database - Azure SQL | Microsoft Docs

How to use batching to improve Azure SQL Database and Azure SQL Managed Instance application performance
How to use batching to improve application performance - Azure SQL

Tune applications and databases for performance in Azure SQL Database and Azure SQL Managed Instance

Performance tuning guidance for applications and databases - Azure SQL Database & Azure SQL Managed Instance

Enjoy and I hope this helps.

Diego Nieto
Sr. Technical Advisor
Azure SQL Database Support team

These recommendations were gathered by the Azure SQL Database team and have shown positive results in real use cases.

