Performance and Scale Improvement in SQL Server 2016

This post has been republished via RSS; it originally appeared at: SQL Server Integration Services (SSIS) articles.

First published on MSDN on Jun 02, 2015

Overall performance and scale improvements

We have made significant performance improvements across the stack, from the backend database and middle-tier services to the frontend UI, including both the Web UI and the Excel UI. Performance is improved in all scenarios, especially master data loading, while server-side CPU and memory usage are lower than in the previous version.

The default settings are optimized based on our test data and should work well for most customer scenarios. A few settings can be tweaked for individual scenarios.

1. Data Compression

We introduced data compression at the entity level. The default setting is enabled.

When data compression is enabled, all tables and indexes related to the entity are compressed with SQL Server row-level compression. This significantly reduces disk I/O when reading or updating master data, especially when the master data has millions of rows and/or many NULL-valued columns.

However, it also slightly increases CPU usage on the SQL Server engine side. Since most of the SQL operations are I/O bound, we set the default value to True.

If the server is CPU bound, you can turn off data compression by editing the entity.
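For reference, the row-level compression that MDS applies corresponds to standard SQL Server T-SQL. The sketch below is illustrative only; MDS manages the actual entity staging tables internally, and the table name used here is hypothetical:

```sql
-- Illustrative only: MDS applies row compression automatically when
-- data compression is enabled on the entity. [mdm].[tbl_1_10_EN] is
-- a hypothetical entity table name, not a guaranteed MDS object.
ALTER TABLE [mdm].[tbl_1_10_EN]
    REBUILD WITH (DATA_COMPRESSION = ROW);

-- Check which compression setting is in effect per partition:
SELECT o.name AS table_name,
       p.index_id,
       p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.objects AS o
  ON o.object_id = p.object_id
WHERE o.name = N'tbl_1_10_EN';
```

The `sys.partitions` query is a safe way to verify the current compression state without modifying anything.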


2. Dynamic Data Compression

We now enable the IIS Dynamic Content Compression feature on the server. The default setting in the service is to enable dynamic compression.

The MDS API uses the XML format. Enabling dynamic content compression significantly reduces the size of the XML responses and saves network I/O, while increasing CPU usage.

If the server is CPU bound, you can turn off dynamic compression by putting the following setting in the web.config:

<urlCompression doStaticCompression="true" doDynamicCompression="false" />


Although both of the above changes lower I/O usage by trading off CPU usage, we made great improvements on service-side CPU as well. So we expect overall CPU usage to be lower than in the previous version, even with compression enabled.

3. Index Maintenance

We found that index fragmentation and a growing transaction log are the two main reasons why performance degrades over time in many user cases.

We introduced two SQL Agent jobs to perform index maintenance and log maintenance.

We have a separate blog post for log maintenance.

The default schedule for the Index Maintenance job is weekly, Saturday at 2 AM. You can change the schedule or add more schedules in the job properties.
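Besides the SQL Server Agent UI in SSMS, the schedule can also be changed with `msdb.dbo.sp_update_schedule`. The schedule name below is an assumption; check SQL Server Agent on your instance for the actual job and schedule names:

```sql
-- Sketch: move the weekly run to Sunday at 3 AM.
-- The schedule name is hypothetical; look it up under
-- SQL Server Agent > Jobs in SSMS before running this.
EXEC msdb.dbo.sp_update_schedule
    @name = N'MDS Index Maintenance Schedule',  -- assumed name
    @freq_type = 8,               -- weekly
    @freq_interval = 1,           -- Sunday
    @freq_recurrence_factor = 1,  -- every 1 week
    @active_start_time = 030000;  -- 03:00:00
```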

You can also run the index maintenance manually by executing the stored procedure udpDefragmentation:

Exec [mdm].[udpDefragmentation]

We recommend running this stored procedure to maintain the indexes after each massive master data insert or update, or after a new version is created from an existing version.

Indexes with more than 30% fragmentation will be rebuilt online. During the rebuild, the performance of CRUD operations on the same table will be affected. Run the stored procedure during off-business hours if the performance degradation is a concern.
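If you want to see whether any indexes have crossed the 30% threshold before kicking off the job, a standard check against the `sys.dm_db_index_physical_stats` DMV looks like this (a general-purpose sketch, not the exact query the MDS job uses internally):

```sql
-- List indexes in the current database with > 30% fragmentation.
-- 'LIMITED' mode is the cheapest scan level for this DMV.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

Small indexes (low `page_count`) often report high fragmentation that is harmless in practice, so page count is worth checking alongside the percentage.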


For related information refer to:

Create an Entity (Master Data Services)

What’s New (Master Data Services)
