Meditation: Slow Inserts in SQL Server


First published on MSDN on Dec 06, 2012

This SQL Meditation contemplates some common reasons why INSERT statements take a long time (slow inserts).

 

 

 

 

 

1. Each INSERT in a large batch causes a separate log flush, and therefore waiting

 

 

 

Is each insert a separate transaction, or are the inserts grouped in a transaction? If you review KB 230785, you will notice that the section "Increasing performance" discusses how single INSERTs take a long time, but when "batched" in a transaction they take significantly less time. The reason for this is the number of log cache flushes: if you can minimize the log flushes, you can improve performance. Note, however, that there is an upper limit after which "batching" too many inserts into a transaction becomes detrimental (law of diminishing returns). Look for the WRITELOG wait type here.
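To confirm this is what you are hitting, you can check the wait statistics. Here is a minimal sketch of my own (not from the KB article), assuming you have VIEW SERVER STATE permission:

-- Cumulative time the instance has spent waiting on log flushes (WRITELOG)
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';

-- Sessions that are waiting on WRITELOG right now
SELECT session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE wait_type = 'WRITELOG';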

 


 

 

Solution:

 

Group multiple inserts into a transaction so that all inserts in that transaction result in a single log flush. The example from KB 230785 below commits a transaction - and therefore flushes the log - only once for every 10 inserts (note the % 10).

 

 

 



-- Assumed supporting table (not shown in the KB excerpt): an IDENTITY column plus a data column
-- CREATE TABLE tblTest (ID int IDENTITY(1,1), Value varchar(50))

BEGIN TRAN
GO

INSERT INTO tblTest VALUES ('Test')
WHILE @@IDENTITY < 50
BEGIN
    INSERT INTO tblTest VALUES ('Test')

    -- Commit every 10th insert so that those 10 inserts share a single log flush
    IF (0 = CAST(@@IDENTITY AS int) % 10)
    BEGIN
        PRINT 'Commit tran batch'
        COMMIT TRAN
        BEGIN TRAN
    END
END
GO

COMMIT TRAN
GO

 

 

 

 

 

2. Each INSERT is against a clustered-index primary key that is also defined as an identity column, causing a natural hotspot (specific to inserts from multiple connections)

 

 

 

Look for multiple PAGELATCH_EX waits occurring from multiple sessions against a particular page. Find out whether a primary key is defined as a clustered index on an IDENTITY column. This causes all new inserts to go to the same page - the last page of the clustered index - because the data is ordered (as opposed to inserts going to different pages when the data is not ordered). That page becomes a natural hotspot, with threads waiting on PAGELATCH_EX. For more information, see How to resolve last-page insert PAGELATCH_EX contention in SQL Server.
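A quick way to see this while the workload is running is to look at the tasks that are currently waiting. This is a minimal sketch of my own, not from the linked article:

-- Sessions currently waiting on page latches; many sessions waiting on the same
-- resource_description (dbid:fileid:pageid) point to a single hotspot page
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%'
ORDER BY resource_description;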

 

 

 

Solution:

 

Move the clustered index to a column that is not the primary key. Good candidates are columns used for ORDER BY or joins. Do not simply replace the clustered index with a non-clustered one and thereby leave the table without a clustered index altogether; if you do, INSERT performance can suffer - see KB 297861 and item 3 below. Make sure you place the clustered index on another column.
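For illustration only, here is one way that could look, assuming a hypothetical dbo.Orders table whose identity primary key OrderID currently owns the clustered index, and whose OrderDate column is frequently used for ORDER BY and joins (all the names here are my own, not from the article):

-- Drop the existing clustered primary key (assumed constraint name)
ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;

-- Place the clustered index on a column used for ORDER BY / joins instead
CREATE CLUSTERED INDEX CIX_Orders_OrderDate ON dbo.Orders (OrderDate);

-- Recreate the primary key as a nonclustered index
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID);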

 


There is a fix in SQL Server 2005 that resolves a slow-insert problem introduced by the new page-split logic. See KB 940545, FIX: The performance of INSERT operations against a table that contains an identity column may be slow in SQL Server 2005.

Also, you can partition the table/clustered index to reduce (but not eliminate) the contention on a single data/index page. See http://sqlcat.com/sqlcat/b/technicalnotes/archive/2009/09/22/resolving-pagelatch-contention-on-highly-concurrent-insert-workloads-part-1.aspx
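A minimal sketch of that idea, with hypothetical names throughout (the partition function, scheme, table, and the modulo-4 hash are my own choices, not taken from the linked article): spread the inserts across several B-trees by partitioning the clustered index on a persisted hash of the identity value.

-- Four partitions so concurrent inserts do not all target the same last page
CREATE PARTITION FUNCTION pf_hash4 (tinyint)
    AS RANGE LEFT FOR VALUES (0, 1, 2);

CREATE PARTITION SCHEME ps_hash4
    AS PARTITION pf_hash4 ALL TO ([PRIMARY]);

CREATE TABLE dbo.OrdersPartitioned
(
    OrderID   int IDENTITY(1,1) NOT NULL,
    OrderDate datetime NOT NULL,
    -- Persisted hash of the identity value, used as the partitioning column
    HashKey   AS CAST(OrderID % 4 AS tinyint) PERSISTED NOT NULL,
    CONSTRAINT PK_OrdersPartitioned PRIMARY KEY CLUSTERED (HashKey, OrderID)
) ON ps_hash4 (HashKey);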

 

 

3. Inserts are slower against a heap (see KB 297861, PRB: Poor Performance on a Heap)

 

 

 

Inserts against a table with no clustered index (heap) are optimized for saving space, rather than performance. This means that SQL Server will spend more time searching for available space than when a clustered index is used. This is not a very frequent cause for slow inserts, but if you are trying to squeeze every ounce of performance, consider this reason.
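If you are not sure which tables in a database are heaps, a quick check of the catalog views (my own sketch) can tell you:

-- List user tables that have no clustered index (type_desc = 'HEAP')
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.type_desc = 'HEAP';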

 

 

 

Solution:

 

Create a clustered index on the table.
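For example, assuming the tblTest table from section 1 has an identity column named ID (an assumption on my part), that can be as simple as:

-- Give the heap a clustered index so inserts no longer hunt for free space
CREATE CLUSTERED INDEX CIX_tblTest_ID ON dbo.tblTest (ID);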

 

 

 

 

 

4. Slow I/O subsystem.

 

 

 

Check whether the response times of the I/O subsystem are suboptimal, and whether SQL Server is causing this or it is a hardware problem. See Slow Disk IO Troubleshooting.
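Before digging into hardware, you can check the per-file latencies that SQL Server itself tracks. This is a minimal sketch of my own, not from the linked troubleshooting article:

-- Average read/write latency per database file since instance startup
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;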

 

 

 

Solution:

 

Identify and remove the bottleneck that is causing the slow I/O response times.

 

 

 

Namaste!

 

 

 

Joseph
