An internal error happened while generating a new DBTS for database ‘Db1’


First published on MSDN on Jun 21, 2018
Recently I worked on an issue that hit this error, and I had to research the source code to figure it out.

Msg 42029, Level 16, State 1, Line 3 An internal error happened while generating a new DBTS for database "Db1". Please retry the operation


This error has to do with the Database Timestamp (@@DBTS) value generated in each database.
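You can inspect the current value at any time with the documented @@DBTS function:

    -- Returns the last-used timestamp (rowversion) value for the current database.
    -- Every insert or update of a row with a timestamp/rowversion column
    -- consumes the next value from this counter.
    SELECT @@DBTS;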

The logic is this: a thread that inserts or updates a timestamp column keeps an in-memory database structure (dbtable) updated with the current value. There is also a maximum timestamp value, which defines the upper limit of an available range of timestamp values. When the current value reaches the maximum, the range has been used up and a thread requests a new range maximum. The new maximum is incremented by (CPU count * 1000) units, so if you have 8 CPUs, each range increase adds 8,000 units. Once the maximum is increased, the thread that requested the increase can use one of the values within the range. However, under a high volume of timestamp inserts or updates, other threads can use up all the entries up to the maximum, so this thread has to request another range increase. If other threads consume that range too, before this thread can use one of the values, the thread is back in the same state: requesting a range increase. After 10 retries of this, error 42029 is raised and a message with the following text is written to the SQL Server error log:
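To put a number on the range size: the increment is described above as "CPU count * 1000", and sys.dm_os_sys_info exposes a logical CPU count, so a rough calculation on your own server could look like the following (whether the engine uses exactly this logical CPU count is an assumption on my part):

    -- Rough estimate of the DBTS range increment on this server,
    -- assuming "CPU count" maps to the logical cpu_count from the DMV.
    SELECT cpu_count,
           cpu_count * 1000 AS estimated_dbts_range_increment
    FROM sys.dm_os_sys_info;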

"Get new DBTS for database Db1. with physical db name %.*ls fails with too many retries."


As you can imagine, this should be a rare case: failing 10 times to get a value from the range you just created is very unlikely. But one scenario that can consume timestamps very quickly is a large UPDATE statement. To illustrate: if thread A is attempting an INSERT into a table with a timestamp column while thread B is simultaneously performing an UPDATE of, say, hundreds of thousands of rows, those timestamp value ranges keep getting consumed by thread B while thread A is trying to get one value for its work. Again, for this to happen 10 times in a row would be rare, but it is possible.
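To watch that consumption happen, here is a minimal sketch (the table and column names are made up for illustration): a table with a rowversion column, a bulk insert, and one large UPDATE, sampling @@DBTS before and after.

    -- Hypothetical repro: each inserted or updated row consumes one timestamp value.
    CREATE TABLE dbo.BigTable
    (
        id  INT IDENTITY PRIMARY KEY,
        val INT NOT NULL,
        ts  ROWVERSION   -- the timestamp column
    );

    -- Bulk insert 100,000 rows (each consumes a timestamp value).
    INSERT INTO dbo.BigTable (val)
    SELECT TOP (100000) 0
    FROM sys.all_objects a CROSS JOIN sys.all_objects b;

    DECLARE @before BINARY(8) = @@DBTS;

    -- A single large UPDATE consumes one value per updated row.
    UPDATE dbo.BigTable SET val = val + 1;

    SELECT @before AS dbts_before,
           @@DBTS  AS dbts_after,
           CONVERT(BIGINT, @@DBTS) - CONVERT(BIGINT, @before) AS values_consumed;

A statement like this burns through 100,000 timestamp values at once, which is how thread B in the scenario above can repeatedly exhaust the ranges thread A is waiting on.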

To be clear, this error occurs very rarely because it stems from a race for values, not from a defect: the code protects the generated values with a locking mechanism, so there is no correctness issue there.

Solutions:

    1. Ignore it and do nothing; this should happen very rarely.

    2. If your application uses timestamp values heavily and you run large UPDATEs against tables with timestamp columns, you may have to rethink your application logic.

    3. If nothing about your application logic can be changed, consider catching error 42029 in your try ... catch application logic and re-submitting the query (see the sketch after this list).
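Here is a minimal sketch of option 3 in T-SQL, assuming SQL Server 2012 or later (for THROW); the statement, retry count, and delay are hypothetical placeholders for your own logic:

    DECLARE @retries INT = 0;

    WHILE @retries < 3
    BEGIN
        BEGIN TRY
            INSERT INTO dbo.BigTable (val) VALUES (1);  -- the statement that hit 42029
            BREAK;                                      -- success: stop retrying
        END TRY
        BEGIN CATCH
            IF ERROR_NUMBER() <> 42029
                THROW;                                  -- unrelated error: re-raise it
            SET @retries += 1;
            WAITFOR DELAY '00:00:00.100';               -- brief pause before retrying
        END CATCH;
    END;

    IF @retries = 3
        RAISERROR('Operation failed after 3 retries on error 42029.', 16, 1);

The same pattern works equally well in client-side try/catch code; the key is to treat 42029 as transient and re-submit, as the error text itself suggests.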

 
