SQL Server 2014 In-Memory OLTP discussions at PASS Summit 2013

This post has been republished via RSS; it originally appeared at: DataCAT articles.

First published on MSDN on Nov 12, 2013

Wow, what a great PASS Summit!  Great to see so many old friends and meet so many new ones! At PASS I had the opportunity to talk to a number of people about an exciting new feature we have in SQL Server 2014, In-Memory OLTP (codenamed "Hekaton"). We had some great discussions and got to listen to a fantastic keynote delivered by Dr. DeWitt on this technology: http://www.sqlpass.org/summit/2013/PASStv.aspx?watch=aW3-0G-SEj0

For this blog I just wanted to share some of the most common questions and discussions we had around SQL Server 2014 In-Memory OLTP at the Clinic and throughout the PASS Summit. Maybe it will help answer some questions you may have around In-Memory OLTP as well!

Question 1: Is In-Memory OLTP the solution to all my performance concerns?

In-Memory OLTP is designed to address the common, difficult bottlenecks experienced in high-scale/low-latency RDBMS systems. Problems such as latching and locking do not exist on memory-optimized tables, and moving T-SQL into natively compiled stored procedures can reduce CPU time and improve T-SQL execution time. However, there are performance problems which In-Memory OLTP cannot address, for example when the bottleneck isn't in the SQL Server engine or the code cannot be modified to take advantage of In-Memory OLTP. To help with this analysis we created the AMR tool: http://blogs.technet.com/b/dataplatforminsider/archive/2013/09/17/new-amr-tool-simplifying-the-migration-to-in-memory-oltp.aspx . This is a great first step in investigating your solution and In-Memory OLTP. So, to summarize, the answer is unfortunately no, it isn't quite that magical fix-everything button.
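To give a flavor of what "moving T-SQL into natively compiled stored procedures" looks like, here is a minimal sketch (the table and procedure names are made up for illustration). A natively compiled procedure compiles to machine code and requires SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block:

```sql
-- Hypothetical example: a natively compiled stored procedure
-- inserting into a memory-optimized table dbo.Orders.
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderId INT, @Quantity INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The whole body runs as one atomic transaction in compiled code,
    -- avoiding the interpretation overhead of regular T-SQL.
    INSERT INTO dbo.Orders (OrderId, Quantity)
    VALUES (@OrderId, @Quantity);
END;
```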

Question 2: When the server fails, do I lose all my data?

The quick answer is NO, memory-optimized tables can still be fully durable! Tables created as SCHEMA_AND_DATA log their inserts/updates/deletes to the SQL Server database transaction log (and then to the data files). On recovery, SQL Server reads the checkpoint files and transaction log to rebuild the data in memory. Memory-optimized tables are supported with all 3 recovery models for SQL Server databases (full, bulk-logged and simple). Also of note, logging transactions against In-Memory OLTP objects allows for integration with AlwaysOn Availability Groups, so you can take advantage of HA/DR features with memory-optimized tables as well.  For further details on how transactional durability is achieved see: http://blogs.technet.com/b/dataplatforminsider/archive/2013/10/11/in-memory-oltp-how-durability-is-achieved-for-memory-optimized-tables.aspx .
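As a sketch of what a durable memory-optimized table looks like (the table name and bucket count are made up, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Hypothetical example: DURABILITY = SCHEMA_AND_DATA means both the
-- schema and the rows are logged and survive a server failure.
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    LastActivity DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

With DURABILITY = SCHEMA_ONLY instead, only the table definition survives a restart and the rows are lost, which is the trade-off the question is really about.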

Question 3: Do you support compression and/or encryption on memory-optimized tables?

No, these features are not currently supported. If you have specific requirements I would love to know more. Another consideration, particularly around compression: in many cases the data set you should have in your memory-optimized tables is the very 'hot' OLTP data. This subset is typically quite small compared to the overall data residing in the database. You can still use compression features (e.g. page compression or a clustered columnstore index) on standard disk-based SQL Server tables in the same database, and move data from memory-optimized tables into those compressed tables via simple INSERT..SELECT T-SQL calls.
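A minimal sketch of that aging pattern, assuming a hypothetical memory-optimized dbo.Orders table with an OrderDate column:

```sql
-- Hypothetical example: age cooled-off rows out of a memory-optimized
-- table into a page-compressed disk-based table in the same database.
CREATE TABLE dbo.OrdersHistory
(
    OrderId   INT NOT NULL,
    Quantity  INT NOT NULL,
    OrderDate DATETIME2 NOT NULL
)
WITH (DATA_COMPRESSION = PAGE);

-- Copy rows older than 30 days, then remove them from the hot table.
INSERT INTO dbo.OrdersHistory (OrderId, Quantity, OrderDate)
SELECT OrderId, Quantity, OrderDate
FROM dbo.Orders                 -- the memory-optimized 'hot' table
WHERE OrderDate < DATEADD(DAY, -30, SYSUTCDATETIME());

DELETE FROM dbo.Orders
WHERE OrderDate < DATEADD(DAY, -30, SYSUTCDATETIME());
```

This keeps the memory footprint of the hot table small while the historical data benefits from compression on disk.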

Question 4: What are some good and bad “scenarios” or architectures to consider In-Memory OLTP?

This could be a long answer and unfortunately there is some “it-depends” but let me suggest a few guidelines to follow when looking at your application and In-Memory OLTP:


  1. Make sure you can define and measure your goals and bottlenecks (AMR can help). This can help assess whether the problematic area is really in the data access and T-SQL execution layers which In-Memory OLTP looks to address.

  2. Remember, you can use the In-Memory OLTP engine as part of your overall solution (e.g. a table storing some data), it doesn’t have to be all the data, T-SQL or the entire database. Just focusing on those few objects which require the high performance OLTP (and are hitting bottlenecks) is great. Here is another example of a common architecture pattern using In-Memory OLTP with success: http://blogs.technet.com/b/dataplatforminsider/archive/2013/09/19/in-memory-oltp-common-design-pattern-high-data-input-rate-shock-absorber.aspx

  3. There are some scenarios where In-Memory OLTP won’t be the ideal solution, a few include:


    1. Hardware limitations: memory-optimized tables must fit entirely in memory (they are not backed by disk), so if the server does not have enough memory this won't work. Disk performance also remains critical for log writes, checkpoints and recovery of SCHEMA_AND_DATA tables.

    2. You do not have the ability to modify code to take advantage of In-Memory OLTP. At a minimum, new tables must be created as memory-optimized; to optimize latency and CPU efficiency, T-SQL will need to be moved into natively compiled stored procedures.

    3. Non-OLTP workloads, like long-running data warehouse queries or XML/full-text queries against the memory-optimized tables, are unlikely to be as efficient as other SQL Server technologies designed for those workloads.

    4. Your primary bottleneck is not contention, scale or latency in the components you would migrate to In-Memory OLTP.

I hope this was helpful and informative. Here are a few more pointers to deep-dive into the technology:


  1. Continue to follow the data platform blog for continued content around In-Memory OLTP: http://blogs.technet.com/b/dataplatforminsider/ and http://blogs.technet.com/b/dataplatforminsider/archive/2013/10/15/the-411-on-the-microsoft-sql-server-2014-in-memory-oltp-blog-series.aspx

  2. Deep-Dive paper on In-Memory OLTP for CTP2, written by Kalen Delaney: http://download.microsoft.com/download/5/F/8/5F8D223F-E08B-41CC-8CE5-95B79908A872/SQL_Server_2014_In-Memory_OLTP_TDM_White_Paper.pdf

  3. Download CTP2 and try for yourself: http://www.microsoft.com/en-us/sqlserver/sql-server-2014.

If you have more questions or feedback on In-Memory OLTP in SQL Server 2014 please let me know via this blog or find me on Twitter: @MikeW_SQLCAT

