Archive
Using dm_tran_database_transactions and a First Step to Converting LSNs
In my last post, I showed how to use fn_dblog to read the transaction log. In this post, I will demonstrate using the sys.dm_tran_database_transactions DMO to get information about executing transactions. You can find the complete result set for the DMO at this Books Online entry. For this post, I am interested in the following columns:
SELECT DB_NAME(database_id), -- Return the database name
    database_transaction_log_bytes_used, -- Log bytes used
    database_transaction_log_bytes_reserved, -- Log bytes reserved
    master.dbo.fn_convertnumericlsntobinary(database_transaction_begin_lsn), -- Beginning lsn of the transaction
    master.dbo.fn_convertnumericlsntobinary(database_transaction_last_lsn) -- Last lsn of the transaction
FROM sys.dm_tran_database_transactions
WHERE database_transaction_begin_lsn IS NOT NULL -- Confirm that the transaction has
    AND database_transaction_last_lsn IS NOT NULL -- a start and end lsn
    AND database_transaction_state = 4 -- Indicates a transaction that has written to the transaction log
    AND database_id = DB_ID() -- Return rows for the current database
The results look like this:
You will notice that the lsns are being converted using fn_convertnumericlsntobinary. The code for this function is available here at step 3 of alternative approach 1. This is the first in a magical, multi-step odyssey to join the results of sys.dm_tran_database_transactions to fn_dblog.
My initial attempt was a straight join between fn_dblog and the DMO on transaction_id, which exists in both. I ran an explicit transaction without a commit and compared the transaction_ids. The DMO returned the value 1200932 while the function returned 0005602A, the first in decimal and the second in hex. No big deal, just convert one of them. However, 1200932 converts to 125324 in hex. So I was stumped, but tried to make sense of it. Do I double it and add 30? After spending too much time trying to figure it out, I wrote Paul Randal (b|t) and explained my problem. His response? "The transaction IDs don't match - annoyingly."
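You can verify the mismatch yourself with a quick conversion. This sketch (using the example values from my test) converts the DMO's decimal transaction_id to binary so it can be compared against the hex value from fn_dblog:

```sql
-- Convert the decimal transaction_id returned by the DMO to hex.
-- 1200932 (decimal) = 0x00125324, which clearly does not match
-- the 0005602A that fn_dblog returned for the same transaction.
SELECT CONVERT(VARBINARY(4), 1200932) AS [DMO Tran ID in hex];
```

No amount of base conversion makes the two values line up, which is what prompted the email to Paul.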
So, with that bit of wisdom I set out to try another method. This involved using the lsn information from the DMO as input parameters to fn_dblog. My next post will show you more than you ever wanted to know about converting lsns.
Reading the Transaction Log Using fn_dblog
One of my favorite sections of the SQLskills Internals Immersion Event was logging and recovery. I've enjoyed internals since I started working as a COBOL programmer. Figuring out what SQL Server is doing under the covers makes me happy.
We covered a number of methods of interrogating the transaction log, and I have been working to put them together. In my next few blog posts, I will cover two of them and detail the challenges I faced joining the results.
I was familiar with the table-valued function fn_dblog from reading SQL Server 2008 Internals. It takes two optional parameters, a start and an end log sequence number (lsn). Running this code:
SELECT * FROM fn_dblog(NULL,NULL)
will return the entire contents of the transaction log. Replacing the NULLs with lsns will limit the result set. That query returns 116 columns, which is much more information than I need. Using the following select list returned the data I was interested in:
SELECT SPID, -- The process ID for the transaction
    [Begin Time], -- The start time for the transaction
    [Current LSN], -- The lsn of this record
    [Previous LSN], -- Previous lsn for this record
    Operation, -- The operation performed by the log record
    Context, -- The resource affected by the operation
    [Log Reserve], -- The space reserved for rollback in bytes
    AllocUnitName, -- The name of the affected allocation unit
    [Page ID], -- The page ID of the affected page
    [Number of Locks], -- The number of locks held by the transaction
    [Lock Information] -- Information about the locks held and the resources locked
FROM fn_dblog(NULL, NULL)
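If you want to narrow the output without passing lsn parameters to fn_dblog, one simple option (a sketch, and the lsn value below is made up for illustration) is to filter on the [Current LSN] column directly. Because it is a fixed-width hex string, ordinary string comparison sorts it correctly:

```sql
-- Return only log records at or after a given lsn.
-- The lsn shown here is a hypothetical example value; substitute one
-- from your own log, in the same format as the [Current LSN] column.
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE [Current LSN] >= '0000002d:000001a8:0001';
```

This sidesteps the lsn-format conversion problem entirely, at the cost of still scanning the whole log.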
To generate the result set below, I ran a CHECKPOINT on a database in SIMPLE recovery model to clear out the transaction log. I then ran the following query:
INSERT INTO loginsert
VALUES (1
,'abcdefghij')
GO
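For completeness, a minimal definition of the loginsert table that matches that insert would look like this (a hypothetical sketch; the original create script isn't shown here, and the column names are my own):

```sql
-- Hypothetical definition for the loginsert table used above:
-- one INT column and one VARCHAR(10) column, matching the VALUES clause.
CREATE TABLE loginsert
(id INT
,vartext VARCHAR(10))
```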
I've split the results into two parts. The first set contains everything up to the lock information:
You can see that SPID and Begin Time are only populated for a BEGIN TRANSACTION record (LOP_BEGIN_XACT in the log). The LOP_INSERT_ROWS operation shows the table inserted into and the page affected.
Finally, the Lock Information column demonstrates SQL Server's multigranular locking hierarchy. In the second half of the result set:
you can see SQL Server acquiring three locks. First, an intent exclusive (IX) lock is taken on the table. Second, an intent exclusive (IX) lock is taken on a page. Finally, an exclusive (X) lock is taken on a key. I thought this was a very cool representation of the lock hierarchy at work.
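You can confirm the same hierarchy without reading the log at all, by holding a transaction open and querying sys.dm_tran_locks. This is a sketch that reuses the loginsert table from the earlier example:

```sql
BEGIN TRAN

INSERT INTO loginsert VALUES (2, 'klmnopqrst')

-- Show the locks held by this session while the transaction is open.
-- Expect an IX lock on the OBJECT, an IX lock on the PAGE,
-- and an X lock on the KEY (if the table has a clustered index).
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
    AND resource_type IN ('OBJECT','PAGE','KEY')

ROLLBACK
```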
My next posts will cover the following topics:
- Using sys.dm_tran_database_transactions to see what transactions are executing on the system
- More information than you thought possible about converting lsns
- Joining those results to fn_dblog to see the log records associated with each active transaction
SQL Server Log Space Reservation – Part II
In my last post, I wrote about SQL Server reserving space for open transactions to account for potential rollback. This reserved space is released once the transaction commits. You can track the amount of space reserved using the sys.dm_tran_database_transactions DMO. In this post, I will demonstrate how expensive this space reservation can be for long-running transactions.
I created two databases to allow me to track log space reservation for two processes. The first will update all values in a 100,000-row table 10 times, running all updates within a single transaction. The second will run each update in its own explicit transaction. Here is the script to create the databases:
CREATE DATABASE [DUDLOOPTRAN]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'DUDLOOPTRAN_Data', FILENAME = N'D:\Data\DUDLOOPTRAN_01.mdf', SIZE = 2097152KB, FILEGROWTH = 1048576KB )
LOG ON
( NAME = N'DUDLOOPTRAN_log', FILENAME = N'D:\Data\DUDLOOPTRAN_01.ldf', SIZE = 8388608KB, FILEGROWTH = 2097152KB )
GO
ALTER DATABASE [DUDLOOPTRAN] SET COMPATIBILITY_LEVEL = 110
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_NULLS OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_PADDING OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET ARITHABORT OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [DUDLOOPTRAN] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [DUDLOOPTRAN] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET DISABLE_BROKER
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [DUDLOOPTRAN] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [DUDLOOPTRAN] SET READ_WRITE
GO
ALTER DATABASE [DUDLOOPTRAN] SET RECOVERY FULL
GO
ALTER DATABASE [DUDLOOPTRAN] SET MULTI_USER
GO
ALTER DATABASE [DUDLOOPTRAN] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [DUDLOOPTRAN] SET TARGET_RECOVERY_TIME = 0 SECONDS
GO
USE [DUDLOOPTRAN]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY')
ALTER DATABASE [DUDLOOPTRAN] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO

CREATE DATABASE [DUDONETRAN]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'DUDONETRAN_Data', FILENAME = N'D:\Data\DUDONETRAN_01.mdf', SIZE = 2097152KB, FILEGROWTH = 1048576KB )
LOG ON
( NAME = N'DUDONETRAN_log', FILENAME = N'D:\Data\DUDONETRAN_01.ldf', SIZE = 8388608KB, FILEGROWTH = 2097152KB )
GO
ALTER DATABASE [DUDONETRAN] SET COMPATIBILITY_LEVEL = 110
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_NULL_DEFAULT OFF
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_NULLS OFF
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_PADDING OFF
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_WARNINGS OFF
GO
ALTER DATABASE [DUDONETRAN] SET ARITHABORT OFF
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_CLOSE OFF
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_SHRINK OFF
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE [DUDONETRAN] SET CURSOR_CLOSE_ON_COMMIT OFF
GO
ALTER DATABASE [DUDONETRAN] SET CURSOR_DEFAULT GLOBAL
GO
ALTER DATABASE [DUDONETRAN] SET CONCAT_NULL_YIELDS_NULL OFF
GO
ALTER DATABASE [DUDONETRAN] SET NUMERIC_ROUNDABORT OFF
GO
ALTER DATABASE [DUDONETRAN] SET QUOTED_IDENTIFIER OFF
GO
ALTER DATABASE [DUDONETRAN] SET RECURSIVE_TRIGGERS OFF
GO
ALTER DATABASE [DUDONETRAN] SET DISABLE_BROKER
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_UPDATE_STATISTICS_ASYNC OFF
GO
ALTER DATABASE [DUDONETRAN] SET DATE_CORRELATION_OPTIMIZATION OFF
GO
ALTER DATABASE [DUDONETRAN] SET PARAMETERIZATION SIMPLE
GO
ALTER DATABASE [DUDONETRAN] SET READ_COMMITTED_SNAPSHOT OFF
GO
ALTER DATABASE [DUDONETRAN] SET READ_WRITE
GO
ALTER DATABASE [DUDONETRAN] SET RECOVERY FULL
GO
ALTER DATABASE [DUDONETRAN] SET MULTI_USER
GO
ALTER DATABASE [DUDONETRAN] SET PAGE_VERIFY CHECKSUM
GO
ALTER DATABASE [DUDONETRAN] SET TARGET_RECOVERY_TIME = 0 SECONDS
GO
USE [DUDONETRAN]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY')
ALTER DATABASE [DUDONETRAN] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO
And this is the code to create the tables:
USE DUDLOOPTRAN
GO
-- Create and populate a table in each database to use in the update loops
CREATE TABLE looptranu
(runnumber INT
,rundate DATETIME
,vartext VARCHAR(10)
,chartext CHAR(10))

USE DUDONETRAN
GO
CREATE TABLE onetranu
(runnumber INT
,rundate DATETIME
,vartext VARCHAR(10)
,chartext CHAR(10))

USE DUDLOOPTRAN
GO
INSERT INTO looptranu
VALUES (1
,GETDATE()
,REPLICATE('a',10)
,REPLICATE('b',10))
GO 100000

USE DUDONETRAN
GO
INSERT INTO onetranu
VALUES (1
,GETDATE()
,REPLICATE('a',10)
,REPLICATE('b',10))
GO 100000
After creating the databases and tables, I ran the following scripts. First, this script runs the updates in a single transaction:
USE DUDONETRAN
GO
-- Declare @loopcount variable to run the loop
-- Declare @textlen to vary the length of the text fields
DECLARE @loopcount INT
DECLARE @textlen TINYINT
SET @loopcount = 1
-- Begin an explicit transaction that will remain open for the duration of the loop
BEGIN TRAN
WHILE @loopcount <= 10
BEGIN
    -- Use the modulus operator to set text length to the remainder of @loopcount / 10
    SET @textlen = (@loopcount % 10)
    -- Update onetranu using the values described below
    UPDATE onetranu
    -- Set runnumber equal to @loopcount
    SET runnumber = @loopcount,
    -- Set rundate equal to the current datetime
    rundate = GETDATE(),
    -- Set vartext to a string of a's, with the length determined by the @textlen variable
    vartext = REPLICATE('a',@textlen),
    -- Set chartext to a string of b's, with the length determined by the @textlen variable
    chartext = REPLICATE('b',@textlen)
    -- Increment @loopcount
    SET @loopcount = @loopcount + 1
END
COMMIT
This code does each update in an explicit transaction:
USE DUDLOOPTRAN
GO
-- Declare @loopcount variable to run the loop
-- Declare @textlen to vary the length of the text fields
DECLARE @loopcount INT
DECLARE @textlen TINYINT
SET @loopcount = 1
-- Begin and commit an explicit transaction on each iteration of the loop
WHILE @loopcount <= 10
BEGIN
    BEGIN TRAN
    -- Use the modulus operator to set text length to the remainder of @loopcount / 10
    SET @textlen = (@loopcount % 10)
    -- Update looptranu using the values described below
    UPDATE looptranu
    -- Set runnumber equal to @loopcount
    SET runnumber = @loopcount,
    -- Set rundate equal to the current datetime
    rundate = GETDATE(),
    -- Set vartext to a string of a's, with the length determined by the @textlen variable
    vartext = REPLICATE('a',@textlen),
    -- Set chartext to a string of b's, with the length determined by the @textlen variable
    chartext = REPLICATE('b',@textlen)
    -- Increment @loopcount
    SET @loopcount = @loopcount + 1
    COMMIT
END
Finally, this code will insert the contents of sys.dm_tran_database_transactions into a temp table for analysis. Run this while the update scripts are running:
USE master
GO
-- Create a temp table to hold the result set from sys.dm_tran_database_transactions
CREATE TABLE #transactions
(tranid BIGINT
,databaseid INT
,trantype INT
,transtate INT
,recordcount INT
,bytesused BIGINT
,bytesreserved BIGINT)
-- Run an infinite loop for the duration of each update script,
-- stopping it when the script completes
WHILE 1 = 1
BEGIN
    INSERT INTO #transactions
    SELECT transaction_id AS [Tran ID],
        database_id AS [Database ID],
        database_transaction_type AS [Tran Type],
        database_transaction_state AS [Tran State],
        database_transaction_log_record_count AS [Log Record Count],
        database_transaction_log_bytes_used AS [Log Bytes Used],
        database_transaction_log_bytes_reserved AS [Log Bytes Reserved]
    FROM sys.dm_tran_database_transactions
END

-- Use the select statement below to see the results
-- This query will return a single row per transaction
-- for read/write activity (trantype = 1)
-- that has generated log records (transtate = 4)
SELECT tranid,
    MAX(recordcount) AS [Record Count],
    (MAX(bytesused)/1048576) AS [MB Used],
    (MAX(bytesreserved)/1048576) AS [MB Reserved]
FROM #transactions
WHERE databaseid = DB_ID('DUDONETRAN') -- Make sure to use the correct database name
    AND trantype = 1
    AND transtate = 4
GROUP BY tranid
-- Clean up the temp table after the run
--DROP TABLE #transactions
Here are the results of the single transaction:
And here are the results of the explicit transactions in the loop:
As you can see, the total space used and reserved is approximately the same in both cases. The difference is that the updates running in explicit transactions release their reserved space as each transaction commits: while the total space reserved across all of the explicit transactions adds up to 344MB, no more than 36MB is reserved at any one time. The single transaction, by contrast, holds its 342MB reservation for its entire duration.
The examples I've shown here update 100,000 narrow rows 10 times. Run the same pattern against much wider rows, millions of times, and the cost of holding that reservation open grows dramatically. I hope these examples have demonstrated how expensive log space reservation can become for long-running transactions.