Archive

Posts Tagged ‘DMO’

Determining Estimated Completion Time from sys.dm_exec_requests

November 21, 2012

According to Books Online, the following commands report percent_complete and estimated_completion_time values in the sys.dm_exec_requests DMO: ALTER INDEX REORGANIZE, AUTO_SHRINK, BACKUP DATABASE, DBCC CHECKDB, DBCC CHECKFILEGROUP, DBCC CHECKTABLE, DBCC INDEXDEFRAG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, RECOVERY, RESTORE DATABASE, ROLLBACK, and TDE ENCRYPTION.

You can use this to determine approximately how long a backup, restore, or DBCC command will run. The query below returns a nicely formatted time-remaining value.

USE master
GO

SELECT
-- estimated_completion_time is reported in milliseconds
-- prepend a zero to the hours for anything under 10 hours (36,000,000 ms)
CASE
WHEN estimated_completion_time < 36000000
THEN '0'
ELSE ''
END
+ RTRIM(estimated_completion_time/1000/3600)                              -- hours
+ ':' + RIGHT('0' + RTRIM((estimated_completion_time/1000)%3600/60), 2)  -- minutes
+ ':' + RIGHT('0' + RTRIM((estimated_completion_time/1000)%60), 2) AS [Time Remaining],
percent_complete,
*
FROM sys.dm_exec_requests
WHERE percent_complete > 0
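
If you would rather see a projected finish time than a duration, you can add the remaining milliseconds to the current time. Here is a quick variation on the query above (a sketch; it divides by 1,000 first because DATEADD takes an INT, and a raw millisecond value can overflow it):

USE master
GO

-- Sketch: project the estimated finish as a clock time
SELECT DATEADD(SECOND, estimated_completion_time/1000, GETDATE()) AS [Estimated Finish],
percent_complete,
command
FROM sys.dm_exec_requests
WHERE percent_complete > 0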

Calculating IO Deltas Using sys.dm_io_virtual_file_stats

August 2, 2012

In a previous post, I showed how to calculate deltas for cumulative values from sys.dm_os_performance_counters. At the end of that post, I mentioned that the counter values are reset when the instance restarts, which can result in negative delta values. For the purposes of that script, I did not care about negative values.

However, we are using cumulative values from sys.dm_io_virtual_file_stats to generate a report in Reporting Services, and the negative values were mucking up our graphs. The script I used to solve the problem is listed below. As usual, I hope you can put the ideas it contains to use.

USE DUDDBA_MASTER
GO

-- Declare variables and create temp tables
DECLARE @sqlstr NVARCHAR(4000)
DECLARE @servername VARCHAR(40)
DECLARE @totalrows INT
DECLARE @currentrow INT
DECLARE @newdate DATETIME
DECLARE @olddate DATETIME


CREATE TABLE #serverlist 
(servername VARCHAR(40) NOT NULL)

CREATE TABLE #IODATA
(instancename VARCHAR(8)
,databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

CREATE TABLE #IODATANEW
(databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

CREATE TABLE #IODATAOLD
(databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

-- Initialize the @newdate and @olddate variables for the inner loop
SET @newdate = CONVERT(DATE,GETDATE())
SET @olddate = DATEADD(DD,-1,@newdate)

-- Build a list of linked servers for the outer loop, limited to SQL Server 2008/2008 R2 (version 10.x) instances
INSERT INTO #serverlist
SELECT 'DUL' + s1.DPC_SQL_CLUSTER_NM 
FROM DUT_POD_SQL_SERVER s1
INNER JOIN DUT_POD_SQL_VERSION s2
ON s1.DPC_SQL_CLUSTER_NM = s2.SVR_HOST_NM
WHERE s2.SVR_SQL_VERSION_TXT LIKE '%10.%' 

-- Store the @@ROWCOUNT value from the insert statement to control the outer loop
SET @totalrows = @@ROWCOUNT
SET @currentrow = 1

-- Execute the outer loop once for each linked server
WHILE @currentrow <= @totalrows
BEGIN

	-- Pull the first linked server name from the table
	SET @servername = (SELECT TOP 1 servername FROM #serverlist)
	
	-- Run the inner loop from the current date until 2012-01-01
	WHILE @olddate > '2012-01-01'
		BEGIN
			-- Build the dynamic sql string
			SET @sqlstr = 	
			'
			-- The DUT_FILE_IO table contains cumulative IO metrics from sys.dm_io_virtual_file_stats and is populated every 5 minutes
			-- The statement below pulls the values for the current day at midnight into #IODATANEW excluding non-user databases
			INSERT INTO #IODATANEW
			SELECT FIO_DATABASE_NM
			,FIO_RUN_TS
			,SUM(FIO_NBR_READS_CNT) AS [Total Reads]
			,SUM(FIO_BYTES_READ_CNT) AS [Total Bytes Read]
			,SUM(FIO_NBR_WRITES_CNT) AS [Total Writes]
			,SUM(FIO_BYTES_WRITE_CNT) AS [Total Bytes Written]
			FROM ' + @servername + '.duddba.dbo.DUT_FILE_IO 
			WHERE DATEPART(HH,FIO_RUN_TS) = 00
			AND DATEPART(MI,FIO_RUN_TS) = 00
			AND DATEPART(SS,FIO_RUN_TS) = 00
			AND CONVERT(DATE,FIO_RUN_TS) = ' + '''' + CONVERT(VARCHAR(30),@newdate,120)  + '''' +
			'
			AND FIO_DATABASE_NM NOT IN (''DUDDBA'',''DUDREPO'',''DUDDUMMY'',''DUDTRACE'',''master'',''model'',''msdb'')
			GROUP BY FIO_DATABASE_NM, FIO_RUN_TS
			ORDER BY FIO_RUN_TS DESC, FIO_DATABASE_NM

			-- The statement below pulls the values for the previous day at midnight into #IODATAOLD excluding non-user databases			
			INSERT INTO #IODATAOLD
			SELECT FIO_DATABASE_NM
			,FIO_RUN_TS
			,SUM(FIO_NBR_READS_CNT) AS [Total Reads]
			,SUM(FIO_BYTES_READ_CNT) AS [Total Bytes Read]
			,SUM(FIO_NBR_WRITES_CNT) AS [Total Writes]
			,SUM(FIO_BYTES_WRITE_CNT) AS [Total Bytes Written]
			FROM ' + @servername + '.duddba.dbo.DUT_FILE_IO 
			WHERE DATEPART(HH,FIO_RUN_TS) = 00
			AND DATEPART(MI,FIO_RUN_TS) = 00
			AND DATEPART(SS,FIO_RUN_TS) = 00
			AND CONVERT(DATE,FIO_RUN_TS) = ' + '''' + CONVERT(VARCHAR(30),@olddate,120)  + '''' +
			'
			AND FIO_DATABASE_NM NOT IN (''DUDDBA'',''DUDREPO'',''DUDDUMMY'',''DUDTRACE'',''master'',''model'',''msdb'')
			GROUP BY FIO_DATABASE_NM, FIO_RUN_TS
			ORDER BY FIO_RUN_TS DESC, FIO_DATABASE_NM
			
			
			-- The insert statement below calculates the delta values for the IO metrics collected
			-- The series of CASE expressions accounts for the possibility of a negative value
			-- If the delta value is positive, it is used
			-- If it is negative, the raw value is used
			-- All values are kept in raw bytes to match the #IODATA column definitions
			-- This is not the ideal solution, because it does not account for the cumulative values
			-- prior to the restart, but it is good enough for the purposes of the reports we are running

			INSERT INTO #IODATA
			SELECT ' + '''' + SUBSTRING(@servername,4,8) + '''' +',n.databasename, n.runtime, 
			CASE
				WHEN ((n.readcount - o.readcount) > 0)
			THEN
				(n.readcount - o.readcount) 
			ELSE
				n.readcount
			END,
			CASE
				WHEN ((n.bytesread - o.bytesread) > 0)
			THEN
				(n.bytesread - o.bytesread)
			ELSE
				n.bytesread
			END,
			CASE
				WHEN ((n.writecount - o.writecount) > 0)
			THEN
				(n.writecount - o.writecount)
			ELSE
				n.writecount
			END,
			CASE
				WHEN ((n.byteswritten - o.byteswritten) > 0)
			THEN
				(n.byteswritten - o.byteswritten)
			ELSE
				n.byteswritten
			END
			FROM #IODATANEW n  
			INNER JOIN #IODATAOLD o  
			ON n.runtime = ' + '''' + CONVERT(VARCHAR(30),@newdate,120)  + '''' +
			'AND n.databasename = o.databasename
			WHERE o.runtime = ' + '''' + CONVERT(VARCHAR(30),@olddate,120)  + '''' +
			'
			'
			
			-- Execute the dynamic SQL string
			EXEC(@sqlstr)
			
			--Reset the new and old date variables
			SET @newdate = @olddate
			SET @olddate = DATEADD(DD,-1,@olddate)
			
			-- Clear out the new and old temp tables
			DELETE FROM #IODATANEW
			DELETE FROM #IODATAOLD
			
		END
	
	-- Insert the results for each linked server into the permanent table			
	INSERT INTO DUT_FILE_IO_DELTA
	SELECT * FROM #IODATA
	
	-- Delete the current linked server from the #serverlist table	
	DELETE FROM #serverlist WHERE servername = @servername

	-- Increment the loop counter
	SET @currentrow = @currentrow + 1
	
	-- Clear out the #IODATA table
	DELETE FROM #IODATA

	-- Decrement the new and old date variables	
	SET @newdate = CONVERT(DATE,GETDATE())
	SET @olddate = DATEADD(DD,-1,@newdate)
	
END


-- Clean up the temp tables at the end of the script
DROP TABLE #serverlist
DROP TABLE #IODATA
DROP TABLE #IODATAOLD
DROP TABLE #IODATANEW
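
The script assumes the permanent DUT_FILE_IO_DELTA table already exists; the post does not include its definition. A definition that matches the #IODATA columns (my assumption) would be:

-- Hypothetical DDL; mirrors the #IODATA temp table the script selects from
CREATE TABLE DUT_FILE_IO_DELTA
(instancename VARCHAR(8)
,databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)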

Categories: DMO, SQL Server, T-SQL

Tracking and Calculating Wait Times – Part II

July 3, 2012

In my last post, I described the process we use to collect wait information on our systems. Because SQL Server stores cumulative values, the raw wait time data is not very useful. The following script shows how to pull daily data and calculate the deltas.

This script was written for a vended application that has been experiencing slow response times. The vendor asked for wait time information to see if there was a correlation between increased wait times and the slowdown.

USE [Your Database Name]
GO

-- Create the necessary temp tables and variables

CREATE TABLE #waits
(waittype VARCHAR(100)
,waitdate DATETIME
,waittime BIGINT)

CREATE TABLE #waitsnew
(waittypenew VARCHAR(100)
,waitdatenew DATETIME
,waittimenew BIGINT)

CREATE TABLE #waitsold
(waittypeold VARCHAR(100)
,waitdateold DATETIME
,waittimeold BIGINT)

CREATE TABLE #waitsdelta
(waittypedelta VARCHAR(100)
,waitdatedelta DATETIME
,waittimedelta BIGINT)

DECLARE @olddate DATETIME
DECLARE @newdate DATETIME
DECLARE @totalcount INT
DECLARE @oldcount INT
DECLARE @loopcount INT

-- initialize the @loopcount variable
SET @loopcount = 1

-- populate the #waits temp table with data for the date range desired
-- DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT)) strips the time from the date
-- because the wait time value is cumulative, the max value will be the 
-- total count for the day
-- exclude system waits using the WHERE clause from Jonathan Kehayias' blog post
-- http://www.simple-talk.com/sql/performance/a-performance-troubleshooting-methodology-for-sql-server/

INSERT INTO #waits
SELECT WAT_WAIT_TYPE_NM, DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT)), MAX(WAT_WAIT_TM_MS_NBR)
FROM [Your Database Name].[dbo].[Your Table Name]
WHERE WAT_CAPTURE_DT > '2012-06-01'
AND WAT_WAIT_TYPE_NM NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT', 
'BROKER_TRANSMITTER', 'KSOURCE_WAKEUP',
'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
AND WAT_WAIT_TM_MS_NBR > 0
GROUP BY WAT_WAIT_TYPE_NM, DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT))
ORDER BY 2, 3 DESC 

-- set the loop limit equal to the number of rows returned above
SET @totalcount = @@ROWCOUNT

-- run a loop to calculate the delta value for each day
WHILE @loopcount <= @totalcount
BEGIN

-- select the minimum date from the #waits table
SELECT @olddate = MIN(waitdate)
FROM #waits

-- select the next date from the #waits table
SET @newdate = DATEADD(dd,1,@olddate)

-- insert the rows for the old date into #waitsold
INSERT INTO #waitsold
SELECT * FROM #waits
WHERE waitdate = @olddate

-- set @oldcount equal the row count 
SET @oldcount = @@ROWCOUNT

-- insert the rows for the new date into #waitsnew
INSERT INTO #waitsnew
SELECT * FROM #waits
WHERE waitdate = @newdate

-- join #waitsnew and #waitsold, inserting the wait type, date 
-- and delta between the new and old times into #waitsdelta
INSERT INTO #waitsdelta
SELECT n.waittypenew, n.waitdatenew, (n.waittimenew - o.waittimeold)
FROM #waitsnew n
INNER JOIN #waitsold o
ON n.waitdatenew = @newdate
AND n.waittypenew = o.waittypeold
WHERE o.waitdateold = @olddate

-- delete the old data from #waits
DELETE FROM #waits
WHERE waitdate = @olddate

-- reset the old and new dates to the next pair of values
SET @olddate = @newdate
SET @newdate = DATEADD(DD,1,@newdate)

-- increment the @loopcount 
SET @loopcount = @loopcount + @oldcount

END

-- select the delta values from #waitsdelta
SELECT waittypedelta, waitdatedelta, waittimedelta
FROM #waitsdelta
ORDER BY waittypedelta, waitdatedelta

-- clean up temp tables
DROP TABLE #waits
DROP TABLE #waitsold
DROP TABLE #waitsnew
DROP TABLE #waitsdelta

One thing to keep in mind is that the wait time values are zeroed out when the instance restarts. So if the instance restarts, you will end up with negative values for your wait times. Because I am running this as an ad hoc query, I can work around this possibility manually. If you wanted to automate the process, I would suggest taking this into consideration.
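
For example, one way to automate that check is to compare each delta window against the instance start time (a sketch; it assumes you run it before the temp tables are dropped, and that your version exposes sqlserver_start_time in sys.dm_os_sys_info):

-- Sketch: discard any delta whose one-day window spans the last instance restart
DECLARE @laststart DATETIME
SELECT @laststart = sqlserver_start_time FROM sys.dm_os_sys_info

SELECT waittypedelta, waitdatedelta, waittimedelta
FROM #waitsdelta
WHERE @laststart NOT BETWEEN DATEADD(DD, -1, waitdatedelta) AND waitdatedelta
ORDER BY waittypedelta, waitdatedelta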

I learned a lot working on this problem, and I hope someone out there can make use of it.

Categories: DMO, SQL Server, SQL Waits, T-SQL

Tracking and Calculating Wait Times – Part I

June 20, 2012

Several weeks ago, I took a call about a vended application that was experiencing slowness on a regular basis on Wednesdays. Members of my team had looked at the performance of the app from a SQL Server perspective during the slow period and could not find anything out of line. The app team had also worked with the Windows, storage, and network teams, and no red flags were found. The vendor asked whether we could provide information about SQL Server wait times.

The data they were interested in is available from sys.dm_os_wait_stats, but the wait time values are cumulative, so the raw values are not very useful. Fortunately, one of my former co-workers (I owe him a beer) set up a process that created the table scripted below:

USE [Your Database Name]
GO

/****** Object:  Table [dbo].[Your Table Name]    Script Date: 06/18/2012 21:27:17 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Your Table Name]
([WAT_WAIT_TYPE_NM] [nvarchar](120) NOT NULL,
 [WAT_WAITING_TASKS_NBR] [bigint] NOT NULL,
 [WAT_WAIT_TM_MS_NBR] [bigint] NOT NULL,
 [WAT_MAX_WAIT_TIME_MS_NBR] [bigint] NOT NULL,
 [WAT_SIGNAL_WAIT_TIME_MS_NBR] [bigint] NOT NULL,
 [WAT_CAPTURE_DT] [datetime] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Your Table Name] ADD  DEFAULT (getdate()) FOR [WAT_CAPTURE_DT]
GO

The process runs the following stored procedure hourly from an agent job:

USE [Your Database Name]
GO
/****** Object:  StoredProcedure [dbo].[Your Procedure Name]    Script Date: 06/18/2012 21:28:15 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[Your Procedure Name]
AS
SET NOCOUNT ON ;

        INSERT INTO [dbo].[Your Table Name] (
                        WAT_WAIT_TYPE_NM,
                        WAT_WAITING_TASKS_NBR,
                        WAT_WAIT_TM_MS_NBR,
                        WAT_MAX_WAIT_TIME_MS_NBR,
                        WAT_SIGNAL_WAIT_TIME_MS_NBR,
                        WAT_CAPTURE_DT)

		SELECT  [wait_type],
		        [waiting_tasks_count],
		        [wait_time_ms],
		        [max_wait_time_ms],
		        [signal_wait_time_ms],
		        GETDATE()
		FROM sys.dm_os_wait_stats

GO

I would recommend using a process like this to provide the ability to troubleshoot wait times historically, as it is simple to set up and very lightweight. We are storing six months' worth of data, and the table is less than 4 MB in size.
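
For reference, a minimal agent job that runs the procedure hourly would look something like this (a sketch; the job, step, and schedule names are placeholders of mine):

USE msdb
GO

-- Sketch: schedule the collection procedure hourly; names below are placeholders
EXEC dbo.sp_add_job @job_name = N'Collect Wait Stats'
EXEC dbo.sp_add_jobstep @job_name = N'Collect Wait Stats',
	@step_name = N'Snapshot sys.dm_os_wait_stats',
	@subsystem = N'TSQL',
	@database_name = N'Your Database Name',
	@command = N'EXEC dbo.[Your Procedure Name]'
EXEC dbo.sp_add_jobschedule @job_name = N'Collect Wait Stats',
	@name = N'Hourly',
	@freq_type = 4,             -- daily
	@freq_interval = 1,
	@freq_subday_type = 8,      -- repeat on an hourly unit
	@freq_subday_interval = 1   -- every 1 hour
EXEC dbo.sp_add_jobserver @job_name = N'Collect Wait Stats'
GO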

This gave me the data I was looking for, but because the wait times are cumulative I still needed to calculate the deltas. I will cover the process I put together in my next post.

Joining sys.dm_tran_database_transactions to fn_dblog Results

June 11, 2012

In my previous two posts, Reading the Transaction Log Using fn_dblog and Using dm_tran_database_transactions and a First Step to Converting LSNs, I demonstrated using these DMOs to see what transactions are executing and to read the transaction log. In this post, I’ll show how to join those results together. Here is the script I used:

USE [Your Database Name Here]
GO

--  Declare variables
DECLARE @startlsn VARBINARY(15)
DECLARE @lastlsn VARBINARY(15)
DECLARE @LSN_START NVARCHAR(50)
DECLARE @LSN_END NVARCHAR(50)
DECLARE @dbname NVARCHAR(100)
DECLARE @bytesused BIGINT
DECLARE @bytesreserved BIGINT
DECLARE @loopcount INT
DECLARE @looplimit INT

-- Initialize the loop counter
SET @loopcount = 1

-- Create a temp table to hold all of the active transactions from
-- sys.dm_tran_database_transactions
CREATE TABLE #tranresults
(dbname NVARCHAR(100)
,bytesused BIGINT
,bytesreserved BIGINT
,startlsn VARBINARY(15)
,endlsn VARBINARY(15))

-- Create a temp table to hold data about each transaction and the associated log records
CREATE TABLE #logresults
(dbname NVARCHAR(100)
,bytesused BIGINT
,bytesreserved BIGINT
,spid INT
,begintime NVARCHAR(48)
,currentlsn VARCHAR(46)
,previouslsn VARCHAR(46)
,transactionid VARCHAR(28)
,operation NVARCHAR(62)
,context VARCHAR(62)
,logreserve INT
,allocunitname NVARCHAR(774)
,pageid NVARCHAR(28)
,numberoflocks SMALLINT
,lockinformation NVARCHAR(512))

-- Insert data from sys.dm_tran_database_transactions for all transactions that have
-- generated log records
-- Notice the use of the fn_convertnumericlsntobinary function
-- This is necessary for subsequent SUBSTRING operations
INSERT INTO #tranresults
SELECT DB_NAME(database_id),
database_transaction_log_bytes_used,
database_transaction_log_bytes_reserved,
master.dbo.fn_convertnumericlsntobinary(database_transaction_begin_lsn),
master.dbo.fn_convertnumericlsntobinary(database_transaction_last_lsn)
FROM sys.dm_tran_database_transactions
WHERE database_transaction_begin_lsn IS NOT NULL
AND database_transaction_last_lsn IS NOT NULL
AND database_transaction_state = 4
AND database_id = DB_ID()

-- Set the @looplimit variable to the number of rows selected from
-- sys.dm_tran_database_transactions
SET @looplimit = @@ROWCOUNT

-- Run through this loop once for each row selected from
-- sys.dm_tran_database_transactions
WHILE @loopcount <= @looplimit
BEGIN

-- Select data from the first row into variables for later use
	SELECT TOP 1
	@dbname = dbname,
	@bytesused = bytesused,
	@bytesreserved = bytesreserved,
	@startlsn = startlsn,
	@lastlsn = endlsn
	FROM #tranresults

-- Convert the starting and ending lsn for use in the fn_dblog function below
-- The following code is necessary because fn_dblog expects the lsn as colon-delimited
-- numeric values, represented as VARCHAR
-- I initially thought I could get away with converting the numeric lsn to VARCHAR and
-- running the SUBSTRING operations on it, but the numeric value is variable in length

	SET @LSN_START = CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,1,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,5,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,9,2)))

	SET @LSN_END = CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,1,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,5,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,9,2)))

	-- Display the converted lsn strings (debug output)
	SELECT @LSN_START
	SELECT @LSN_END

-- Now having re-reconverted the lsns, exit our hero, stage left
-- For transactions with a non-null start and end lsn, insert required data into
-- the #logresults temp table
	IF (@LSN_START IS NOT NULL) AND (@LSN_END IS NOT NULL)
	BEGIN
	INSERT INTO #logresults
	SELECT
	@dbname AS [Database],
	@bytesused AS [Bytes Used],
	@bytesreserved AS [Bytes Reserved],
	SPID,
	[Begin Time],
	[Current LSN],
	[Previous LSN],
	[Transaction ID],
	Operation,
	Context,
	[Log Reserve],
	AllocUnitName,
	[Page ID],
	[Number of Locks],
	[Lock Information]
	FROM ::fn_dblog(@LSN_START, @LSN_END)
	END

-- Delete the top row from the #tranresults temp table
	DELETE TOP (1) FROM #tranresults
	SET @loopcount = @loopcount + 1

END

SELECT * FROM #logresults
ORDER BY currentlsn

-- Clean up the temp tables
DROP TABLE #logresults
DROP TABLE #tranresults

The comments do a good job of explaining what the script is doing, but I want to clarify a few things. Because I was unable to join the DMOs on transaction ID, I decided to use the begin and end lsn values from the DMV as input parameters to fn_dblog. And here’s where the fun began. The DMV stores the lsns as numeric values. The function expects a colon-delimited VARCHAR string made up of numbers. For example, if the DMV stores an lsn value of 4242000000012300002, fn_dblog will expect 4242:123:2.

To do the conversion, I followed these steps (a string-only alternative is sketched after the list):

  1. Converted the values to hex using fn_convertnumericlsntobinary;
  2. Substringed each part of the lsn out of the binary value and converted it to INT;
  3. Converted the INT values to VARCHAR;
  4. Concatenated the VARCHAR values with colons.
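
For what it's worth, you can skip the hex round-trip entirely by treating the numeric lsn as a string. This is a sketch of that alternative, not what the script above uses; the last 5 digits are the slot number, the middle 10 digits the log block, and the leading digits the VLF sequence number:

-- Sketch: convert a numeric lsn to fn_dblog format using only string handling
DECLARE @numlsn VARCHAR(25)
SET @numlsn = '4242000000012300002'

SELECT CONVERT(VARCHAR(10), CONVERT(BIGINT, SUBSTRING(@numlsn, 1, LEN(@numlsn) - 15)))
+ ':' + CONVERT(VARCHAR(10), CONVERT(BIGINT, SUBSTRING(@numlsn, LEN(@numlsn) - 14, 10)))
+ ':' + CONVERT(VARCHAR(10), CONVERT(BIGINT, RIGHT(@numlsn, 5)))   -- returns 4242:123:2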

And so, having re-reconverted the lsns, our hero is ready to pass them into fn_dblog. The results of fn_dblog are inserted into the temp table for each set of lsns, which is selected from at the end of the script.

A few points about the result set: fn_dblog is database-specific, and the result set only contains log records for currently active transactions. Because the tran log acts like a stream of consciousness for the database, you may see additional LOP_BEGIN_XACT records following the LOP_BEGIN_XACT for the first transaction returned. Here is an example showing a transaction beginning and committing while another transaction is open. Notice that the currentlsn values are sequential while the previouslsn values are grouped by transactionid.
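
If you want to see that interleaving on your own system, a quick way (a sketch using the same function) is to summarize the active log by transaction:

-- Sketch: group active log records by transaction to see interleaved work
SELECT [Transaction ID],
MIN([Current LSN]) AS [First LSN],
COUNT(*) AS [Log Records]
FROM fn_dblog(NULL, NULL)
GROUP BY [Transaction ID]
ORDER BY [First LSN]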


I hope this series of posts provides insight into the transaction log and how to relate it to active transactions on an instance.

Using dm_tran_database_transactions and a First Step to Converting LSNs

June 1, 2012

In my last post, I showed how to use fn_dblog to read the transaction log. In this post, I will demonstrate using the sys.dm_tran_database_transactions DMO to get information about executing transactions. You can find the complete result set for the DMO in its Books Online entry. For this post, I am interested in the following results:

SELECT DB_NAME(database_id),                                                -- Return the database name
database_transaction_log_bytes_used,                                        -- Log bytes used
database_transaction_log_bytes_reserved,                                    -- Log bytes reserved
master.dbo.fn_convertnumericlsntobinary(database_transaction_begin_lsn),    -- Beginning lsn of the transaction
master.dbo.fn_convertnumericlsntobinary(database_transaction_last_lsn)      -- Last lsn of the transaction
FROM sys.dm_tran_database_transactions 
WHERE database_transaction_begin_lsn IS NOT NULL                            -- Confirm that the transaction has 
AND database_transaction_last_lsn IS NOT NULL                               -- a start and end lsn
AND database_transaction_state = 4                                          -- Indicates a transaction that has written to the transaction log
AND database_id = DB_ID()                                                   -- Return rows for the current database

The results look like this:

[Image: dm_tran_database_transactions results]

You will notice that the lsns are being converted using fn_convertnumericlsntobinary. The code for this function is available here at step 3 of alternative approach 1. This is the first in a magical, multi-step odyssey to join the results of sys.dm_tran_database_transactions to fn_dblog.

My initial attempt was a straight join between the function and the DMO on transaction_id, which exists in both. I ran an explicit transaction without a commit, and compared the transaction_ids. The DMO returned the value 1200932 while the function returned 0005602A, the first decimal and the second hex. No big deal, just convert one of them. However, 1200932 converts to 125324 in hex. So, I was stumped but tried to make sense of it. Do I double it and add 30? After spending too much time trying to figure it out, I wrote Paul Randal (b|t) and explained my problem. His response? “The transaction IDs don’t match – annoyingly.”
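
You can verify the mismatch yourself with a quick conversion in each direction (a sketch):

-- Sketch: the two transaction IDs do not reconcile in either direction
SELECT CONVERT(VARBINARY(4), 1200932) AS [DMO value as hex],   -- 0x00125324, not 0x0005602A
CONVERT(INT, 0x0005602A) AS [fn_dblog value as decimal]        -- 352298, not 1200932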

So, with that bit of wisdom I set out to try another method. This involved using the lsn information from the DMO as input parameters to fn_dblog. My next post will show you more than you ever wanted to know about converting lsns.

Reading the Transaction Log Using fn_dblog

May 25, 2012

One of my favorite sections of the SQLskills Internals Immersion Event was logging and recovery. I’ve enjoyed internals since I started working as a COBOL programmer. Figuring out what SQL Server is doing under the covers makes me happy.

We covered a number of methods of interrogating the transaction log, and I have been working to put them together. In my next few blog posts, I will cover two of them and detail the challenges I faced joining the results.

I was familiar with the table-valued function fn_dblog from reading SQL Server 2008 Internals. It takes two optional parameters: the start and end log sequence numbers (lsns). Running this code:

SELECT * FROM fn_dblog(NULL,NULL)

will return the entire active portion of the transaction log. Replacing the NULLs with lsn values will limit the result set. That query returns 116 columns, which is much more information than I need. Using the following select list returned the data I was interested in:

SELECT 
SPID,                         -- The process ID for the transaction
[Begin Time],                 -- The start time for the transaction
[Current LSN],                -- The lsn of this record            
[Previous LSN],               -- Previous lsn for this record
Operation,                    -- The operation performed by the log record
Context,                      -- The resource affected by the operation
[Log Reserve],                -- The space reserved for rollback in bytes
AllocUnitName,                -- The name of the affected allocation unit
[Page ID],                    -- The page ID of the affected page
[Number of Locks],            -- The number of locks held by the transaction
[Lock Information]            -- Information about the locks held and the resources locked
FROM fn_dblog(NULL, NULL) 

To generate the result set below, I ran a CHECKPOINT on a database in SIMPLE recovery model to clear out the transaction log. I then ran the following query:

INSERT INTO loginsert
VALUES
(1
,'abcdefghij')
GO
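
For reference, the target table is not scripted in the post; a minimal definition that fits the insert above (my assumption) is:

-- Hypothetical table definition; the original post does not include it
CREATE TABLE loginsert
(col1 INT
,col2 VARCHAR(10))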

I’ve split the results into two parts. The first set contains everything up to the lock information:

[Image: fn_dblog results part one]

You can see that SPID and Begin Time are only populated for a BEGIN TRANSACTION record (LOP_BEGIN_XACT in the log). The LOP_INSERT_ROWS operation shows the table inserted into and the page affected.

Finally, the Lock Information column demonstrates SQL Server's multigranular lock hierarchy. In the second half of the result set:

[Image: fn_dblog results part two]

you can see SQL Server acquiring three locks. First, an intent exclusive lock is taken on the table. Second, an intent exclusive lock is taken on a page. Finally, an exclusive lock is taken on a key. I thought this was a very cool representation of how SQL Server protects a single insert at every level of the lock hierarchy.
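
You can watch the same hierarchy build up live by querying sys.dm_tran_locks from inside an open transaction (a sketch, reusing the hypothetical loginsert table from above):

-- Sketch: view the lock hierarchy held by the current session mid-transaction
BEGIN TRANSACTION

INSERT INTO loginsert
VALUES
(2
,'klmnopqrst')

SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID

ROLLBACK TRANSACTION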

My next posts will cover the following topics:

  • Using sys.dm_tran_database_transactions to see what transactions are executing on the system
  • More information than you thought possible about converting lsns
  • Joining those results to fn_dblog to see the log records associated with each active transaction