Archive for the ‘DMO’ Category

Determining Estimated Completion Time from sys.dm_exec_requests

November 21, 2012

According to Books Online, the following commands populate the percent_complete and estimated_completion_time columns in the sys.dm_exec_requests DMO:

ALTER INDEX REORGANIZE, AUTO_SHRINK (via ALTER DATABASE), BACKUP DATABASE, DBCC CHECKDB, DBCC CHECKFILEGROUP, DBCC CHECKTABLE, DBCC INDEXDEFRAG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, RECOVERY, RESTORE DATABASE, ROLLBACK and TDE ENCRYPTION

You can use this to determine approximately how long a backup, restore, or DBCC command will run. The following query formats the remaining time as an hh:mm:ss value.

USE master
GO

-- estimated_completion_time is reported in milliseconds
SELECT
CASE -- pad the hours with a leading zero when under 10 hours (36,000,000 ms)
WHEN estimated_completion_time < 36000000
THEN '0'
ELSE ''
END
+ RTRIM(estimated_completion_time/1000/3600)                             -- hours
+ ':' + RIGHT('0' + RTRIM((estimated_completion_time/1000)%3600/60), 2) -- minutes
+ ':' + RIGHT('0' + RTRIM((estimated_completion_time/1000)%60), 2)      -- seconds
AS [Time Remaining],
percent_complete,
*
FROM sys.dm_exec_requests
WHERE percent_complete > 0
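
If you would rather see a projected wall-clock finish time than a countdown, here is a minimal variation on the same idea. The conversion to seconds keeps the value within the INT range DATEADD expects.

-- Variation: project the estimated completion as a wall-clock time
SELECT session_id,
command,
percent_complete,
DATEADD(SECOND, CONVERT(INT, estimated_completion_time/1000), GETDATE()) AS [Estimated Finish]
FROM sys.dm_exec_requests
WHERE percent_complete > 0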

Chicago SQL Server User Group October Meeting

October 15, 2012

Last Thursday was the October meeting of the Chicago SQL Server User Group. This was the fourth meeting that Aaron Lowe (b|t) and I have run and things have gone remarkably smoothly. Many thanks to Bill Lescher (t), Ross LoForte and all of the folks at the Microsoft MTC for providing us a solid foundation and great facilities to work with.

The sponsor for this month’s meeting was SWC Technology Partners. Erica Lee and Lowry Kozlowski spoke about the company and the positions they have open. Thanks to Erica, Lowry and SWC for their time and for the pizza.

That brought us to the presentation portion of the evening. I was on deck to give my first presentation in front of a user group, entitled Interrogating the Transaction Log. While I hadn’t practiced as much as I would have liked, I’d run through the various sections enough to be confidently nervous. I feel like I did a decent job, but would have definitely benefited from some additional practice. The only glitch I ran into was that my laptop refused to go into duplicate presentation mode, which meant I had to keep looking over my shoulder during the demos. Despite that, the feedback I received was positive and a number of people told me they had learned something new, which was why I wanted to present in the first place.

Thanks to everyone who attended the meeting and provided feedback. Further thanks to Jes Borland (b|t), Paul Randal (b|t), Kimberly Tripp (b|t), Brent Ozar (b|t), Aaron Lowe and all of the others who have provided encouragement, inspiration and advice.

The August Chicago SQL Server User Group Meeting

August 10, 2012

Last night was the August meeting of the Chicago SQL Server User Group. It was the second meeting Aaron Lowe and I have run. Ira Whiteside of Melissa Data was kind enough to sponsor the meeting and was scheduled to present. Unfortunately, Ira ran into car problems on the way down and could not make it. In Ira’s absence, Aaron and I decided that we would each do a presentation. Aaron presented on Writing Professional Database Code, which was excellent. Statement terminators and version control for database code are two of Aaron’s suggestions I will bring back to my team.

I started off the meeting with a short presentation introducing Adam Machanic’s sp_whoisactive. If you do any troubleshooting of SQL Server issues, sp_whoisactive is a tool you should be using. It consolidates the kind of information you would get from sp_who2 with data from sys.dm_exec_requests and a number of other DMOs to give you a quick and complete picture of how each process is performing on your system. You can pull the execution plan for each process and have the option to generate deltas for a number of metrics over a period of time. The best part is that the procedure is free to download. The fact that Adam Machanic provides the code for free is a testament to the strength of the SQL Server community.
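
To give a sense of the options mentioned above, here are a few sample calls; @get_plans and @delta_interval are the parameter names in Adam’s documentation, but check the version you download:

-- Basic call: one row per active request
EXEC sp_WhoIsActive

-- Include the execution plan for each active request
EXEC sp_WhoIsActive @get_plans = 1

-- Capture deltas for the cumulative metrics over a 10 second interval
EXEC sp_WhoIsActive @delta_interval = 10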

If you live in the Chicago area and are not registered for the user group, you can do so here. You will receive e-mail notification for upcoming meetings. We are looking for group members to present at upcoming meetings. If you are interested, you can e-mail me here. I hope to see you at an upcoming meeting.

Categories: CSSUG, DMO, sp_whoisactive

Calculating IO Deltas Using sys.dm_io_virtual_file_stats

August 2, 2012

In a previous post, I showed how to calculate deltas for cumulative values from sys.dm_os_performance_counters. At the end of that post, I mentioned that the counter values are reset when the instance restarts, which can result in negative delta values. For the purposes of that script I did not care about negative values.

However, we are using cumulative values from sys.dm_io_virtual_file_stats to generate a report in Reporting Services, and the negative values are mucking up our graphs. The script I used to solve the problem is listed below. As usual, I hope you can put the ideas it contains to use.

USE DUDDBA_MASTER
GO

-- Declare variables and create temp tables
DECLARE @sqlstr NVARCHAR(4000)
DECLARE @servername VARCHAR(40)
DECLARE @totalrows INT
DECLARE @currentrow INT
DECLARE @newdate DATETIME
DECLARE @olddate DATETIME


CREATE TABLE #serverlist 
(servername VARCHAR(40) NOT NULL)

CREATE TABLE #IODATA
(instancename VARCHAR(8)
,databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

CREATE TABLE #IODATANEW
(databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

CREATE TABLE #IODATAOLD
(databasename VARCHAR(100)
,runtime DATETIME
,readcount BIGINT
,bytesread BIGINT
,writecount BIGINT
,byteswritten BIGINT)

-- Initialize the @newdate and @olddate variables for the inner loop
SET @newdate = CONVERT(DATE,GETDATE())
SET @olddate = DATEADD(DD,-1,@newdate)

-- Build a list of linked servers for the outer loop, excluding non-2008 instances 
INSERT INTO #serverlist
SELECT 'DUL' + s1.DPC_SQL_CLUSTER_NM 
FROM DUT_POD_SQL_SERVER s1
INNER JOIN DUT_POD_SQL_VERSION s2
ON s1.DPC_SQL_CLUSTER_NM = s2.SVR_HOST_NM
WHERE s2.SVR_SQL_VERSION_TXT LIKE '%10.%' 

-- Store the @@ROWCOUNT value from the insert statement to control the outer loop
SET @totalrows = @@ROWCOUNT
SET @currentrow = 1

-- Execute the outer loop once for each linked server
WHILE @currentrow <= @totalrows
BEGIN

	-- Pull the first linked server name from the table
	SET @servername = (SELECT TOP 1 servername FROM #serverlist)
	
	-- Run the inner loop from the current date until 2012-01-01
	WHILE @olddate > '2012-01-01'
		BEGIN
			-- Build the dynamic sql string
			SET @sqlstr = 	
			'
			-- The DUT_FILE_IO table contains cumulative IO metrics from sys.dm_io_virtual_file_stats and is populated every 5 minutes
			-- The statement below pulls the values for the current day at midnight into #IODATANEW excluding non-user databases
			INSERT INTO #IODATANEW
			SELECT FIO_DATABASE_NM
			,FIO_RUN_TS
			,SUM(FIO_NBR_READS_CNT) AS [Total Reads]
			,SUM(FIO_BYTES_READ_CNT) AS [Total Bytes Read]
			,SUM(FIO_NBR_WRITES_CNT) AS [Total Writes]
			,SUM(FIO_BYTES_WRITE_CNT) AS [Total Bytes Written]
			FROM ' + @servername + '.duddba.dbo.DUT_FILE_IO 
			WHERE DATEPART(HH,FIO_RUN_TS) = 00
			AND DATEPART(MI,FIO_RUN_TS) = 00
			AND DATEPART(SS,FIO_RUN_TS) = 00
			AND CONVERT(DATE,FIO_RUN_TS) = ' + '''' + CONVERT(VARCHAR(30),@newdate,120)  + '''' +
			'
			AND FIO_DATABASE_NM NOT IN (''DUDDBA'',''DUDREPO'',''DUDDUMMY'',''DUDTRACE'',''master'',''model'',''msdb'')
			GROUP BY FIO_DATABASE_NM, FIO_RUN_TS
			ORDER BY FIO_RUN_TS DESC, FIO_DATABASE_NM

			-- The statement below pulls the values for the previous day at midnight into #IODATAOLD excluding non-user databases			
			INSERT INTO #IODATAOLD
			SELECT FIO_DATABASE_NM
			,FIO_RUN_TS
			,SUM(FIO_NBR_READS_CNT) AS [Total Reads]
			,SUM(FIO_BYTES_READ_CNT) AS [Total Bytes Read]
			,SUM(FIO_NBR_WRITES_CNT) AS [Total Writes]
			,SUM(FIO_BYTES_WRITE_CNT) AS [Total Bytes Written]
			FROM ' + @servername + '.duddba.dbo.DUT_FILE_IO 
			WHERE DATEPART(HH,FIO_RUN_TS) = 00
			AND DATEPART(MI,FIO_RUN_TS) = 00
			AND DATEPART(SS,FIO_RUN_TS) = 00
			AND CONVERT(DATE,FIO_RUN_TS) = ' + '''' + CONVERT(VARCHAR(30),@olddate,120)  + '''' +
			'
			AND FIO_DATABASE_NM NOT IN (''DUDDBA'',''DUDREPO'',''DUDDUMMY'',''DUDTRACE'',''master'',''model'',''msdb'')
			GROUP BY FIO_DATABASE_NM, FIO_RUN_TS
			ORDER BY FIO_RUN_TS DESC, FIO_DATABASE_NM
			
			
			-- The insert statement below calculates the delta values for the IO metrics collected
			-- The series of CASE statements accounts for the possibility of a negative value
			-- If the delta value is positive, it is used
			-- If it is negative, the raw value is used
			-- This is not the ideal solution, because it does not account for the cumulative values
			-- prior to the restart but it is good enough for the purposes of the reports we are running

			INSERT INTO #IODATA
			SELECT ' + '''' + SUBSTRING(@servername,4,8) + '''' +',n.databasename, n.runtime, 
			CASE
				WHEN ((n.readcount - o.readcount) > 0)
			THEN
				(n.readcount - o.readcount) 
			ELSE
				n.readcount
			END,
			CASE
				WHEN ((n.bytesread - o.bytesread) > 0)
			THEN
				(n.bytesread - o.bytesread)
			ELSE
				n.bytesread
			END,
			CASE
				WHEN ((n.writecount - o.writecount) > 0)
			THEN
				(n.writecount - o.writecount)
			ELSE
				n.writecount
			END,
			CASE
				WHEN ((n.byteswritten - o.byteswritten) > 0)
			THEN
				(n.byteswritten - o.byteswritten)
			ELSE
				n.byteswritten
			END
			FROM #IODATANEW n  
			INNER JOIN #IODATAOLD o  
			ON n.runtime = ' + '''' + CONVERT(VARCHAR(30),@newdate,120)  + '''' +
			'AND n.databasename = o.databasename
			WHERE o.runtime = ' + '''' + CONVERT(VARCHAR(30),@olddate,120)  + '''' +
			'
			'
			
			-- Execute the dynamic SQL string
			EXEC(@sqlstr)
			
			--Reset the new and old date variables
			SET @newdate = @olddate
			SET @olddate = DATEADD(DD,-1,@olddate)
			
			-- Clear out the new and old temp tables
			DELETE FROM #IODATANEW
			DELETE FROM #IODATAOLD
			
		END
	
	-- Insert the results for each linked server into the permanent table			
	INSERT INTO DUT_FILE_IO_DELTA
	SELECT * FROM #IODATA
	
	-- Delete the current linked server from the #serverlist table	
	DELETE FROM #serverlist WHERE servername = @servername

	-- Increment the loop counter
	SET @currentrow = @currentrow + 1
	
	-- Clear out the #IODATA table
	DELETE FROM #IODATA

	-- Reset the new and old date variables for the next linked server
	SET @newdate = CONVERT(DATE,GETDATE())
	SET @olddate = DATEADD(DD,-1,@newdate)
	
END


-- Clean up the temp tables at the end of the script
DROP TABLE #serverlist
DROP TABLE #IODATA
DROP TABLE #IODATAOLD
DROP TABLE #IODATANEW

Categories: DMO, SQL Server, T-SQL

Tracking and Calculating Wait Times – Part II

July 3, 2012

In my last post, I described the process we use to collect wait information on our systems. Because SQL Server stores cumulative values, the raw wait time data is not very useful. The following script shows how to pull daily data and calculate the deltas.

This script was written for a vended application that has been experiencing slow response times. The vendor asked for wait time information to see if there was a correlation between increased wait times and the slowdown.

USE [Your Database Name]
GO

-- Create the necessary temp tables and variables

CREATE TABLE #waits
(waittype VARCHAR(100)
,waitdate DATETIME
,waittime BIGINT)

CREATE TABLE #waitsnew
(waittypenew VARCHAR(100)
,waitdatenew DATETIME
,waittimenew BIGINT)

CREATE TABLE #waitsold
(waittypeold VARCHAR(100)
,waitdateold DATETIME
,waittimeold BIGINT)

CREATE TABLE #waitsdelta
(waittypedelta VARCHAR(100)
,waitdatedelta DATETIME
,waittimedelta BIGINT)

DECLARE @olddate DATETIME
DECLARE @newdate DATETIME
DECLARE @totalcount INT
DECLARE @oldcount INT
DECLARE @newcount INT
DECLARE @loopcount INT

-- initialize the @loopcount variable
SET @loopcount = 1

-- populate the #waits temp table with data for the date range desired
-- DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT)) returns the date without the time
-- because the wait time value is cumulative, the max value will be the 
-- total count for the day
-- exclude system waits using WHERE clause from Jonathan Kehayias' blog post
-- http://www.simple-talk.com/sql/performance/a-performance-troubleshooting-methodology-for-sql-server/

INSERT INTO #waits
SELECT WAT_WAIT_TYPE_NM, DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT)), MAX(WAT_WAIT_TM_MS_NBR)
FROM [Your Database Name].[dbo].[Your Table Name]
WHERE WAT_CAPTURE_DT > '2012-06-01'
AND WAT_WAIT_TYPE_NM NOT IN -- filter out additional irrelevant waits
( 'SLEEP_TASK', 'BROKER_TASK_STOP', 'BROKER_TO_FLUSH',
'SQLTRACE_BUFFER_FLUSH','CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 
'LAZYWRITER_SLEEP', 'SLEEP_SYSTEMTASK', 'SLEEP_BPOOL_FLUSH',
'BROKER_EVENTHANDLER', 'XE_DISPATCHER_WAIT', 'FT_IFTSHC_MUTEX',
'CHECKPOINT_QUEUE', 'FT_IFTS_SCHEDULER_IDLE_WAIT', 
'BROKER_TRANSMITTER', 'FT_IFTSHC_MUTEX', 'KSOURCE_WAKEUP',
'LAZYWRITER_SLEEP', 'LOGMGR_QUEUE', 'ONDEMAND_TASK_QUEUE',
'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT', 'BAD_PAGE_PROCESS',
'DBMIRROR_EVENTS_QUEUE', 'BROKER_RECEIVE_WAITFOR',
'PREEMPTIVE_OS_GETPROCADDRESS', 'PREEMPTIVE_OS_AUTHENTICATIONOPS',
'WAITFOR', 'DISPATCHER_QUEUE_SEMAPHORE', 'XE_DISPATCHER_JOIN',
'RESOURCE_QUEUE' )
AND WAT_WAIT_TM_MS_NBR > 0
GROUP BY WAT_WAIT_TYPE_NM, DATEADD(dd, 0, DATEDIFF(dd, 0, WAT_CAPTURE_DT))
ORDER BY 2, 3 DESC 

-- set the loop limit equal to the number of rows returned above
SET @totalcount = @@ROWCOUNT

-- run a loop to calculate the delta value for each day
WHILE @loopcount <= @totalcount
BEGIN

-- select the minimum date from the #waits table
SELECT @olddate = MIN(waitdate)
FROM #waits

-- select the next date from the #waits table
SET @newdate = DATEADD(dd,1,@olddate)

-- insert the row for the old date into #waitsold
INSERT INTO #waitsold
SELECT * FROM #waits
WHERE waitdate = @olddate

-- set @oldcount equal to the row count
SET @oldcount = @@ROWCOUNT

-- insert the rows for the new date into #waitsnew
INSERT INTO #waitsnew
SELECT * FROM #waits
WHERE waitdate = @newdate

-- join #waitsnew and #waitsold, inserting the wait type, date 
-- and delta between the new and old times into #waitsdelta
INSERT INTO #waitsdelta
SELECT n.waittypenew, n.waitdatenew, (n.waittimenew - o.waittimeold)
FROM #waitsnew n
INNER JOIN #waitsold o
ON n.waitdatenew = @newdate
AND n.waittypenew = o.waittypeold
WHERE o.waitdateold = @olddate

-- delete the old data from #waits
DELETE FROM #waits
WHERE waitdate = @olddate

-- reset the old and new dates to the next pair of values
SET @olddate = @newdate
SET @newdate = DATEADD(DD,1,@newdate)

-- increment the @loopcount 
SET @loopcount = @loopcount + @oldcount

END

-- select the delta values from #waitsdelta
SELECT waittypedelta, waitdatedelta, waittimedelta
FROM #waitsdelta
ORDER BY waittypedelta, waitdatedelta

-- clean up temp tables
DROP TABLE #waits
DROP TABLE #waitsold
DROP TABLE #waitsnew
DROP TABLE #waitsdelta

One thing to keep in mind is that the wait time values are zeroed out when the instance restarts, so a restart will leave you with negative values for your wait times. Because I am running this as an ad hoc query, I can work around this possibility. If you wanted to automate the process, I would suggest taking this into consideration; one way to handle it is sketched below.
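
The sketch replaces the #waitsdelta insert from the script above: if a delta goes negative, it assumes a restart happened and falls back to the raw value, the same approach the IO delta script above takes.

-- Sketch: restart-aware version of the #waitsdelta insert
-- If the delta is negative, assume a restart and use the raw new value
INSERT INTO #waitsdelta
SELECT n.waittypenew, n.waitdatenew,
CASE
WHEN (n.waittimenew - o.waittimeold) >= 0
THEN (n.waittimenew - o.waittimeold)
ELSE n.waittimenew
END
FROM #waitsnew n
INNER JOIN #waitsold o
ON n.waitdatenew = @newdate
AND n.waittypenew = o.waittypeold
WHERE o.waitdateold = @olddate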

I learned a lot working on this problem, and I hope someone out there can make use of it.

Categories: DMO, SQL Server, SQL Waits, T-SQL

Tracking and Calculating Wait Times – Part I

June 20, 2012

Several weeks ago, I took a call about a vended application that was experiencing slowness on a regular basis on Wednesdays. Members of my team had looked at the performance of the app from a SQL Server perspective during the slow period and could not find anything out of line. The application team had also worked with the Windows, Storage and Network teams, and no red flags were found. The vendor asked if we could provide information about SQL Server wait times.

The data they were interested in is available from sys.dm_os_wait_stats, but the wait time values are cumulative, so the raw values are not very useful. Fortunately, one of my former co-workers (I owe him a beer) set up a process that created the table scripted below:

USE [Your Database Name]
GO

/****** Object:  Table [dbo].[Your Table Name]    Script Date: 06/18/2012 21:27:17 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Your Table Name]
([WAT_WAIT_TYPE_NM] [nvarchar](120) NOT NULL,
 [WAT_WAITING_TASKS_NBR] [bigint] NOT NULL,
 [WAT_WAIT_TM_MS_NBR] [bigint] NOT NULL,
 [WAT_MAX_WAIT_TIME_MS_NBR] [bigint] NOT NULL,
 [WAT_SIGNAL_WAIT_TIME_MS_NBR] [bigint] NOT NULL,
 [WAT_CAPTURE_DT] [datetime] NOT NULL
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[Your Table Name] ADD  DEFAULT (getdate()) FOR [WAT_CAPTURE_DT]
GO

The process runs the following stored procedure hourly from an agent job (a sketch of the job setup follows the procedure):

USE [Your Database Name]
GO
/****** Object:  StoredProcedure [dbo].[Your Procedure Name]    Script Date: 06/18/2012 21:28:15 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[Your Procedure Name]
AS
SET NOCOUNT ON ;

        INSERT INTO [dbo].[Your Table Name] (
                        WAT_WAIT_TYPE_NM,
                        WAT_WAITING_TASKS_NBR,
                        WAT_WAIT_TM_MS_NBR,
                        WAT_MAX_WAIT_TIME_MS_NBR,
                        WAT_SIGNAL_WAIT_TIME_MS_NBR,
                        WAT_CAPTURE_DT)

        SELECT  [wait_type],
                [waiting_tasks_count],
                [wait_time_ms],
                [max_wait_time_ms],
                [signal_wait_time_ms],
                GETDATE()
        FROM sys.dm_os_wait_stats

GO
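
For reference, here is a minimal sketch of the kind of agent job involved. The job and schedule names are hypothetical, and the placeholders match the ones used above:

USE msdb
GO

-- Create the job (hypothetical name)
EXEC dbo.sp_add_job @job_name = N'Capture Wait Stats'

-- Single step: run the capture procedure
EXEC dbo.sp_add_jobstep
@job_name = N'Capture Wait Stats',
@step_name = N'Run capture procedure',
@subsystem = N'TSQL',
@database_name = N'Your Database Name',
@command = N'EXEC [dbo].[Your Procedure Name]'

-- Schedule: daily, repeating every hour
EXEC dbo.sp_add_jobschedule
@job_name = N'Capture Wait Stats',
@name = N'Hourly',
@freq_type = 4,             -- daily
@freq_interval = 1,
@freq_subday_type = 8,      -- units of hours
@freq_subday_interval = 1   -- every 1 hour

-- Target the local server
EXEC dbo.sp_add_jobserver @job_name = N'Capture Wait Stats'
GO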

I would recommend using a process like this to give yourself the ability to troubleshoot wait times historically; it is simple to set up and very lightweight. We are storing six months’ worth of data and the table is less than 4 MB in size.
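
Keeping the table to six months implies a purge; a minimal sketch of the statement involved, assuming the placeholder table name above, would be:

-- Hypothetical purge: remove rows older than six months
DELETE FROM [dbo].[Your Table Name]
WHERE WAT_CAPTURE_DT < DATEADD(MONTH, -6, GETDATE())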

This gave me the data I was looking for, but because the wait times are cumulative I still needed to calculate the deltas. I will cover the process I put together in my next post.

Joining sys.dm_tran_database_transactions to fn_dblog Results

June 11, 2012

In my previous two posts, Reading the Transaction Log Using fn_dblog and Using dm_tran_database_transactions and a First Step to Converting LSNs, I demonstrated using these DMOs to see what transactions are executing and to read the transaction log. In this post, I’ll show how to join those results together. Here is the script I used:

USE [Your Database Name Here]
GO

--  Declare variables
DECLARE @startlsn VARBINARY(15)
DECLARE @lastlsn VARBINARY(15)
DECLARE @LSN_START NVARCHAR(50)
DECLARE @LSN_END NVARCHAR(50)
DECLARE @dbname NVARCHAR(100)
DECLARE @bytesused BIGINT
DECLARE @bytesreserved BIGINT
DECLARE @loopcount INT
DECLARE @looplimit INT

-- Initialize the loop counter
SET @loopcount = 1

-- Create a temp table to hold all of the active transactions from
-- sys.dm_tran_database_transactions
CREATE TABLE #tranresults
(dbname NVARCHAR(100)
,bytesused BIGINT
,bytesreserved BIGINT
,startlsn VARBINARY(15)
,endlsn VARBINARY(15))

-- Create a temp table to hold data about each transaction and the associated log records
CREATE TABLE #logresults
(dbname NVARCHAR(100)
,bytesused BIGINT
,bytesreserved BIGINT
,spid INT
,begintime NVARCHAR(48)
,currentlsn VARCHAR(46)
,previouslsn VARCHAR(46)
,transactionid VARCHAR(28)
,operation NVARCHAR(62)
,context VARCHAR(62)
,logreserve INT
,allocunitname NVARCHAR(774)
,pageid NVARCHAR(28)
,numberoflocks SMALLINT
,lockinformation NVARCHAR(512))

-- Insert data from sys.dm_tran_database_transactions for all transactions that have
-- generated log records
-- Notice the use of the fn_convertnumericlsntobinary function
-- This is necessary for subsequent SUBSTRING operations
INSERT INTO #tranresults
SELECT DB_NAME(database_id),
database_transaction_log_bytes_used,
database_transaction_log_bytes_reserved,
master.dbo.fn_convertnumericlsntobinary(database_transaction_begin_lsn),
master.dbo.fn_convertnumericlsntobinary(database_transaction_last_lsn)
FROM sys.dm_tran_database_transactions
WHERE database_transaction_begin_lsn IS NOT NULL
AND database_transaction_last_lsn IS NOT NULL
AND database_transaction_state = 4
AND database_id = DB_ID()

-- Set the @looplimit variable to the number of rows selected from
-- sys.dm_tran_database_transactions
SET @looplimit = @@ROWCOUNT

-- Run through this loop once for each row selected from
-- sys.dm_tran_database_transactions
WHILE @loopcount <= @looplimit
BEGIN

-- Select data from the first row into variables for later use
	SELECT TOP 1
	@dbname = dbname,
	@bytesused = bytesused,
	@bytesreserved = bytesreserved,
	@startlsn = startlsn,
	@lastlsn = endlsn
	FROM #tranresults

-- Convert the starting and ending lsn for use in the fn_dblog function below
-- The following code is necessary because fn_dblog expects the lsn as colon-delimited
-- numeric values, represented as VARCHAR
-- I initially thought I could get away with converting the numeric lsn to VARCHAR and
-- running the SUBSTRING operations on it, but the numeric value is variable in length

	SET @LSN_START = CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,1,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,5,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@startlsn,9,2)))

	SET @LSN_END = CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,1,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,5,4))) + ':' +
	CONVERT(VARCHAR(10),CONVERT(INT,SUBSTRING(@lastlsn,9,2)))

	-- Debug output: display the converted lsn strings
	SELECT @LSN_START
	SELECT @LSN_END

-- Now having re-reconverted the lsns, exit our hero, stage left
-- For transactions with a non-null start and end lsn, insert required data into
-- the #logresults temp table
	IF (@LSN_START IS NOT NULL) AND (@LSN_END IS NOT NULL)
	BEGIN
	INSERT INTO #logresults
	SELECT
	@dbname AS [Database],
	@bytesused AS [Bytes Used],
	@bytesreserved AS [Bytes Reserved],
	SPID,
	[Begin Time],
	[Current LSN],
	[Previous LSN],
	[Transaction ID],
	Operation,
	Context,
	[Log Reserve],
	AllocUnitName,
	[Page ID],
	[Number of Locks],
	[Lock Information]
	FROM ::fn_dblog(@LSN_START, @LSN_END)
	END

-- Delete the top row from the #tranresults temp table
	DELETE TOP (1) FROM #tranresults
	SET @loopcount = @loopcount + 1

END

SELECT * FROM #logresults
ORDER BY currentlsn

-- Clean up the temp tables
DROP TABLE #logresults
DROP TABLE #tranresults

The comments do a good job of explaining what the script is doing, but I want to clarify a few things. Because I was unable to join the DMOs on transaction ID, I decided to use the begin and end lsn values from the DMV as input parameters to fn_dblog. And here’s where the fun began. The DMV stores the lsns as numeric values. The function expects a colon-delimited VARCHAR string made up of numbers. For example, if the DMV stores an lsn value of 4242000000012300002, fn_dblog will expect 4242:123:2.

To do the conversion, I followed these steps (a standalone sketch follows the list):

  1. Converted the values to hex using fn_convertnumericlsntobinary;
  2. Pulled out each part of the lsn with SUBSTRING and converted it to INT;
  3. Converted the INT values to VARCHAR;
  4. Concatenated the VARCHAR values with colons.
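
Here is the standalone sketch, using the example value from above. It takes a different route than the script, slicing the numeric lsn as a string instead of going through fn_convertnumericlsntobinary, which also sidesteps the variable length problem:

-- Sketch: convert a numeric lsn to the colon-delimited form fn_dblog expects
-- The last 5 digits are the log record number, the 10 digits before that
-- are the log block, and the remainder is the VLF sequence number
DECLARE @numericlsn NUMERIC(25,0)
DECLARE @lsnstr VARCHAR(25)

SET @numericlsn = 4242000000012300002
SET @lsnstr = CONVERT(VARCHAR(25), @numericlsn)

SELECT CONVERT(VARCHAR(10), CONVERT(INT, LEFT(@lsnstr, LEN(@lsnstr) - 15))) + ':'
+ CONVERT(VARCHAR(12), CONVERT(BIGINT, SUBSTRING(@lsnstr, LEN(@lsnstr) - 14, 10))) + ':'
+ CONVERT(VARCHAR(10), CONVERT(INT, RIGHT(@lsnstr, 5))) AS [LSN for fn_dblog]
-- Returns 4242:123:2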

And so, having re-reconverted the lsns, our hero is ready to pass them into fn_dblog. The results of fn_dblog are inserted to the temp table for each set of lsns, which is selected from at the end of the script.

A few points about the result set: fn_dblog is database-specific, and the result set only contains log records for currently active transactions. Because the tran log acts like a stream of consciousness for the database, you may see additional LOP_BEGIN_XACT records following the LOP_BEGIN_XACT for the first transaction returned; that is, one transaction can begin and commit while another transaction is still open. Notice that the currentlsn values are sequential while the previouslsn values are grouped by transactionid.
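
You can observe the interleaving yourself: leave a transaction open in one session, run a few short transactions in another, and then look at the begin and commit records in the active portion of the log, which is what fn_dblog(NULL, NULL) returns:

-- Show begin/commit records for the active portion of the log
-- Records from different transactions will be interleaved by LSN
SELECT [Current LSN], [Previous LSN], [Transaction ID], Operation, [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_COMMIT_XACT')
ORDER BY [Current LSN]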


I hope this series of posts provides insight into the transaction log and how to relate it to active transactions on an instance.