
Archive for April, 2012

SQL Server Log Space Reservation – Part II

April 30, 2012

In my last post, I wrote about SQL Server reserving space for open transactions to account for potential rollback. This reserved space is released once the transaction commits. You can track the amount of space reserved using the sys.dm_tran_database_transactions DMO. In this post, I will demonstrate how expensive this space reservation can be for long-running transactions.
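If you just want a quick look at what an open transaction is currently holding, a query along these lines will show the log bytes used and reserved per open transaction. This is a minimal sketch (the join to sys.dm_tran_session_transactions simply maps each transaction to its session); the full collection script I used for the comparison appears later in this post.

-- Quick check of log space used and reserved by open transactions
-- sys.dm_tran_session_transactions maps each transaction to its session
SELECT st.session_id,
	dt.database_id,
	DB_NAME(dt.database_id) AS [Database Name],
	dt.database_transaction_log_record_count AS [Log Record Count],
	dt.database_transaction_log_bytes_used AS [Log Bytes Used],
	dt.database_transaction_log_bytes_reserved AS [Log Bytes Reserved]
FROM sys.dm_tran_database_transactions dt
JOIN sys.dm_tran_session_transactions st
	ON dt.transaction_id = st.transaction_id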

I created two databases so I could track log space reservation for two processes. The first updates every row in a 100,000-row table 10 times, running all updates within a single transaction. The second runs each update in its own transaction. Here is the script to create the databases:

CREATE DATABASE [DUDLOOPTRAN]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'DUDLOOPTRAN_Data', FILENAME = N'D:\Data\DUDLOOPTRAN_01.mdf' , SIZE = 2097152KB , FILEGROWTH = 1048576KB )
 LOG ON 
( NAME = N'DUDLOOPTRAN_log', FILENAME = N'D:\Data\DUDLOOPTRAN_01.ldf' , SIZE = 8388608KB , FILEGROWTH = 2097152KB )
GO
ALTER DATABASE [DUDLOOPTRAN] SET COMPATIBILITY_LEVEL = 110
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_NULL_DEFAULT OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_NULLS OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_PADDING OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET ANSI_WARNINGS OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET ARITHABORT OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_CLOSE OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_CREATE_STATISTICS ON 
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_SHRINK OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_UPDATE_STATISTICS ON 
GO
ALTER DATABASE [DUDLOOPTRAN] SET CURSOR_CLOSE_ON_COMMIT OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET CURSOR_DEFAULT  GLOBAL 
GO
ALTER DATABASE [DUDLOOPTRAN] SET CONCAT_NULL_YIELDS_NULL OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET NUMERIC_ROUNDABORT OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET QUOTED_IDENTIFIER OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET RECURSIVE_TRIGGERS OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET  DISABLE_BROKER 
GO
ALTER DATABASE [DUDLOOPTRAN] SET AUTO_UPDATE_STATISTICS_ASYNC OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET DATE_CORRELATION_OPTIMIZATION OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET PARAMETERIZATION SIMPLE 
GO
ALTER DATABASE [DUDLOOPTRAN] SET READ_COMMITTED_SNAPSHOT OFF 
GO
ALTER DATABASE [DUDLOOPTRAN] SET  READ_WRITE 
GO
ALTER DATABASE [DUDLOOPTRAN] SET RECOVERY FULL 
GO
ALTER DATABASE [DUDLOOPTRAN] SET  MULTI_USER 
GO
ALTER DATABASE [DUDLOOPTRAN] SET PAGE_VERIFY CHECKSUM  
GO
ALTER DATABASE [DUDLOOPTRAN] SET TARGET_RECOVERY_TIME = 0 SECONDS 
GO
USE [DUDLOOPTRAN]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY') ALTER DATABASE [DUDLOOPTRAN] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO

CREATE DATABASE [DUDONETRAN]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'DUDONETRAN_Data', FILENAME = N'D:\Data\DUDONETRAN_01.mdf' , SIZE = 2097152KB , FILEGROWTH = 1048576KB )
 LOG ON 
( NAME = N'DUDONETRAN_log', FILENAME = N'D:\Data\DUDONETRAN_01.ldf' , SIZE = 8388608KB , FILEGROWTH = 2097152KB )
GO
ALTER DATABASE [DUDONETRAN] SET COMPATIBILITY_LEVEL = 110
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_NULL_DEFAULT OFF 
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_NULLS OFF 
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_PADDING OFF 
GO
ALTER DATABASE [DUDONETRAN] SET ANSI_WARNINGS OFF 
GO
ALTER DATABASE [DUDONETRAN] SET ARITHABORT OFF 
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_CLOSE OFF 
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_CREATE_STATISTICS ON 
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_SHRINK OFF 
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_UPDATE_STATISTICS ON 
GO
ALTER DATABASE [DUDONETRAN] SET CURSOR_CLOSE_ON_COMMIT OFF 
GO
ALTER DATABASE [DUDONETRAN] SET CURSOR_DEFAULT  GLOBAL 
GO
ALTER DATABASE [DUDONETRAN] SET CONCAT_NULL_YIELDS_NULL OFF 
GO
ALTER DATABASE [DUDONETRAN] SET NUMERIC_ROUNDABORT OFF 
GO
ALTER DATABASE [DUDONETRAN] SET QUOTED_IDENTIFIER OFF 
GO
ALTER DATABASE [DUDONETRAN] SET RECURSIVE_TRIGGERS OFF 
GO
ALTER DATABASE [DUDONETRAN] SET  DISABLE_BROKER 
GO
ALTER DATABASE [DUDONETRAN] SET AUTO_UPDATE_STATISTICS_ASYNC OFF 
GO
ALTER DATABASE [DUDONETRAN] SET DATE_CORRELATION_OPTIMIZATION OFF 
GO
ALTER DATABASE [DUDONETRAN] SET PARAMETERIZATION SIMPLE 
GO
ALTER DATABASE [DUDONETRAN] SET READ_COMMITTED_SNAPSHOT OFF 
GO
ALTER DATABASE [DUDONETRAN] SET  READ_WRITE 
GO
ALTER DATABASE [DUDONETRAN] SET RECOVERY FULL 
GO
ALTER DATABASE [DUDONETRAN] SET  MULTI_USER 
GO
ALTER DATABASE [DUDONETRAN] SET PAGE_VERIFY CHECKSUM  
GO
ALTER DATABASE [DUDONETRAN] SET TARGET_RECOVERY_TIME = 0 SECONDS 
GO
USE [DUDONETRAN]
GO
IF NOT EXISTS (SELECT name FROM sys.filegroups WHERE is_default=1 AND name = N'PRIMARY') ALTER DATABASE [DUDONETRAN] MODIFY FILEGROUP [PRIMARY] DEFAULT
GO

And this is the code to create the tables:

USE DUDLOOPTRAN
GO

-- Create and populate a table in each database to use in the update loops

CREATE TABLE looptranu
(runnumber INT
,rundate DATETIME
,vartext VARCHAR(10)
,chartext CHAR(10))

USE DUDONETRAN
GO

CREATE TABLE onetranu
(runnumber INT
,rundate DATETIME
,vartext VARCHAR(10)
,chartext CHAR(10))

USE DUDLOOPTRAN
GO

INSERT INTO looptranu
VALUES
(1
,GETDATE()
,REPLICATE('a',10)
,REPLICATE('b',10))
GO 100000

USE DUDONETRAN
GO

INSERT INTO onetranu
VALUES
(1
,GETDATE()
,REPLICATE('a',10)
,REPLICATE('b',10))
GO 100000

After creating the databases and tables, I ran the following scripts. First, this script runs the updates in a single transaction:

USE DUDONETRAN
GO

-- Declare @loopcount variable to run the loop
-- Declare @textlen to vary the length of the text fields

DECLARE @loopcount INT
DECLARE @textlen TINYINT
SET @loopcount = 1

-- Begin an explicit transaction that will remain open for the duration of the loop
BEGIN TRAN

WHILE @loopcount <= 10
BEGIN

	-- Use the modulus operator to set text length to the remainder of @loopcount / 10

	SET @textlen = (@loopcount % 10)
	
	-- Update onetranu using the values described below
	UPDATE onetranu
	-- Set runnumber equal to @loopcount
	SET runnumber = @loopcount, 
	-- Set rundate equal the current datetime
	rundate = GETDATE(), 
	-- Set vartext to a string of a's, with the length determined by the @textlen variable
	vartext = REPLICATE('a',@textlen), 
	-- Set chartext to a string of b's, with the length determined by the @textlen variable
	chartext = REPLICATE('b',@textlen)

	-- Increment @loopcount
	SET @loopcount = @loopcount + 1

END

COMMIT

This script runs each update in its own transaction, committing after every update:

USE DUDLOOPTRAN
GO

-- Declare @loopcount variable to run the loop
-- Declare @textlen to vary the length of the text fields

DECLARE @loopcount INT
DECLARE @textlen TINYINT
SET @loopcount = 1

-- Each update runs in its own transaction, opened and committed inside the loop


WHILE @loopcount <= 10
BEGIN
	
	BEGIN TRAN
	-- Use the modulus operator to set text length to the remainder of @loopcount / 10

	SET @textlen = (@loopcount % 10)
	
	-- Update looptranu using the values described below
	UPDATE looptranu
	-- Set runnumber equal to @loopcount
	SET runnumber = @loopcount, 
	-- Set rundate equal the current datetime
	rundate = GETDATE(), 
	-- Set vartext to a string of a's, with the length determined by the @textlen variable
	vartext = REPLICATE('a',@textlen), 
	-- Set chartext to a string of b's, with the length determined by the @textlen variable
	chartext = REPLICATE('b',@textlen)

	-- Increment @loopcount
	SET @loopcount = @loopcount + 1
	
	COMMIT
	
END

Finally, this code will insert the contents of sys.dm_tran_database_transactions into a temp table for analysis. Run this while the update scripts are running:

USE master
GO

-- Create a temp table to hold the result set from sys.dm_tran_database_transactions

CREATE TABLE #transactions
(tranid BIGINT
,databaseid INT
,trantype INT
,transtate INT
,recordcount INT
,bytesused BIGINT
,bytesreserved BIGINT)

-- Run an infinite loop for the duration of each update script
-- stopping it when the script completes

WHILE 1 = 1
BEGIN

	INSERT INTO #transactions
	SELECT transaction_id AS [Tran ID], 
	database_id AS [Database ID], 
	database_transaction_type AS [Tran Type], 
	database_transaction_state AS [Tran State], 
	database_transaction_log_record_count AS [Log Record Count],
	database_transaction_log_bytes_used AS [Log Bytes Used],
	database_transaction_log_bytes_reserved AS [Log Bytes Reserved]
	FROM sys.dm_tran_database_transactions

END

-- Use the select statement below to see the results
-- This query will return a single row per transaction
-- for read/write activity (trantype = 1)
-- that has generated log records (transtate = 4)

SELECT tranid, 
MAX(recordcount) AS [Record Count], 
(MAX(bytesused)/1048576) AS [MB Used], 
(MAX(bytesreserved)/1048576) AS [MB Reserved] 
FROM #transactions
WHERE databaseid = DB_ID('DUDONETRAN')  -- Make sure to use the correct database name
AND trantype = 1
AND transtate = 4
GROUP BY tranid

-- Clean up the temp table after the run

--DROP TABLE #transactions

Here are the results of the single transaction:

DUDONETRAN Results

And here are the results of the explicit transactions in the loop:
DUDLOOPTRAN Results

 
As you can see, the total space used and reserved is approximately the same in both cases. The difference is that the updates running in separate transactions release the reserved space each time a transaction commits. While the total space reserved across the looped transactions is 344 MB, no more than 36 MB is reserved at any one time. The 342 MB reserved for the single transaction, however, is held for the duration of the transaction.

The examples I’ve shown here update 100,000 narrow rows 10 times. The cost of running updates or inserts against much wider rows millions of times would be dramatically greater. I hope these examples have demonstrated how expensive that can get.

SQL Server Log Space Reservation – Part I

April 27, 2012

My biggest challenge in blogging about the SQLskills Internals Immersion Event is picking a topic to start with. I am going to choose log file space consumption because of an issue that occurred this week in our test environment.

An application ran a job that inserted over 400 million rows in a single transaction. After the job had run for more than 24 hours, the application team asked for it to be killed because it was “taking too long.” Go figure. They were advised that the rollback could take as long as the job had already run, but they decided to have the process killed anyway. The rollback ended up taking about the same amount of time. When SQL Server rolls back a transaction, it generates and executes an anti-operation for each record in the log. In the case of 400 million inserts, 400 million deletes had to be generated and executed.

While I was aware of this behavior going into the class, I learned that this scenario is even more costly because SQL Server reserves space in the log for any potential rollback activity. This means all of the log records for the inserts are written to the log, and SQL Server reserves additional space for the rollback, more than doubling the amount of space needed. You can see the amount of space reserved by a transaction in the database_transaction_log_bytes_reserved column of the sys.dm_tran_database_transactions dynamic management object. The reserved space is released once the transaction commits. Additionally, because all of the log records for the large transaction are needed for a rollback, SQL Server cannot truncate and reuse the log, so the log file will continue to grow.
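One quick way to see that last effect (a small check, separate from the DMO above) is to look at log_reuse_wait_desc in sys.databases; while a large transaction is still open, it typically reports ACTIVE_TRANSACTION for the affected database:

-- Check what is currently preventing log reuse
-- A long-running open transaction shows up as ACTIVE_TRANSACTION
SELECT name,
	log_reuse_wait_desc
FROM sys.databases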

Both of these problems can be addressed by running the work in smaller batches instead of one giant transaction. Committing more frequently releases the reserved undo space and increases the probability that SQL Server can reuse the existing log file. In my next post, I will compare the log file use and reservation of a single long-running transaction to shorter, batched transactions performing the same activity.
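To give a rough idea of the batching pattern I have in mind, here is a sketch against a hypothetical dbo.BigTable; the table, predicate, and batch size are placeholders, not the actual job:

-- Process the work in smaller transactions so reserved log space is
-- released at each commit and the log can be reused
-- dbo.BigTable and its WHERE clause are hypothetical placeholders

DECLARE @batchsize INT
SET @batchsize = 10000

WHILE 1 = 1
BEGIN

	DELETE TOP (@batchsize)
	FROM dbo.BigTable
	WHERE processed = 1

	-- Stop once a batch affects no rows
	IF @@ROWCOUNT = 0
		BREAK

END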

SQL Friends Lunch – Friday, May 18th

April 25, 2012

I have survived the week-long SQLskills Internals Immersion Event, having spent the better part of the weekend recovering. I am still organizing my notes, but expect to have a long series of blog posts up starting this week. It was the best, most challenging training I have taken, and I cannot recommend it highly enough.

I wanted to let people know about the second SQLFriends lunch on May 18th, 2012. The lunch will be hosted by SQL MVP Ted Krueger (blog | twitter). You can register for the lunch here. I attended the inaugural event hosted by Brent Ozar, and it was a great opportunity to talk with a SQL MVP and meet other DBAs.

The lunch is just down the road from SQL Saturday #119. As of today, registration is still open. I attended my first SQL Saturday in Chicago last year and it was great. The sessions were first-rate and I met some great people who are involved in the community. Register today!

Guest Blogger – Jes Borland

April 19, 2012

I saw Jes give this presentation at SQL Saturday last year. She was the first person to make me think I could write my own blog. She has recently joined Brent Ozar PLF. You can find the link to her blog below and I recommend it highly. She is a constant source of inspiration and I am very happy to have her post here.

My Favorite DMOs

First, let me thank Frank for letting me do a guest post here! I’ve gotten to know Frank over the past couple of years through SQL Saturday events and user group meetings. I was very excited to hear he’d started blogging! You can follow my blogs regularly at LessThanDot.com.

When I read his blog post about sys.dm_os_performance_counters, I started thinking about my favorite DMOs (dynamic management objects) in SQL Server. If you are not familiar with them, they are views and functions that can be queried, like any other object, to return information about your servers and databases. They were first introduced in SQL Server 2005, and have been expanded and improved in each version since. They cover a wide range of topics, from database information to what is currently happening on the server. They are an invaluable tool for me and many other DBAs.

Here are my top five DMOs!

sys.dm_exec_requests – This returns a row for each request that is executing on an instance of SQL Server. There is a wealth of information here that can help you identify processes, troubleshoot problems, and more.

Using the status column will allow you to filter on specific statuses, like running or sleeping. The database_id column can help you identify what is being run in a single database. The blocking_session_id is useful for determining what session may be blocking another. If a request is waiting, the wait_type column identifies what kind of wait it is. Other helpful columns like cpu_time, reads, writes, and total_elapsed_time will tell you how much work the request has done and how long it has been running.
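As a quick illustration (a pared-down sketch built on the columns above, not a query from the original post), you can pull just the currently executing requests like this:

-- Currently executing requests and the columns described above
SELECT session_id,
	status,
	command,
	database_id,
	blocking_session_id,
	wait_type,
	cpu_time,
	reads,
	writes,
	total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id <> @@SPID	-- ignore this query's own session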

sys.dm_exec_sessions – This returns a row for each session on a SQL Server instance. It can be helpful in identifying how many sessions are open and which clients are connected.

Session_id will identify a particular session. The status of the session can be identified using status. You can tell how long the session has been active using cpu_time and total_elapsed_time. The resources used can be determined by using reads and writes.
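Again as a small sketch built on those columns (not from the original post), the open user sessions and the clients they connect from can be listed with:

-- Open user sessions and the clients they connect from
SELECT session_id,
	status,
	host_name,
	program_name,
	login_name,
	cpu_time,
	total_elapsed_time,
	reads,
	writes
FROM sys.dm_exec_sessions
WHERE is_user_process = 1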

sys.dm_exec_sql_text – Using this DMF, you can see what SQL text was executed. You can use sys.dm_exec_requests to find a specific sql_handle to pass in. You’ll be able to see the dbid, the ID of the database the statement is running in (or was compiled in). The text column shows the exact query.

You should be using this instead of the old function, fn_get_sql, which Microsoft states will be removed in a future version of SQL Server.

Here is a script I refer to as my “activity” script. It uses sys.dm_exec_requests, sys.dm_exec_sessions, and sys.dm_exec_sql_text to show user activity on an instance. You can use this as a base to filter further, by database, CPU, reads, or writes.

SELECT r.session_id,
r.blocking_session_id,
r.wait_type,
se.host_name,
se.login_name,
r.database_id,
db_name(r.database_id) as dbname,
r.status,
r.command,
start_time,
GETDATE() as CurrentTime,
r.cpu_time,
r.total_elapsed_time,
r.reads,
r.logical_reads,
r.writes,
s.[text] as sql_text,
p.query_plan as query_plan,
r.plan_handle
FROM sys.dm_exec_requests r
inner join sys.dm_exec_sessions se on r.session_id = se.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) s
outer apply sys.dm_exec_query_plan(r.plan_handle) p
WHERE r.session_id <> @@spid
    and se.is_user_process=1
ORDER BY r.cpu_time DESC;

sys.dm_db_index_physical_stats – This invaluable DMF gives you size and fragmentation information about indexes. It can help you identify a single index that needs to be reorganized or rebuilt, or it can be put into a script that runs weekly, checks the fragmentation percentage, and reorganizes or rebuilds anything above a certain threshold. It’s been a frequently-used tool in my DBA toolbox.

You pass arguments such as database_id, object_id, index_id, partition_number, and mode in. The information returned will be very helpful. Index_type_desc will tell you what type of index you’re working with. Page_count will give you the total count of index or data pages. The avg_fragmentation_in_percent tells you the logical fragmentation (out-of-order pages) percent; fragment_count will tell you the number of fragments.

This query will use sys.dm_db_index_physical_stats to retrieve information about all indexes on all objects in a database. You can filter down using different arguments into the function, such as a specific database, object, or even index. By filtering on indexes with a specific fragmentation percent or higher, you can figure out which to rebuild or reorganize, as an example.

SELECT ips.[object_id] AS ObjectId,
ss.name AS SchemaName,
so.name AS TableName,
ips.index_id AS IndexId,
si.name AS IndexName,
si.allow_page_locks AS AllowPageLocks,
avg_fragmentation_in_percent AS FragPercent,
record_count AS RecordCount,
partition_number AS PartitionNumber,
index_type_desc AS IndexType,
alloc_unit_type_desc AS AllocUnitType
FROM sys.dm_db_index_physical_stats (NULL, NULL, NULL, NULL, 'SAMPLED') AS ips
join sys.objects AS so ON ips.[object_id] = so.[object_id]
join sys.schemas AS ss ON ss.[schema_id] = so.[schema_id]
join sys.indexes AS si ON ips.[object_id] = si.[object_id] and ips.index_id = si.index_id
ORDER BY so.name,
si.name,
avg_fragmentation_in_percent DESC;
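As one example of acting on those numbers (a sketch only; the 5% and 30% cut-offs below are commonly cited guidelines, not hard rules), you can generate REORGANIZE or REBUILD statements from the same DMF:

-- Generate index maintenance commands based on fragmentation thresholds
-- The 5% / 30% thresholds are common guidelines; adjust them for your environment
SELECT ss.name AS SchemaName,
	so.name AS TableName,
	si.name AS IndexName,
	ips.avg_fragmentation_in_percent AS FragPercent,
	CASE WHEN ips.avg_fragmentation_in_percent >= 30
		THEN 'ALTER INDEX ' + QUOTENAME(si.name) + ' ON ' + QUOTENAME(ss.name) + '.' + QUOTENAME(so.name) + ' REBUILD'
		ELSE 'ALTER INDEX ' + QUOTENAME(si.name) + ' ON ' + QUOTENAME(ss.name) + '.' + QUOTENAME(so.name) + ' REORGANIZE'
	END AS MaintenanceCommand
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
join sys.indexes AS si ON ips.[object_id] = si.[object_id] and ips.index_id = si.index_id
join sys.objects AS so ON ips.[object_id] = so.[object_id]
join sys.schemas AS ss ON ss.[schema_id] = so.[schema_id]
WHERE ips.avg_fragmentation_in_percent >= 5
	and ips.index_id > 0	-- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;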

sys.dm_os_wait_stats – When determining “Why is my query/session/server running slow?”, are you turning to wait stats to answer that question? They are one of several tools available for troubleshooting issues on a SQL instance. As always, having a baseline to work off of is not only useful, but necessary. Ted Krueger blogged about one way to do this, and Erin Stellato also has a post with a great list of resources.

When querying sys.dm_os_wait_stats, the wait_type column will tell you what the wait type is. The list of types can be found here. Wait_time_ms is the total wait time in milliseconds for all tasks that have had this wait type. Waiting_tasks_count tells you how many tasks have encountered this wait type. Note: these statistics are reset each time SQL Server restarts! This means the numbers you are seeing are cumulative since the last restart. A large number isn’t necessarily bad – something a baseline will be able to tell you.

This query will show you the waits on an instance, ordered by the type that has the longest accumulated wait time. It could be sorted and filtered further, on specific wait types or total wait times.

SELECT wait_type,
waiting_tasks_count,
wait_time_ms,
max_wait_time_ms,
signal_wait_time_ms,
wait_time_ms - signal_wait_time_ms as resource_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY max_wait_time_ms DESC
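Building on that, here is a sketch that orders by total wait time and filters out some background waits; the exclusion list below is a partial set of wait types generally treated as benign, not an authoritative list:

-- Waits ordered by total wait time, excluding a few wait types that are
-- generally considered benign (this exclusion list is partial, not authoritative)
SELECT wait_type,
	waiting_tasks_count,
	wait_time_ms,
	signal_wait_time_ms,
	wait_time_ms - signal_wait_time_ms as resource_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
	and wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
		N'SQLTRACE_BUFFER_FLUSH', N'CHECKPOINT_QUEUE', N'WAITFOR',
		N'REQUEST_FOR_DEADLOCK_SEARCH', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC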

DMOs: A Must-Have Piece of Knowledge

This is only a small taste of the many DMOs offered. They exist for Service Broker, replication, CLR, database mirroring, FILESTREAM, full-text search, and much more. A full list can be found here.

I consider DMOs an essential part of administering SQL Server. If you are administering one or one hundred servers, become familiar with the many DMVs offered, and how they can help you. They are incredibly flexible and will give you insight into many facets of your server.

Using an UPDATE Statement to Update a Column and Set a Variable

April 17, 2012

I am a day and a half into the SQLskills Internals Immersion Event and I expect to have a lot more to post once I’ve recovered. Kimberly Tripp (blog | twitter) covered a feature of the UPDATE statement that I was unaware of. It allows you to update a column and set a variable with the updated value in a single statement. Here is an example:


-- Create a test table
CREATE TABLE UpdateTest
(id INT
,firstname VARCHAR(30)
,lastname VARCHAR(30)
,balance MONEY)

-- Insert a row into the test table
INSERT INTO UpdateTest
VALUES
(1
,'Skreeby'
,'Looby'
,1234.56)

-- Declare variables for use below
DECLARE @balance MONEY
DECLARE @debit MONEY
SET @debit = 500.00

-- Update table, updating the balance column equal to balance - @debit and setting
-- the @balance at the same time
-- NOTE: If more than one row is updated, your variable will contain the column value 
-- from the last row updated

UPDATE UpdateTest
SET @balance = balance = balance - @debit
WHERE id = 1

-- Select the variable and table
SELECT @balance AS [Balance]

SELECT * FROM UpdateTest

Here are the results:

Balance Results

 
I had no idea you could do this with an update statement and wanted to get it out there.

My Experience With the SQL Server Community – Standing On the Shoulders of Giants

April 14, 2012

If I have seen further it is by standing on the shoulders of giants. -Isaac Newton

I have been working as a SQL Server DBA since November of 2007. I have been involved in the SQL Server community since my first PASS conference in 2008. Over the past four years I have had the opportunity to read, see and meet some of the best DBAs in the world. Here is a short and incomplete list:

All of these people played a role in convincing me that I could write my own blog. More importantly, all of them show me new and cool things about SQL Server pretty much every day. The fact that DBAs of this caliber are giving back to the community is a constant source of inspiration.

Next week I will be attending the SQLskills Immersion Event on internals. The opportunity to spend a week learning from Paul Randal, Kimberly Tripp and Jonathan Kehayias is the most exciting professional opportunity I’ve ever had.

May 19th is SQL Saturday #119. I will be presenting to DeVry students on Creating Your First Database. It will be my first time presenting outside of work, and I am really excited about it.

It has been almost five years since I started working as a DBA. I’ve met some great people and learned a lot of cool stuff. I hope that I can repay one-tenth of what I’ve been given in the years to come.

Reading the xp_readerrorlog Using a WHILE Loop

April 13, 2012

Last week, I needed to search through all of the SQL Server error logs on an instance. I have used xp_readerrorlog before, using the information found here. We retain 99 log files, which is the maximum number of files that can be kept. I needed to search through all of these files, so I put together the following script. This is another example of using a WHILE loop in gathering information from SQL Server.

USE master
GO

-- Create temp table to hold the output of the error logs

CREATE TABLE #errorlog
(errdate DATETIME
,errproc SYSNAME
,errtext VARCHAR(4000))

-- Declare and initialize the @loopcount variables

DECLARE @loopcount INT
SET @loopcount = 1

-- Loop through the error logs
-- We keep 99 log files, which is the maximum
-- Change the loop limit accordingly

WHILE @loopcount <= 99
BEGIN

	-- Insert the results of xp_readerrorlog into the temp table
	
	INSERT INTO #errorlog
	
	-- You can find the parameters for xp_readerrorlog at the following link
	-- http://www.mssqltips.com/sqlservertip/1476/reading-the-sql-server-log-files-using-tsql/
	-- The parms I use below are as follows:
	-- @loopcount - file number, with 0 being the current file
	-- 1 - indicates the SQL error log - a 2 would query the SQL Agent log
	-- 'DELETE' - a search argument
	
	EXEC xp_readerrorlog @loopcount, 1, 'DELETE'
	
	-- Increment your loop counter
	
	SET @loopcount = @loopcount + 1
END

-- Select the results of your loop
-- Alternately, you can dump the entire log file and then use WHERE predicates to 
-- select what you are interested in

SELECT * FROM #errorlog

-- Clean up your temp table
DROP TABLE #errorlog