Generate ALTER Statements for Database Moves

October 13, 2014

Last week, I posted Identifying Corrupt Rows Using DBCC CHECKDB and DBCC PAGE. I wrote the scripts in that post to identify corrupt rows in a client’s database. As we continued to investigate, things got strange. Repeated runs of DBCC CHECKDB were generating different results. Each one showed serious corruption, but on different pages with each run. In an effort to remove the storage system from the list of potential causes, I needed to move the data and log files for the database to a different storage array. This post from TechNet gave me the steps I needed to run.

In this case, only one database had to be moved, but rather than write the statements by hand, I wrote the following script to generate the statements specified in the TechNet article. In addition to the ALTER DATABASE statements, the script checks whether the database is part of an availability group. If it is, a statement is generated to remove it from the group. Because adding a database to an availability group is far more complicated than removing it, I will leave that step to the user.
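To make the end state concrete, here is the sort of script the generator prints for a hypothetical database named SalesDB that belongs to an availability group (all names and paths are placeholders). Note that the physical files still have to be copied to the new location at the operating system level while the database is offline.

ALTER AVAILABILITY GROUP [YourAG]
REMOVE DATABASE [SalesDB];

ALTER DATABASE SalesDB
SET OFFLINE WITH ROLLBACK IMMEDIATE;

ALTER DATABASE SalesDB
MODIFY FILE 
(NAME = SalesDB_data 
,FILENAME = 'E:\NewData\SalesDB_data.mdf');

ALTER DATABASE SalesDB
MODIFY FILE 
(NAME = SalesDB_log 
,FILENAME = 'F:\NewLog\SalesDB_log.ldf');

ALTER DATABASE SalesDB
SET ONLINE;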

As usual, I hope you find this useful. And remember, executing code you find on the internet against your production instance always has the potential to become a resume updating event. Use caution and test.

/*---------------------------------------------------------------
Created By - Frank Gill
Created On - 2014-10-13
 
Script to generate ALTER DATABASE statement to move a database to
a different storage location
---------------------------------------------------------------*/

USE master;

SET NOCOUNT ON;

/* Drop temp table if it exists */

IF OBJECT_ID('tempdb..#dbfiles') IS NOT NULL
BEGIN

	DROP TABLE #dbfiles;

END

/* Declare local variables and create temp table */
DECLARE @dbname SYSNAME;
DECLARE @agname SYSNAME;
DECLARE @sqlstr NVARCHAR(4000);
DECLARE @physicalname NVARCHAR(260);
DECLARE @logicalname SYSNAME;
DECLARE @filetype TINYINT;
DECLARE @datapath NVARCHAR(1000);
DECLARE @logpath NVARCHAR(1000);
DECLARE @agdb BIT = 0;
DECLARE @loopcount INT = 1;
DECLARE @looplimit INT;

CREATE TABLE #dbfiles
(logicalname SYSNAME
,physicalname NVARCHAR(260)
,filetype TINYINT)

/* Initialize variables; @datapath and @logpath must include a trailing backslash */
SET @dbname = 'Your_Database';
SET @datapath = 'Your_Data_Path';
SET @logpath = 'Your_Log_Path';

/* If the database is part of an availability group, generate a statement to remove it prior to taking the database offline */
IF(SELECT replica_ID FROM sys.databases WHERE name = @dbname) IS NOT NULL
BEGIN

	PRINT '/* This database is part of an availability group and must be removed prior to taking the database offline
   You will need to add the database back to the availability group after the database files are moved */'

	SELECT @agname = g.name FROM sys.databases d
	INNER JOIN sys.availability_replicas r
	ON d.replica_id = r.replica_id
	INNER JOIN sys.availability_groups g
	ON g.group_id = r.group_id
	WHERE d.name = @dbname;

	SET @sqlstr = 'ALTER AVAILABILITY GROUP [' + @agname + ']
REMOVE DATABASE [' + @dbname + '];';

	PRINT @sqlstr;
	
	SET @agdb = 1;

END

/* Generate a statement to set the database offline */
SET @sqlstr = 'ALTER DATABASE ' + @dbname + '
SET OFFLINE WITH ROLLBACK IMMEDIATE';

PRINT @sqlstr;

/* Get a list of database files for your database and insert them into the temp table for use in the loop */
SET @sqlstr = 'INSERT INTO #dbfiles
SELECT name, physical_name, type FROM ' + @dbname + '.sys.database_files
WHERE type IN (0,1);' /* 0 = data file, 1 = log file; the loop below handles only these */

EXEC sp_executesql @sqlstr;

/* Set the loop limit to the number of files returned from the insert statement */
SET @looplimit = @@ROWCOUNT;

WHILE @loopcount <= @looplimit
BEGIN

	/* Get one file's logical name and file type, and strip the path from the physical name;
	   the REVERSE/CHARINDEX expression returns everything after the last backslash.
	   The file type determines the new physical location. */

	SELECT TOP 1 @logicalname = logicalname, @physicalname = REVERSE(SUBSTRING(REVERSE(physicalname),1,(CHARINDEX('\',REVERSE(physicalname))-1))), @filetype = filetype FROM #dbfiles;

	IF @filetype = 0
	BEGIN

		SET @sqlstr = 'ALTER DATABASE ' + @dbname + '
MODIFY FILE 
(NAME = ' + @logicalname + ' 
,FILENAME = ''' + @datapath + @physicalname + ''');';

	END
	ELSE IF @filetype = 1
	BEGIN

		SET @sqlstr = 'ALTER DATABASE ' + @dbname + '
MODIFY FILE 
(NAME = ' + @logicalname + ' 
,FILENAME = ''' + @logpath + @physicalname + ''');';

	END

	PRINT @sqlstr;

	SET @loopcount += 1;

	DELETE FROM #dbfiles WHERE logicalname = @logicalname;

END

/* Generate a statement to bring the database back online */
SET @sqlstr = 'ALTER DATABASE ' + @dbname + '
SET ONLINE';

PRINT @sqlstr;


IF @agdb = 1
BEGIN

	PRINT '/* Reminder! ' + @dbname + ' is part of an availability group 
You will need to add the database back to the availability group if necessary */'

END

Identifying Corrupt Rows Using DBCC CHECKDB and DBCC PAGE

October 6, 2014

Four months ago, when I blogged about giving all the presentations, things were starting to ramp up at my new job. I’ve really enjoyed the last few months, but the combination of my presenting schedule and the responsibilities of the new gig has left little time for blogging.

This week, I ran into a corruption issue that gave me a chance to make a deeper dive into internals than I have in a while. It also gave me the opportunity to develop a script to dump the results of DBCC PAGE to a table.

We discovered the corruption in a client’s database we were migrating. Digging deeper into the results of DBCC CHECKDB, I found that six objects in the database were corrupted badly enough to require data loss. One of those objects contained 47 irreparable pages.

In an effort to provide the client as much information as possible about the corrupted data, I set out to identify the rows contained on the corrupt pages. Here are the scripts I used.

First, I needed to create tables to hold the results of DBCC CHECKDB and DBCC PAGE. For the PageResults table, I guessed at the column definitions. For DBCCResults, I made all of the numeric columns BIGINT to prevent conversion errors.

USE [Your_Database]
GO

/* Create table to hold the results of DBCC CHECKDB */

/****** Object:  Table [dbo].[DBCCResults]    Script Date: 9/30/2014 11:00:47 AM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SET ANSI_PADDING ON
GO

IF OBJECT_ID('Your_Database..DBCCResults') IS NOT NULL
BEGIN

	DROP TABLE DBCCResults;

END

IF OBJECT_ID('Your_Database..PageResults') IS NOT NULL
BEGIN

	DROP TABLE PageResults;

END

CREATE TABLE PageResults
(ParentObject VARCHAR(100)
,[Object] VARCHAR(1000)
,[Field] VARCHAR(100)
,[VALUE] VARCHAR(1000))

CREATE TABLE [dbo].[DBCCResults](
	[Error] [bigint] NULL,
	[Level] [bigint] NULL,
	[State] [bigint] NULL,
	[MessageText] [varchar](7000) NULL,
	[RepairLevel] [varchar](7000) NULL,
	[Status] [bigint] NULL,
	[DbId] [bigint] NULL,
	[DbFragID] [bigint] NULL,
	[ObjId] [bigint] NULL,
	[IndId] [bigint] NULL,
	[PartID] [bigint] NULL,
	[AllocID] [bigint] NULL,
	[RidDbid] [bigint] NULL,
	[RidPruid] [bigint] NULL,
	[File] [bigint] NULL,
	[Page] [bigint] NULL,
	[Slot] [bigint] NULL,
	[RefDbid] [bigint] NULL,
	[RefPruId] [bigint] NULL,
	[RefFile] [bigint] NULL,
	[RefPage] [bigint] NULL,
	[RefSlot] [bigint] NULL,
	[Allocation] [bigint] NULL
) ON [PRIMARY]

GO

After creating the tables, I needed to generate DBCC CHECKDB statements for each database. The script below uses sys.databases as the driver, and you can modify the results with an appropriate WHERE clause. As written, the script prints the DBCC CHECKDB commands; replacing the PRINT statement with EXEC sp_executesql will execute each statement as it is generated. Upon execution, the results of DBCC CHECKDB will be inserted into the DBCCResults table.

USE master;
GO

/* Drop temp table if it exists */

IF OBJECT_ID('tempdb..#databases') IS NOT NULL
BEGIN

	DROP TABLE #databases

END

/* Declare local variables and create temp tables */
DECLARE @sqlstr NVARCHAR(2000);
DECLARE @dbname SYSNAME;
DECLARE @loopcount TINYINT = 1;
DECLARE @looplimit TINYINT;

CREATE TABLE #databases
(dbname SYSNAME);

/* This will generate statements for all user databases 
   Change the WHERE clause to limit or expand the results */
INSERT INTO #databases
SELECT name FROM master.sys.databases
WHERE database_id > 4;

/* Get the loop limit */
SELECT @looplimit = @@ROWCOUNT;

/* Build the DBCC CHECKDB statement for each database 
   This code will print each statement
   Replace the PRINT statement with EXEC sp_executesql @sqlstr to execute the code */
WHILE @loopcount <= @looplimit
BEGIN

	SELECT TOP 1 @dbname = dbname FROM #databases;

	SELECT @sqlstr = 'INSERT INTO Your_Database..DBCCResults EXEC(''DBCC CHECKDB(' + @dbname + ') WITH TABLERESULTS, NO_INFOMSGS'')';

	PRINT @sqlstr;

	SELECT @loopcount += 1;

	DELETE FROM #databases WHERE dbname = @dbname;

END
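Each iteration of the loop prints a statement like the following (shown for a hypothetical database named SalesDB):

INSERT INTO Your_Database..DBCCResults EXEC('DBCC CHECKDB(SalesDB) WITH TABLERESULTS, NO_INFOMSGS');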


Once the DBCCResults table was populated, I needed to dump the contents of the irreparable pages. For corrupted pages, DBCC CHECKDB returns two possible RepairLevels: repair_rebuild indicates the page can be rebuilt to fix the corruption, while repair_allow_data_loss means the page is corrupted so badly it cannot be repaired; the only option is to delete the page and restore consistency to the object structure. This link from Paul Randal provides a much more detailed explanation.
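Before dumping anything, a quick rollup of DBCCResults shows how much of the corruption is irreparable. This is a sketch against the table created above:

/* Count the distinct corrupt pages at each repair level */
SELECT RepairLevel, COUNT(DISTINCT Page) AS pagecount
FROM Your_Database..DBCCResults
WHERE RepairLevel IS NOT NULL
GROUP BY RepairLevel;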

USE Your_Database;

/* Drop temp tables if they exist */
SET NOCOUNT ON;

IF OBJECT_ID('tempdb..#DBCCPages') IS NOT NULL
BEGIN
	
	DROP TABLE #DBCCPages;

END

IF OBJECT_ID('tempdb..#PageResults') IS NOT NULL
BEGIN
	
	DROP TABLE #PageResults;

END

/* Create temp tables */
CREATE TABLE #PageResults
(ParentObject VARCHAR(100)
,[Object] VARCHAR(1000)
,[Field] VARCHAR(100)
,[VALUE] VARCHAR(1000))

/* Declare local variables */
DECLARE @loopcount INT = 1;
DECLARE @looplimit INT;
DECLARE @sqlstr NVARCHAR(4000);
DECLARE @pagenum BIGINT;

/* Select information about the corrupt data
   This example selects rows with a RepairLevel of repair_allow_data_loss for a single object */
SELECT DISTINCT O.name, T.RepairLevel, T.IndId, T.PartID, T.Page 
INTO #DBCCPages 
FROM DBCCResults T
INNER JOIN Your_Database.sys.objects O
ON t.ObjId = O.object_id
WHERE RepairLevel = 'repair_allow_data_loss'
AND O.name = 'Your_Table'
ORDER BY O.name, T.Page;

/* Set the loop limit */
SET @looplimit = @@ROWCOUNT;

/* Build a DBCC PAGE statement for each corrupt page and execute it
   Insert the results into the #PageResults temp table */
WHILE @loopcount <= @looplimit
BEGIN

	SELECT TOP 1 @pagenum = Page FROM #DBCCPages

	SET @sqlstr = 'DBCC PAGE (Your_Database,1,' + CAST(@pagenum AS NVARCHAR) + ',3) WITH TABLERESULTS'

	INSERT INTO #PageResults
	EXEC sp_executesql @sqlstr;

	SET @loopcount += 1;

	DELETE FROM #DBCCPages WHERE Page = @pagenum;
	
END

/* Persist the raw DBCC PAGE output to the permanent PageResults table created in the first script */
INSERT INTO PageResults
SELECT ParentObject, [Object], Field, VALUE FROM #PageResults;
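Each iteration of the loop builds and executes a statement like the following, where 12345 is a placeholder page ID and 1 is the file ID hard-coded in the script:

DBCC PAGE (Your_Database, 1, 12345, 3) WITH TABLERESULTS;

The fourth argument is the print option; option 3 returns the page header plus a detailed per-row interpretation, which is what makes it possible to recover column values from the corrupt pages.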


Because I needed to provide the client with a list of rows affected by the corruption, I had to dump all pages identified as repair_allow_data_loss. The script above queries DBCCResults to return the list of PageIDs for the irreparable pages, runs DBCC PAGE…WITH TABLERESULTS for each PageID, and writes the results to the PageResults table. Here is a sample of the output:

[Screenshot: DBCC PAGE WITH TABLERESULTS output]

You can see that the results contain a set of rows for each slot, or table row, contained on the page. I needed to pivot each set of those rows into a single row, and the script below does that. You will need to add a MIN(CASE expression for each column you want to pivot. For example (hypothetical values), if Slot 0 returns Field/VALUE pairs of col1 = 42 and col2 = 'A', the pivot collapses them into a single output row with 42 and 'A' in the col1 and col2 columns.

USE Your_Database;
/* Select data from PageResults to return the key value for each row contained on a corrupt page */

SELECT ParentObject,
     MIN(CASE Field WHEN 'col1' THEN VALUE END) AS col1,
     MIN(CASE Field WHEN 'col2' THEN VALUE END) AS col2,
     MIN(CASE Field WHEN 'col3' THEN VALUE END) AS col3,
     MIN(CASE Field WHEN 'col4' THEN VALUE END) AS col4,
     MIN(CASE Field WHEN 'col5' THEN VALUE END) AS col5
INTO Your_Database..PivotedPageResults /* new table for the pivoted output; PageResults already holds the raw rows, so use any name that does not exist yet */
   FROM #PageResults
   GROUP BY ParentObject
   ORDER BY col1, col2, col3, col4, col5; 

I hope this post provides more insight into the tools and methods available for analyzing corruption and determining its extent.

Presenting – The Odyssey Continues

[Image: GIVE ALL THE PRESENTATIONS]
As I’ve mentioned before, I set a goal at the beginning of the year to average a presentation a month in 2014. My presentation, Designing a Recovery Strategy, at the Wisconsin SQL Server User Group this month put me halfway there.

In June, I am presenting Designing a Recovery Strategy again at SQL Saturday #307 in Iowa City on the 7th. And then, for the first time ever, I will be presenting twice on the same day at SQL Saturday #286 in Louisville on the 21st: I’ll be doing Designing a Recovery Strategy and Interrogating the Transaction Log (now with 2014!). If you are close to either event, I highly recommend attending. It is a great opportunity to get a free day of training and meet other members of the SQL community.

I close out the month by presenting Interrogating the Transaction Log at MADPASS June 25th. It has been a wild ride, but I’ve enjoyed every minute of it. Presenting is a great way to give back to the SQL community and I learn more than I thought possible every time I put a presentation together.

sp_AutomateDBRestore – Now With Table-Valued Parameters!

After several weeks of procrastination, I have finished the latest version of sp_AutomateDBRestore. Previous versions of the procedure built the restore statements using sys.database_files for the restored database, msdb.dbo.backupset, and msdb.dbo.backupmediafamily. This works great if the instance is up and running, but not if the instance is unavailable.

To handle this circumstance, I have added a table-valued parameter to the procedure allowing a list of backup files and backup file types to be passed in. This way, you can generate a restore statement as long as you have the backup files available. The proc uses RESTORE FILELISTONLY and RESTORE HEADERONLY statements to retrieve the physical and logical file names. Additionally, I’ve added parameters @datafilepath and @logfilepath to allow a restore of the database to different physical file locations than the source database used. Here is the code to create a user-defined table type, followed by a call using the new parameters:

USE [master]
GO

CREATE TYPE [dbo].[backupfiletype] AS TABLE(
	[backupfilename] [varchar](255) NULL,
	[backupfiletype] [varchar](30) NULL
)
GO
DECLARE @backuptvp AS backupfiletype

INSERT INTO @backuptvp
VALUES
('C:\Backup\WIN-MK86UPCRGFF\multifile\FULL\WIN-MK86UPCRGFF_multifile_FULL_20140513_091120.bak','FULL'),
('C:\Backup\WIN-MK86UPCRGFF\multifile\LOG\WIN-MK86UPCRGFF_multifile_LOG_20140513_091135.trn','LOG'),
('C:\Backup\WIN-MK86UPCRGFF\multifile\LOG\WIN-MK86UPCRGFF_multifile_LOG_20140513_091203.trn','LOG'),
('C:\Backup\WIN-MK86UPCRGFF\multifile\LOG\WIN-MK86UPCRGFF_multifile_LOG_20140513_091354.trn','LOG')


DECLARE	@return_value int

EXEC	@return_value = [dbo].[sp_automateDBRestores]
		@sourcedbname = multifile,
		@restoredbname = multifile,
		@droprestoredb = 0,
		@noexec = 1,
		@datafilepath = 'C:\testrestore\',
		@logfilepath = 'C:\testrestore\',
		@TVP = @backuptvp

SELECT	'Return Value' = @return_value

GO

You can find the code for the updated procedure at my Presentation Slides and Scripts page under Wisconsin SQL Server User Group. As usual, feel free to leave any questions or comments in the comments section for the post.

Categories: PASS, SQL Server, T-SQL

SQL Saturday #291

Speaker dinner at the Schnitzel Platz (MarkV is an instigator) - Thanks to Dave Mattingly for the photo


SQL Saturday #291 took place this past Saturday at DeVry University in Addison, Illinois. It was my eighth SQL Saturday, and it was the first one I helped run. I want to thank Bill Lescher (t), Wendy Pastrick (b|t), Bob Pusateri (b|t), Jamie Samsel, and Andy Yun (b|t) for all of their hard work. I had a great time and have heard positive feedback from all of the attendees I’ve spoken with.

The hard work these folks put in wouldn’t mean anything without the speakers who volunteer their time and pay their own way to attend these events. I especially want to thank Jim Dorame (b|t), Merrill Aldrich (b|t), Eric Boyd (b|t), and Grant Fritchey (b|t). With only 15 minutes’ notice, Jim stepped in to replace a speaker who did not arrive, moving from his scheduled noon slot to 9:15. Then Jim, Merrill, Eric, and Grant held a PowerShell panel in Jim’s original slot. Their willingness to take on extra responsibility epitomizes the SQL community for me, and went a long way to making the event such a success.

Finally, I want to thank everyone who attended SQL Saturday #291. Without you, the SQL community would not exist. I spoke to a number of first-time attendees, and I hope they all come back again. The SQL community has given me more than I could ever give back. I’ve made great friends, learned from world-renowned experts, and grown personally and professionally. I met a current co-worker at SQL Saturday Chicago last year and my work in the community had a lot to do with my getting that job.

If you attended SQL Saturday and got something out of it, consider giving back. Presenting, blogging, volunteering, and answering questions on forums are all great ways to give back. If you haven’t already, register for your local PASS user group. If you don’t have a local user group, consider starting one. I’ve run the Chicago SQL Server User Group since July of 2012 and have had a blast. Feel free to contact me if you have any questions.

Track VLF Usage with usp_VLFTracker

April 24, 2014

When SQL Server allocates or grows a transaction log file, it creates a number of virtual log files (VLFs) within that allocation. When a transaction log record is written to a VLF, its status changes from free to active. In the SIMPLE recovery model, a VLF remains active until the next checkpoint runs and none of its log records belong to active transactions. In the FULL recovery model, a VLF remains active until all of the transaction log records it contains are no longer needed. Log records are needed if they are part of an open transaction, have not been backed up by a transaction log backup, or are still required for mirroring or replication. You can read more about transaction log architecture here.

DBCC LOGINFO() returns information about each of the VLFs in a database’s transaction log, including the status: 2 for active VLFs and 0 for free VLFs. By tracking the rate at which VLF statuses change, I can get an idea of how much activity is being written to the transaction log.
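For a single database, you can eyeball the status mix with a quick ad-hoc dump. This is a minimal sketch assuming the seven-column result set SQL Server 2008 returns (SQL Server 2012 adds a column, as discussed below):

CREATE TABLE #loginfo
(fileid INT
,filesize BIGINT
,startoffset BIGINT
,fseqno INT
,vlfstatus INT
,parity INT
,createlsn NUMERIC(25,0));

/* Dump DBCC LOGINFO() for the current database and count VLFs by status */
INSERT INTO #loginfo
EXEC('DBCC LOGINFO()');

SELECT vlfstatus, COUNT(*) AS vlfcount
FROM #loginfo
GROUP BY vlfstatus;

DROP TABLE #loginfo;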

I needed to determine the rate at which the databases on an instance were generating log records. To do this, I wrote the stored procedure below to track the status of the virtual log files in each database’s transaction log using DBCC LOGINFO(). I have a job scheduled every 15 minutes to write the results to a table for analysis.

SQL Server 2012 added the recoveryunitid column to the result set of DBCC LOGINFO(), making separate temp tables necessary for SQL Server 2008 and SQL Server 2012. I initially tried using conditional logic to create only the version-appropriate schema, but found SQL Server will not let you run two CREATE TABLE statements for the same object in a single batch, even in mutually exclusive branches. Thanks to Keith Buck for the suggestion of creating both temp tables and using conditional logic to determine which table to use.
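Here is a minimal sketch of the pattern that fails, even though only one branch can ever run:

DECLARE @version VARCHAR(20) = CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR);

IF SUBSTRING(@version,1,2) = '10'
	CREATE TABLE #vlf (fileid INT);
ELSE
	CREATE TABLE #vlf (recoveryunitid INT, fileid INT);
	/* Fails at compile time with Msg 2714:
	   There is already an object named '#vlf' in the database. */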

This version uses Aaron Bertrand’s (b|t) sp_foreachdb, which is on my short list of the coolest things on the Internet, and I highly recommend you use it. If you cannot, the procedure is easily modified to use sp_msforeachdb, though I really recommend sp_foreachdb instead (here are some reasons why).

/*---------------------------------------------------------------
Created By - Frank Gill
Created On - 2014-04-23

usp_VLFTracker

Procedure dumps the contents of DBCC LOGINFO() to a temp table
for each database on the instance. The output of DBCC LOGINFO()
changed with SQL Server 2012, making two different temp tables
necessary. After the results of DBCC LOGINFO() are gathered for 
each database, they are written to a permanent table for 
analysis.
---------------------------------------------------------------*/


USE yourdbname
GO

-- Drop stored procedure if it already exists
IF EXISTS (
  SELECT * 
    FROM INFORMATION_SCHEMA.ROUTINES 
   WHERE SPECIFIC_SCHEMA = N'dbo'
     AND SPECIFIC_NAME = N'usp_VLFTracker' 
)
   DROP PROCEDURE dbo.usp_VLFTracker
GO

CREATE PROCEDURE dbo.usp_VLFTracker

AS

	CREATE TABLE #vlf2008
	(fileid INT
	,filesize BIGINT
	,startoffset BIGINT
	,fseqno INT
	,vlfstatus INT
	,parity INT
	,createlsn NUMERIC(25,0))
	
	CREATE TABLE #vlf2012
	(recoveryunitid INT
	,fileid INT
	,filesize BIGINT
	,startoffset BIGINT
	,fseqno INT
	,vlfstatus INT
	,parity INT
	,createlsn NUMERIC(25,0))

	CREATE TABLE #vlfdb
	(dbname SYSNAME
	,currentts DATETIME
	,fileid INT
	,filesize BIGINT
	,vlfstatus INT)
	
	DECLARE @version VARCHAR(20)
	
	SELECT @version = CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)

	IF SUBSTRING(@version,1,2) = '10'
	BEGIN
		
		EXEC sp_foreachdb @command = N' 
		USE ?
		INSERT INTO #vlf2008
		EXEC(''DBCC LOGINFO()'')
		INSERT INTO #vlfdb
		SELECT DB_NAME(), CURRENT_TIMESTAMP, fileid, filesize, vlfstatus FROM #vlf2008

		DELETE FROM #vlf2008
		;'
		
	END
	ELSE IF SUBSTRING(@version,1,2) = '11'
	BEGIN
		
		EXEC sp_foreachdb @command = N' 
		USE ?
		INSERT INTO #vlf2012
		EXEC(''DBCC LOGINFO()'')
		INSERT INTO #vlfdb
		SELECT DB_NAME(), CURRENT_TIMESTAMP, fileid, filesize, vlfstatus FROM #vlf2012

		DELETE FROM #vlf2012
		;'
	
	END
	INSERT INTO yourdatabasename..yourtablename
	SELECT dbname, currentts, vlfstatus, COUNT(*) AS vlfcount FROM #vlfdb
	GROUP BY dbname, currentts, vlfstatus
	ORDER BY dbname, vlfstatus

	DROP TABLE #vlf2008
	DROP TABLE #vlf2012
	DROP TABLE #vlfdb
GO
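Once the job has collected a few samples, a query along these lines shows how quickly each database cycles VLFs from free to active. This is a sketch: it assumes the job writes to a table named dbo.VLFStatus with the columns inserted above (substitute your own table name), and LAG requires SQL Server 2012 or later.

/* Compare each sample's active VLF count to the previous sample for the same database */
SELECT dbname,
       currentts,
       vlfcount AS activevlfs,
       vlfcount - LAG(vlfcount) OVER (PARTITION BY dbname ORDER BY currentts) AS activedelta
FROM dbo.VLFStatus
WHERE vlfstatus = 2
ORDER BY dbname, currentts;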

SQL Milestones

April 13, 2014

The past few months have been full of firsts for me.

  • I started working at Apparatus February 17th. I wanted opportunities to learn new things, and I have not been disappointed.
  • Last night, I gave my fourth presentation of the year. With user groups lined up from now through August, I am on pace to meet my goal of averaging one presentation per month this year.
  • March marked the second anniversary of this blog. In that time, I’ve written 61 posts. While this is well short of my initial goal of a post a week, I’m hoping the constant learning opportunities at Apparatus will help me pick up the pace.
  • For the first time, this blog hit 1,000 views in a single month. I am really excited about this, but at the same time, it shows me how much room I have to grow.

It has been just about four years since I attended my first SQL Saturday. Thanks to Jes Borland (b|t) for inspiring me to make my voice heard. And it has been just about three years since I attended SQLskills’ Internals Immersion Event. Thanks to Paul Randal (b|t) and Kimberly Tripp (b|t) for giving me my first opportunity to present on SQL Server outside of work. Finally, I want to thank the members of the SQL community who have taught, helped, and inspired me. There are too many of you to mention, but you’ve made the past four years the most enjoyable and fulfilling of my life.


Categories: Career, Presenting