
DBCC SHRINKDATABASE Log Space Usage

August 21, 2017

Yesterday, I was running a health assessment for a client. They are running a weekly maintenance plan that is shrinking all of their data files. After I picked myself up off the floor, I searched the web for “Paul Randal shrink” and hit on Paul’s excellent post Why you should not shrink your data files. In the post, Paul (b|t) demonstrates the effect of DBCC SHRINKDATABASE on index fragmentation. After the demo script, Paul writes, “As well as introducing index fragmentation, data file shrink also generates a lot of I/O, uses a lot of CPU and generates *loads* (emphasis Paul’s) of transaction log.”

This led me to ask the question, “How much is *loads*?”. To find an answer, I made the following modification to Paul’s script:

-- Capture the LSN of the last record currently in the log
DECLARE @maxlsn NVARCHAR(46);
SELECT @maxlsn = CONCAT(N'0x',MAX([Current LSN])) FROM fn_dblog(NULL,NULL);

-- Shrink the database
DBCC SHRINKDATABASE ([DBMaint2008]);

-- Total the log generated from that LSN forward
SELECT SUM([Log Record Length]) AS ShrinkLogSpace
FROM fn_dblog(@maxlsn,NULL);

My additions use the table-valued function fn_dblog to return information from the transaction log.

DECLARE @maxlsn NVARCHAR(46);
SELECT @maxlsn = CONCAT(N'0x',MAX([Current LSN])) FROM fn_dblog(NULL,NULL);

The code above takes the log sequence number (LSN) of the last record in the transaction log, prefixes it with 0x so fn_dblog will accept it as a starting point, and stores it in the variable @maxlsn.

SELECT SUM([Log Record Length]) AS ShrinkLogSpace
FROM fn_dblog(@maxlsn,NULL);

This code uses @maxlsn from the first query to return all log records generated by the DBCC SHRINKDATABASE command, and returns the total log space used in bytes. In Paul's demo script, there is 10MB of space available to shrink when DBCC SHRINKDATABASE runs. The total log space used by the shrink is 16MB.
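If you would rather see the total in megabytes, a small variation on the final query does the conversion (1048576 bytes per MB):

SELECT SUM([Log Record Length]) / 1048576.0 AS ShrinkLogSpaceMB
FROM fn_dblog(@maxlsn,NULL);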

So, in this example, *loads* is 160% of the space you are reclaiming with the shrink operation. Yet another reason to not shrink your data files.

Many thanks to Paul for his post.


My SQL Saturday Season Begins

January 26, 2017

We are nearly a month into the new year and my SQL Saturday season is about to start. First up, I will be presenting Creating an Azure SQL Database Using PowerShell at SQL Saturday Cleveland. This will be my third year in Cleveland and I am looking forward to another great time. You can find more information and register here.

SQL Saturday Chicago is taking place Saturday, March 11. I will be helping organize and run the event, so I will not be presenting. I tried doing both a few years ago, and it is just too many balls in the air. The event is already at capacity, but you can add yourself to the waitlist here. If you sign up for either pre-conference session, you can send your registration receipt to sqlsaturday600@sqlpass.org and you will be registered for SQL Saturday. The pre-conference sessions are Tuning Your Biggest Queries with Adam Machanic and Complete Primer for SQL Server Infrastructure and the Cloud.

Finally, SQL Saturday Madison takes place Saturday, April 8. This will be my fifth straight time in Madison and they put on an amazing event. The venue is in the heart of the University of Wisconsin campus and is one of the most beautiful SQL Saturday settings I've been to. You can find more information and register here.

If these events aren’t geographically desirable, you can find other SQL Saturday events worldwide here. If you do make it to any of the three, be sure to say hello. Meeting new people is one of the best parts of SQL Saturdays.


Monitoring and Alerting for Availability Groups and Why I Love PASS Summit

October 31, 2016

I have returned from my eighth PASS Summit and, as in years past, they just keep getting better.  I am amazed and humbled that I am a part of the PASS community.  PASS has given me the opportunity to meet and learn from many of the best and brightest in the SQL world.  When I meet first-timers, I tell them to talk to people, no matter who they are.  If you share an interest, they will share with you.  If you have never been to PASS Summit, I recommend it.  If you cannot make Summit, get involved with your local PASS Chapter or SQL Saturday.  Virtual Chapters are another great way to find out what PASS has to offer.

One of the highlights of this year's Summit was Shawn Meyers' (t) presentation on Monitoring and Alerting of Availability Groups. (You will need to be logged in to the PASS website to access the link.) I have worked with Availability Groups since their release in SQL Server 2012, and Shawn provided the best solution for monitoring and alerting that I have seen.

Shawn suggested implementing three alerts, keyed to the following error numbers (a sample script follows the list):

  • 1480 – AG Role Change
  • 35264 – AG Data Movement Suspended
  • 35265 – AG Data Movement Resumed
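Here is a minimal sketch of what creating those alerts looks like in T-SQL. The alert names are my own, and it assumes an operator named TestOperator already exists (the same placeholder my scripts below use):

USE msdb;
GO

-- Error 1480: an availability group replica changed roles
EXEC dbo.sp_add_alert @name = N'AG Role Change', @message_id = 1480, @enabled = 1;
EXEC dbo.sp_add_notification @alert_name = N'AG Role Change', @operator_name = N'TestOperator', @notification_method = 1;

-- Error 35264: data movement on an availability database was suspended
EXEC dbo.sp_add_alert @name = N'AG Data Movement Suspended', @message_id = 35264, @enabled = 1;
EXEC dbo.sp_add_notification @alert_name = N'AG Data Movement Suspended', @operator_name = N'TestOperator', @notification_method = 1;

-- Error 35265: data movement on an availability database was resumed
EXEC dbo.sp_add_alert @name = N'AG Data Movement Resumed', @message_id = 35265, @enabled = 1;
EXEC dbo.sp_add_notification @alert_name = N'AG Data Movement Resumed', @operator_name = N'TestOperator', @notification_method = 1;

Here @notification_method = 1 sends email, so the operator needs Database Mail configured to receive the alerts.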

Additionally, Shawn provided an MSDN link with recommendations for using Policy-Based Management (PBM) to monitor AGs.  The custom policies monitor the recovery time objective (RTO) and recovery point objective (RPO) for your availability groups.  Microsoft recommends 600 seconds, or 10 minutes, for RTO, and 3600 seconds, or 60 minutes, for RPO. You can set your own values when defining the policy conditions.

I have scripted out the PBM conditions, policies, and SQL Server Agent alerts. The conditions and policies can be used as-is, because they use system schedules. The alerts need to be updated to replace TestOperator with a SQL Agent operator of your own.

I hope you can make use of these scripts.  And I encourage you to get involved with PASS. You will be amazed at what it can do for you.

[Download: agmonitoringandalertingscripts]

Identifying Object Name for CREATE and ALTER Using fn_dblog()

August 9, 2016

Last week, Jes Borland (b|t) asked me a question about the transaction log. Those of you who have read my blog or seen me present know that this is my favorite SQL Server topic. Jes's question was: "For a transaction with a Transaction Name of CREATE/ALTER VIEW, can the name of the view affected by the CREATE or ALTER statement be identified from the log?"

To check, I ran a transaction log backup on a test database on my system to minimize the number of active transaction log records. I created a view in my test database and ran:

SELECT * FROM fn_dblog(NULL,NULL);

fn_dblog() is a table-valued function that returns the active transaction log records for the database it is executed against. The two NULL parameters are a start and an end log sequence number (LSN). Looking at the results, the Transaction Name of CREATE/ALTER VIEW showed up on the LOP_BEGIN_XACT log record.

[Screenshot: CreateAlterView]
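To jump straight to those records instead of scrolling through the full result set, you can filter on the begin-transaction operation and the transaction name (a small convenience query of my own):

SELECT [Current LSN], [Transaction ID], [Transaction Name]
FROM fn_dblog(NULL,NULL)
WHERE Operation = N'LOP_BEGIN_XACT'
AND [Transaction Name] = N'CREATE/ALTER VIEW';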

The next log record for Transaction ID 0000:000007e6 contains an object id in the [Lock Information] column, highlighted in the screenshot below. In the string 9:98099390:0, 9 is the database id of the object's database and 98099390 is the object id.

[Screenshot: LockInformationView]
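As a quick check, passing that object id to OBJECT_NAME while connected to the object's database returns the view's name (98099390 comes from my test run; your id will differ):

SELECT OBJECT_NAME(98099390) AS ViewName;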


This is the object id of the view that was created.  So, Jes’s question was answered.  But this led me to one of my other favorite SQL Server topics: string manipulation.  The following script will identify all transactions for a particular Transaction Name and return the object name affected.  The comments provide additional information about the functionality.

USE YourDatabase;

/* Drop the temp table if it exists; DROP TABLE IF EXISTS requires SQL Server 2016 */

IF CHARINDEX('2016',@@VERSION) > 0
BEGIN

	DROP TABLE IF EXISTS #logrecords;

END
ELSE
BEGIN

	IF OBJECT_ID('tempdb..#logrecords') IS NOT NULL
	BEGIN

		DROP TABLE #logrecords;

	END
END

/* Declare local variables */
DECLARE @tranname NVARCHAR(66);
DECLARE @tranid NVARCHAR(28);
DECLARE @loopcount INT = 1;
DECLARE @looplimit INT;

/* Set @tranname to the value you are looking for
   This works for CREATE/ALTER VIEW, CREATE TABLE, and ALTER TABLE
   Currently researching other possibilities */
SELECT @tranname = 'ALTER TABLE';

/* Get all log records associated with the transaction name specified
   The results contain a row number per transaction, so all occurrences
   of the transaction name will be found */
SELECT  ROW_NUMBER() OVER(PARTITION BY [Transaction ID] ORDER BY [Current LSN]) AS Row,
    [Current LSN], [Transaction ID], [Transaction Name], operation, Context, AllocUnitName, AllocUnitId, PartitionId, [Lock Information]
INTO #logrecords
FROM fn_dblog(NULL,NULL)
WHERE [Transaction ID] IN
	(SELECT [Transaction ID]
	FROM fn_dblog(NULL,NULL)
	WHERE [Transaction Name] = @tranname);

SELECT @looplimit = COUNT(*) FROM #logrecords
WHERE [Transaction Name] = @tranname;

/* The object id for the object affected is contained in the [Lock Information] column of the second log record of the transaction
   This WHILE loop finds the second row for each transaction and does lots of string manipulation magic to return the object id
   from a string like this:
   HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 9:146099561:0
   Once it finds it, it returns the object name */
WHILE @loopcount <= @looplimit
BEGIN

	SELECT TOP (1) @tranid = [Transaction ID]
	FROM #logrecords;

	DECLARE @lockinfo NVARCHAR(300);
	DECLARE @startingposition INT;
	DECLARE @endingposition INT;
	SELECT @lockinfo = REVERSE([Lock Information]),
		@startingposition = (CHARINDEX(':',REVERSE([Lock Information])) + 1),
		@endingposition = CHARINDEX(':',REVERSE([Lock Information]),(CHARINDEX(':',REVERSE([Lock Information])) + 1))
	FROM #logrecords
	WHERE Row = 2
	AND [Transaction ID] = @tranid;	

	SELECT OBJECT_NAME(REVERSE(SUBSTRING(@lockinfo,(@startingposition),(@endingposition - @startingposition)))) AS ObjectName;

	DELETE FROM #logrecords
	WHERE [Transaction ID] = @tranid;

	SELECT @loopcount += 1;

END
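As a quick usage example (the table name is my own invention), create an object and then point the script at the matching Transaction Name:

USE YourDatabase;

CREATE TABLE dbo.LogDemo (id INT);

/* Now run the script above with:
   SELECT @tranname = 'CREATE TABLE';
   The result set returns LogDemo, along with the name of any other
   recently created table whose log records are still active. */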

So far, I’ve tested the script for the following Transaction Names:

CREATE TABLE
ALTER TABLE
CREATE/ALTER VIEW

It does not work for a DROP, because the object id parsed from the [Lock Information] column no longer exists after the DROP, so OBJECT_NAME cannot resolve it.

Please let me know if you have any comments or questions about the script.

NOTE: The tests that I ran selected from a transaction log containing several hundred records.  In the wild, transaction logs can contain millions of records.  This code will search the entire transaction log and find every occurrence of the Transaction Name you are looking for.  Use caution when running against a production database with a large log.


Identifying In-Memory Objects When Querying the Transaction Log

July 18, 2016

Continuing on the theme of SQL Server 2016, my boss sent me a copy of Kalen Delaney's (b|t) SQL Server 2016 In-Memory Internals white paper. I delved into in-memory internals when SQL Server 2014 released, but then other projects got in the way of my delving time.

I created an in-memory table to explore the contents of the in-memory log records. To do this, I used the table-valued function fn_dblog_xtp. If you are familiar with fn_dblog, fn_dblog_xtp contains the same column list, with 12 new columns added. A sample SELECT statement is shown below; the new columns run from operation_desc through xtp_description:

USE yourdatabase;

SELECT
f.[Current LSN],
f.Operation,
f.Context,
f.[Transaction ID],
f.operation_desc,
f.tx_end_timestamp,
f.total_size,
f.xtp_object_id,
f.newrow_identity,
f.newrow_data,
f.newrow_datasize,
f.oldrow_begin_timestamp,
f.oldrow_identity,
f.oldrow_key_data,
f.oldrow_key_datasize,
f.xtp_description,
f.[Log Record Fixed Length],
f.[Log Record Length],
f.[Flag Bits],
f.[Log Reserve],
f.[Log Record]
FROM fn_dblog_xtp(NULL,NULL)
WHERE Operation LIKE '%HK%';

The two NULL parameters being passed are the start and end Log Sequence Numbers (LSN). Passing NULLs will return all log records. The WHERE Operation LIKE ‘%HK%’ will return only log records related to in-memory tables.

Transactions against an in-memory table will return a single row in the result set from fn_dblog, while the result set from fn_dblog_xtp will contain a row for all activity. For example, if you insert 100 rows into an in-memory table, fn_dblog will contain 3 records: a BEGIN TRANSACTION, a single row for the INSERT, and a COMMIT TRANSACTION. fn_dblog_xtp will contain 102 records: a BEGIN TRANSACTION, a row for each of the 100 rows inserted, and a COMMIT TRANSACTION.
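Here is a minimal sketch of that comparison. It assumes a database with a MEMORY_OPTIMIZED_DATA filegroup, an existing memory-optimized table dbo.InMemDemo with a single int column (the table is my own invention), the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT set to ON so the explicit transaction can touch the table, and that fn_dblog_xtp accepts the same 0x-prefixed starting LSN that fn_dblog does:

-- Capture the current end of the log
DECLARE @maxlsn NVARCHAR(46);
SELECT @maxlsn = CONCAT(N'0x',MAX([Current LSN])) FROM fn_dblog(NULL,NULL);

-- Insert 100 rows in a single transaction
BEGIN TRANSACTION;

DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
	INSERT INTO dbo.InMemDemo (id) VALUES (@i);
	SET @i += 1;
END

COMMIT TRANSACTION;

-- fn_dblog treats the transaction as a single unit of work
SELECT COUNT(*) AS dblog_records
FROM fn_dblog(@maxlsn,NULL);

-- fn_dblog_xtp returns a record per row touched, plus the BEGIN and COMMIT
SELECT COUNT(*) AS dblog_xtp_records
FROM fn_dblog_xtp(@maxlsn,NULL)
WHERE Operation LIKE '%HK%';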

One of the new columns in fn_dblog_xtp is xtp_object_id. I tried to join this to the sys.objects table to return the object name, but the values didn’t match. After banging my head against my monitor for a while, I posed the question to #sqlhelp on Twitter. Andreas Wolter (b|t) responded that a correlation can be made using DMV sys.memory_optimized_tables_internal_attributes.  Excited, I tried to join fn_dblog_xtp to the DMV, but was disappointed.

On further examination, I determined the xtp_object_id from fn_dblog_xtp was a very large positive number, while the xtp_object_id from the DMV was a very large negative number. I looked closer and found that the first in-memory object created in a database receives two xtp_object_id values, 2147483648 and -2147483648. Both of these values are incremented for additional in-memory objects, so your second in-memory object will receive values of 2147483649 and -2147483647.

So, to bring a very long story to a close, you can join fn_dblog_xtp to the DMV using the following query:

SELECT 
f.[Current LSN],
f.Operation,
f.Context,
f.[Transaction ID],
f.operation_desc,
f.tx_end_timestamp,
f.total_size,
OBJECT_NAME(m.object_id) AS ObjectName,
f.newrow_identity,
f.newrow_data,
f.newrow_datasize,
f.oldrow_begin_timestamp,
f.oldrow_identity,
f.oldrow_key_data,
f.oldrow_key_datasize,
f.xtp_description,
f.[Log Record Fixed Length],
f.[Log Record Length],
f.[Flag Bits],
f.[Log Reserve],
f.[Log Record]
FROM fn_dblog_xtp(NULL,NULL) AS f
INNER JOIN sys.memory_optimized_tables_internal_attributes m
ON (f.xtp_object_id - 2147483648) = (m.xtp_object_id + 2147483648) 
WHERE f.Operation LIKE '%HK%';

Once again, I hope this saves some folks some frustration when interrogating the transaction log for in-memory objects.


Fun With READ_ONLY_ROUTING_LIST Syntax

July 13, 2016

I am finally getting to work with SQL Server 2016 and spent the past two days building out an Availability Group to test the new read-only routing functionality. This allows connections with an ApplicationIntent of ReadOnly to round-robin among all instances in the READ_ONLY_ROUTING_LIST, load balancing the read-only activity.

For my test, I built out three replicas and set all three to be readable secondaries.  Using Microsoft’s documentation on configuring read-only routing, I coded the following:

ALTER AVAILABILITY GROUP SKREEBYHA
MODIFY REPLICA ON 'AGNODE1\SKREEBYAG'
WITH
(
PRIMARY_ROLE
(
READ_ONLY_ROUTING_LIST=('AGNODE2\SKREEBYAG','AGNODE3\SKREEBYAG')
)
);

To test the new setup, I used the SQLCMD examples from Eric Russo’s blog post on Load Balancing 101.  Running the following command:

SQLCMD -S SKREEBYLIST -K ReadOnly -d testdb -Q "SELECT @@SERVERNAME"

returned AGNODE2\SKREEBYAG in each case. I could not figure out what I was doing wrong. After much frustration, I noticed the routing check query at the bottom of Eric’s post:

SELECT ar_Primary.replica_server_name 'Primary',
ar_secondary.replica_server_name 'Route_to',
ror.routing_priority 'Priority'
FROM sys.availability_read_only_routing_lists ror
INNER JOIN sys.availability_replicas ar_Primary
ON ror.replica_id = ar_Primary.replica_id
INNER JOIN sys.availability_replicas ar_secondary
ON ror.read_only_replica_id = ar_secondary.replica_id
INNER JOIN sys.availability_groups ag
ON ag.group_id = ar_Primary.group_id
WHERE ag.name = 'SKREEBYHA'
ORDER BY 1,3

Running it returned the following results:

[Screenshot: RoutingListResults]

So, in technical terms, something was screwy.  Looking at Eric’s post again, I noticed his third example for setting a routing list:

ALTER AVAILABILITY GROUP SQL2016AG
MODIFY REPLICA ON 'MyNode3'
WITH
(
PRIMARY_ROLE
(
READ_ONLY_ROUTING_LIST= (('MyNode4','MyNode1','MyNode2'))
)
)

Notice the second set of parentheses around the READ_ONLY_ROUTING_LIST value. The inner parentheses define a load-balanced group: read-only connections round-robin among the replicas inside them, while a list without them is treated as a priority list that always routes to the first available replica. Based on this, I changed my code to:

ALTER AVAILABILITY GROUP SKREEBYHA
MODIFY REPLICA ON 'AGNODE1\SKREEBYAG'
WITH
(
PRIMARY_ROLE
(
READ_ONLY_ROUTING_LIST=(('AGNODE2\SKREEBYAG','AGNODE3\SKREEBYAG'))
)
);

and, voila, the routing list check query returned the following:

[Screenshot: RoutingListResultsFixed]

Rerunning the SQLCMD test, the read-only connections round-robined between AGNODE2 and AGNODE3.
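As an aside, the two syntaxes can be combined. In this hypothetical variation (AGNODE4 is an invented fourth replica), the inner group still round-robins between AGNODE2 and AGNODE3, and AGNODE4 only receives read-only connections if neither group member is available:

ALTER AVAILABILITY GROUP SKREEBYHA
MODIFY REPLICA ON 'AGNODE1\SKREEBYAG'
WITH
(
PRIMARY_ROLE
(
READ_ONLY_ROUTING_LIST=(('AGNODE2\SKREEBYAG','AGNODE3\SKREEBYAG'),'AGNODE4\SKREEBYAG')
)
);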

Many thanks to Eric Russo (b|t) for a great blog post.  I hope this saves you some frustration when working with read-only routing.


SQL Saturday Columbus

It's been a while since my last blog post. Since then, I've been to PASS Summit, which was the best one yet. I've presented at SQL Saturdays in Portland, Cleveland, and Iowa City. And, most excitingly, I started a new job with BlueMetal. I am in the middle of week 3 and am really excited about the new challenge.

I’ll be presenting on Using PowerShell to Automate Your Restore Strategy at SQL Saturday Columbus on July 16th.  It will be my first time in Columbus, Ohio and I am looking forward to seeing old friends, making new ones, and teaching and learning.  You can register and find more information at the link above.  If you are in the area, don’t miss the opportunity to get a free day of training while networking with your peers in the SQL Server community.