We are nearly a month into the new year and my SQL Saturday season is about to start. I will be presenting Creating an Azure SQL Database Using PowerShell. This will be my third year in Cleveland and I am looking forward to another great time. You can find more information and register here.
SQL Saturday Chicago is taking place Saturday, March 11. I will be helping organize and run the event, so I will not be presenting. I did that a few years ago, and it is just too many balls in the air. The event is already at capacity, but you can add yourself to the waitlist here. If you sign up for either pre-conference session, you can send your registration receipt to firstname.lastname@example.org and you will be registered for SQL Saturday. The pre-conference sessions are Tuning Your Biggest Queries with Adam Machanic and Complete Primer for SQL Server Infrastructure and the Cloud.
Finally, SQL Saturday Madison takes place Saturday, April 8. This will be my fifth straight time in Madison and they put on an amazing event. The venue is in the heart of the University of Wisconsin campus and is one of the more beautiful SQL Saturdays I’ve been to. You can find more information and register here.
If these events aren’t geographically desirable, you can find other SQL Saturday events worldwide here. If you do make it to any of the three, be sure to say hello. Meeting new people is one of the best parts of SQL Saturdays.
I have returned from my eighth PASS Summit and, as in years past, they just keep getting better. I am amazed and humbled that I am a part of the PASS community. PASS has given me the opportunity to meet and learn from many of the best and brightest in the SQL world. When I meet first-timers, I tell them to talk to people, no matter who they are. If you share an interest, they will share with you. If you have never been to PASS Summit, I recommend it. If you cannot make Summit, get involved with your local PASS Chapter or SQL Saturday. Virtual Chapters are another great way to find out what PASS has to offer.
One of the highlights of this year’s Summit was Shawn Meyers’ (t) presentation on Monitoring and Alerting of Availability Groups. (You will need to be logged in to the PASS website to access the link.) I have worked with Availability Groups since their release in SQL Server 2012, and Shawn provided the best solution for monitoring and alerting that I have seen.
Shawn suggested implementing three alerts:
- 1480 – AG Role Change
- 35264 – AG Data Movement Suspended
- 35265 – AG Data Movement Resumed
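Shawn's exact scripts aren't reproduced here, but a minimal sketch of creating one of these alerts, using msdb's sp_add_alert and sp_add_notification, might look like this (TestOperator is a placeholder operator name; the other two alerts follow the same pattern with @message_id = 35264 and 35265):

```sql
USE msdb;

/* Create an alert for error 1480 (AG role change). If the error is not
   written to the error log on your instance, sp_altermessage can enable
   that with @parameter = 'WITH_LOG'. */
EXEC dbo.sp_add_alert
    @name = N'AG Role Change',
    @message_id = 1480,
    @severity = 0,
    @enabled = 1,
    @include_event_description_in = 1;

/* Notify an operator when the alert fires.
   Replace TestOperator with a SQL Server Agent operator of your own. */
EXEC dbo.sp_add_notification
    @alert_name = N'AG Role Change',
    @operator_name = N'TestOperator',
    @notification_method = 1;
```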
Additionally, Shawn provided an MSDN link with recommendations for using Policy-Based Management (PBM) to monitor AGs. The custom policies monitor the recovery time objective (RTO) and recovery point objective (RPO) for your availability groups. Microsoft recommends 600 seconds, or 10 minutes, for RTO, and 3600 seconds, or 60 minutes, for RPO. You can set your own values when defining the policy conditions.
I have scripted out the PBM conditions, policies, and SQL Server Agent alerts. The conditions and policies can be used as-is, because they use system schedules. The alerts need to be updated to replace the TestOperator with a SQL Server Agent operator of your own.
I hope you can make use of these scripts. And I encourage you to get involved with PASS. You will be amazed at what it can do for you.
Last week, Jes Borland (b|t) asked me a question about the transaction log. Those of you who have read my blog or seen me present know that this is my favorite SQL Server topic. Jes’s question was: “For a transaction with a Transaction Name of CREATE/ALTER VIEW, can the name of the view affected by the CREATE or ALTER statement be identified from the log?”
To check, I ran a transaction log backup on a test database on my system to minimize the number of active transaction log records. I created a view in my test database and ran:
SELECT * FROM fn_dblog(NULL,NULL)
fn_dblog() is a table-valued function that returns the active transaction log records for the database it is executed against. The two NULL parameters are the start and end LSNs. Looking at the results, the Transaction Name of CREATE/ALTER VIEW showed up on the LOP_BEGIN_XACT log record.
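To narrow the output to just the begin records that carry this Transaction Name, a query along these lines can be used (a sketch, not part of the original demo):

```sql
/* Return only the LOP_BEGIN_XACT records for CREATE/ALTER VIEW transactions */
SELECT [Current LSN],
    [Transaction ID],
    [Transaction Name],
    Operation
FROM fn_dblog(NULL,NULL)
WHERE Operation = 'LOP_BEGIN_XACT'
AND [Transaction Name] = 'CREATE/ALTER VIEW';
```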
The next log record for Transaction ID 0000:000007e6 contains an OBJECT_ID in the result set, highlighted below. In the string 9:98099390:0, 9 is the database id of the object’s database and 98099390 is the object id.
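Given those two values, the object can be resolved without leaving T-SQL, since OBJECT_NAME accepts an optional database_id as its second parameter (the ids below are the ones from my example):

```sql
/* 9 is the database id and 98099390 is the object id
   parsed from the 9:98099390:0 string */
SELECT DB_NAME(9) AS DatabaseName,
       OBJECT_NAME(98099390, 9) AS ObjectName;
```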
This is the object id of the view that was created. So, Jes’s question was answered. But this led me to one of my other favorite SQL Server topics: string manipulation. The following script will identify all transactions for a particular Transaction Name and return the object name affected. The comments provide additional information about the functionality.
USE YourDatabase;

/* Drop temp table if it exists */
IF CHARINDEX('2016',@@VERSION) > 0
BEGIN
    DROP TABLE IF EXISTS #logrecords;
END
ELSE
BEGIN
    IF OBJECT_ID('tempdb..#logrecords') IS NOT NULL
    BEGIN
        DROP TABLE #logrecords;
    END
END

/* Declare local variables */
DECLARE @tranname NVARCHAR(66);
DECLARE @tranid NVARCHAR(28);
DECLARE @loopcount INT = 1;
DECLARE @looplimit INT;

/* Set @tranname to the value you are looking for.
   This works for CREATE/ALTER VIEW, CREATE TABLE, and ALTER TABLE.
   Currently researching other possibilities. */
SELECT @tranname = 'ALTER TABLE';

/* Get all log records associated with the transaction name specified.
   The results contain a row number per transaction, so all occurrences
   of the transaction name will be found. */
SELECT ROW_NUMBER() OVER(PARTITION BY [Transaction ID] ORDER BY [Current LSN]) AS Row,
    [Current LSN],
    [Transaction ID],
    [Transaction Name],
    operation,
    Context,
    AllocUnitName,
    AllocUnitId,
    PartitionId,
    [Lock Information]
INTO #logrecords
FROM fn_dblog(NULL,NULL)
WHERE [Transaction ID] IN
    (SELECT [Transaction ID]
     FROM fn_dblog(NULL,NULL)
     WHERE [Transaction Name] = @tranname);

SELECT @looplimit = COUNT(*)
FROM #logrecords
WHERE [Transaction Name] = @tranname;

/* The object id for the object affected is contained in the [Lock Information]
   column of the second log record of the transaction. This WHILE loop finds the
   second row for each transaction and does lots of string manipulation magic to
   return the object id from a string like this:
   HoBt 0:ACQUIRE_LOCK_SCH_M OBJECT: 9:146099561:0
   Once it finds it, it returns the object name. */
WHILE @loopcount <= @looplimit
BEGIN

    SELECT TOP 1 @tranid = [Transaction ID]
    FROM #logrecords;

    DECLARE @lockinfo NVARCHAR(300);
    DECLARE @startingposition INT;
    DECLARE @endingposition INT;

    SELECT @lockinfo = REVERSE([Lock Information]),
        @startingposition = (CHARINDEX(':',REVERSE([Lock Information])) + 1),
        @endingposition = CHARINDEX(':',REVERSE([Lock Information]),(CHARINDEX(':',REVERSE([Lock Information])) + 1))
    FROM #logrecords
    WHERE Row = 2
    AND [Transaction ID] = @tranid;

    SELECT OBJECT_NAME(REVERSE(SUBSTRING(@lockinfo,@startingposition,(@endingposition - @startingposition)))) AS ObjectName;

    DELETE FROM #logrecords
    WHERE [Transaction ID] = @tranid;

    SELECT @loopcount += 1;

END
So far, I’ve tested the script for the following Transaction Names: CREATE/ALTER VIEW, CREATE TABLE, and ALTER TABLE.

It does not work for a DROP, because the object id returned in the Lock Information column no longer exists after the DROP.
Please let me know if you have any comments or questions about the script.
NOTE: The tests that I ran selected from a transaction log containing several hundred records. In the wild, transaction logs can contain millions of records. This code will search the entire transaction log and find every occurrence of the Transaction Name you are looking for. Use caution when running against a production database with a large log.
Continuing on the theme of SQL Server 2016, my boss sent me a copy of Kalen Delaney’s (b|t) SQL Server 2016 In-Memory Internals white paper. I delved into in-memory internals when SQL Server 2014 was released, but then other projects got in the way of my delving time.
I created an in-memory table to explore the contents of the in-memory log records. To do this, I used the table-valued function fn_dblog_xtp. If you are familiar with fn_dblog, fn_dblog_xtp contains the same column list, with 12 new columns added. A sample SELECT statement is shown below, with the new columns highlighted:
USE yourdatabase;

SELECT f.[Current LSN],
    f.Operation,
    f.Context,
    f.[Transaction ID],
    f.operation_desc,
    f.tx_end_timestamp,
    f.total_size,
    f.xtp_object_id,
    f.newrow_identity,
    f.newrow_data,
    f.newrow_datasize,
    f.oldrow_begin_timestamp,
    f.oldrow_identity,
    f.oldrow_key_data,
    f.oldrow_key_datasize,
    f.xtp_description,
    f.[Log Record Fixed Length],
    f.[Log Record Length],
    f.[Flag Bits],
    f.[Log Reserve],
    f.[Log Record]
FROM fn_dblog_xtp(NULL,NULL)
WHERE Operation LIKE '%HK%';
The two NULL parameters being passed are the start and end Log Sequence Numbers (LSN). Passing NULLs will return all log records. The WHERE Operation LIKE '%HK%' predicate will return only log records related to in-memory tables.
Transactions against an in-memory table will return a single row in the result set from fn_dblog, while the result set from fn_dblog_xtp will contain a row for all activity. For example, if you insert 100 rows into an in-memory table, fn_dblog will contain 3 records: a BEGIN TRANSACTION, a single row for the INSERT, and a COMMIT TRANSACTION. fn_dblog_xtp will contain 102 records: a BEGIN TRANSACTION, a row for each row inserted, and a COMMIT TRANSACTION.
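A quick way to see this difference for yourself, sketched under the assumption of an in-memory table named dbo.InMemTest (a hypothetical name) with a single Id column:

```sql
/* Insert 100 rows into an in-memory table in one explicit transaction,
   then compare the record counts from the two log functions */
BEGIN TRANSACTION;

DECLARE @i INT = 1;
WHILE @i <= 100
BEGIN
    INSERT INTO dbo.InMemTest (Id) VALUES (@i);
    SET @i += 1;
END

COMMIT TRANSACTION;

/* fn_dblog: expect only a handful of records for the transaction */
SELECT COUNT(*) AS DiskLogRecords
FROM fn_dblog(NULL,NULL);

/* fn_dblog_xtp: expect a record per inserted row, plus begin and commit */
SELECT COUNT(*) AS XtpLogRecords
FROM fn_dblog_xtp(NULL,NULL)
WHERE Operation LIKE '%HK%';
```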
One of the new columns in fn_dblog_xtp is xtp_object_id. I tried to join this to the sys.objects table to return the object name, but the values didn’t match. After banging my head against my monitor for a while, I posed the question to #sqlhelp on Twitter. Andreas Wolter (b|t) responded that a correlation can be made using the DMV sys.memory_optimized_tables_internal_attributes. Excited, I tried to join fn_dblog_xtp to the DMV, but was disappointed.
On further examination, I determined that the xtp_object_id from fn_dblog_xtp was a very large positive number, while the xtp_object_id from the DMV was a very large negative number. I looked closer and found that the first in-memory object created in a database receives two xtp_object_id values: 2147483648 and -2147483648. Both of these values are incremented for additional in-memory objects, so your second in-memory object will receive values of 2147483649 and -2147483647.
So, to bring a very long story to a close, you can join fn_dblog_xtp to the DMV using the following query:
SELECT f.[Current LSN],
    f.Operation,
    f.Context,
    f.[Transaction ID],
    f.operation_desc,
    f.tx_end_timestamp,
    f.total_size,
    OBJECT_NAME(m.object_id) AS ObjectName,
    f.newrow_identity,
    f.newrow_data,
    f.newrow_datasize,
    f.oldrow_begin_timestamp,
    f.oldrow_identity,
    f.oldrow_key_data,
    f.oldrow_key_datasize,
    f.xtp_description,
    f.[Log Record Fixed Length],
    f.[Log Record Length],
    f.[Flag Bits],
    f.[Log Reserve],
    f.[Log Record]
FROM fn_dblog_xtp(NULL,NULL) AS f
INNER JOIN sys.memory_optimized_tables_internal_attributes m
    ON (f.xtp_object_id - 2147483648) = (m.xtp_object_id + 2147483648)
WHERE Operation LIKE '%HK%';
Once again, I hope this saves some folks frustration interrogating the transaction log for in-memory objects.
I am finally getting to work with SQL Server 2016 and have spent the past two days building out an Availability Group to test the new read-only replica functionality. This allows connections with an ApplicationIntent of ReadOnly to round-robin between all instances in the READ_ONLY_ROUTING_LIST, load balancing the read-only activity.
For my test, I built out three replicas and set all three to be readable secondaries. Using Microsoft’s documentation on configuring read-only routing, I coded the following:
ALTER AVAILABILITY GROUP SKREEBYHA
MODIFY REPLICA ON 'AGNODE1\SKREEBYAG'
WITH
(
    PRIMARY_ROLE
    (
        READ_ONLY_ROUTING_LIST = ('AGNODE2\SKREEBYAG','AGNODE3\SKREEBYAG')
    )
);
To test the new setup, I used the SQLCMD examples from Eric Russo’s blog post on Load Balancing 101. Running the following command:
SQLCMD -S SKREEBYLIST -K ReadOnly -d testdb -Q "SELECT @@SERVERNAME"
returned AGNODE2\SKREEBYAG in each case. I could not figure out what I was doing wrong. After much frustration, I noticed the routing check query at the bottom of Eric’s post:
SELECT ar_Primary.replica_server_name 'Primary',
    ar_secondary.replica_server_name 'Route_to',
    ror.routing_priority 'Priority'
FROM sys.availability_read_only_routing_lists ror
INNER JOIN sys.availability_replicas ar_Primary
    ON ror.replica_id = ar_Primary.replica_id
INNER JOIN sys.availability_replicas ar_secondary
    ON ror.read_only_replica_id = ar_secondary.replica_id
INNER JOIN sys.availability_groups ag
    ON ag.group_id = ar_Primary.group_id
WHERE ag.name = 'SKREEBYHA'
ORDER BY 1,3;
Running it returned the following results:
So, in technical terms, something was screwy. Looking at Eric’s post again, I noticed his third example for setting a routing list:
ALTER AVAILABILITY GROUP SQL2016AG
MODIFY REPLICA ON 'MyNode3'
WITH
(
    PRIMARY_ROLE
    (
        READ_ONLY_ROUTING_LIST = (('MyNode4','MyNode1','MyNode2'))
    )
);
Notice the second set of parentheses around the READ_ONLY_ROUTING_LIST value. Based on this, I changed my code to:
ALTER AVAILABILITY GROUP SKREEBYHA
MODIFY REPLICA ON 'AGNODE1\SKREEBYAG'
WITH
(
    PRIMARY_ROLE
    (
        READ_ONLY_ROUTING_LIST = (('AGNODE2\SKREEBYAG','AGNODE3\SKREEBYAG'))
    )
);
and, voila, the routing list check query returned the following:
Rerunning the SQLCMD test, the read-only connections round-robined between AGNODE2 and AGNODE3.
It’s been a while since my last blog post. Since then, I’ve been to PASS Summit, which was the best one yet. I’ve presented at SQL Saturdays in Portland, Cleveland, and Iowa City. And, most excitingly, I started a new job with BlueMetal. I am in the middle of week three and thrilled about the new challenge.
I’ll be presenting on Using PowerShell to Automate Your Restore Strategy at SQL Saturday Columbus on July 16th. It will be my first time in Columbus, Ohio and I am looking forward to seeing old friends, making new ones, and teaching and learning. You can register and find more information at the link above. If you are in the area, don’t miss the opportunity to get a free day of training while networking with your peers in the SQL Server community.
Previously, I’ve blogged about returning formatted estimated_completion_time values from sys.dm_exec_requests and getting sortable output from sp_whoisactive. In this post I combine the two, joining the two result sets together. In my environment, we use Ola Hallengren’s Maintenance Solution to run our backups. Because the backup job defaults to run backups for all user databases, it can be difficult to tell which database is currently being backed up.
USE [master]
GO

-- Drop procedure if it exists
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[sp_join_whoisactive_execrequests]')
           AND type IN (N'P', N'PC'))
    DROP PROCEDURE [dbo].sp_join_whoisactive_execrequests
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

/* =============================================
-- Author:      Frank Gill
-- Create date: 2015-10-27
-- Description: This procedure executes Adam Machanic's sp_whoisactive, dumps the
--              results to a temp table, and then joins those results to
--              sys.dm_exec_requests. This will return estimated completion time
--              for the following commands along with the statement being executed
--              in a single pass:
--              ALTER INDEX REORGANIZE
--              AUTO_SHRINK option with ALTER DATABASE
--              BACKUP DATABASE
--              DBCC CHECKDB
--              DBCC CHECKFILEGROUP
--              DBCC CHECKTABLE
--              DBCC INDEXDEFRAG
--              DBCC SHRINKDATABASE
--              DBCC SHRINKFILE
--              RECOVERY
--              RESTORE DATABASE
--              ROLLBACK
--              TDE ENCRYPTION
-- ============================================= */
CREATE PROCEDURE sp_join_whoisactive_execrequests
AS
BEGIN

    IF OBJECT_ID('tempdb..#WhoIsActive') IS NOT NULL
    BEGIN
        SELECT 'Dropping'
        DROP TABLE #WhoIsActive
    END

    CREATE TABLE #WhoIsActive
    ([dd hh:mm:ss.mss] VARCHAR(20)
    ,[dd hh:mm:ss.mss (avg)] VARCHAR(20)
    ,[session_id] SMALLINT
    ,[sql_text] XML
    ,[sql_command] XML
    ,[login_name] SYSNAME
    ,[wait_info] NVARCHAR(4000)
    ,[tran_log_writes] NVARCHAR(4000)
    ,[CPU] VARCHAR(30)
    ,[tempdb_allocations] VARCHAR(30)
    ,[tempdb_current] VARCHAR(30)
    ,[blocking_session_id] SMALLINT
    ,[blocked_session_count] VARCHAR(30)
    ,[reads] VARCHAR(30)
    ,[writes] VARCHAR(30)
    ,[physical_reads] VARCHAR(30)
    ,[query_plan] XML
    ,[used_memory] VARCHAR(30)
    ,[status] VARCHAR(30)
    ,[tran_start_time] DATETIME
    ,[open_tran_count] VARCHAR(30)
    ,[percent_complete] VARCHAR(30)
    ,[host_name] SYSNAME
    ,[database_name] SYSNAME
    ,[program_name] SYSNAME
    ,[start_time] DATETIME
    ,[login_time] DATETIME
    ,[request_id] INT
    ,[collection_time] DATETIME)

    /* Execute sp_whoisactive and write the result set to the temp table created above */
    EXEC master.dbo.sp_WhoIsActive
        @get_plans = 2,
        @get_outer_command = 1,
        @get_transaction_info = 1,
        @get_avg_time = 1,
        @find_block_leaders = 1,
        @destination_table = #WhoIsActive

    /* Join the #WhoIsActive temp table to sys.dm_exec_requests to get estimated
       completion time and query information in one pass */
    SELECT CASE WHEN ((r.estimated_completion_time/1000)%3600/60) < 10
                THEN '0' + CONVERT(VARCHAR(10),(r.estimated_completion_time/1000)%3600/60)
                ELSE CONVERT(VARCHAR(10),(r.estimated_completion_time/1000)%3600/60)
           END
           + ':' +
           CASE WHEN ((r.estimated_completion_time/1000)%60) < 10
                THEN '0' + CONVERT(VARCHAR(10),(r.estimated_completion_time/1000)%60)
                ELSE CONVERT(VARCHAR(10),(r.estimated_completion_time/1000)%60)
           END AS [Time Remaining],
           r.percent_complete,
           r.session_id,
           w.login_name,
           w.[host_name],
           w.sql_command,
           w.sql_text
    FROM #WhoIsActive w
    RIGHT OUTER JOIN sys.dm_exec_requests r
        ON r.session_id = w.session_id
    WHERE r.percent_complete > 0;

END
This procedure takes the output from sp_whoisactive and joins it to sys.dm_exec_requests on session_id, returning the estimated completion time and current backup statement in a single pass. Here is a sample result set from the procedure (Click to Enlarge):
The sql_command column contains the full command being run, in this example Ola Hallengren’s DatabaseBackup procedure. The sql_text column contains the actual SQL statement being run, BACKUP DATABASE [AdventureWorks2012] TO DISK…. The DatabaseBackup procedure runs database backups in alphabetical order based on the database name, so this allows you to track the job’s progress.
Many thanks to Adam Machanic (b|t) for sp_whoisactive, Ola Hallengren (b|t) for his Maintenance Solution, and Aaron Bertrand (b|t) for his advice on formatting the estimated completion time more elegantly. I finally had an opportunity to use it.
I hope you can put this procedure to good use. Let me know if you have any questions.