Archive for March, 2012

Calculating Deltas for sys.dm_os_performance_counters, Part II

March 23, 2012

In my last post, I showed how to calculate the values of per-second metrics from the sys.dm_os_performance_counters DMO. In this post, I will show how to save those metrics and calculate the page reads/sec value for different time frames.

For this demonstration, I created a table with the following definition:


CREATE TABLE pageread_history
(insertdate DATETIME
,pagereads BIGINT)
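
The stored procedure later in this post filters this table by insertdate, so a clustered index on that column keeps those date lookups efficient. The index is my own addition and the index name is a placeholder, not part of the original definition:

-- Optional: cluster the history table on insertdate so date range lookups are efficient
CREATE CLUSTERED INDEX IX_pageread_history_insertdate
ON pageread_history (insertdate)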

I then created a job, scheduled to execute every minute, to run the following code:

INSERT INTO pageread_history
SELECT GETDATE(), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec'
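
For completeness, here is one way to define that job with SQL Server Agent. The original post does not show the job definition, so the job, step, and schedule names below are placeholders, and I am assuming the pageread_history table lives in testdb (the database used later in this post):

USE msdb
GO

-- Create the job, a T-SQL step that captures the counter, and a schedule that fires every minute
EXEC dbo.sp_add_job
	@job_name = N'Collect Page Reads'

EXEC dbo.sp_add_jobstep
	@job_name = N'Collect Page Reads',
	@step_name = N'Insert counter value',
	@subsystem = N'TSQL',
	@database_name = N'testdb',
	@command = N'INSERT INTO pageread_history
SELECT GETDATE(), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = ''Page reads/sec'''

EXEC dbo.sp_add_jobschedule
	@job_name = N'Collect Page Reads',
	@name = N'Every minute',
	@freq_type = 4,              -- daily
	@freq_interval = 1,
	@freq_subday_type = 4,       -- repeat in units of minutes
	@freq_subday_interval = 1

-- Attach the job to the local server so SQL Server Agent will run it
EXEC dbo.sp_add_jobserver
	@job_name = N'Collect Page Reads',
	@server_name = N'(local)'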

Finally, in order to generate some non-trivial values, I executed the following code:


USE primenumbers
GO

-- Clear the buffer cache before each execution
DBCC DROPCLEANBUFFERS

-- Select every row from a million-row table
SELECT * FROM primenumbers

-- Execute the query 100 times
GO 100

This query forces SQL Server to read all of the pages in the table from disk every time it executes, generating page read activity. DBCC DROPCLEANBUFFERS requires membership in the sysadmin role and clears the buffer cache, forcing physical reads. It should NOT be run on a production system or any system on which you care about performance. I ran this on my localhost to generate some page read volume.
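
If you want to estimate how many page reads each execution should generate, you can check how many data pages the table occupies. This query is my addition, and it assumes the table is dbo.primenumbers in the primenumbers database:

USE primenumbers
GO

-- Count the in-row data pages for the table; each cold execution of SELECT *
-- should read roughly this many pages from disk
SELECT SUM(in_row_data_page_count) AS data_pages
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.primenumbers')
AND index_id IN (0, 1) -- heap or clustered index only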

To calculate page reads/sec using the values stored in the table, I created the following stored procedure:

USE testdb
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:		Frank Gill
-- Create date: 2012-03-18
-- Description:	Calculate page reads/sec based on input date values
-- =============================================
CREATE PROCEDURE DUP_CALCULATE_PAGE_READS_SEC
	-- @mindate and @maxdate will be passed in on execution and @pagereadssec will be output
	@mindate DATETIME, 
	@maxdate DATETIME,
	@pagereadssec INT OUTPUT
AS
BEGIN

	SET NOCOUNT ON;
	
	-- Declare variables
	DECLARE @maxval BIGINT
	DECLARE @minval BIGINT
	DECLARE @deltaval INT
	DECLARE @deltasec INT
	DECLARE @tablemaxdate DATETIME
	DECLARE @errormessage VARCHAR(500)
	
	-- If @maxdate or @mindate are greater than the max date in the table, NULL will be returned
	-- To check for this, select the MAX date value out of the table
	SELECT @tablemaxdate = MAX(insertdate)
	FROM pageread_history
	
	-- If @maxdate or @mindate are greater than @tablemaxdate, raise an error and output the MAX date value
	IF (@mindate > @tablemaxdate) OR (@maxdate > @tablemaxdate)
		BEGIN
			SET @errormessage = 'Both @mindate and @maxdate must be less than or equal to ' + CONVERT(VARCHAR(20),@tablemaxdate,121)
			RAISERROR (@errormessage,
						16,
						1)
			RETURN
		END

	-- Pull the first value from a datetime greater than or equal to @mindate
	SELECT TOP 1 @minval = pagereads
	FROM pageread_history
	WHERE insertdate >= @mindate
	ORDER BY insertdate
	
	-- Pull the first value from a datetime greater than or equal to @maxdate
	SELECT TOP 1 @maxval = pagereads
	FROM pageread_history
	WHERE insertdate >= @maxdate
	ORDER BY insertdate

	-- Calculate the delta value with the counts returned from the table
	SET @deltaval = @maxval - @minval

	-- Calculate the number of seconds elapsed
	SET @deltasec = DATEDIFF(SS, @mindate, @maxdate)

	-- Calculate page reads/sec
	SET @pagereadssec = @deltaval / @deltasec

END
GO
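
To call the procedure, pass in a start and end time from the collection window and capture the output parameter. The date literals below are placeholders; substitute times that actually exist in your pageread_history table:

DECLARE @pagereadssec INT

-- Calculate page reads/sec for a ten-minute window (placeholder dates)
EXEC DUP_CALCULATE_PAGE_READS_SEC
	@mindate = '2012-03-18 10:00:00',
	@maxdate = '2012-03-18 10:10:00',
	@pagereadssec = @pagereadssec OUTPUT

SELECT @pagereadssec AS [Page Reads/Sec]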

While I've used page reads in the previous examples, this code can be modified to calculate any of the per-second metrics returned from sys.dm_os_performance_counters. I hope you can use the ideas presented here.
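
If you want to see which counters are candidates for the same treatment, a quick way (my query, not from the original post) is to list the per-second counters the DMO exposes:

-- List the cumulative per-second counters exposed by the DMO
SELECT RTRIM([object_name]) AS [Object], RTRIM(counter_name) AS [Counter]
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE '%/sec%'
ORDER BY [Object], [Counter]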

Categories: DMO, PerfMon, SQL Server, T-SQL

Calculating Deltas for sys.dm_os_performance_counters, Part I

March 21, 2012

In the environment I work in, we collect a number of performance metrics from the sys.dm_os_performance_counters DMO. This data is useful for troubleshooting and historical analysis. Analyzing it presents a challenge, because SQL Server stores any metric measured per second as a cumulative counter that resets when the instance starts up. To get meaningful information from the data, you must take the delta between two measurements taken at different times and divide it by the number of seconds elapsed between them.

I was reminded of this after reading an excellent article by Jonathan Kehayias on the debate over buffer cache hit ratio and page reads/sec. Here is an example of calculating the delta, using page reads/sec:

USE master
GO

-- Declare variables

DECLARE @maxval BIGINT
DECLARE @maxdate DATETIME
DECLARE @minval BIGINT
DECLARE @mindate DATETIME
DECLARE @deltaval INT
DECLARE @deltasec INT
DECLARE @pagereadssec INT

-- Collect the minimum value for page reads/sec and the run date

SELECT @minval = cntr_value, @mindate = GETDATE()
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec'

-- Wait for 1 minute
WAITFOR DELAY '00:01:00'

-- Collect the maximum value for page reads/sec and the run date
SELECT @maxval = cntr_value, @maxdate = GETDATE()
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page reads/sec'

-- Calculate the number of page reads in the time elapsed
SET @deltaval = @maxval - @minval

-- Calculate the number of seconds elapsed
SET @deltasec = DATEDIFF(SS, @mindate, @maxdate)

-- Calculate page reads/sec
SET @pagereadssec = @deltaval / @deltasec

-- Select the minimum value, maximum value and the three calculated values to confirm the process is working correctly
-- This illustrates the cumulative nature of the sys.dm_os_performance_counters values

SELECT @maxval AS [Maximum Value], 
@minval AS [Minimum Value], 
@deltasec AS Seconds, 
@deltaval AS [Page Reads], 
@pagereadssec AS [Page Reads/Sec]

Here is an example of the results from an extremely active instance:

Instance    Max Val        Min Val        Sec    Page Reads    Page Reads/Sec
myinst      20494050250    20494046388    60     3862          64

From these results, you can see how large the cumulative counters for these metrics can get and why the delta calculation is so important. In my next post, I will show how to save these values historically and present a procedure for calculating the per-second value over different time frames.

Categories: DMO, PerfMon, SQL Server, T-SQL

SQL Server 2012 Launch

March 17, 2012

I attended the SQL Server 2012 launch this past Thursday. Thanks to Dave Paulson, Ross LoForte, Ron Martin and the rest of the folks at Microsoft for putting on such a great event. Some of the highlights from the launch:

Ross's demo of AlwaysOn

It was the first time I'd seen it in action, and it is impressive. I received answers to some questions about the listener, and I hope to get it up and running in a lab environment this week.

A better understanding of the ColumnStore index

Data warehousing is one of the things on my learning list for this year, so the information Dave Paulson provided should come in handy.

Tempdb on a local drive

We are testing a lot of new products and platforms in our environment this year: VMware, Microsoft's Data Consolidation Appliance, and SQL Server 2012. This is another item to put on the list. I am looking forward to running performance comparisons of tempdb on our existing SAN versus a local drive.

Reminders of SQL Server 2008 features

Dave Paulson's presentation on Mission Critical features contained a section on SQL Server 2008 features, including Policy-Based Management and Resource Governor. I have done some initial research into these features, and I will work to implement them this year.

Infonomics

The highlight of the event was the keynote given by Doug Laney of Gartner. The topic was infonomics, or methods of treating information as an asset. Having worked in information technology for 13 years with the last five in database administration, I understand the value of the data I work with. But I'd never given thought to that data as an asset on a balance sheet.

Laney talked about four ways to determine the value of information:

  1. Replacement value - How much would you have to pay to replace your data?
  2. Revenue generation - How does your data contribute to the bottom line?
  3. Market value - How much can you sell your data for on the open market?
  4. Ransom - How much would you pay if your data was held hostage?

It was an extremely thought-provoking presentation, and I've started conversations with my leadership about bringing these ideas into our organization. Gartner is releasing two white papers in the coming weeks, and I am looking forward to reading them. I will post more on the topic after reading them.