Did you know that SQL Server tracks suspect pages and records the details in the msdb database, in a table called suspect_pages? Well it does, but the question I want answered is: what happens once a page listed in suspect_pages is fixed? Is the row automatically removed or updated, or do we as administrators have to do anything manually?
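For reference, the table is easy to inspect; the event_type column tells you whether a row is still an error or has been dealt with:

```sql
-- Inspect the suspect pages SQL Server has logged.
-- event_type 1-3 = errors (823/824 error, bad checksum, torn page);
-- event_type 4, 5 and 7 = page was restored, repaired or deallocated.
SELECT DB_NAME(database_id) AS [database],
       file_id,
       page_id,
       event_type,
       error_count,
       last_update_date
FROM   msdb.dbo.suspect_pages;
```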
If you know me by now, you will know I like rebuilding indexes, and that is no different in Azure. We now have the ability to resume a paused rebuild operation rather than cancelling it (a feature currently in public preview). I like this because it gives me the flexibility to pause a rebuild if I feel it is taking up too much DTU (Database Transaction Unit) capacity, freeing up resources for other operations.
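As a minimal sketch (the index and table names here are made up for illustration):

```sql
-- Start an online, resumable rebuild.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON);

-- From another session: pause it to free up DTUs...
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;

-- ...and later pick up exactly where it left off.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;

-- Check progress of any paused or running resumable operations.
SELECT name, state_desc, percent_complete
FROM   sys.index_resumable_operations;
```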
SQL Server deadlocks, also known as "the deadly embrace", occur when two processes each hold a lock the other needs; neither can advance, because each is preventing the other from acquiring the resource it requires.
In this quick video clip I create a deadlock scenario and show how to get deadlock information via the system_health extended events session.
Below is the T-SQL for querying the system_health session. This was run against a SQL Server 2014 install, so you may need to change the file location.
SELECT TOP (1)
       CONVERT(xml, event_data).value('(event[@name="xml_deadlock_report"]/@timestamp)', 'datetime2') AS [TimeOfDeadlock],
       CONVERT(xml, event_data).query('/event/data/value/child::*') AS [DeadlockGraph]
FROM   sys.fn_xe_file_target_read_file('C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\system_health*.xel', NULL, NULL, NULL)
WHERE  object_name = 'xml_deadlock_report'
ORDER BY [TimeOfDeadlock] DESC;
A really quick one today; something that made me think for a minute, and I thought it might make others think too. So you have enabled TDE (Transparent Data Encryption) on your SQL Server database (you can see my previous posts here: https://blobeater.blog/?s=tde&submit=Search), and in the back of your mind you know TempDB gets encrypted too.
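If you want to check for yourself, here is a quick sketch comparing the is_encrypted flag with the DMV that actually reports TempDB's encryption state:

```sql
-- is_encrypted only flips to 1 for the user database you encrypted;
-- TempDB reports its state via sys.dm_database_encryption_keys instead.
SELECT d.name,
       d.is_encrypted,
       k.encryption_state   -- 3 = encrypted
FROM   sys.databases d
LEFT JOIN sys.dm_database_encryption_keys k
       ON k.database_id = d.database_id;
```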
A very quick post for today. Recently I have been working on some code to gather metrics around SQL Server memory; more specifically, how much memory is on your server, your total/target memory and PLE (Page Life Expectancy). (If you want to know more about total vs target memory, see this link: https://blobeater.blog/2017/03/01/sql-server-target-vs-total-memory/)
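My full script is not reproduced here, but as a sketch, the core numbers all come from the same performance counters DMV:

```sql
-- Total vs target server memory, plus page life expectancy.
SELECT [counter_name],
       [cntr_value]
FROM   sys.dm_os_performance_counters
WHERE  ([object_name] LIKE '%Memory Manager%'
        AND [counter_name] IN ('Total Server Memory (KB)',
                               'Target Server Memory (KB)'))
   OR  ([object_name] LIKE '%Buffer Manager%'
        AND [counter_name] = 'Page life expectancy');
```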
A quick post that is hopefully useful: I wanted a quick way to find when a database file changed size, by how much, and who caused it.
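One way to get all three pieces of information, assuming the default trace is enabled (it is by default), is to read it for the auto-grow events:

```sql
-- Read the default trace for data/log file auto-grow events.
-- IntegerData is the growth in 8 KB pages.
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT StartTime,
       DB_NAME(DatabaseID)        AS DatabaseName,
       [FileName],
       (IntegerData * 8) / 1024.0 AS GrowthMB,
       LoginName
FROM   sys.fn_trace_gettable(@path, DEFAULT)
WHERE  EventClass IN (92, 93)     -- Data / Log File Auto Grow
ORDER BY StartTime DESC;
```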
Over the past six months I have been pushing myself to use extended events (XEvents) instead of SQL Trace; once you get past the learning curve it is probably the way to go. If you are operating in the Azure space then you have no choice: extended events are what you will need if you want to collect this information against Azure SQL Database. There are some subtle differences in how you write the T-SQL between Azure SQL Database and your locally installed SQL Servers.
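As an illustration of one such difference, a locally installed SQL Server uses server-scoped sessions while Azure SQL Database requires database-scoped ones (the session name below is made up for the example):

```sql
-- On-premises SQL Server: event sessions are server-scoped.
CREATE EVENT SESSION [track_waits] ON SERVER
ADD EVENT sqlos.wait_info
ADD TARGET package0.ring_buffer;

-- Azure SQL Database: the same session must be database-scoped.
CREATE EVENT SESSION [track_waits] ON DATABASE
ADD EVENT sqlos.wait_info
ADD TARGET package0.ring_buffer;
```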
This is a new feature where you can issue T-SQL commands via a T-SQL editor straight from the Azure portal. This is great because sometimes you do not want to keep switching between your SSMS (SQL Server Management Studio) connection and the Azure portal just to issue basic commands, and using SSMS assumes you have the right version of the tools installed and the correct ports open locally.
This is in public preview and you can do the following:
- Query dynamic management views for real-time workload insights (which is what I will be doing).
- Issue ad-hoc queries.
- Manage your user authentication.
To access the editor, go to your SQL database, select the Tools button and then select the Query editor button.
Here you will see the Query editor option.
You will need to log in. I only have SQL authentication set up, but if you use Active Directory then you have the choice of the other options.
If you fail you will obviously be told.
You want to see a blue authentication image!
Now let's issue a query, instead of switching over to my local SSMS connection. How about some database-level wait stats? I really need to look into that IO_COMPLETION wait type!
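If you want to try the same thing, a minimal database-level wait stats query, using the database-scoped DMV that Azure SQL Database provides, looks like this:

```sql
-- Database-scoped wait stats: Azure SQL Database's counterpart
-- to the server-level sys.dm_os_wait_stats.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;
```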
Then, if you really want, you can save your query (which saves it to your local machine as a .sql file):
Saved locally as per below.
Or you can open a SQL file from your local machine (the Open query option) and have it imported into your Azure portal session; for example, you could have the wait stats script pre-written and just open it up whenever you like.
It will not create a new window; it simply overwrites the contents of the currently open query window. I can work with that.
I was building some basic queries around time zone manipulation, and I am happy to say I enjoyed myself, as I found a way to get a time based on a geographic region.
This new feature is available from SQL Server 2016, which obviously means Azure SQL Database too. It is called AT TIME ZONE.
The AT TIME ZONE implementation relies on the following registry hive, where the installed time zones are stored: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones. This list is exposed via the following query:
SELECT * FROM sys.time_zone_info
Looking at Books Online (https://msdn.microsoft.com/en-us/library/mt612790.aspx), the documentation is wrong. Look at the screenshot: it states this is supported from SQL Server 2008, but it is not; it only works on 2016 and later.
Running it on a pre-2016 SQL Server you will get the following message:
Msg 208, Level 16, State 1, Line 1
Invalid object name 'sys.time_zone_info'.
On a correct SQL Server version it will return the following (snippet).
This information is key, as we will use it to build the time zone queries (which are very basic).
SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'GMT Standard Time' AS [My Location];
GO
SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'US Mountain Standard Time' AS [US Mountain Standard Time];
GO
SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'Central Asia Standard Time' AS [Central Asia Standard Time];
GO
SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'AUS Central Standard Time' AS [AUS Central Standard Time];
Based on the time zone offset you will get the relevant local time.
Have you ever wanted to start a SQL Trace (a server-side trace!) from SQL Agent, let it stop after X minutes, and write the .trc file to a location? Well, this is how you would do it:
ALTER PROCEDURE [dbo].[ScriptTrace]
    @RunTime  INT,
    @FileName NVARCHAR(100)
AS
SET NOCOUNT ON;

DECLARE @rc          INT;
DECLARE @TraceID     INT;
DECLARE @maxfilesize BIGINT; -- specify max file size
DECLARE @Now         DATETIME;
DECLARE @StopTime    DATETIME;
DECLARE @FileStamp   NVARCHAR(25);
DECLARE @COMMAND     NVARCHAR(255);

SET @Now      = GETDATE();
SET @StopTime = DATEADD(MINUTE, @RunTime, @Now);

-- Build a timestamped file name
SET @FileName  = @FileName + '_';
SET @FileStamp = CAST(DATEPART(YEAR,   GETDATE()) AS NVARCHAR)
               + CAST(DATEPART(MONTH,  GETDATE()) AS NVARCHAR)
               + CAST(DATEPART(DAY,    GETDATE()) AS NVARCHAR)
               + CAST(DATEPART(HOUR,   GETDATE()) AS NVARCHAR)
               + CAST(DATEPART(MINUTE, GETDATE()) AS NVARCHAR);
SET @FileName  = @FileName + @FileStamp;

SET @maxfilesize = 18000;

-- Create the trace; SQL Server appends the .trc extension for you.
EXEC @rc = sp_trace_create @TraceID OUTPUT, 0, @FileName, @maxfilesize, @StopTime;
IF (@rc != 0) GOTO error;

SET @COMMAND = 'SQL Trace Log @' + @FileName + ' activated for '
             + CAST(@RunTime AS VARCHAR) + ' minute(s)';
PRINT @COMMAND;

-- Set the events, for example
DECLARE @on BIT;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 14, 1,  @on;
EXEC sp_trace_setevent @TraceID, 14, 9,  @on;
EXEC sp_trace_setevent @TraceID, 14, 6,  @on;
EXEC sp_trace_setevent @TraceID, 14, 10, @on;
EXEC sp_trace_setevent @TraceID, 14, 14, @on;
EXEC sp_trace_setevent @TraceID, 14, 11, @on;
EXEC sp_trace_setevent @TraceID, 15, 10, @on;
EXEC sp_trace_setevent @TraceID, 15, 14, @on;
EXEC sp_trace_setevent @TraceID, 15, 18, @on;
EXEC sp_trace_setevent @TraceID, 15, 11, @on;
EXEC sp_trace_setevent @TraceID, 15, 12, @on;

-- Set the filters
DECLARE @intfilter    INT;
DECLARE @bigintfilter BIGINT;

-- filter 1: include DatabaseID 5 only
EXEC sp_trace_setfilter @TraceID, 3, 1, 0, 5;
-- filter 2: exclude SQL Profiler itself
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Profiler';

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;
SELECT TraceID = @TraceID;
GOTO finish;

error:
SELECT ErrorCode = @rc;

finish:
GO

-- Now call it: run for 1 minute, writing to G:\Logs\Trace_<stamp>.trc
EXEC dbo.ScriptTrace 1, 'G:\Logs\Trace';
output example: SQL Trace Log @G:\Logs\Trace_20153101419 activated for 1 minute(s)