CHECKDB’s Hidden Snapshot

I am in the middle of some research regarding CHECKDB and learnt something new: from SQL Server 2014 onwards, the database snapshot that is used to get to a transactionally consistent point in time is no longer hidden.

To understand the behaviour prior to 2014 I am going to quote Paul Randal, who states: “As the database snapshot is hidden, there is no control over where the snapshot files are placed – in fact they are created as alternate streams of the files comprising the source database” (http://www.sqlskills.com/blogs/paul/checkdb-from-every-angle-complete-description-of-all-checkdb-stages/)

In SQL Server 2014 things have changed – the snapshot is not hidden and it doesn’t use alternate streams.

DBCC CHECKDB ('Newdb')

Checking the location where the data files reside we can see the snapshot.

checkme.JPG

This disappears once CHECKDB finishes.
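If you do want control over where the snapshot files are placed, one long-standing option is to create your own snapshot and run CHECKDB against that instead – CHECKDB run against a snapshot does not create another internal snapshot. The snapshot name, logical file name, and path below are hypothetical; the logical name must match that of the source database's data file.

```sql
-- Hypothetical names/paths: adjust to your own database layout.
-- NAME must be the logical name of Newdb's data file.
CREATE DATABASE Newdb_Snap ON
    (NAME = Newdb_Data, FILENAME = 'E:\Snapshots\Newdb_Data.ss')
AS SNAPSHOT OF Newdb;
GO

-- Checking the snapshot checks the source database as of snapshot creation time
DBCC CHECKDB ('Newdb_Snap');
GO

DROP DATABASE Newdb_Snap;
```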


SQL Server – RECONFIGURE

When you want to change configuration settings for your SQL Server you would either do it via Management Studio (under server properties) or via T-SQL, calling sp_configure.

For example, I usually change the cost threshold for parallelism from its low default value of 5, which is what I will show you now on my SQL Server – along with a side-effect.

The side-effect is that your plan cache gets flushed – yes, it really does.

Let’s check the cache before changing the setting (adapted from Kimberly Tripp: http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/)

SELECT objtype AS [CacheType],
    COUNT_BIG(*) AS [Total Plans],
    SUM(CAST(size_in_bytes AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs] DESC
GO

cacge.JPG

Let’s now change the setting and re-check the cache.

EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
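Note that cost threshold for parallelism is an advanced option, so on a server where advanced options have not been exposed yet, sp_configure will refuse the change. A minimal sketch (the value 50 is just my example) that enables them first:

```sql
-- 'cost threshold for parallelism' is an advanced option;
-- expose advanced options before changing it
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```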

cache1.JPG

SELECT objtype AS [CacheType],
    COUNT_BIG(*) AS [Total Plans],
    SUM(CAST(size_in_bytes AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs] DESC
GO

cach4.JPG

Yes, it has flushed. I couldn’t find anything in Books Online about this behaviour; however, it is mentioned in the following whitepaper: http://msdn.microsoft.com/en-us/library/dn148262.aspx – see the “flush the entire plan cache” section on page 22, where the RECONFIGURE command is mentioned. I am not sure if this applies to all options or just a subset of them – more testing is required.

I was intrigued, so I did one more test – enabling backup compression, which is currently off.

cach5.JPG

Or you can check it this way:

SELECT *
FROM sys.configurations
WHERE name = 'backup compression default' ;
GO
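A useful detail here is that sys.configurations distinguishes between the configured value and the value actually in use – running RECONFIGURE is what brings the two in line. A quick sketch for spotting changes that have been made but not yet applied (assumes only the catalog view itself):

```sql
-- Rows where sp_configure has been run but RECONFIGURE has not;
-- value/value_in_use are sql_variant, so convert before comparing
SELECT name, value, value_in_use
FROM sys.configurations
WHERE CONVERT(INT, value) <> CONVERT(INT, value_in_use);
```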

afasefswgf

On the same test server my cache now looks like:

llllyr.JPG

Let’s enable backup compression and re-check.

EXEC sp_configure 'backup compression default', 1 ;
RECONFIGURE
GO

gggg.JPG

Re-checking the cache – well, no effect this time.

SELECT objtype AS [CacheType],
    COUNT_BIG(*) AS [Total Plans],
    SUM(CAST(size_in_bytes AS DECIMAL(18, 2))) / 1024 / 1024 AS [Total MBs]
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY [Total MBs] DESC
GO

hh.JPG

This is something that I would personally talk about to a group of people, and it is my entry to https://sqlbek.wordpress.com/2016/10/25/t-sql-tuesday-84-growing-new-speakers/ – yes, it would be demo-heavy.

tsql.JPG

SQL Server Backup Compression

I thought that it would be handy to capture some Perfmon graphs of what happens to backup throughput and CPU on my SQL Server with backup compression on vs. off, against my 35 GB database.

No compression

BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DBNOTOCOMP.BAK'
WITH COPY_ONLY, STATS = 10

I will be using data collectors to show the Perfmon counters (below) for the duration of the backup command – it just paints a better picture than running real-time Perfmon graphs.

% Processor Time

perf

Backup Throughput/sec

perf1

Overall Time & size

5.5 minutes to do:

perf3

With Compression

BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DByesCOMP.BAK'
WITH COPY_ONLY, STATS = 10, COMPRESSION

Again, data collectors paint a nice picture for CPU and backup throughput.

% Processor Time

perf5

Backup Throughput/sec

perf6

Overall Time & size

2.5 minutes to do:

perf7

Findings

Some of the findings may be obvious to you; having said that, here is what I found:

  • Using backup compression uses more CPU (my average % Processor Time went from 5.05 to 18.93).
  • A compressed backup is much smaller, hence the backup completes much faster (the time taken was reduced by nearly 50%).
  • Backup throughput more than doubled when compressing.
  • The compression ratio depends on many factors, but my ratio was 2.88.
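The compression ratio can also be read straight out of msdb’s backup history – backupset records both the original and compressed sizes. A sketch (the database name TATA is from my test; substitute your own):

```sql
-- Recent backups of TATA, with sizes in MB and the compression ratio
SELECT TOP (5)
    database_name,
    backup_start_date,
    backup_size / 1048576.0 AS [Backup MB],
    compressed_backup_size / 1048576.0 AS [Compressed MB],
    backup_size / compressed_backup_size AS [Ratio]
FROM msdb.dbo.backupset
WHERE database_name = 'TATA'
ORDER BY backup_start_date DESC;
```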

Trace Flag 3042

You can change the behaviour of a compressed backup. By default the Database Engine uses a pre-allocation algorithm for the backup file; for my example it pre-allocated the file at the size of DByesCOMP.BAK = 4646286 KB.

Using the trace flag you can see the change in behaviour: instead, the file starts off at a low size and grows its way up to the true final size.

DBCC TRACEON (3042)
BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DByesCOMP1.BAK'
WITH COPY_ONLY, STATS = 10, COMPRESSION

perf8.JPG

I did find that using this trace flag made the backup operation slightly longer, with a reduced throughput rate. I personally have no use for this trace flag, but it was new behaviour for me so I thought I would share it too.

perf10
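If you do experiment with this flag, it is worth confirming its status and switching it off again afterwards, so it doesn’t linger on the session or server. A sketch using the standard DBCC trace-flag commands:

```sql
DBCC TRACESTATUS (3042);   -- confirm whether the flag is on, and at what scope
DBCC TRACEOFF (3042);      -- session-level off; use (3042, -1) for global scope
```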

SSMS 2016 Bug

I have been using SSMS (SQL Server Management Studio) 2016, build number 13.0.15900.1, and noticed that if you want to manage compression you can’t, as the option is greyed out.

Microsoft SQL Server Management Studio 13.0.15900.1

ssms

The actual message presented states: “Tables with sparse columns cannot be compressed” – yet I have no sparse columns.

I never had this issue with SQL Server 2014 Management Studio:

Microsoft SQL Server Management Studio 12.0.4100.1

ssms1

The next release of SSMS is now available, build number 13.0.16000.28 – see https://msdn.microsoft.com/en-us/library/mt238290.aspx

I am happy because it addresses this issue; item number 4 from the above link states: “Fixed the issue that ‘Manage Compression’ menu item is disabled for the user table nodes in object explorer tree.”


More bytes for Row versioning

I like row versioning – see this link for more details: https://technet.microsoft.com/en-us/library/ms189122(v=sql.105).aspx

If your database is enabled for one of the isolation levels that use row versioning, then when a row is updated it will have 14 bytes of versioning information added to it.

Before anything is enabled:

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc,
    snapshot_isolation_state
FROM sys.databases WHERE database_id = 18

iq1

Let’s check a table before enabling anything.

SELECT index_id, index_type_desc, alloc_unit_type_desc, index_depth, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID('AdventureWorks2018'),
    OBJECT_ID('AWBuildVersion'), NULL, NULL, 'DETAILED')

dfsf

Switch it on:

ALTER DATABASE [AdventureWorks2018]
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
GO

ALTER DATABASE [AdventureWorks2018]
SET ALLOW_SNAPSHOT_ISOLATION ON 
GO

SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc,
    snapshot_isolation_state
FROM sys.databases WHERE database_id = 18

jgjkgtjkgkgk

Let’s do some work (an UPDATE statement):

UPDATE [dbo].[AWBuildVersion]
SET ModifiedDate = GETDATE()
GO

SELECT index_id, index_type_desc, alloc_unit_type_desc, index_depth, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID('AdventureWorks2018'),
    OBJECT_ID('AWBuildVersion'), NULL, NULL, 'DETAILED')

hjjjjj

56 + 14 = 70, meaning we have a 14-byte overhead. I did some research on this, ended up going to my bookshelf and dug out the SQL Server 2012 Internals textbook: the 14-byte overhead is needed for “the actual pointer to the file, page, and row in tempdb which needs 8 bytes and 6 bytes are needed to store the XSN” (Chapter 13, page 823).
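Those versioned rows live in tempdb’s version store, which you can peek at while versioning is active. A sketch using the standard file-space DMV (run it in tempdb, or it reports on the current database’s files):

```sql
-- Pages reserved for the version store in tempdb, converted to KB
USE tempdb;
SELECT SUM(version_store_reserved_page_count) * 8 AS [Version Store KB]
FROM sys.dm_db_file_space_usage;
```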

What happens if I disable row versioning and run an update?

ALTER DATABASE [AdventureWorks2018]
SET READ_COMMITTED_SNAPSHOT OFF
GO

ALTER DATABASE [AdventureWorks2018]
SET ALLOW_SNAPSHOT_ISOLATION OFF
GO

UPDATE [dbo].[AWBuildVersion]
SET ModifiedDate = '2009-10-13 00:00:00.000'
GO


SELECT index_id, index_type_desc, alloc_unit_type_desc, index_depth, max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID('AdventureWorks2018'),
    OBJECT_ID('AWBuildVersion'), NULL, NULL, 'DETAILED')

The 14-byte overhead is no longer there – due to the fact that row versioning is no longer on.

lllll