I thought it would be handy to capture some Perfmon graphs of what happens to backup throughput and CPU on my SQL Server when backup compression is on vs. off against my 35GB database.
No compression
BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DBNOTOCOMP.BAK'
WITH COPY_ONLY, STATS = 10
I will be using data collectors to show the Perfmon counters (below) for the duration of the backup command, as they paint a better picture than watching real-time Perfmon graphs.
% Processor Time

Backup Throughput/sec

Overall Time & size
5.5 minutes to complete:

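As a side note, if you want to spot-check the throughput counter from inside SQL Server while a backup runs, sys.dm_os_performance_counters exposes the same counter (CPU is an OS-level counter, so that one still needs Perfmon). A minimal sketch using the TATA database from this example; for "per sec" counters the DMV holds a cumulative value, so sample it twice and take the difference:

-- Cumulative value of the backup/restore throughput counter for TATA.
-- Run this before and after (or on an interval) and diff the values
-- to get an approximate bytes-per-second rate.
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Backup/Restore Throughput/sec'
  AND instance_name = 'TATA';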
With Compression
BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DByesCOMP.BAK'
WITH COPY_ONLY, STATS = 10, COMPRESSION
Again, the data collectors paint a nice picture for CPU and backup throughput.
% Processor Time

Backup Throughput/sec

Overall Time & size
2.5 minutes to complete:

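The overall time and size for both runs can also be pulled from the backup history in msdb rather than read off the reports above. A rough sketch (TATA is the database from this post; sizes are converted to MB):

-- Most recent backups of TATA with elapsed time and sizes.
SELECT TOP (5)
       bs.backup_start_date,
       bs.backup_finish_date,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS duration_sec,
       bs.backup_size / 1048576.0            AS backup_size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'TATA'
ORDER BY bs.backup_finish_date DESC;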
Findings
Some of these findings may be obvious to you, but here is what I found:
- Using backup compression uses more CPU (my average % Processor Time went from 5.05 to 18.93).
- A compressed backup is much smaller, so the backup completes much faster (the time taken was reduced by nearly 50%).
- Backup Throughput more than doubled when compressing.
- The compression ratio depends on many factors, but mine was 2.88 (the query below shows how to check it).
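If you would rather calculate the ratio than eyeball the file sizes, it is simply backup_size divided by compressed_backup_size in msdb.dbo.backupset. A quick sketch for the latest TATA backup (for an uncompressed backup the two columns are the same, so the ratio comes back as roughly 1):

-- Compression ratio of the most recent TATA backup.
SELECT TOP (1)
       backup_size / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
WHERE database_name = 'TATA'
ORDER BY backup_finish_date DESC;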
Trace Flag 3042
You can change the behaviour of the compressed backup. By default, the Database Engine uses a pre-allocation algorithm for the backup file; in my example the file was pre-allocated at the size of DByesCOMP.BAK, 4,646,286 KB.
With the trace flag enabled you can see the change in behaviour: instead, the file starts off small and grows as the backup runs until it reaches the true final size.
DBCC TRACEON (3042)
BACKUP DATABASE TATA TO DISK = 'd:\mylocation\DByesCOMP1.BAK'
WITH COPY_ONLY, STATS = 10, COMPRESSION
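If you do experiment with it, it is worth confirming the flag's status and switching it back off afterwards so it does not quietly change the behaviour of later backups:

-- Confirm whether trace flag 3042 is currently enabled
DBCC TRACESTATUS (3042)

-- Switch it back off to return to the default pre-allocation behaviour
DBCC TRACEOFF (3042)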

I did find that using this trace flag made the backup operation take slightly longer, with a reduced throughput rate. I personally have no use for it, but it was new behaviour to me, so I thought I would share it too.
