Scaling an Azure SQL Database up or down is a very common task, and it is easy to do via the Azure portal or even PowerShell. When you scale a database, be aware that Azure creates a replica of the original database at the new performance level and then switches connections over to that replica. But what do you do if you want to cancel the scale request?
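If you want to try cancelling one yourself, here is a rough sketch using the Az.Sql PowerShell module. The resource group, server and database names are placeholders, and it is worth double-checking the operation property names against your module version.

```powershell
# A sketch, assuming the Az.Sql module and placeholder resource names.
$rg  = "my-resource-group"
$srv = "my-sql-server"
$db  = "testdb"

# Kick off a scale to a new service objective; -AsJob keeps the session free.
Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $srv -DatabaseName $db `
    -RequestedServiceObjectiveName "S3" -AsJob

# Find the in-flight operation on the database...
$op = Get-AzSqlDatabaseActivity -ResourceGroupName $rg -ServerName $srv -DatabaseName $db |
      Where-Object { $_.State -eq "InProgress" }

# ...and cancel it while it is still in progress.
Stop-AzSqlDatabaseActivity -ResourceGroupName $rg -ServerName $srv -DatabaseName $db `
    -OperationId $op.OperationId
```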
One of the features Microsoft wants us to use with Azure SQL Database is Automatic Tuning. You can think of it as a step into the world of self-running, self-tuning databases.
It is said to be safe, reliable and proven, using complex algorithms and built-in intelligence, and it can do the following (see this link for more details: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning):
- CREATE INDEX – identifies indexes that may improve performance of your workload, creates indexes, and automatically verifies that performance of queries has improved.
- DROP INDEX – identifies redundant and duplicate indexes daily, except for unique indexes, and indexes that were not used for a long time (>90 days). Please note that at this time the option is not compatible with applications using partition switching and index hints.
- FORCE LAST GOOD PLAN – identifies SQL queries using an execution plan that is slower than the previous good plan, and forces queries to use the last known good plan instead of the regressed plan.
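To give a rough idea of what switching these on looks like, here is a sketch using T-SQL sent through Invoke-Sqlcmd (SqlServer module); the server name, database and credentials are placeholders.

```powershell
# A sketch: enable the three Automatic Tuning options at the database level.
$tune = "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);"

Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
    -Database "testdb" -Username "sqladmin" -Password "StrongP@ssw0rd!" `
    -Query $tune

# Check the desired versus actual state of each option.
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
    -Database "testdb" -Username "sqladmin" -Password "StrongP@ssw0rd!" `
    -Query "SELECT name, desired_state_desc, actual_state_desc FROM sys.database_automatic_tuning_options;"
```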
I recently found myself in an interesting situation that left me perplexed for about five minutes. I was connected to an Azure SQL Database configuring some users when I executed a query and was presented with the following message:
Msg 3906, Level 16, State 2, Line 2
Failed to update database "testdb" because the database is read-only.
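If you hit this, a quick sanity check is to ask the database whether it is actually writable. A sketch, assuming the SqlServer module; connection details are placeholders.

```powershell
# Returns READ_WRITE or READ_ONLY for the database you are connected to.
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
    -Database "testdb" -Username "sqladmin" -Password "StrongP@ssw0rd!" `
    -Query "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;"

# READ_ONLY usually means you are connected to a read-only replica/secondary,
# or the database has been explicitly set to read-only.
```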
If you have been reading my blog for a while, you will know that a common technique for moving to Azure SQL Database is to use BACPAC files. Just a reminder, see the image below.
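The import side can also be scripted rather than clicked through. Here is a rough sketch with the Az.Sql module, where every name, tier, key and credential is a placeholder:

```powershell
# A sketch: import a BACPAC from blob storage into a new Azure SQL Database.
New-AzSqlDatabaseImport -ResourceGroupName "my-resource-group" -ServerName "my-sql-server" `
    -DatabaseName "testdb" -Edition "Standard" -ServiceObjectiveName "S2" `
    -DatabaseMaxSizeBytes 250GB `
    -StorageKeyType "StorageAccessKey" -StorageKey "<storage-account-key>" `
    -StorageUri "https://mystorageaccount.blob.core.windows.net/bacpacs/testdb.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (ConvertTo-SecureString "StrongP@ssw0rd!" -AsPlainText -Force)
```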
When I was presenting my Azure SQL Database session at DataRelay (formerly SQLRelay), I was asked (over coffee) about auto-scaling capabilities. Quite simply, there is nothing out of the box to achieve this. Auto scaling would be useful when you need a burst of capacity to meet higher workload demand for a limited time, you know, something like an “end of the day, Friday night sale” for your database.
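You can approximate it yourself, though. Below is a sketch of one approach, a scheduled script (an Azure Automation runbook, for example) that scales up before the busy window and back down afterwards; all the names and tiers are placeholder assumptions, not a built-in feature.

```powershell
# A sketch of a scheduled "scale up for the busy window" script.
param(
    [string]$ResourceGroup = "my-resource-group",
    [string]$Server        = "my-sql-server",
    [string]$Database      = "testdb",
    [string]$TargetSlo     = "P2"   # e.g. scale up ahead of the Friday night sale
)

# Move the database to the requested service objective.
Set-AzSqlDatabase -ResourceGroupName $ResourceGroup -ServerName $Server `
    -DatabaseName $Database -RequestedServiceObjectiveName $TargetSlo

# A second schedule would run the same script with $TargetSlo set back to the
# normal tier (say "S3") once the busy period is over.
```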
Okay, it is not really called Giant Azure SQL Database, but it's close. There is a new vCore service tier in public preview called Hyperscale. The architecture behind it is very different from that of a Standard tier database, for example. What drives this concept? A flexible storage architecture where you can scale out the storage as your database grows (up to the 100 TB mark).
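Creating one looks much like creating any other vCore database. A sketch with the Az.Sql module; the names, vCore count and hardware generation are placeholder assumptions.

```powershell
# A sketch: create a database in the Hyperscale service tier.
New-AzSqlDatabase -ResourceGroupName "my-resource-group" -ServerName "my-sql-server" `
    -DatabaseName "hyperscale-db" -Edition "Hyperscale" `
    -VCore 2 -ComputeGeneration "Gen5"
```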
It is always a good idea to test your failover processes when you have set up failover groups in Azure. I have the following setup:
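With a primary and a secondary server paired in a failover group, the test itself can be scripted. Here is a rough sketch with the Az.Sql module, where the resource group, server and failover group names are placeholder assumptions:

```powershell
# A sketch: check the current roles, then perform a planned failover.
$rg = "my-resource-group"
$fg = "my-failover-group"

# See which server currently holds the primary role.
Get-AzSqlDatabaseFailoverGroup -ResourceGroupName $rg -ServerName "server-primary" `
    -FailoverGroupName $fg | Select-Object FailoverGroupName, ReplicationRole

# Run the failover against the current secondary, which then becomes primary.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $rg -ServerName "server-secondary" `
    -FailoverGroupName $fg
```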
Okay, honestly, I have done this once: I deleted an Azure SQL Database and then had to find the quickest way to recover it. The Azure portal is actually pretty good when it comes to deleting resources; for example, it will usually ask you to re-type the name of the resource to confirm deletion, so you can tell how bad a mistake I made.
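The good news is that a deleted database can usually be restored from its retained backups. A sketch with the Az.Sql module; server, database and resource group names are placeholders.

```powershell
# A sketch: find the most recent deleted backup and restore it to a new name.
$rg  = "my-resource-group"
$srv = "my-sql-server"

# If the database was deleted more than once, take the latest deletion.
$deleted = Get-AzSqlDeletedDatabaseBackup -ResourceGroupName $rg -ServerName $srv -DatabaseName "testdb" |
           Sort-Object DeletionDate -Descending | Select-Object -First 1

# Restore the deleted database under a new name; edition/objective are placeholders.
Restore-AzSqlDatabase -FromDeletedDatabaseBackup `
    -ResourceGroupName $rg -ServerName $srv `
    -TargetDatabaseName "testdb_restored" `
    -DeletionDate $deleted.DeletionDate -ResourceId $deleted.ResourceId `
    -Edition "Standard" -ServiceObjectiveName "S2"
```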
The infamous setting that we all know and love – MAXDOP. Did you know that you can actually control MAXDOP when using Azure SQL Database? You might not be able to tinker with the Cost Threshold for Parallelism setting but you sure can with MAXDOP.
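MAXDOP in Azure SQL Database is a database scoped configuration, so it is set per database rather than at the server level. A sketch of setting and checking it, assuming the SqlServer module; connection details are placeholders.

```powershell
# Set MAXDOP for the database (4 is an arbitrary example value).
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
    -Database "testdb" -Username "sqladmin" -Password "StrongP@ssw0rd!" `
    -Query "ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;"

# Verify the current value.
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" `
    -Database "testdb" -Username "sqladmin" -Password "StrongP@ssw0rd!" `
    -Query "SELECT name, value FROM sys.database_scoped_configurations WHERE name = 'MAXDOP';"
```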
In my mind there are a couple of ways to move a database across resource groups. They vary from scripting to just using the Azure portal. I am going to use the Azure portal and do the following:
- Export a database in resource group X to a storage account Z.
- Import the file from the storage account Z into a database that is in resource group Y.
It’s just like a “backup and restore” strategy, all with the assumption that you are working within the same subscription ID.
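For anyone who prefers scripting this over the portal, here is a rough sketch of the export half with the Az.Sql module; the import into the server in resource group Y mirrors the earlier New-AzSqlDatabaseImport sketch. Every name, key and URL below is a placeholder.

```powershell
# A sketch: export the database in resource group X to storage account Z.
$storageUri = "https://mystorageaccountz.blob.core.windows.net/bacpacs/testdb.bacpac"
$storageKey = "<storage-account-key>"

$export = New-AzSqlDatabaseExport -ResourceGroupName "resource-group-x" `
    -ServerName "server-in-x" -DatabaseName "testdb" `
    -StorageKeyType "StorageAccessKey" -StorageKey $storageKey -StorageUri $storageUri `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (ConvertTo-SecureString "StrongP@ssw0rd!" -AsPlainText -Force)

# Poll until the export finishes, then run New-AzSqlDatabaseImport against the
# server in resource group Y, pointing at the same .bacpac file.
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink
```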