This is something of a follow-up to my last blog post about a scale-down request issue (https://blobeater.blog/2018/11/07/azure-sql-database-aborting-scale-request/). I was confused, so confused that I ended up logging a support request with Microsoft. The issue was that I wanted to scale a database down from S1 to Basic; however, it was taking hours for a 1 GB database. Obviously something was up, but what?
Scaling an Azure SQL Database up or down is a very common task. Whilst common, it is also very easy to do via the Azure portal or even PowerShell. When you scale a database, be aware that Azure creates a replica of the original database at the new performance level and then switches connections over to that replica. But what do you do if you want to cancel the scale request?
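If you are scripting this, the cancellation can be done from PowerShell too. Below is a minimal sketch using the Az.Sql module; the resource group, server and database names are placeholders, not the ones from my environment.

```powershell
# Minimal sketch, assuming the Az.Sql module is installed and you are signed in.
# All names below are placeholders.

# Start the scale request asynchronously (e.g. S1 down to Basic)
Set-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -DatabaseName "testdb" -RequestedServiceObjectiveName "Basic" -AsJob

# Find the in-flight operation against the database
$ops = Get-AzSqlDatabaseActivity -ResourceGroupName "rg-demo" `
    -ServerName "sqlserver-demo" -DatabaseName "testdb"
$pending = $ops | Where-Object { $_.State -eq "InProgress" }

# Cancel it
Stop-AzSqlDatabaseActivity -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -DatabaseName "testdb" -OperationId $pending.OperationId
```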
I found myself in quite an interesting situation that left me perplexed for about five minutes. I was connected to an Azure SQL Database, configuring some users, when I executed a query and was presented with the following message:
Msg 3906, Level 16, State 2, Line 2
Failed to update database "testdb" because the database is read-only.
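If you want to confirm whether the database you are connected to really is read-only, a quick check is DATABASEPROPERTYEX. Here is a minimal sketch run through Invoke-Sqlcmd from the SqlServer PowerShell module; the server name and credentials are placeholders.

```powershell
# Minimal sketch, assuming the SqlServer module; server and credentials are placeholders.
Invoke-Sqlcmd -ServerInstance "sqlserver-demo.database.windows.net" -Database "testdb" `
    -Username "sqladmin" -Password "<password>" `
    -Query "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS Updateability;"
# Returns READ_ONLY when the database cannot be updated, READ_WRITE otherwise.
```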
If you have been reading my blog for a while now, you will know that a common technique for moving to Azure SQL DB is to use BACPAC files. Just as a reminder, see the image below.
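For completeness, here is a minimal sketch of what that looks like from the command line rather than the portal, assuming SqlPackage.exe is on the path; server, database, file names and credentials are all placeholders.

```powershell
# Minimal sketch: export a source database to a BACPAC, then import it into Azure SQL DB.
# All names and credentials are placeholders.

# Export the source database to a BACPAC file
SqlPackage.exe /Action:Export /SourceServerName:"onprem-sql01" `
    /SourceDatabaseName:"testdb" /TargetFile:"C:\temp\testdb.bacpac"

# Import the BACPAC into an Azure SQL logical server
SqlPackage.exe /Action:Import /TargetServerName:"sqlserver-demo.database.windows.net" `
    /TargetDatabaseName:"testdb" /TargetUser:"sqladmin" /TargetPassword:"<password>" `
    /SourceFile:"C:\temp\testdb.bacpac"
```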
When I was presenting my Azure SQL Database session at DataRelay (formerly SQLRelay), I was asked (over coffee) about auto-scaling capabilities. Quite simply, there is nothing out of the box to achieve this. Auto-scaling would be useful where you need a burst of capacity to meet higher workload demand for a limited time, you know, something like an "end of the day, Friday night sale" for your database.
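Until something native arrives, one workaround is to schedule the scale yourself, for example from an Azure Automation runbook or a scheduled script. A minimal sketch, assuming the Az.Sql module and placeholder names and tiers:

```powershell
# Minimal sketch: scale up ahead of a known busy period, scale back afterwards.
# Resource names and service objectives are placeholders.

# Before the "Friday night sale" workload hits
Set-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -DatabaseName "salesdb" -RequestedServiceObjectiveName "S3"

# Once the burst has passed (run on a second schedule)
Set-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -DatabaseName "salesdb" -RequestedServiceObjectiveName "S1"
```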
Okay, it is not really called Giant Azure SQL Database, but it's close. There is a new public preview vCore service tier called Hyperscale. The architecture behind this is very different to that of, say, a Standard tier SQL database. What drives this concept? A flexible storage architecture where the storage scales out as your database grows (up to the 100 TB mark).
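Provisioning one from PowerShell looks much like any other vCore database. A minimal sketch, assuming the Az.Sql module; the vCore count, compute generation and names are placeholders.

```powershell
# Minimal sketch: create a Hyperscale database on an existing logical server.
# Names, compute generation and vCore count are placeholders.
New-AzSqlDatabase -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -DatabaseName "hyperscaledb" -Edition "Hyperscale" `
    -ComputeGeneration "Gen5" -VCore 4
```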
It is always a good idea to test your failover processes when you have set up failover groups in Azure. I have the following setup:
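With a primary and secondary server joined to a failover group, a planned failover can be triggered from the secondary side. A minimal sketch with the Az.Sql module; the server, group and resource group names are placeholders, not the ones from my setup.

```powershell
# Minimal sketch: run the failover against the secondary server so it becomes primary.
# All names are placeholders.
Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName "rg-demo" `
    -ServerName "sqlserver-secondary" -FailoverGroupName "fg-demo"

# Confirm which role the server now holds
Get-AzSqlDatabaseFailoverGroup -ResourceGroupName "rg-demo" `
    -ServerName "sqlserver-secondary" -FailoverGroupName "fg-demo" |
    Select-Object FailoverGroupName, ReplicationRole
```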
Okay, honestly, I have done this once. I deleted an Azure SQL Database and then had to find the quickest way to recover it. The Azure portal is actually pretty good when it comes to deleting resources; for example, it will usually ask you to re-type the name of the resource to confirm deletion, so you can tell what a bad mistake I made.
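One way to recover is via the deleted-database backups that Azure keeps for the logical server. A minimal sketch with the Az.Sql module; names and the target service objective are placeholders.

```powershell
# Minimal sketch: find the most recent deleted backup of the database and restore it
# under a new name. All names and sizing values are placeholders.
$deleted = Get-AzSqlDeletedDatabaseBackup -ResourceGroupName "rg-demo" `
    -ServerName "sqlserver-demo" -DatabaseName "testdb" |
    Sort-Object DeletionDate | Select-Object -Last 1

Restore-AzSqlDatabase -FromDeletedDatabaseBackup `
    -ResourceGroupName "rg-demo" -ServerName "sqlserver-demo" `
    -TargetDatabaseName "testdb_restored" `
    -ResourceId $deleted.ResourceID -DeletionDate $deleted.DeletionDate `
    -Edition "Standard" -ServiceObjectiveName "S1"
```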
In my mind there are a couple of ways to move a database across resource groups, ranging from scripting to just using the Azure portal. I am going to use the Azure portal and do the following:
- Export a database in resource group X to a storage account Z.
- Import the file from the storage account Z into a database that is in resource group Y.
It’s just like a “backup and restore” strategy, all with the assumption that you are working within the same subscription ID.
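For anyone who prefers scripting over the portal, a minimal sketch of those same two steps with the Az.Sql module follows; the storage account, keys, servers and names are placeholders.

```powershell
# Minimal sketch of the two steps above; all names, keys and sizes are placeholders.

# Step 1: export the database in resource group X to storage account Z
$export = New-AzSqlDatabaseExport -ResourceGroupName "rg-x" -ServerName "sqlserver-x" `
    -DatabaseName "testdb" -StorageKeyType "StorageAccessKey" `
    -StorageKey "<storage-account-key>" `
    -StorageUri "https://storagez.blob.core.windows.net/bacpacs/testdb.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (Read-Host -AsSecureString -Prompt "SQL admin password")

# Check the export has completed before importing
Get-AzSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink

# Step 2: import the BACPAC into a server that lives in resource group Y
New-AzSqlDatabaseImport -ResourceGroupName "rg-y" -ServerName "sqlserver-y" `
    -DatabaseName "testdb" -StorageKeyType "StorageAccessKey" `
    -StorageKey "<storage-account-key>" `
    -StorageUri "https://storagez.blob.core.windows.net/bacpacs/testdb.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (Read-Host -AsSecureString -Prompt "SQL admin password") `
    -Edition "Standard" -ServiceObjectiveName "S1" -DatabaseMaxSizeBytes 5GB
```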
Have you ever heard of SQL Reserved vCores? Well, I hadn't until recently. With this concept you have the ability to save money by PRE-PAYING for your Azure SQL DB compute resources where you might currently be on a pay-as-you-go plan.