Let’s create a virtual machine in Azure with an imaged copy of SQL Server on it. I want to do this because, down the line, I want to show how you can set up automated backups to a storage account using the IaaS extension. When I do that I will most likely talk about the different backup options, because there are many.
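As a taster, here is a minimal sketch of provisioning such a VM from the SQL Server marketplace image using the azure-mgmt-compute Python SDK. Treat the names as placeholder assumptions: the resource group, VM size, admin credentials, and the pre-existing network interface the VM attaches to are all hypothetical.

```python
# A minimal sketch, assuming an existing resource group and NIC
# (all names below are hypothetical placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "sql-vm-demo-rg"
NIC_ID = ("/subscriptions/<subscription-id>/resourceGroups/sql-vm-demo-rg"
          "/providers/Microsoft.Network/networkInterfaces/sql-vm-nic")

compute_client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Marketplace image for SQL Server 2017 on Windows Server 2016;
# other offers and SKUs (e.g. SQLDEV) are available.
poller = compute_client.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP,
    "sql-vm-demo",
    {
        "location": "uksouth",
        "hardware_profile": {"vm_size": "Standard_DS2_v2"},
        "storage_profile": {
            "image_reference": {
                "publisher": "MicrosoftSQLServer",
                "offer": "SQL2017-WS2016",
                "sku": "Standard",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "sqlvmdemo",
            "admin_username": "azureadmin",
            "admin_password": "<strong-password>",  # placeholder
        },
        "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
    },
)
vm = poller.result()
print(vm.provisioning_state)
```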
Let’s get straight to the point. The official documentation states: “To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific vnets. This configuration enables you to build a secure network boundary for your applications”.
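To see what that looks like in practice, here is a minimal sketch using the azure-mgmt-storage Python SDK. The resource group, account name and subnet ID are hypothetical, and the subnet is assumed to already have the Microsoft.Storage service endpoint enabled.

```python
# A minimal sketch: deny all traffic by default, then grant access to one
# specific subnet (all names below are hypothetical placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
SUBNET_ID = ("/subscriptions/<subscription-id>/resourceGroups/demo-rg"
             "/providers/Microsoft.Network/virtualNetworks/demo-vnet"
             "/subnets/backend")

storage_client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

storage_client.storage_accounts.update(
    "demo-rg",
    "demostorageacct",
    {
        "network_rule_set": {
            "default_action": "Deny",  # the secure default the docs describe
            "virtual_network_rules": [
                {"virtual_network_resource_id": SUBNET_ID, "action": "Allow"}
            ],
        }
    },
)
```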
Navigate to your storage account. What is the default setting? It is shown below.
Microsoft has published an amazing blog post describing how they hot patch the database engine in Azure SQL Database to allow for minimal downtime when applying patches to SQL Server. We know that this is one of the benefits of Azure SQL Database, but now we get some insight into how it’s done.
I am not sure when this became available, but compatibility level 150 is now available for Azure SQL Database. The last time I created a database, a few weeks ago, only level 140 (SQL Server 2017) was available, so I think this is a recent change.
After some testing, if you create a new Azure SQL Database it defaults to compatibility level 150, as per the screenshot below (from the script action command).
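If you want to verify this without the portal, a quick check with pyodbc works; the connection details below are placeholders.

```python
# A minimal sketch: check the compatibility level of the current database
# (connection string values are placeholders).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
cursor = conn.cursor()

row = cursor.execute(
    "SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();"
).fetchone()
print(row.name, row.compatibility_level)

# An older database still on 140 can be moved up explicitly:
# cursor.execute("ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;")
```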
I want to write a quick summary post covering the many different types of Azure SQL Database available. I am not talking about elastic pools, VMs and so on, but rather the singleton (single database) type.
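As a starting point, you can ask a database which tier it is currently running under. A minimal sketch, assuming a pyodbc connection with placeholder credentials:

```python
# A minimal sketch: query the edition and service objective of the current
# database via sys.database_service_objectives (connection values are
# placeholders).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
row = conn.cursor().execute(
    "SELECT d.name, so.edition, so.service_objective "
    "FROM sys.database_service_objectives so "
    "JOIN sys.databases d ON d.database_id = so.database_id;"
).fetchone()
print(row.name, row.edition, row.service_objective)
```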
A small but useful change has been made to the Azure Portal for Data Platform objects.
Using a Shared Access Signature (SAS) is usually the best way to control access rights to Azure storage resources (such as a container holding backups) without exposing the primary or secondary storage keys. A SAS is based on a URI, and that URI is what I want to look at today.
I always use the Azure Storage Explorer to build a SAS token. Let’s dig into what the different parts mean.
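For comparison, you can build the same kind of token in code. Below is a minimal sketch using the azure-storage-blob Python package; the account, container, blob and key are placeholder values.

```python
# A minimal sketch: build a SAS for a single blob (names and key are
# hypothetical placeholders).
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas_token = generate_blob_sas(
    account_name="demostorageacct",
    container_name="backups",
    blob_name="AdventureWorks.bak",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),       # becomes sp=r
    expiry=datetime.utcnow() + timedelta(hours=2),  # becomes se=...
)

# The token is just a query string; appended to the blob URI it carries
# parameters such as sv (service version), sp (permissions), se (expiry)
# and sig (the HMAC signature).
url = (f"https://demostorageacct.blob.core.windows.net/backups/"
       f"AdventureWorks.bak?{sas_token}")
print(url)
```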
I only ever use Storage Explorer when managing blobs, files, and queues within my storage accounts. It is your single access point for all your storage needs, and I thoroughly recommend downloading it and using it (https://azure.microsoft.com/en-gb/features/storage-explorer/).
Do you want to identify the correct Service Tier and Compute Size (once known as the performance level) for your Azure SQL Database? How would you go about it? Would you use the DTU (Database Transaction Unit) calculator? What about the new vCore pricing model? How would you translate your current on-premises workload to the cloud?
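Whichever model you land on, the first step is measuring the workload. Here is a minimal sketch, assuming a pyodbc connection with placeholder credentials, that samples sys.dm_db_resource_stats (Azure SQL Database keeps roughly the last hour of usage at 15-second intervals):

```python
# A minimal sketch: find the recent peak resource usage of the current
# database (connection values are placeholders).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
row = conn.cursor().execute(
    "SELECT MAX(avg_cpu_percent)       AS peak_cpu, "
    "       MAX(avg_data_io_percent)   AS peak_data_io, "
    "       MAX(avg_log_write_percent) AS peak_log_write "
    "FROM sys.dm_db_resource_stats;"
).fetchone()
print(f"peak cpu {row.peak_cpu}%  data io {row.peak_data_io}%  "
      f"log write {row.peak_log_write}%")
```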
Okay, it is not really called Giant Azure SQL Database, but it’s close. There is a new vCore service tier in public preview called Hyperscale. The architecture behind this is very different to that of, say, a Standard tier SQL database. What drives this concept? A flexible storage architecture where you can scale out the storage as your database grows (up to the 100 TB mark).
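If you want to try it, a Hyperscale database can be created with plain T-SQL from the master database. A minimal sketch with pyodbc; the database name is a placeholder choice and HS_Gen5_2 is one of the Gen5 Hyperscale service objectives:

```python
# A minimal sketch: create a Hyperscale database via T-SQL, connected to
# master (server and credentials are placeholders).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=master;"
    "UID=<user>;PWD=<password>",
    autocommit=True,  # CREATE DATABASE cannot run inside a transaction
)
conn.cursor().execute(
    "CREATE DATABASE HyperscaleDemo "
    "(EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2');"
)
```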