I want to show you how I went from multiple single SQL databases in Azure to a database elastic pool within a new dedicated SQL Server. Once set up, I create and use elastic jobs. This post is long, but I am sure you will find it useful.
- Create a new “logical” SQL Server.
- Create a new elastic pool within this logical SQL Server.
- Move the data from the old single SQL databases into the new elastic pool (there are a couple of ways to do this; I used the built-in backups).
- Confirm application connection.
- Decommission single SQL databases.
- Create / set up an elastic job.
- Check the controller database.
Creating Database Elastic Pool
So, I create a new logical SQL Server that will be the “new home” for my elastic pool.
Next I create the pool.
Select and configure where necessary; my pool is called SQLPOOLBOY, which is a standard 100 eDTU pool.
Currently it is empty; the idea is to move the existing databases from a different server into it. There are a couple of ways to do this: I could export the data via my storage account and import it back, I could use the COPY command, or I could simply leverage the built-in backup system, which is what I will do.
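As a side note, the COPY route I mentioned but did not take can be done in T-SQL with `CREATE DATABASE ... AS COPY OF`, targeting the pool directly. A minimal sketch, assuming a hypothetical source server called OldServer and a database called SalesDB:

```sql
-- Hypothetical names: OldServer and SalesDB are placeholders.
-- Run while connected to the master database of the NEW logical server.
-- The copy lands straight in the SQLPOOLBOY elastic pool.
CREATE DATABASE SalesDB
    AS COPY OF OldServer.SalesDB
    ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = SQLPOOLBOY ) );
```

The copy runs asynchronously, so you would poll sys.dm_database_copies (on the target server) to see when it completes.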
So connect to your pool.
With the built-in backups we can create the database from it. The tool is powerful; it will search across all your SQL Servers and give you a list of databases (from your different servers) with the relevant backup set.
Click the +Create Database option.
You will then have the list of databases and the last available backup, which I believe is the last hourly transaction log backup.
Repeat the same process for each database that you want to bring into the elastic pool. The end goal will look like the below.
There you have it: the databases have been moved into an elastic pool that resides on a new SQL Server. Because I use contained users, I need not do any further fixes; the only thing to change is the application connection strings. As a side note, the old single databases have now been deleted.
Once you have set up the pool, further configuration changes are very easy to make. Just be careful with the cost implications of certain actions; for example, increasing the pool eDTU will definitely change your cost projections. Below shows a 2000 eDTU pool costing £3350/month.
The pool setup works well for this example because the databases do NOT peak simultaneously, so I do not reach my upper capped limit.
If, for example, they all peak at the same time, then I will run into performance issues.
I manufactured a situation where two databases peaked at the same time – you should expect high eDTU usage like below (99.91%).
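If you prefer to check pool utilisation with a query rather than the portal charts, the master database of the logical server exposes the sys.elastic_pool_resource_stats view. A minimal sketch (SQLPOOLBOY is my pool name from earlier):

```sql
-- Run against the master database of the logical server.
-- Shows the most recent resource-usage samples for the pool.
SELECT TOP (20)
       end_time,
       elastic_pool_name,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM   sys.elastic_pool_resource_stats
WHERE  elastic_pool_name = 'SQLPOOLBOY'
ORDER BY end_time DESC;
```

Sustained high percentages across these columns are the query-level equivalent of the 99.91% spike shown above.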
This lays the foundation for creating and using elastic jobs.
Creating Elastic jobs
Why use them? Well, we can easily do schema changes, credential management, data updates and database maintenance across the group. To leverage the true power of elastic jobs you should go down the PowerShell route, because the portal has reduced functionality.
There are four key components to an elastic job – it requires a Service Bus, a SQL Server, a SQL database and access to a storage account, all of which reside within a resource group. My understanding is that it needs the SQL database because it uses it as a controller database.
The problem I had was that I did not have permissions within the storage account; I was given an error during deployment, and every time I logged in to “manage” a job I would receive the below message.
This went on for quite some time and seemed to be an endless loop, so I decided to go to the resource group and drop the whole thing.
So I started again but with admin access to all areas.
I go through the installation.
Once completed click DEPLOY.
If you navigate to your job details and see something similar to the below then you know that you are on the right track.
Once the deployment has finished you will see the components within the elastic job.
Now is the time to create a job – navigate to the create job section and enter your login.
You will be presented with the main menu for the job details. Now a key point: the official documentation states that the scripts you build within an elastic job must be idempotent, meaning the script “can be applied multiple times without changing the result beyond the initial application”. The reason is potential transient network errors – if one is encountered, the job has auto-retry capability and will run the script again.
```sql
IF OBJECT_ID(N'dbo.Mydata', N'U') IS NOT NULL
    DROP TABLE dbo.Mydata;

CREATE TABLE dbo.Mydata
(
    MeasureID   INT NOT NULL,
    Description VARCHAR(100),
    QSector     DECIMAL(3, 0),
    QSector2    DECIMAL(3, 0)
);
```
Hence in the portal it becomes:
Saving is a good idea; you will then be presented with a split screen where the right-hand side shows the job execution details, with the ability to RUN the job from that window.
The logging is very detailed – this run failed because I entered the wrong user credentials.
However, I feel this is very much in its early stages, because it is difficult to find where we can amend or delete the details. So I created another job.
If successful you will see the following stages: ShardMapRefresh, ScriptSplit and finally ExpandJob.
They have the following purpose:
- ShardMapRefresh: Queries the shard map to determine all the databases used as shards.
- ScriptSplit: Splits the script across ‘GO’ statements into batches.
- ExpandJob: Creates child jobs for each database from a job that targets a group of databases i.e. my create table code.
Logging on to the SQL Server I created at the start of the guide via SQL Server Management Studio, you will see the outcome.
One command executed against each database within the pool – exactly what I wanted.
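A quick way to confirm this on each database is to check that the table from the job script now exists (dbo.Mydata is the table created by the idempotent script above):

```sql
-- Run against each database in the pool.
-- A non-NULL result means the elastic job created dbo.Mydata here.
SELECT OBJECT_ID(N'dbo.Mydata', N'U') AS MydataObjectId;
```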
The controller database
If you remember from the earlier part of the post, there are several key components to an elastic job.
One of them is the controller database (the SQL database mentioned above), which you can connect to and query; below shows my connection to it.
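Before writing anything specific, it can help to see what the controller database actually holds. A minimal sketch that lists the tables in the elastic job schema (the __ElasticDatabaseJob schema name matches the queries that follow):

```sql
-- List the tables the elastic job service keeps in the controller database.
SELECT  s.name AS SchemaName,
        t.name AS TableName
FROM    sys.tables t
JOIN    sys.schemas s
        ON t.schema_id = s.schema_id
WHERE   s.name = '__ElasticDatabaseJob'
ORDER BY t.name;
```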
I wrote a couple of basic queries to get generic information, which may be handy for others.
Below shows Job execution details based on user operations only.
```sql
SELECT  JJ.Name,
        JJ.JobType,
        EJ.CreatedTime,
        EJ.StartTime,
        EJ.EndTime,
        EJ.LifeCycle
FROM    [__ElasticDatabaseJob].[JobExecution] EJ
JOIN    [__ElasticDatabaseJob].[Job] JJ
        ON EJ.InitiatedFromJobID = JJ.JobId
WHERE   JJ.JobType = 'User';
```
Maybe you want to see the script details?
```sql
SELECT  JJ.Name,
        EC.ContentType,
        SB.CommandText
FROM    [__ElasticDatabaseJob].[Job] JJ
JOIN    [__ElasticDatabaseJob].[Content] EC
        ON JJ.ContentId = EC.ContentId
JOIN    [__ElasticDatabaseJob].[ScriptBatch] SB
        ON JJ.ContentId = SB.ContentId
WHERE   JJ.JobType = 'User';
```
If Microsoft makes a couple of tweaks to the portal, the experience would be even better.