Being in the cloud has many benefits, from lower administration overhead to fast scaling, but another “side effect” of operating in Azure SQL Database is the cloud-first nature of changes. By this I mean that new features are always pushed to Azure before the classic on-premises version, so some gems come to light early.
There are many factors to consider when you are thinking about the move to Azure SQL Database (PaaS) – whether single databases (provisioned compute or serverless) or elastic pools. Going through your head should be: how many vCores do you want? What are the I/O requirements? Do we need access to certain features like in-memory OLTP? But what about the memory requirements? Memory has always been a key consideration for SQL Server – those wonderful words: min/max memory settings.
How does this relate to Azure? Well, it all depends on your vCore count and the generation of hardware you select during the build process.
There are currently four hardware generations, each with its own purpose: Gen4, Gen5, Fsv2-series and M-series. Each type offers a certain amount of memory per vCore, up to a maximum, so it is important to remember this when sizing your workloads. The full breakdown is summarised at https://docs.microsoft.com/en-us/azure/azure-sql/database/service-tiers-vcore?tabs=azure-portal
So, for example, selecting a provisioned Azure SQL Database with 12 vCores on Gen4 means I will have 84GB of memory available for my workload.
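To make the sizing arithmetic concrete, here is a minimal Python sketch. The GB-per-vCore figures are my reading of the Microsoft docs linked above at the time of writing, so treat them as assumptions and verify against the current documentation.

```python
# Approximate memory-per-vCore figures (assumptions based on the docs linked
# above; these can change, so always double-check the official documentation).
MEMORY_PER_VCORE_GB = {
    "Gen4": 7.0,
    "Gen5": 5.1,
    "Fsv2-series": 1.89,
    "M-series": 29.4,
}

def available_memory_gb(generation: str, vcores: int) -> float:
    """Rough memory available to a provisioned Azure SQL Database workload."""
    return MEMORY_PER_VCORE_GB[generation] * vcores

print(available_memory_gb("Gen4", 12))  # 84.0 GB, matching the example above
```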
Here is a quick video showing you how to fail over your Azure SQL Database between your primary and secondary locations.
If you have been following me, or topics around Azure SQL Database and security generally, you will know that it is important to leverage Advanced Data Security (ADS) for Azure SQL Database. If you remember, this means having tools such as advanced threat protection and vulnerability scans at your fingertips.
It is a really common requirement to add specific libraries to Databricks. Libraries can be written in Python, Java, Scala, and R. You can upload Java, Scala, and Python libraries and point to external packages in PyPI, Maven, and CRAN repositories.
Libraries can be added in three scopes: workspace, notebook-scoped and cluster. I want to show you how easy it is to add (and search for) a library at the cluster scope, so that all notebooks attached to the cluster can leverage it.
Within the Azure Databricks portal, go to your cluster.
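As a quick aside before the cluster-scoped walkthrough: if you only need a library inside a single notebook, a notebook-scoped install is a lightweight alternative. This is just a sketch; the package below is an arbitrary example.

```python
# Cell 1 – notebook-scoped install (supported on recent Databricks runtimes):
# the library is visible only to this notebook's session, not the whole cluster.
%pip install requests

# Cell 2 – after installation the library imports as usual.
import requests
print(requests.__version__)
```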
Key Vault should always be a core component of your Azure design because it lets us store keys, secrets and certificates, and thus abstract away the true connection strings from our files. When working with Databricks to mount storage, ingest your data and query it, you should ideally be leveraging Key Vault to create secrets and secret scopes.
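As a minimal sketch of that pattern, the snippet below mounts a Blob storage container using a secret pulled from a Key Vault-backed secret scope. The scope name, secret name, storage account and container are all hypothetical placeholders.

```python
# Hypothetical names: a Key Vault-backed secret scope "kv-scope" holding the
# storage account key under the secret name "storage-key".
storage_account = "mystorageacct"  # placeholder storage account
container = "raw"                  # placeholder container

dbutils.fs.mount(
    source=f"wasbs://{container}@{storage_account}.blob.core.windows.net",
    mount_point="/mnt/raw",
    extra_configs={
        f"fs.azure.account.key.{storage_account}.blob.core.windows.net":
            dbutils.secrets.get(scope="kv-scope", key="storage-key")
    },
)
```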
Before discussing why you would want to pin a cluster, it would be useful to understand the different states of a cluster. We can have: pending, running, restarting, resizing, terminating and terminated.
A very common approach is to query data straight from Databricks via Power BI. For this you need a Databricks personal access token and the JDBC URL. The token is generated under your user settings, while the JDBC URL is found under the cluster's advanced options (the JDBC/ODBC tab).
Data engineers, pipeline developers and general data enthusiasts will spend most of their time within a notebook. Here you develop your code; nice visualisations and commentary boxes are possible too. It is a very rich web-based interface and is best experienced with Google Chrome (in my opinion).
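To give a flavour of that mix of code, visuals and commentary, here is a tiny notebook sketch; the DataFrame is a throwaway example.

```python
# Cell 1 – commentary cell: a Markdown "commentary box" (magic shown as a comment).
# %md ## Daily totals: a quick look at the data

# Cell 2 – code cell: build a trivial DataFrame and render it.
df = spark.range(10).withColumnRenamed("id", "value")
display(df)  # display() renders a sortable table with built-in chart options
```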
I have spent many long weekends getting stuck into Azure Databricks – plenty of time to understand the core functionality, from mounting storage and streaming data to getting to know Delta Lake and how it fits into the bigger picture with tech like Event Hubs, Azure SQL DW and Power BI.
So, I am going to show you how easy it is to create a cluster (that's the end goal), and you will appreciate the ease of deployment for huge amounts of infrastructure.
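For those who prefer automation over the portal, here is a hedged sketch using the Databricks Clusters REST API (2.0). The workspace URL, secret scope, runtime version and node type are all placeholders you would swap for your own.

```python
import requests

# Placeholders: your workspace URL and a token stored in a (hypothetical)
# Key Vault-backed secret scope.
workspace_url = "https://adb-1234567890123456.7.azuredatabricks.net"
token = dbutils.secrets.get(scope="kv-scope", key="databricks-token")

# Create a small cluster via the Clusters API 2.0.
resp = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "cluster_name": "demo-cluster",
        "spark_version": "7.3.x-scala2.12",  # pick a runtime your workspace supports
        "node_type_id": "Standard_DS3_v2",   # an Azure VM size available to you
        "num_workers": 2,
    },
)
resp.raise_for_status()
print("Created cluster:", resp.json()["cluster_id"])
```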