The Microsoft Cloud Solution Provider (CSP) program enables partners to sell Microsoft Online services to customers together with value-added services such as assessment, consulting, deployment, migration, monitoring, support, and billing. As part of the CSP program, partners are able to create tenants, provision Azure subscriptions for them, and manage various services on behalf of the customer. One service that is part of the Azure subscription is Azure SQL Database, and it represents a unique opportunity for CSP partners. Azure SQL Database is a managed relational database service in the cloud based on the market-leading Microsoft SQL Server engine, with mission-critical capabilities. Azure SQL Database has an extensive set of features that enables predictable performance, scalability with no downtime, business continuity, and data protection.
One critical on-going business need is to get solutions to market quickly. It is important to reduce barriers such as capital expenditures (CAPEX) and long deployment times for infrastructure provisioning and applications. These solutions must also ensure high availability at all times, because any downtime means potential loss of revenue. Applications that typically require database support include:
Another factor is the shortening of development and deployment cycles, due to better development tools, a tendency to try new functionality in production on a subset of customers, and quick iteration on customer feedback. These types of applications represent a significant opportunity for partners to provide consumption-based managed services, like provisioning and managing these applications in terms of business continuity, disaster recovery, security, and scalability. Azure SQL Database is a managed platform-as-a-service solution with backups turned on by default, the ability to scale up or out, and the ability to replicate the database to other Azure regions, adding resilience and higher-level SLAs for customer databases. This enables partners to provide higher value-added services like migration, performance tuning, security assessments, and monitoring via the CSP without the overhead of patching and managing the SQL Server instance or the underlying infrastructure like the operating system, network, compute, and storage.
The Azure SQL Database V12 was architected specifically with the goal of making it easy for partners to manage through the CSP program. Azure SQL Database is deployed and managed either in the single database model, where one or more Azure SQL Databases support the application, or in the Elastic Database model, where potentially hundreds or thousands of databases are deployed and managed as a pool to support the application. The following sections explain both models and when each is suitable.
In this section we will cover deploying a new application with Azure SQL Database supporting the data tier. This model is typically appropriate for applications with the following load characteristics:
For example, consider a business-critical application in a large multi-national corporation with thousands of employees that enables employees to procure office supplies, book corporate travel, and reimburse expenses. In this case, any non-availability or performance degradation causes high business impact, but the load on the application is well understood and predictable. The key advantages of deploying such applications in Azure, as compared to an on-premises datacenter, are quick deployment and upgrade times for applications and their data tiers, a range of high availability and disaster recovery options, the ability to scale capacity on demand with ease, and quick turnaround with zero upfront or on-going CAPEX investment. The following sections describe how partners can help their customers realize these benefits, focusing specifically on the data tier.
In order to provision the web application with the SQL Database we will follow these steps:
Within minutes, this template creates an Azure Website that is already deployed and publicly accessible through a URL, linked to a SQL database, without requiring any CAPEX. To deploy code to this application, Azure provides several options, as described in Deploy your app to Azure App Service.
Azure SQL Database is available in three service tiers: Basic, Standard, and Premium. In subsequent sections, this whitepaper describes various considerations for selecting the appropriate service tier and the number of databases needed for the application. While it is important to plan, be aware of the options available in the various tiers, and estimate the right number of databases, Azure SQL Database provides great flexibility with fast and easy configurability to scale the number of databases and switch between tiers as your application needs more resources. In fact, it is recommended that you start with the Basic tier during development and scale up and out as your application needs grow.
As part of deploying and managing the Web Application, it is important to ensure the Azure SQL database(s) are managed for high availability, business continuity, scalability, performance, and cost of usage. These represent unique opportunities for CSP partners to provide value-added managed services to customers.
The following sections focus on how to manage Azure SQL database high availability, performance monitoring, and scalability.
When deciding between the high availability options, the first thing is to understand the Azure SQL Database SLA. To ensure high availability, it is important to understand the amount of downtime (the time for which the database is not available) acceptable to your customer's application in scenarios such as regional outages, administrative human errors, application upgrades, and disaster recovery drills (which ensure your methods and procedures stay up to date).
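To make the SLA discussion concrete, the following sketch translates a monthly uptime percentage into a downtime budget that can be compared against the customer's tolerance. The SLA figures used are illustrative assumptions; confirm the current SLA for your service tier in the Azure documentation.

```python
# Illustrative sketch: convert a monthly uptime SLA percentage into the
# maximum allowed downtime per month. The SLA percentages below are
# assumptions for illustration, not a statement of the current Azure SLA.

def monthly_downtime_budget_minutes(sla_percent: float,
                                    minutes_in_month: float = 30 * 24 * 60) -> float:
    """Return the maximum minutes of downtime per month permitted by the SLA."""
    return minutes_in_month * (1 - sla_percent / 100)

print(round(monthly_downtime_budget_minutes(99.99), 1))  # ~4.3 minutes/month
print(round(monthly_downtime_budget_minutes(99.9), 1))   # ~43.2 minutes/month
```

Comparing these budgets against the recovery times of each high availability capability makes it clear which option a given customer requires.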
Azure SQL Database offers four capabilities for high availability: Point-in-Time Restore, Geo-Restore, Standard Geo-Replication, and Active Geo-Replication.
Here is a quick summary of when to use each capability:
- Point-in-Time Restore: recovery from accidental data deletion or alteration.
- Geo-Restore: recovery from an outage of the database or the Azure region by restoring to a different Azure region.
- Standard Geo-Replication: disaster recovery from a regional failure, with faster recovery time than Geo-Restore, using an offline secondary.
- Active Geo-Replication: the application has a high-volume, read-oriented workload and can benefit from read-scale load balancing in addition to fast disaster recovery.
Detailed guidance on recovery times and recovery objectives for these capabilities is given here. Depending on your business continuity goals, you will decide which capabilities you need and, by extension, which service tier your customer's application requires.
As mentioned before, database service tiers can be changed anytime manually or in an automated manner if your application needs change.
Configuring High Availability
To configure high availability on the SQL database, follow these steps:
After you provision the databases in the appropriate tier, it is time to monitor them for performance and get alerted when performance crosses thresholds. Azure SQL database supports a large set of Dynamic Management Views (DMVs) for monitoring, which are described here. Additionally, alerts can be configured in the database settings on the Azure Management Portal for metrics like database size, deadlocks, CPU%, DTU%, data IO%, connection failures, and In-Memory OLTP%. It is recommended that alerts be configured with Azure Webhooks so they can be routed to your internal systems like SMS, bug tracking systems, team chats, or monitoring dashboards. A key performance and scalability metric to monitor and alert on is the Database Throughput Unit (DTU). It is a combined measure of the throughput resources made available to your application by the SQL database, and it is the key metric used to scale the database up/down or out/in.
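The alerting idea described above can be sketched as a simple rule over a window of DTU% samples; in practice a configured webhook would route the resulting alert to SMS, chat, or a dashboard. The 80% threshold and three-breach rule here are illustrative assumptions, not Azure defaults.

```python
# Hypothetical sketch of DTU% alerting: fire when a sustained breach is
# observed in a sampling window. Threshold and breach count are
# illustrative assumptions chosen for this example.

def should_alert(dtu_percent_samples, threshold=80.0, min_breaches=3):
    """Fire when at least `min_breaches` samples in the window exceed the threshold."""
    breaches = sum(1 for s in dtu_percent_samples if s > threshold)
    return breaches >= min_breaches

# Five-minute window of one-minute DTU% samples:
print(should_alert([55, 91, 88, 95, 97]))  # True  - sustained pressure, consider scaling up
print(should_alert([55, 91, 60, 58, 40]))  # False - a single spike is tolerated
```

Requiring several breaches rather than one avoids paging on short spikes while still catching sustained pressure that warrants a scale-up.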
If you would like to tune the performance of your databases then configure Query Performance Insight on the database. Query Performance Insight requires that Query Store be running on your database and provides deeper insight into your DTU consumption, most expensive queries and ability to drill down into the query details.
To configure Performance Monitoring on the SQL database, follow these steps:
Now that you have the SQL Database set up and monitored, let's see how Azure SQL database helps address the unpredictable traffic load characteristics of your customer's applications. A common SaaS application pattern is to give each customer, or each category of users with a similar usage profile, its own database. When a customer's usage grows unpredictably, it is recommended that the database be scaled up; if the customer's usage goes down, you can scale the database down. Scaling up/down can be done within the same service tier and also across different service tiers. However, there are some important considerations to keep in mind while scaling up/down across service tiers, which are explained in Azure SQL Database resource limits.
In order to scale up/down within and across service tiers follow these steps:
As mentioned before, it is easy to scale the SQL databases through the management portal and PowerShell scripts, so it is important to keep track of the usage and the associated billing charges for your customer. The usage and charges are posted once a month to the CSP portal. Given the possibility of wide variances due to rapid scaling of the database, it is recommended that partners use the GetUsageInfo and GetRateCard APIs, which enable calculation of charges up to the hour, and configure alerts for unexpected consumption charges. The ability to scale up quickly to meet performance SLAs and scale down rapidly to reduce costs represents a unique value-added service that can be provided by CSP partners. The CSP Program provides a full SDK for programmatically interacting with the Partner Center. To gain up-to-the-hour visibility into usage, utilize the Usage API; to understand real-time pricing and calculate the charges incurred by the customer, leverage the RateCard API.
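The charge calculation described above reduces to joining usage quantities against per-meter rates. The record shapes, meter id, and $1.50/hour rate below are hypothetical placeholders for illustration; in practice the quantities would come from the Usage (GetUsageInfo) API and the rates from the RateCard API.

```python
# Illustrative sketch of estimating charges between monthly invoices by
# combining usage records with rate-card pricing. Meter ids, rates, and
# record shapes are assumptions for this example, not real API payloads.

def estimate_charges(usage_records, rates):
    """Sum quantity * unit rate for each usage record, keyed by meter id."""
    return sum(rec["quantity"] * rates[rec["meter_id"]] for rec in usage_records)

rates = {"sqldb-standard-s2-hours": 1.50}   # hypothetical meter and hourly rate
usage = [
    {"meter_id": "sqldb-standard-s2-hours", "quantity": 10},  # 10 hours at S2
    {"meter_id": "sqldb-standard-s2-hours", "quantity": 4},   # 4 more hours after a scale test
]
print(estimate_charges(usage, rates))  # 21.0
```

Running this sort of estimate hourly, and alerting when it deviates from the forecast, is how unexpected consumption charges can be caught before the monthly posting.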
As described previously, you have decided the high availability strategy for your customer's application. This section provides guidance on how to recover from situations that cause downtime for the database.
While administering the database on behalf of your customers, in case of actions that leave the data in an erroneous state, it is recommended that the Point-in-Time Restore capability be used. In order to execute a point-in-time restore:
As mentioned in this section, set up monitoring and alerting for your database. In case of a connectivity failure to your database, or a notification from Azure about a regional outage, you will fail over to a geo-replicated secondary database or recover using Geo-Restore.
In order to execute a Geo-Restore:
A typical disaster recovery drill consists of simulating a data tier outage, recovering from the outage, and ensuring that the application is in a healthy state post-recovery. The best practices and guidance for performing disaster recovery drills are provided in SQL Database disaster recovery drills.
A typical pattern in Web Applications is frequent upgrades to the front ends and services deployed in Azure, with the Azure SQL database used for persisting application data and/or metadata. The best practices for ensuring high availability during application upgrades are described in SQL Database business continuity application upgrade.
A simple reference architecture for the web application and Azure SQL Database is shown below. While your customer's application might require additional components (a Content Delivery Network for website acceleration, non-relational storage for static files like images, or a caching layer for relational data), this reference architecture summarizes the essential components for the Web + Azure SQL Database application discussed in the previous sections.
As mentioned before, multi-database ISV SaaS applications generally follow a pattern in which they assign a database per customer. This provides a simple approach to data isolation: each customer's data is isolated from other customers', and database-level security prevents one customer from accessing another customer's data. Additionally, this makes it possible to extend the application's data model for a specific customer's needs, and recovering a customer's data from a backup is simpler and doesn't affect other customers. Examples are multi-database ISV applications for business customers, like an accounting application that serves thousands of customers or a patient healthcare application serving thousands of hospitals. In this case, the application typically ends up creating tens, hundreds, or even thousands of databases. Given the wide variety in each customer's usage due to location, size, and individual business needs, database usage for such applications is typically characterized by one or more of the following, as shown in the figure below: a large number of databases; bursts followed by low-activity periods; not all databases concurrently active; varying or unpredictable usage per database.
With this sort of pattern, database managers traditionally had two choices: overprovision for peak usage (and overpay) to ensure performance, or underprovision and hurt performance and customer satisfaction. Additionally, this represents a huge management overhead, with long implementation times and unpredictable costs for providing databases to the application. With Elastic Database Pools, partners can provide efficient management services without much overhead by managing hundreds or thousands of databases as a pool, with an overall capacity and cost, rather than as individual databases. The following sections explain the concepts of the Elastic Database Pool and how to manage its capacity and usage through the CSP program.
The following theoretical example helps explain how an Elastic Pool shares capacity between two databases with non-overlapping usage. Imagine an application with two databases, DB01 and DB02. DB01 is determined to need 100 DTUs during the first half of the day, from 00:00 to 12:00 hours, and DB02 is determined to need 50 DTUs during the second half, from 12:00 to 24:00 hours. In the single database model, you would have to provision two databases for a total of 150 DTUs and manage their capacities individually, even though for half the day each database's capacity lies unutilized. Given the non-overlapping usage pattern of the two databases, they can instead be added to an elastic pool of 100 DTUs while still maintaining the performance SLA, cutting the resource costs by up to a third. Now imagine DB01 extends its usage into the second half of the day, from 12:00 to 24:00 hours, with a resource need of 50 DTUs. In this case, we would put a Max cap of 100 DTUs on the databases in the pool. The elastic pool then ensures that during the second half of the day DB01 and DB02 share the 100 DTUs in the pool, while DB01 is still allowed to utilize all 100 DTUs in the first half.
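A minimal sketch of the DB01/DB02 scenario above, with hourly demand lists taken from the numbers in the text, shows why the pool can be smaller than the sum of individually provisioned databases: the pool only has to cover the peak aggregate demand.

```python
# DTU demand per hour (hours 0-23), matching the worked example in the text.
db01 = [100] * 12 + [50] * 12   # 100 DTUs in the first half of the day, then 50
db02 = [0] * 12 + [50] * 12     # 50 DTUs in the second half of the day only

# What an elastic pool must cover: the peak *concurrent* demand.
peak_aggregate = max(a + b for a, b in zip(db01, db02))

# What the single database model must cover: each database's own peak.
sum_of_peaks = max(db01) + max(db02)

print(peak_aggregate)  # 100 -> one 100 eDTU pool suffices
print(sum_of_peaks)    # 150 -> separate databases need 150 DTUs in total
```

The gap between the two numbers (150 vs 100) is exactly the "up to a third" cost reduction mentioned above, and it widens further as more databases with non-overlapping bursts join the pool.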
While the above theoretical example explains how a shared pool of 100 DTUs can serve two databases without affecting the end-user performance SLA, and with predictable costs, the elastic pool concept really proves its worth with real-life SaaS applications that have a large number of databases with bursty, non-overlapping usage patterns.
In summary, in an elastic database pool, the databases are allocated a pool of resources (DTUs) that are shared across databases to accommodate the needs of the individual databases in the pool. The DTU requirement for the pool is determined by the aggregate utilization of its concurrently active databases and the Storage limit per pool is determined by the DTU guarantee for the pool. There are three important performance settings for the pool and subsequent sections provide guidance on how to determine these settings for the pool:
The Max DTU cap represents the DTU limit for any database in the pool, beyond which the database starts getting throttled, affecting performance. The Min DTU allocation represents the minimum resources guaranteed to each database in the pool. The pool is priced based on the total DTUs for the pool and the number of databases in it.
Individual databases are given the flexibility to auto-scale within set parameters. Under heavy load, a database can consume more eDTUs, if available, to meet demand. Databases under light loads consume less, and databases under no load don't consume any eDTUs. Provisioning resources for the entire pool, rather than for single databases, simplifies your management tasks and results in a predictable budget for the pool. More details are provided in SQL Database elastic pool.
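The interaction of the per-database settings described above can be sketched as a simple clamping rule. This models the settings only, not Azure's actual scheduling algorithm: a database consumes what it demands, up to its Max cap and the pool's free eDTUs, and an idle database consumes nothing (the Min setting is a reservation the pool holds for each database rather than forced consumption).

```python
# Illustrative model (not Azure's actual scheduler) of the per-database
# eDTU settings: consumption is demand, capped by the Max setting and by
# the pool's currently free eDTUs.

def edtu_consumed(demand, max_dtu, pool_headroom):
    """eDTUs a database actually consumes: demand, capped by Max and pool headroom."""
    return min(demand, max_dtu, pool_headroom)

print(edtu_consumed(150, 100, 120))  # 100 - throttled at the Max cap
print(edtu_consumed(30, 100, 20))    # 20  - limited by pool headroom
print(edtu_consumed(0, 100, 120))    # 0   - idle databases consume nothing
```

The Min guarantee matters under contention: because the pool reserves each database's Min, the headroom visible to any one database already excludes the other databases' guarantees.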
Managing Elastic database pools consists of determining when an elastic pool makes sense given the usage profile, setting up the pools and monitoring and re-sizing them to ensure performance and cost savings as application needs change. The following sections provide guidance for each of these aspects.
The eDTU and storage limits for the elastic pool service tiers, and the limits and capabilities allowed on individual elastic databases, are provided in eDTU and storage limits for elastic pools.
There are two ways to determine the pool setup. The first method is to do this yourself by understanding the performance history of the candidate databases. You can monitor the DTU usage of the databases for the SaaS application by using DMVs like sys.dm_db_resource_stats and build a historical performance profile of the databases. It is also important for you to understand the performance SLAs and the DTUs needed to provide that SLA for the most critical databases. Once you understand the aggregate DTU consumption and the Max DTUs needed for the databases to meet their SLA then you are ready to configure the Pool.
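The first method above can be sketched as an aggregation over historical per-database DTU samples: the pool needs enough eDTUs for the peak concurrent demand, and the per-database cap should cover the largest single-database burst. The sample data below is hypothetical, and the derivation is a starting point rather than a sizing formula; real profiles would come from DMVs like sys.dm_db_resource_stats.

```python
# Hedged sketch: derive candidate pool settings from a historical
# performance profile. `history` maps database name -> DTU samples,
# aligned in time. The data and the simple max-based rules are
# illustrative assumptions.

def recommend_pool_settings(history):
    """Return (pool eDTUs, per-database eDTU cap) from aligned DTU samples."""
    n = len(next(iter(history.values())))
    aggregate = [sum(samples[i] for samples in history.values()) for i in range(n)]
    pool_edtus = max(aggregate)                         # cover peak concurrent demand
    per_db_cap = max(max(s) for s in history.values())  # cover the largest single burst
    return pool_edtus, per_db_cap

history = {
    "tenant-a": [20, 80, 10, 5],
    "tenant-b": [5, 10, 70, 20],
    "tenant-c": [10, 10, 15, 60],
}
print(recommend_pool_settings(history))  # (100, 80)
```

Note that the recommended pool (100 eDTUs) is well below the sum of the individual peaks (80 + 70 + 60 = 210), which is the saving the pool model delivers when tenant bursts do not overlap.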
The second method is to use the recommendations provided by the Azure SQL Database service itself, which is a great way to get a starting point from which you can do further analysis when there is a large number of databases.
Configuring the Elastic Database Pools
To configure the elastic database pool:
Once the pool is configured, it can be monitored from the Azure Management Portal. There are several views that provide information regarding eDTU utilization and per-database eDTU consumption. Based on the historical analysis, the pool's total eDTU allocation and the per-database eDTU cap can be changed. While monitoring and managing multiple elastic pools, it is highly recommended that pool performance data be pulled into your performance monitoring and reporting systems using the PowerShell cmdlets described here.
In order to monitor and resize elastic database pools:
Currently in preview, elastic databases support most of the business continuity and recovery features discussed previously in this whitepaper that are available to single databases. These are explained in more detail in SQL Database elastic pool.
While the elastic pool is given a set number of eDTUs for a set price, it is important to keep track of the usage and the associated billing charges your customers incur for using the Azure SQL Database service. The usage and charges are posted once a month to the CSP portal. For a multi-database application, you can track the eDTU consumption at the database level and use it to allocate consumption charges among the tenants. Given the possibility of wide variances due to rapid scaling of the database pools, it is recommended that partners use the GetUsageInfo and GetRateCard APIs, which enable calculation of charges up to the hour, and configure alerts for unexpected consumption charges. The ability to scale up quickly to meet performance SLAs and scale down rapidly to reduce costs represents a unique value-added service that can be provided by CSP partners. The CSP Program provides a full SDK for programmatically interacting with the Partner Center. To gain up-to-the-hour visibility into usage, utilize the Usage API; to understand real-time pricing and calculate the charges incurred by the customer, leverage the RateCard API.
The following figure shows the essential components needed for a partner to manage a Web Application with an Elastic Database Pool.
As a partner with the responsibility of managing the database for your customer's application, it is important that you understand the security needs of the application. One of the key ways to secure data in the Azure SQL database is encryption. Azure SQL database protects data at rest (in backups and files) using Transparent Data Encryption, and data in flight (when queried) can be protected using Always Encrypted for sensitive columns.
The best practices for securing the Azure SQL database are described in Securing your Azure SQL Database.
To setup various encryption options for your customer:
The Azure SQL Database V12 represents a scalable, secure, and manageable database-as-a-service suitable for the entire range of applications, from ISV SaaS multi-database applications to applications serving consumers and line-of-business applications for employees. It is a managed database platform that eliminates the need for patching and managing SQL Server instances or the underlying storage, compute, and network infrastructure. It has easy-to-use capabilities that simplify traditional management tasks like provisioning, upgrades, business continuity, performance monitoring, scalability, and cost management, enabling partners to provide high SLAs with low operational costs. Via the CSP program, partners have a unique opportunity to develop a managed services practice with higher value-added services like migration, application deployment, application rearchitecture, and capacity and cost management.
The following is a list of articles referred to in this Whitepaper. In addition, please use the Azure SQL Learning path to learn how to create, configure and develop with Azure SQL Database.