Azure AD/Office 365 seamless sign-in – Build a base configuration for an evaluation environment

Introduction

Microsoft Office 365 provides secure anywhere access to professional email, shared calendars, instant messaging (IM), video conferencing, document collaboration, and more. It represents the cloud version of the Microsoft communication and collaboration products, combined with the latest version of the Microsoft desktop suite, for businesses of all sizes.

Azure Active Directory (Azure AD) is the directory behind Office 365, used to store user identities and other tenant properties. Just as the on-premises Active Directory stores the information for Exchange, SharePoint, Skype for Business, and your custom Line of Business (LOB) apps, Azure AD stores the information for Exchange Online, SharePoint Online, Skype for Business Online, etc., as well as any custom applications built in Microsoft's cloud.

Through the availability of multiple seamless sign-in options, Azure AD provides organizations with an open choice and, ultimately, the ability to authenticate in accordance with their own requirements, allowing their users - regardless of the implementation choice - to benefit from a seamless sign-in experience when accessing Azure AD/Office 365 and the services they have been provisioned for.

Objectives of this paper

As previously noted, this document complements the first part, entitled Azure AD/Office 365 seamless sign-in – Part 1, by providing an end-to-end walkthrough for rolling out a baseline evaluation environment on which to test and evaluate the multiple options offered by Azure AD/Office 365 for providing seamless sign-in experiences.

Non-objectives of this paper

This document doesn't provide a full description of AD FS in Windows Server 2012 R2. It provides neither guidance for setting up and configuring AD FS in a production environment nor a complete technical reference for AD FS.

Note    For information on AD FS, please refer to the product documentation, and the dedicated AD FS Q&A forum.

Nor does it provide an understanding of the different sign-in deployment options with Azure AD/Office 365, how to enable them using corporate Active Directory credentials whenever applicable, or the different configuration elements to be aware of for such deployments. This is specifically the intent of the aforementioned first part, which covers all the key aspects the readers should understand to successfully provide seamless sign-in experiences with Azure AD/Office 365 for their organization in a way that fulfills their requirements.

Organization of this paper

To cover the aforementioned objectives, this document is organized in the following 2 sections:

  • Building an evaluation environment.
  • Setting up a base configuration test lab.

These sections provide the details necessary to successfully build a working environment for the scenario. They must be followed in order.

About the audience

This document is intended for system architects and IT professionals who are interested in understanding the seamless sign-in capabilities of Azure AD/Office 365 from a hands-on perspective.

Terminology used in this paper

Throughout the rest of this document, the following terms detailed in Table 1 are used regarding AD FS and WAP.

Table 1 Terminology

Federation server: A computer running Windows Server 2012 R2 or Windows Server 2016 that has been configured to act in the federation server (FS) role for AD FS. A federation server serves as part of a Federation Service that can issue, manage, and validate requests for security tokens and identity management. Security tokens consist of a collection of claims, such as a user's name or role.

Federation server farm: Two or more federation servers in the same network that are configured to act as one Federation Service instance.

Web application proxy: A computer running Windows Server 2012 R2 or Windows Server 2016 that has the web application proxy (WAP) role installed and that has been configured to act as an intermediary proxy service between a client on the Internet and a Federation Service that is located behind a firewall on a corporate network. In order to allow remote access to the services in Office 365, such as from a smart phone, home computer, or Internet kiosk, you need to deploy a web application proxy (WAP).

Web application proxy farm: Two or more WAP servers in the same network that are configured to act as one WAP Service instance.

Internal/Internet load balancer (network load balancer, hardware load balancer): A dedicated application or service (such as Network Load Balancing) or hardware device (such as a multilayer switch) used to provide fault tolerance, high availability, and load balancing across multiple nodes. For AD FS, the cluster DNS name that you create using this NLB must match the Federation Service name that you specified when you deployed your first federation server in your farm.

Building an evaluation environment

As its title suggests, this section guides you through a set of instructions required to build a representative lab environment, which aims at providing users with the most seamless sign-in experience as they access Microsoft cloud services and/or other cloud-based applications while logged on to the corporate network.

For the sake of simplicity and in order to focus on the key aspects that relate to such a configuration, the test environment features:

  • In the cloud, an Azure AD/Office 365 tenant, and cloud-based applications that leverage Azure AD for identity management and access control.
  • On-premises, an Active Directory single forest environment with:
    • An AD FS farm (with two servers) integrated with Active Directory for authentication,
    • A WAP farm (with two servers) to publish on the Internet the AD FS server endpoints,
    • A single Active Directory Certificate Services (AD CS) based certificates authority to issue the required certificates,

to name a few - along with the related required configuration.

The following diagram provides an overview of the overall test lab environment, with the main software and service components that need to be deployed and configured.

We have tried to streamline and ease, as much as possible, the way to build a suitable lab environment: to reduce the number of instructions that tell you which servers to create, how to configure the operating systems and core platform services, and how to install and configure the required core services, products, and technologies - and, in the end, to reduce the overall effort needed for such an environment.

We hope that the provided experience will enable you to see all of the components and configuration steps, both on-premises and in the cloud, that go into such a multi-product, multi-service solution.

Creating a test Azure AD/Office 365 tenant

The easiest way to provision both an Azure AD/Microsoft Office 365 Enterprise tenant and the related Office application workloads for the purpose of the test lab is to sign up for a free 30-day trial. To sign up for such a tenant, follow the instructions at https://go.microsoft.com/fwlink/p/?LinkID=403802&culture=en-US&country=US.

For the course of this walkthrough, we've provisioned an Office 365 Enterprise (E3) tenant: litware369.onmicrosoft.com.
You will instead have to choose a tenant domain name of your own that is not currently in use.
Whenever a reference to litware369.onmicrosoft.com is made in a procedure, replace it with the tenant domain name of your choice.

Building an Azure-based lab environment

A challenge in creating a useful lab environment is to enable its reusability and extensibility. Because creating a test lab can represent a significant investment of time and resources, your ability to reuse and extend the work required to create the test lab is important. An ideal test lab environment would enable you to create a basic lab configuration, save that configuration, and then build out multiple test lab scenarios in the future by starting with the base configuration.

Moreover, another challenge people usually face relates to the hardware configuration needed to run such a base configuration, which involves several (virtual) machines.

For these reasons and considering the above objectives, this guide will leverage the Microsoft Azure environment along with the Azure PowerShell cmdlets to build the on-premises test lab environment to test and evaluate the single sign-on configuration.

Introducing virtual machines in Azure

Azure Virtual Machines provides support for virtual machines (VMs) provisioned from the cloud. At a glance, a VM consists of a piece of infrastructure available to deploy an operating system and an application. Specifically, this includes a persistent operating system (OS) disk, possibly some persistent data disks, and internal/external networking "glue"/connectivity to hold it all together. With these infrastructure ingredients, it enables the creation of a platform where you can take advantage of the reduced cost and ease of deployment offered by Azure.

To mimic an on-premises deployment with a multi-VM workload as needed here, virtual networks are also required. This is where Azure Virtual Networks come into play. Azure Virtual Networks let you provision and manage virtual networks (VNET) in Azure. A VNET provides the ability to create a logical boundary and place VMs inside it. VNET also provides the capability of connecting Azure Cloud Services (VMs, web roles, and worker roles).

Azure Virtual Network provides control over the network topology, including configuration of IP addresses, routing tables and security policies. A VNET has its own private address space. The address space is IPv4 and IPv6. With Virtual Network, you can easily extend your on-premises IT environment into the cloud, much the way that you can set up and connect to a remote branch office. You have multiple options to securely connect to a Virtual Network - you can choose an IPsec VPN or a private connection using the Azure ExpressRoute service.

To summarize, Azure Virtual Network allows you to create private network(s) of VMs in your Azure tenant environment that you can assign IP addresses to, and then optionally connect to your data center. Using this method, you can seamlessly connect on-premises (virtual) machines to VMs running in your Azure tenant.
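
As an illustration of the concepts above, a virtual network for the test lab could be created with the ARM cmdlets introduced later in this document. This is a minimal sketch: the resource group litware369-rg, the VNET name litware369-vnet, and the address ranges are assumptions for illustration purposes (not values mandated by this walkthrough), and the resource group is assumed to already exist.

PS C:\> $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "Subnet-1" -AddressPrefix "10.0.0.0/24"
PS C:\> New-AzureRmVirtualNetwork -Name "litware369-vnet" -ResourceGroupName "litware369-rg" -Location "North Europe" -AddressPrefix "10.0.0.0/16" -Subnet $subnet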

The fundamental requirements for deploying Active Directory Domain Services (AD DS) on VM(s) in Azure differ very little from deploying it in VMs (and, to some extent, physical machines) on-premises. For example, if the domain controllers that you deploy on VMs are replicas in an existing on-premises corporate domain/forest, then the Azure deployment can largely be treated in the same way as you might treat any other additional AD DS site. That is, subnets must be defined in AD DS, a site created, the subnets linked to that site, and connected to other sites using appropriate site links. There are, however, a number of differences that are common to all Azure deployments and some that vary according to the specific deployment scenario.

Note     For more information, see the articles Install a new Active Directory forest on an Azure virtual network and Guidelines for Deploying Windows Server Active Directory on Azure Virtual Machines, which cover the fundamental differences and explain in great detail how to successfully deploy and operate AD DS in Azure. The former deals with a standalone configuration in the cloud, as we will deploy later in this document, whereas the latter highlights the requirements for deploying AD in a hybrid scenario in which AD DS is partly deployed on-premises and partly deployed on VMs in Azure.

Understanding the ongoing costs of virtual machines in Azure

Virtual machines in Azure incur an ongoing monetary cost when they are running. This cost is billed against your free trial, MSDN subscription, or paid subscription.

Note    For more information about the costs of running Azure virtual machines, see Azure pricing.

To minimize the cost of running the test lab virtual machines, you can do one of the following:

  • Create the test lab environment and perform your needed testing and demonstration as quickly as possible. When complete, delete the test lab virtual machines from the virtual machines page of the Azure portal at https://portal.azure.com.
  • Shut down your test lab virtual machines into a de-allocated state from the virtual machines page of the Azure portal, as covered later in this document.
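
As an illustration with the ARM cmdlets installed later in this document, a VM can be shut down into a de-allocated state so that compute charges stop accruing, and restarted when needed. The resource group name litware369-rg below is an assumption for illustration purposes:

PS C:\> Stop-AzureRmVM -ResourceGroupName "litware369-rg" -Name "DC1" -Force
PS C:\> Start-AzureRmVM -ResourceGroupName "litware369-rg" -Name "DC1"

By default, Stop-AzureRmVM puts the VM into the de-allocated state; adding -StayProvisioned keeps the allocation (and the associated billing).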

Signing up for an Azure trial

If you do not already have an Azure account, you can sign up for a free one-month trial.

Note    If you have an MSDN Subscription, see article Azure benefit for MSDN subscribers.

Note    Once you have completed your trial tenant signup, you will be redirected to the Azure account portal and can proceed to the Azure management portal by clicking Portal at the top right corner of your screen.

Adding the Azure trial to the Office 365 account

Once you have signed up and established your organization with an account in Office 365 Enterprise E3, you can then add an Azure trial subscription to your Office 365 account. This can be achieved by accessing the Azure Sign Up page at https://account.windowsazure.com/SignUp with your Office 365 global administrator account. You need to select Sign in with your organizational account for that purpose.

Note    You can log into the Office 365 administrator portal and go to the Azure Sign Up page, or go directly to the signup page, select Sign in with your organizational account, and log in with your Office 365 global administrator credentials.

At this stage, you should have an Office 365 Enterprise E3 trial subscription with an Azure trial subscription.

Preparing the local environment for Azure

Azure PowerShell is a set of modules that provide cmdlets to manage Azure with Windows PowerShell. You can use the cmdlets to create, test, deploy, and manage solutions and services delivered through the Azure platform. In most cases, the cmdlets can be used for the same tasks as the Azure portal, such as creating and configuring cloud services, virtual machines, virtual networks, and web apps.

Installing and configuring Azure PowerShell

The configuration of Azure PowerShell on a local computer consists of:

  • Installing Azure PowerShell,
  • Verifying that Azure PowerShell can run scripts,
  • Verifying that WinRM allows Windows PowerShell to connect, and configuring WinRM to support basic authentication.

Note that this local computer must have Internet connectivity.
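
As a sketch, the last two verifications above can be performed from an elevated Windows PowerShell prompt as follows; the RemoteSigned policy value is an assumption to adapt to your organization's requirements:

PS C:\> Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
PS C:\> winrm quickconfig
PS C:\> winrm set winrm/config/client/auth '@{Basic="true"}'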

Installing Azure PowerShell

The preferred way to install Azure PowerShell is to use PowerShell Gallery.

Note    Installing items from the PowerShell Gallery requires the latest version of the PowerShellGet module, which is available in Windows 10, in Windows Management Framework (WMF) 5.0, or in the MSI-based installer (for PowerShell 3 and 4). If the PowerShellGet module is not already available in your current configuration, it is available at https://www.powershellgallery.com.

To install the latest Azure PowerShell from the PowerShell Gallery, proceed with the following steps:

  1. Open an elevated Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.
  2. Run the following command to install the Azure Resource Manager (ARM) modules:
PS C:\> Install-Module AzureRM

Note    For information on Azure Resource Manager (ARM), see article Azure Resource Manager overview.

  3. Run the following commands to install the (legacy) Azure Service Management (ASM) modules:
PS C:\> Install-Module Azure

Note    For information on the differences between Azure Service Manager (ASM) and Azure Resource Manager (ARM), see eponym blog post Difference between Azure Service Manager and Azure Resource Manager.

  4. Run the following command to make sure the Azure PowerShell module is available after you install:
PS C:\> Get-Module –ListAvailable

At this stage, you can run the cmdlets from Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.

Connecting to your Azure subscription with Azure PowerShell

To connect to your Azure subscription with the above cmdlets, proceed with the following steps:

  • Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.
  • Run the following command:
PS C:\> Add-AzureRmAccount

  • Type "Y" if prompted. A Sign in to your account dialog appears.

  • Type the email address. Depending on your email address, you may be redirected to an alternative sign-in page.
  • Type the password associated with your account and click Sign in.
  • Azure authenticates you, saves the credential information, and then closes the dialog. A message states that your subscription is now selected as the default subscription.
  • Once connected to your default subscription, you can use the built-in Help system to list and get help about the cmdlets in the Azure PowerShell module. To list the available cmdlets for ARM, run the following command:
PS C:\> help AzureRM

You can then display help about a specific cmdlet by typing help followed by the name of the cmdlet, for example "help New-AzureRmVM".

Note    For additional information, see articles Get started with Azure PowerShell cmdlets and Manage Azure resources with PowerShell and Resource Manager.

Discovering the ARM API

The ARM API is particularly rich. You can leverage the Azure Resource Explorer tool to discover it. This web tool is available at https://resources.azure.com/.

Note    For more information, see article Azure Resource Explorer: a new tool to discover the Azure API.

You are now ready to setup the Windows Server 2012 R2 (or Windows Server 2016) base configuration needed for the test lab.

This is the purpose of the next section.

Setting up a base configuration test lab

By following the instructions outlined hereafter, you should be able to successfully prepare your "on-premises" test lab environment, based on virtual machines (VMs) running in Azure, and then deploy, install, and configure the test environment.

Important note    Individual virtual machines (VMs) are needed to separate the services provided on the network and to clearly show the desired functionality. This being said, the suggested configuration is neither designed to reflect best practices nor does it reflect a desired or recommended configuration for a production network. The configuration, including IP addresses and all other configuration parameters, is designed only to work on a separate test lab networking environment.

Any modifications that you make to the configuration details provided in the rest of this document may affect or limit your chances of successfully setting up the Azure-based test environment that will serve as the basis for the single sign-on configuration with Azure AD/Office 365. We recommend following this guide as-is first to familiarize yourself with the steps involved, before attempting a deployment on an environment with a different configuration.

Microsoft has successfully built the suggested environment with Azure IaaS, and Windows Server 2012 R2 (or Windows Server 2016) virtual machines.

In order to complete the document's walkthrough, you need an environment that consists of the following components for the Azure-based test lab infrastructure:

  • Two computers running Windows Server 2012 R2 (or Windows Server 2016) (named DC1 and DC2 respectively by default) that will be configured as domain controllers, with test user and group accounts, and as Domain Name System (DNS) servers. DC1 will host Azure AD Connect for the sync between the Azure-based test lab infrastructure and the Azure AD/Office 365 subscription. In addition, DC2 will be configured as an enterprise root certification authority (PKI server),
  • Two intranet member servers running Windows Server 2012 R2 (or Windows Server 2016) (named ADFS1 and ADFS2 respectively by default) that will be configured as an AD FS farm,
  • Two Internet-facing member servers running Windows Server 2012 R2 (or Windows Server 2016) (named WAP1 and WAP2 respectively by default) that will be configured as Web servers for the Web Application Proxy (WAP) farm.

Note     Windows Server 2012 R2 and Windows Server 2016 offer businesses and hosting providers a scalable, dynamic, and multitenant-aware infrastructure that is optimized for the cloud. For more information, see the Microsoft TechNet Windows Server 2012 R2 homepage and Windows Server 2016 homepage.

These Azure VMs will enable you to:

  • Connect to the Internet to install updates, and access Internet resources in real time.
  • Configure them later, in the other parts of this series, with Azure AD Connect to finally obtain a relevant Azure-based test infrastructure.
  • Remotely manage them using a Point-to-Site (P2S) connection and then Remote Desktop (RDP) connections from your computer connected to the Internet or to your organization network.

Note    You must be logged on as a member of the Domain Admins group or a member of the Administrators group on each computer to complete the tasks described in this guide. If you cannot complete a task while you are logged on with an account that is a member of the Administrators group, try performing the task while you are logged on with an account that is a member of the Domain Admins group.

  • Create snapshots so that you can easily return to a desired configuration for further learning and experimentation.

For illustration purposes, we've opted to configure the domain litware369.com (LITWARE369). You will instead have to choose a domain name of your own. To check availability, you can for instance use the domain search capability provided by several popular domain name registrars.

Whenever a reference to litware369.com is made in a procedure, replace it with the DNS domain name of your choice. Likewise, any reference to LITWARE369 should be substituted with the NETBIOS domain name of your choice.

For the sake of simplicity, the same password "Pass@word1!?" is used throughout the procedures detailed in this document. This is neither mandatory nor recommended in a real world scenario.

To perform all the tasks in this guide, we will use the local administrator account AzureAdmin or alternatively the LITWARE369 domain administrator account AzureAdmin for each VM, unless instructed otherwise.
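
Where a later procedure needs to supply these credentials non-interactively, a PSCredential object can be built as follows. This is a sketch only; embedding the password in clear text, as done here to match the convention above, is not suitable outside a disposable test lab:

PS C:\> $secPwd = ConvertTo-SecureString "Pass@word1!?" -AsPlainText -Force
PS C:\> $cred = New-Object System.Management.Automation.PSCredential ("LITWARE369\AzureAdmin", $secPwd)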

Note    When you configure Windows Server 2012 R2 (or Windows Server 2016), you are required to click Continue or Yes in the User Account Control (UAC) dialog box for some tasks. Several of the configuration tasks require UAC approval. When you are prompted, always click Continue or Yes to authorize these changes. Alternatively, see the Appendix of this guide for instructions about how to set the UAC behavior of the elevation prompt for administrators.

Deploying the base workloads in Azure

The base workloads deployment in Azure leverages the Azure Resource Manager (ARM) template adfs-6vms-regular-template-based available in GitHub along with the article AD FS deployment in Azure.

Note    For information on Azure Resource Manager, see the whitepaper Getting started with Azure Resource Manager. More in-depth information can be found in the whitepaper World Class ARM Templates Considerations and Proven Practices.
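
For orientation, once the template and its parameters file have been prepared as described in the following sections, the deployment itself boils down to creating a resource group and launching a template deployment. The resource group name litware369-rg and the location below are assumptions for illustration purposes:

PS C:\> New-AzureRmResourceGroup -Name "litware369-rg" -Location "North Europe"
PS C:\> New-AzureRmResourceGroupDeployment -ResourceGroupName "litware369-rg" -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json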

Downloading the ARM template

Since the ARM template is available on GitHub, you can not only get the ARM template source package - notably to modify the azuredeploy.json and azuredeploy.parameters.json files to accommodate your needs - but also clone the Git repo, read and modify the JSON files, and submit pull requests, just like with any other open source package you might find on GitHub.

The next two sections explore the two possible options. If you want to deploy a Windows Server 2012 R2 based configuration, you can opt for either option. However, if you would rather leverage Windows Server 2016, you should go with the second option, which provides a smooth path to reference modified linked templates in the main azuredeploy.json template.

Getting the ARM template

To get the ARM template and specify the parameters for your test lab environment, you can simply download the entire source package as an archive file from the GitHub repo.

Note    If you are new to GitHub, see the free eBook GitHub Succinctly for a 101 tour.

To download the adfs-6vms-regular-template-based source package from GitHub, proceed with the following steps:

  1. Click Clone or Download.

  2. Click Download ZIP.
  3. Save the adfs-6vms-regular-template-based-master.zip file on your local machine.
  4. Unblock the downloaded adfs-6vms-regular-template-based-master.zip file so that its content can be executed in your environment.
  5. Extract the content of the adfs-6vms-regular-template-based-master.zip file to a folder on your local disk, for example C:\adfs-6vms-regular-template-based in our illustration.

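
The unblock and extract steps above can also be performed from a Windows PowerShell prompt. A minimal sketch, assuming the archive was saved under C:\:

PS C:\> Unblock-File -Path C:\adfs-6vms-regular-template-based-master.zip
PS C:\> Expand-Archive -Path C:\adfs-6vms-regular-template-based-master.zip -DestinationPath C:\adfs-6vms-regular-template-based
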
Cloning the ARM template

As mentioned above, the ARM template source package is stored on GitHub, which uses Git as a source control system.

To access and customize the ARM template and the related linked templates, you should have some basic familiarity with Git, GitHub and/or even Visual Studio, but the following steps provide some information and links to get you started.

Note    For information on how to set up Git and GitHub, see the article Set up Git on the GitHub site.

To access the source package and further invest/contribute to it, you need to i) fork the Git repo that contains it and ii) clone it on your local computer.

To fork the Git repo, proceed with the following steps:

  1. Click Fork in the upper right corner of your browser to fork your own copy of the ARM template to your account, and then specify where to fork the repository if prompted.

You can then clone the fork by using the GitHub app or on the command line in the GitHub shell.

To clone the repo in the GitHub Shell, proceed with the following steps:

  1. From the previous browsing session, click Clone or Download.

  2. Copy the provided web URL, for example in our illustration:

    https://github.com/philber/adfs-6vms-regular-template-based.git

  3. Open a Git Shell by double-clicking the eponym icon on your local computer desktop, and then run the following command from the prompt:
C:\Users\philber\Documents\GitHub> git clone https://github.com/philber/adfs-6vms-regular-template-based.git

The cloning starts.

C:\Users\philber\Documents\GitHub> git clone https://github.com/philber/adfs-6vms-regular-template-based.git
Cloning into 'adfs-6vms-regular-template-based'...
remote: Counting objects: 208, done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 208 (delta 9), reused 0 (delta 0), pack-reused 185
Receiving objects: 95% (198/208), 20.01 KiB | 18.00 KiB/s
Receiving objects: 100% (208/208), 48.39 KiB | 18.00 KiB/s, done.
Resolving deltas: 100% (129/129), done.
C:\Users\philber\Documents\GitHub>

The ARM template source package is available in the adfs-6vms-regular-template-based folder under %UserProfile%\Documents\GitHub.

Regardless of the chosen option, i.e. getting the package vs. cloning the package, the source package is located under a folder named adfs-6vms-regular-template-based in our illustration.

Important note    We will later refer to this as the adfs-6vms-regular-template-based folder in this document.

Choosing between Windows Server 2012 R2 and Windows Server 2016

If you want to deploy a Windows Server 2012 R2 based configuration, you can skip this section.

Specifying a new base URL for the linked templates

The azuredeploy.json template file now needs to be updated to reflect the above fork, so that the modified linked templates in the fork are taken into account for the deployment.

For that purpose, to specify a new base URL for the linked templates, proceed with the following steps:

  1. Navigate to the folder where you've extracted the ARM template, i.e. the adfs-6vms-regular-template-based folder.
  2. Open the azuredeploy.json file with the editor of your choice. You can for instance use Visual Studio Code.
  3. Scroll down to line 190.

},
"variables": {
  "baseUrl": "https://raw.githubusercontent.com/paulomarquesc/adfs-6vms-regular-template-based/master/",
  "storageAccountNamePrefix": "[concat(uniquestring(resourceGroup().id),'sa')]",
  "deployStorageAccountsUrl": "[concat(variables('baseUrl'),'/deployStorageAccounts.json')]",
  "deployPublicIPsUrl": "[concat(variables('baseUrl'),'/deployPublicIPs.json')]",
  "publicIpName": "wapLbPip",
  "publicIPAddressType": "Static",
  "deployAvailabilitySetsUrl": "[concat(variables('baseUrl'),'/deployAvailabilitySets.json')]",
  "availabilitySetNames": [
    "addc-as",
    "adfs-as",
    "wap-as"
  ],
  4. Modify the baseUrl entry to now point to the fork, as follows in our illustration:

},
"variables": {
  "baseUrl": "https://raw.githubusercontent.com/philber/adfs-6vms-regular-template-based/master/",
  "storageAccountNamePrefix": "[concat(uniquestring(resourceGroup().id),'sa')]",
  "deployStorageAccountsUrl": "[concat(variables('baseUrl'),'/deployStorageAccounts.json')]",
  "deployPublicIPsUrl": "[concat(variables('baseUrl'),'/deployPublicIPs.json')]",
  "publicIpName": "wapLbPip",
  "publicIPAddressType": "Static",
  "deployAvailabilitySetsUrl": "[concat(variables('baseUrl'),'/deployAvailabilitySets.json')]",
  "availabilitySetNames": [
    "addc-as",
    "adfs-as",
    "wap-as"
  ],
  5. Save the file, and then close it.

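
Alternatively, the baseUrl update can be scripted from a Windows PowerShell prompt instead of being edited by hand. A sketch, assuming your fork lives under the philber account as in our illustration, and that the template was extracted under C:\adfs-6vms-regular-template-based:

PS C:\> $path = "C:\adfs-6vms-regular-template-based\azuredeploy.json"
PS C:\> (Get-Content $path -Raw) -replace "paulomarquesc", "philber" | Set-Content $path
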
Listing the available VM images

To list the available VM images in your Azure subscription, proceed with the following steps:

  1. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then navigate to the above folder.
  2. Connect to your Azure subscription as per section § Connecting to your Azure subscription with Azure PowerShell.
  3. Run the following command.
PS C:\> Get-AzureRmVMImageSku -Location "North Europe" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer"
Skus Offer PublisherName Location Id
---- ----- ------------- -------- --
2008-R2-SP1 WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2008-R2-SP1-BYOL WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2012-Datacenter WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2012-Datacenter-BYOL WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2012-R2-Datacenter WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2012-R2-Datacenter-BYOL WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Datacenter WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Datacenter-Server-Core WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Datacenter-with-Containers WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Nano-Server WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Nano-Server-Technical-Preview WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
2016-Technical-Preview-with-Containers WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
Windows-Server-Technical-Preview WindowsServer MicrosoftWindowsServer northeurope /Subscriptions/8848a529-9d69-4049-8469-8218547a61e2/Providers/Microsoft.Compu...
PS C:\> _
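As an aside, the SKU listing above can also be filtered programmatically. The following minimal Python sketch (the SKU names are copied from the output above) keeps only the plain Datacenter images, mimicking what a Where-Object filter would do in PowerShell:

```python
# SKU names copied from the Get-AzureRmVMImageSku output above.
skus = [
    "2008-R2-SP1", "2008-R2-SP1-BYOL",
    "2012-Datacenter", "2012-Datacenter-BYOL",
    "2012-R2-Datacenter", "2012-R2-Datacenter-BYOL",
    "2016-Datacenter", "2016-Datacenter-Server-Core",
    "2016-Datacenter-with-Containers", "2016-Nano-Server",
]

# Keep only the plain Datacenter editions (no BYOL, Core, or container variants).
datacenter = [s for s in skus if s.endswith("Datacenter")]
print(datacenter)  # → ['2012-Datacenter', '2012-R2-Datacenter', '2016-Datacenter']
```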
Specifying the VM image to deploy

The deployVMs.json linked template file notably contains the VM image to deploy. This image is set by default to 2012-R2-Datacenter for Windows Server 2012 R2 Datacenter.

The related parameter entry needs to be modified to deploy another VM image, for instance Windows Server 2016 Datacenter in our illustration.

To specify a new VM image, proceed with the following steps:

  1. Navigate to the folder where you've extracted the ARM template, i.e. the adfs-6vms-regular-template-based folder.
  2. Open the deployVMs.json file with the editor of your choice.
  3. Scroll down to line 56 to locate the sku entry.

"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2012-R2-Datacenter",
    "version": "latest"
  },
  4. Modify the line to set Windows Server 2016 Datacenter in our illustration. The value of the sku entry must correspond to one of the available VM images as per the previous section:

"storageProfile": {
  "imageReference": {
    "publisher": "MicrosoftWindowsServer",
    "offer": "WindowsServer",
    "sku": "2016-Datacenter",
    "version": "latest"
  },
  5. Save the file, and then close it.
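If you prefer to script this edit rather than use a text editor, the following Python sketch shows the idea. The traversal path into the template (resources carrying a storageProfile property) is an assumption and may need adjusting to the actual structure of deployVMs.json:

```python
import json

def set_vm_image_sku(template: dict, new_sku: str) -> dict:
    """Rewrite the imageReference sku of every VM resource in an ARM template."""
    for resource in template.get("resources", []):
        profile = resource.get("properties", {}).get("storageProfile")
        if profile and "imageReference" in profile:
            profile["imageReference"]["sku"] = new_sku
    return template

# Stand-in for json.load(open("deployVMs.json")), built from the fragment shown above.
template = {
    "resources": [{
        "type": "Microsoft.Compute/virtualMachines",
        "properties": {
            "storageProfile": {
                "imageReference": {
                    "publisher": "MicrosoftWindowsServer",
                    "offer": "WindowsServer",
                    "sku": "2012-R2-Datacenter",
                    "version": "latest",
                },
            },
        },
    }],
}

updated = set_vm_image_sku(template, "2016-Datacenter")
print(updated["resources"][0]["properties"]["storageProfile"]["imageReference"]["sku"])
```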
Recording the changes to the GitHub repo

To commit the changes to the repo in the GitHub Shell, proceed with the following steps:

  1. Open a Git Shell by double-clicking the eponymous icon on your local computer desktop.
  2. Move to the adfs-6vms-regular-template-based folder.
  3. View the current status:
C:\Users\philber\Documents\GitHub\adfs-6vms-regular-template-based [master ≡ +0 ~1 -0 !]> git status
On branch master
Your branch is up-to-date with 'origin/master'.
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)

        modified:   deployVMs.json

no changes added to commit (use "git add" and/or "git commit -a")
  4. Commit your changes to the GitHub repo:
C:\Users\philber\Documents\GitHub\adfs-6vms-regular-template-based [master ≡ +0 ~1 -0 !]> git commit -a -m "New OS version"
[master 3264055] New OS version
1 file changed, 1 insertion(+), 1 deletion(-)
C:\Users\philber\Documents\GitHub\adfs-6vms-regular-template-based [master ↑]>

Note    For more information, see article 2.2 Git Basics - Recording Changes to the Repository.

Specifying the parameters for the ARM template

We will use the declarative model of the Azure Resource Manager to deploy the base workloads in Azure. This requires setting the various parameters in a parameter file named azuredeploy.parameters.json.

Note    For information on the parameters of the ARM template, please refer to article AD FS deployment in Azure.

To specify the parameters for the ARM template, proceed with the following steps:

  1. Navigate to the folder where you've extracted the ARM template, i.e. the adfs-6vms-regular-template-based
    folder.
  2. Open the parameter file azuredeploy.parameters.json with the editor of your choice.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
  }
}
  3. Add the following content within the two brackets of the parameters JSON element. For the parameters not listed below, we will use the default values instead.
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": {
      "value": "North Europe"
    },
    "storageAccountType": {
      "value": "Standard_LRS"
    },
    "virtualNetworkUsage": {
      "value": "new"
    },
    "addcVMsSize": {
      "value": "Basic_A1"
    },
    "adfsVMsSize": {
      "value": "Standard_A1_v2"
    },
    "wapVMsSize": {
      "value": "Standard_A0"
    },
    "adminUsername": {
      "value": "AzureAdmin"
    },
    "adminPassword": {
      "value": "Pass@word1!?"
    }
  }
}

Important note    If you intend to deploy a Windows Server 2016 based configuration, we advise also using the Standard_A1_v2 size for the wapVMsSize parameter value.

  4. Save the parameter file, and then close it.
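Because a malformed parameter file (for example a trailing comma or an unquoted value) makes the deployment fail at parse time, a quick syntax and shape check can save a round-trip. The following Python sketch is illustrative, not part of the official tooling; it validates that the file is well-formed JSON and that every parameter carries a value key:

```python
import json

# A trimmed copy of the parameter file content shown above.
PARAMETERS_JSON = """
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "location": { "value": "North Europe" },
    "adminUsername": { "value": "AzureAdmin" },
    "adminPassword": { "value": "Pass@word1!?" }
  }
}
"""

def check_parameter_file(text: str) -> list:
    """Return the names of parameters that are not wrapped in a {"value": ...} object."""
    doc = json.loads(text)  # raises ValueError on JSON syntax errors
    problems = []
    for name, entry in doc.get("parameters", {}).items():
        if not isinstance(entry, dict) or "value" not in entry:
            problems.append(name)
    return problems

print(check_parameter_file(PARAMETERS_JSON))  # → []
```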

Deploying the resource group based on the ARM template

To deploy the base workloads in Azure, proceed with the following steps:

  1. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then navigate to the above folder.
  2. Connect to your Azure subscription as per section § Connecting to your Azure subscription with Azure PowerShell.
  3. Move to the adfs-6vms-regular-template-based folder on your local computer (C:\adfs-6vms-regular-template-based or %UserProfile%\Documents\GitHub in our illustration).
  4. Run the following command to create a resource group in your subscription for the base workloads:
PS C:\adfs-6vms-regular-template-based> New-AzureRMResourceGroup -Name "LITWARE369-RG" -Location "North Europe"

ResourceGroupName : LITWARE369-RG
Location : northeurope
ProvisioningState : Succeeded
Tags :
ResourceId : /subscriptions/8848a529-9d69-4049-8469-8218547a61e2/resourceGroups/LITWARE369-RG

PS C:\adfs-6vms-regular-template-based> _

  5. Run the following command to deploy the base workloads in your subscription:
PS C:\adfs-6vms-regular-template-based> New-AzureRMResourceGroupDeployment -Name myTestLabDeployment -ResourceGroupName "LITWARE369-RG" -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json
DeploymentName : myTestLabDeployment
ResourceGroupName : LITWARE369-RG
ProvisioningState : Succeeded
Timestamp : 1/2/2017 5:31:46 PM
Mode : Incremental
TemplateLink :
Parameters :
Name Type Value
=============== ========================= ==========
location String North Europe
storageAccountType String Standard_LRS
virtualNetworkUsage String new
virtualNetworkName String adfs-infra-vnet
virtualNetworkResourceGroupName String n/a
virtualNetworkAddressRange String 10.0.0.0/16
internalSubnetName String Internal-sn
internalSubnetAddressRange String 10.0.0.0/24
dmzSubnetAddressRange String 10.0.1.0/24
dmzSubnetName String DMZ-sn
addc01NicIPAddress String 10.0.0.101
addc02NicIPAddress String 10.0.0.102
adfs01NicIPAddress String 10.0.0.201
adfs02NicIPAddress String 10.0.0.202
wap01NicIPAddress String 10.0.1.101
wap02NicIPAddress String 10.0.1.102
adfsLoadBalancerPrivateIpAddress String 10.0.0.200
addcVmNamePrefix String dc
adfsVmNamePrefix String adfs
wapVmNamePrefix String wap
addcVMsSize String Basic_A1
adfsVMsSize String Standard_A1_v2
wapVMsSize String Standard_A0
adminUsername String AzureAdmin
adminPassword SecureString
Outputs :
DeploymentDebugLogLevel :
PS C:\adfs-6vms-regular-template-based> _

The above command creates a new deployment by using the downloaded ARM template and the modified parameter file for the parameter values to honor.

You can troubleshoot your deployment by looking at either the audit logs, or the deployment operations.

Note    For more information, see article View deployment operations with Azure PowerShell.

Note     For help with resolving particular deployment errors, see article Resolve common errors when deploying resources to Azure with Azure Resource Manager

This command executes the following tasks for you:

  1. Create a resource group for the test lab environment.
  2. Create a single virtual network (VNET) for the VMs of the test lab environment named adfs-infra-vnet. This VNET contains two subnets: DMZ-sn and Internal-sn.
  3. Create the network security groups (NSG) that contain a list of Access Control List (ACL) rules to allow or deny network traffic to the VMs in the VNET, namely DMZ-sn-nsg and Internal-sn-nsg, which are respectively associated with the above two subnets DMZ-sn and Internal-sn.
  4. Create the availability sets for the DC, the AD FS and the WAP roles, respectively named addc-as, adfs-as, and wap-as.
  5. Create an internal load balancer (ILB) for the AD FS farm named adfs-lb.
  6. Create an Internet-facing (public) load balancer for the WAP farm named wap-lb.
  7. Create storage accounts to notably store the VHDs of the VMs as blobs.
  8. Create the public IP addresses of the test lab environment.
  9. Create the VMs of the test lab environment: dc1, dc2, adfs1, adfs2, wap1, and wap2.
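The address plan behind these tasks can be sanity-checked offline. The following Python sketch, using only values taken from the deployment output above, verifies that each static IP falls inside its intended subnet:

```python
import ipaddress

# Subnet ranges from the deployment parameters above.
subnets = {
    "Internal-sn": ipaddress.ip_network("10.0.0.0/24"),
    "DMZ-sn": ipaddress.ip_network("10.0.1.0/24"),
}

# Static IP plan from the deployment output above.
plan = {
    "10.0.0.101": "Internal-sn",  # dc1
    "10.0.0.102": "Internal-sn",  # dc2
    "10.0.0.201": "Internal-sn",  # adfs1
    "10.0.0.202": "Internal-sn",  # adfs2
    "10.0.1.101": "DMZ-sn",       # wap1
    "10.0.1.102": "DMZ-sn",       # wap2
    "10.0.0.200": "Internal-sn",  # AD FS internal load balancer
}

for ip, subnet in plan.items():
    assert ipaddress.ip_address(ip) in subnets[subnet], f"{ip} outside {subnet}"
print("all static IPs fall inside their subnets")
```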

A deeper look at the deployed configuration

A total of 24 resources have been created for you by the ARM template. These resources constitute the base workloads of your test lab environment.

To view the various resources created by the ARM templates, proceed with the following steps:

  1. Open a browsing session and navigate to the Azure portal at https://portal.azure.com.
  2. Sign in with your administrative credentials to your Azure subscription in which you've deployed the test lab environment.
  3. On the left pane of the Azure portal, click Resource groups. A new blade opens up.

Note    A blade is one piece of the overall view. You can think of a blade as a window.

  4. Click LITWARE369-RG in the list. An eponymous blade opens up.

Let's have a look at these resources.

Virtual network and subnets

The two subnets DMZ-sn and Internal-sn are created in a single virtual network (VNET), i.e. adfs-infra-vnet, rather than in two separate VNETs, so that a VNET-to-VNET gateway is not required for communications within the test lab environment.

Note    For more information, see article Virtual networks.

Network security groups

A network security group (NSG) contains a list of access control list (ACL) rules that allow or deny network traffic to your VM instances in a VNET.

Note    For more information, see article Network security groups.

A network security group (NSG) is associated with each of the above subnets, namely DMZ-sn-nsg for the DMZ-sn subnet and Internal-sn-nsg for the internal Internal-sn subnet, respectively. The defined ACL rules apply to all the VM instances in those subnets.

The following security rules are defined for the Internal-sn-nsg NSG to secure the internal subnet:

  • Inbound security rules:

  • Outbound security rule:

Likewise, the following rules are defined for the DMZ-sn-nsg NSG to secure the DMZ subnet:

  • Inbound security rules:

  • Outbound security rule:

Availability sets

Availability sets are created for each role (DC, AD FS, and WAP) in the test lab environment, namely addc-as, adfs-as, and wap-as. They contain 2 VMs each: DC1 and DC2, ADFS1 and ADFS2, WAP1 and WAP2. This helps achieve higher availability for each role. While creating the availability sets, it is essential to decide on the following:

  • Fault domains. VMs in the same fault domain share the same power source and physical network switch. A minimum of 2 fault domains is recommended. The default value of 3 is used for the purpose of this deployment.
  • Update domains. VMs belonging to the same update domain are restarted together during an update. You want to have a minimum of 2 update domains. The default value of 5 is used for the purpose of this deployment.

Note    For more information, see article Manage the availability of virtual machines.
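To make the fault/update domain behavior concrete, the following Python sketch models the round-robin placement Azure performs across the domains of an availability set. The modeling is a simplification for illustration, not an Azure API:

```python
# Model Azure's round-robin assignment of availability-set VMs to
# fault domains and update domains (defaults: 3 and 5, as noted above).
def place(vms, fault_domains=3, update_domains=5):
    """Return {vm: (fault_domain, update_domain)} for each VM, in creation order."""
    return {vm: (i % fault_domains, i % update_domains)
            for i, vm in enumerate(vms)}

# The two AD FS servers land in different fault and update domains,
# so a rack failure or a platform update never takes both down at once.
print(place(["adfs1", "adfs2"]))
```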

Virtual machines

As already outlined, the following six machines are created for this deployment to host the different roles in the infrastructure we'd like to model.

Machine   Role    Subnet        Availability set   IP Address
DC1       DC      Internal-sn   addc-as            10.0.0.101 (Static)
DC2       DC      Internal-sn   addc-as            10.0.0.102 (Static)
ADFS1     ADFS    Internal-sn   adfs-as            10.0.0.201 (Static)
ADFS2     ADFS    Internal-sn   adfs-as            10.0.0.202 (Static)
WAP1      WAP     DMZ-sn        wap-as             10.0.1.101 (Static)
WAP2      WAP     DMZ-sn        wap-as             10.0.1.102 (Static)

Static IP addresses are used. This is recommended since you will be managing the DNS yourself later in this walkthrough: the DNS role service instantiated when promoting DC1 and DC2 as domain controllers will host the DNS records for the AD DS domain.

The VM pane should look like below after the deployment is completed:

Load balancers

Azure Load Balancer delivers high availability and network performance to your workloads. It is a Layer 4 (TCP, UDP) load balancer that distributes incoming traffic among healthy instances of services defined in a load-balanced set.

Note    For more information, see article Azure Load Balancer overview.

An internal load balancer (ILB) is created for the AD FS availability set adfs-as, namely adfs-lb. The ILB is configured with one load balancing rule for TCP on port 443 (HTTPS):

based on a probe configured for the same protocol and port:

Note    For more information, see article Create an internal load balancer using PowerShell.

Likewise, an Internet-facing (public) load balancer is created with a public IP address for the WAP availability set wap-as, namely wap-lb. This load balancer is configured with one load balancing rule for TCP on port 443 (HTTPS):

based on a probe configured for the same protocol and port:

Note    For more information, see article Creating an Internet-facing load balancer in Resource Manager by using PowerShell.

At this stage, all the Azure resources listed in the ARM template should have been successfully deployed in your subscription. These resources constitute an up and running base configuration that we will leverage in the next steps.

The next sections assume that you have such an environment in place.

It's now time to start the configuration of your test lab environment.

Accessing the various machines of the test lab environment

Configuring a Point-to-Site (P2S) connection to the test lab environment

Only the virtual machines WAP1 and WAP2 have public IP addresses configured by the ARM template. As a consequence, the Connect capability of the Azure portal can only be used with these two machines to access the rest of the machines in the deployed test lab environment in Azure.

In order to access all the machines in the test lab environment, one option would be to first open an RDP connection to one of these two machines, and in turn open another RDP session from this machine to the targeted machine.

We will instead set up a Point-to-Site (P2S) configuration that allows you to create a secure connection from your client computer to the adfs-infra-vnet VNET of the test lab environment. Once the connection to the VNET is established, you can then access the various machines in the test lab environment through an RDP connection.

Such a P2S connection is composed of the following items: the above VNET with a VPN gateway, a root certificate .CER file (public key), a client certificate, and a VPN client configuration package on the client(s). VPN clients that connect to the VNET using this P2S connection receive an IP address from the client address pool.

Resource Group: Litware369-RG
Location: North Europe
Name: adfs-infra-vnet
GatewaySubnet: 10.0.2.0/24
Virtual network gateway name: adfs-infra-vnet-gw
Gateway type: VPN
VPN type: Route-based
Public IP address: adfs-infra-vnet-ip
Connection type: Point-to-site
Client address pool: 172.16.201.0/24
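These values can be checked for consistency before creating the gateway: the gateway subnet must sit inside the VNET address range without overlapping the existing subnets, and the client address pool must not overlap the VNET at all. A minimal Python sketch using the values listed above:

```python
import ipaddress

# Address plan values from the P2S configuration listed above.
vnet = ipaddress.ip_network("10.0.0.0/16")
existing = [ipaddress.ip_network("10.0.0.0/24"),   # Internal-sn
            ipaddress.ip_network("10.0.1.0/24")]   # DMZ-sn
gateway_subnet = ipaddress.ip_network("10.0.2.0/24")
client_pool = ipaddress.ip_network("172.16.201.0/24")

# GatewaySubnet lives inside the VNET but clashes with no existing subnet.
assert gateway_subnet.subnet_of(vnet)
assert not any(gateway_subnet.overlaps(s) for s in existing)

# The P2S client pool must lie entirely outside the VNET address space.
assert not client_pool.overlaps(vnet)
print("P2S address plan is consistent")
```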

To configure a Point-to-site (P2S) connection to the VNET of the test lab environment, proceed with the following steps:

Note    For more information, see articles Configure a Point-to-Site connection to a VNet using PowerShell and Configure a Point-to-Site connection to a VNet using the Azure Portal.

  1. Creating the certificates for P2S connection to the VNET.
  2. Configuring the VPN gateway for the VNET.
  3. Downloading the VPN client configuration package.
  4. Connecting to the VNET.
  5. Verifying the P2S connection to the VNET.
Creating the certificates for P2S connection to the VNET

Each VPN client that wants to connect to the VNET using the P2S connection must first have a client certificate installed that was generated from the root certificate. The root certificate will be a self-signed root certificate in our configuration.

To create a self-signed certificate, proceed with the following steps:

Note    For more information, see article Working with self-signed certificates for Point-to-Site connections.

  1. Click Download the standalone SDK.
  2. Click Save, and then Run once downloaded. Follow the instructions.
  3. Open a command prompt and create a self-signed root certificate for the P2S connection:
C:\> "C:\Program Files (x86)\Windows Kits\10\bin\x64\makecert.exe" -sky exchange -r -n "CN=Litware369P2SRootCert" -pe -a sha1 -len 2048 -ss My "Litware369P2SRootCert.cer"
  4. Obtain the public key of the self-signed root certificate. It will later be uploaded as part of the VPN gateway configuration for the P2S connection:
    1. From the command prompt, run certmgr.msc. Navigate under Certificates - Current User | Personal | Certificates.

    2. Right-click the Litware369P2SRootCert certificate, click All Tasks, and then click Export. The Certificate Export Wizard opens up.
    3. Click Next, select No, do not export the private key, and then click Next.
    4. On the Export File Format page, select Base-64 encoded X.509 (.CER), and then click Next.
    5. On the File to Export page, browse to the location to which you want to export the root certificate. For File name, name the certificate file, for example "Litware369P2SRootCert" in our configuration. Then click Next.
    6. Click Finish to export the certificate. Click OK to close the dialog.
  5. Now generate a client certificate from the self-signed root certificate:
C:\> "C:\Program Files (x86)\Windows Kits\10\bin\x64\makecert.exe" -n "CN=ClientCertificateName" -pe -sky exchange -m 96 -ss My -in "Litware369P2SRootCert" -is my -a sha1
Configuring the VPN gateway for the VNET

To create and configure the VPN gateway for the VNET, proceed with the following steps:

  1. Connect to your subscription as per section § Connecting to your Azure subscription with Azure PowerShell, and then navigate to the folder in which you've exported the Base-64 encoded X.509 (.CER) file for the root certificate.
  2. Create the VPN gateway for the VNET. Run the following commands in order.
    1. Set the variables for the cmdlets to run.
PS C:\> $rgName = "LITWARE369-RG"
PS C:\> $Location = "North Europe"
    2. Add a gateway subnet to the existing VNET. The gateway subnet must be named "GatewaySubnet".
PS C:\> $vNetName = "adfs-infra-vnet" 
PS C:\> $gWSubName = "GatewaySubnet"
PS C:\> $gWSubPrefix = "10.0.2.0/24"
PS C:\> $vnet = Get-AzureRmVirtualNetwork -Name $vNetName -ResourceGroupName $rgName
PS C:\> Add-AzureRmVirtualNetworkSubnetConfig -Name $gWSubName -VirtualNetwork $vnet -AddressPrefix $gWSubPrefix
PS C:\> $vnet = Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
    3. Request a dynamically assigned public IP address for the VPN gateway.
PS C:\> $gWIPName = "adfs-infra-vnet-ip"
PS C:\> $subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name $gWSubName -VirtualNetwork $vnet
PS C:\> $pip = New-AzureRmPublicIpAddress -Name $gWIPName -ResourceGroupName $rgName -Location $Location `
-AllocationMethod Dynamic
    4. Create the configuration for the VPN gateway. The VPN gateway configuration defines the subnet and the public IP address to use.
PS C:\> $gWIPconfName = "gwipconf"
PS C:\> $ipconf = New-AzureRmVirtualNetworkGatewayIpConfig -Name $gWIPconfName -Subnet $subnet -PublicIpAddress $pip
    5. Set the root certificate to use. You must point to the previously exported Base-64 encoded X.509 (.CER) file. See the previous section.
PS C:\> $p2sRootCertName = "Litware369P2SRootCert.cer"
PS C:\> $filePathForCert = ".\Litware369P2SRootCert.cer"
PS C:\> $cert = new-object System.Security.Cryptography.X509Certificates.X509Certificate2($filePathForCert)
PS C:\> $certBase64 = [system.convert]::ToBase64String($cert.RawData)
PS C:\> $p2sRootCert = New-AzureRmVpnClientRootCertificate -Name $p2sRootCertName -PublicCertData $certBase64
    6. Create the VPN gateway. The VPN gateway can take 20 minutes or more to create.
PS C:\> $gwName = "adfs-infra-vnet-gw"
PS C:\> $vPNClientAddressPool = "172.16.201.0/24"
PS C:\> New-AzureRmVirtualNetworkGateway -Name $gwName -ResourceGroupName $rgName `
-Location $location -IpConfigurations $ipconf -GatewayType Vpn `
-VpnType RouteBased -EnableBgp $false -GatewaySku Standard `
-VpnClientAddressPool $vPNClientAddressPool -VpnClientRootCertificates $p2sRootCert
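The -PublicCertData value built by the commands above is simply the Base64 encoding of the certificate's raw DER bytes. The following Python sketch reproduces that transformation on placeholder bytes (not a real certificate); with a real .cer file you would read the file in binary mode instead:

```python
import base64

# Placeholder DER prefix for illustration only; a real certificate would be
# loaded with open("Litware369P2SRootCert.cer", "rb").read().
der_bytes = b"\x30\x82\x01\x0a"

# Equivalent of [system.convert]::ToBase64String($cert.RawData) above.
cert_base64 = base64.b64encode(der_bytes).decode("ascii")
print(cert_base64)  # → MIIBCg==
```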

Clients connecting to the test lab environment using the P2S connection must have both a client certificate and a VPN client configuration package installed.

Downloading the VPN client configuration package

The VPN client configuration package contains information to configure the VPN client software that is built into Windows and is specific to the VPN that you want to connect to.

To download the VPN client configuration package, proceed with the following commands:

  1. From the previous PowerShell command prompt, download the VPN client configuration package.
PS C:\> $rgName = "LITWARE369-RG"
PS C:\> $gwName = "adfs-infra-vnet-gw"
PS C:\> Get-AzureRmVpnClientPackage -ResourceGroupName $rgName `
-VirtualNetworkGatewayName $gwName -ProcessorArchitecture Amd64

The cmdlet returns a URL link, for example in our configuration:

https://mdsbrketwprodsn1prod.blob.core.windows.net/cmakexe/777501b7-334c-4691-922c-b270192eea64/amd64/777501b7-334c-4691-922c-b270192eea64.exe?sv=2015-04-05&sr=b&sig=IiFh49z7GoO9AyORBL9jP7ARIBs%2F1HAyfqxnlwLbvKk%3D&st=2017-01-02T15%3A08%3A55Z&se=2017-01-02T16%3A08%3A55Z&sp=r&fileExtension=.exe

  2. Copy and paste the returned URL into a web browser to download the package.

  3. Click Save, and then Run to install the package on the client computer. Ignore the security warning if any. If you get a SmartScreen popup, click More info, and then Run anyway in order to install the package.
  4. A popup dialog opens up. Click Yes to confirm the installation.
Connecting to the VNET

To connect to the VNET, proceed with the following steps:

  1. Navigate to Network Settings and click VPN. Locate the VPN connection that you've created. It has the same name as the VNET: adfs-infra-vnet.

  2. Click adfs-infra-vnet, and then underneath click Connect. An adfs-infra-vnet dialog pops up.

  3. Click Connect to start the P2S connection to the adfs-infra-vnet VNET.

  4. Select Do not show this message again for this Connection, and then click Continue.

  5. Click Advanced options.

At this stage, the P2S connection to the VNET should now be established.

Verifying the P2S connection to the VNET

To verify the P2S connection, proceed with the following steps:

  1. Open an elevated command prompt.
  2. Run the following command:
C:\> ipconfig /all

The assigned IP address is one of the addresses within the P2S VPN client address pool that you've specified in the above configuration:

PPP adapter adfs-infra-vnet:

Connection-specific DNS Suffix . :
Description . . . . . . . . . . . : adfs-infra-vnet
Physical Address. . . . . . . . . :
DHCP Enabled. . . . . . . . . . . : No
Autoconfiguration Enabled . . . . : Yes
IPv4 Address. . . . . . . . . . . : 172.16.201.1(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.255.255
Default Gateway . . . . . . . . . :
DNS Servers . . . . . . . . . . . : 10.0.0.101
10.0.0.102
NetBIOS over Tcpip. . . . . . . . : Disabled
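If you want to automate this check, the assigned address can be pulled out of the ipconfig output with a simple pattern match. The following Python sketch runs against a trimmed copy of the output above:

```python
import re

# Trimmed sample of the ipconfig /all output shown above.
sample = """PPP adapter adfs-infra-vnet:
   IPv4 Address. . . . . . . . . . . : 172.16.201.1(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.255"""

# Extract the IPv4 address assigned to the PPP adapter.
match = re.search(r"IPv4 Address[ .]*: (\d+\.\d+\.\d+\.\d+)", sample)
print(match.group(1))  # → 172.16.201.1
```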

Connecting to a specific VM in the VNET

To connect to a specific machine in the test lab environment, proceed with the following steps:

  1. Open a command prompt and type the following command:
C:\> mstsc.exe

A Remote Desktop Connection dialog opens up.

  2. In Computer, type the name or IPv4 address of the machine on which you want to open a remote session, for example "DC1" or alternatively "10.0.0.101" for the DC1 computer, and then click Connect.
  3. Check Don't ask me again for connections to this computer and click Connect. A Windows Security dialog opens up.

  4. Click Use a different account and then log on as the local account AzureAdmin with "Pass@word1!?" as the password.
  5. Another Remote Desktop Connection dialog appears.

  6. Check Don't ask me again for connections to this computer and click Yes.

The connection is then established to the remote desktop of the targeted machine.

Disabling the IE Enhanced Security Configuration (ESC)

The configuration may require downloading files from the Internet; such downloads should consequently be authorized.

Since all the above computers are intended to run in a test lab environment, the IE Enhanced Security Configuration can be disabled for the course of the installation operations.

To disable the IE Enhanced Security Configuration (ESC), proceed with the following steps:

  1. Open a remote desktop connection on the DC1 computer. Follow the instructions as per above section. Log on as the local account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then run the following commands:
PS C:\> $adminKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}"
PS C:\> Set-ItemProperty -Path $adminKey -Name "IsInstalled" -Value 0
PS C:\> $userKey = "HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A8-37EF-4b3f-8CFC-4F3A74704073}"
PS C:\> Set-ItemProperty -Path $userKey -Name "IsInstalled" -Value 0
PS C:\> Stop-Process -Name Explorer
  2. Repeat the above steps on the DC2, ADFS1, ADFS2, WAP1, and WAP2 computers.

Configuring the domain controllers

To configure the domain controllers in our test lab environment, proceed with the following steps:

  1. Deploying a new Active Directory forest.
  2. Configuring public DNS forwarders.
  3. Creating DNS records.
  4. Configuring the VNET to use DC1 as the DNS Server.
  5. Adding a second domain controller to the Active Directory forest.
  6. Configuring the VNET to also use DC2 as a DNS Server.
  7. Joining the LITWARE369 domain.
  8. Creating test accounts.
  9. Allowing test accounts to log on locally.

The following subsections describe each of these steps in the context of our test lab environment. These steps are illustrated with Windows Server 2012 R2.

Deploying a new Active Directory forest

To deploy a new Active Directory forest in the test lab environment, proceed with the following steps:

  1. Open a remote desktop connection on the DC1 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.0.101" for the DC1 computer. Log on as the local account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.
  3. Run the following command to add the binaries required for Active Directory Domain Services (AD DS):
PS C:\> Add-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools -Restart

Once completed, the DC1 computer will reboot. You are now ready to install a new domain controller in a new forest.

  4. Repeat steps 1 and 2, and then run the following command to promote the DC1 computer into a domain controller.
PS C:\> Install-ADDSForest -DomainName "litware369.com" -InstallDns

This command installs a new forest named litware369.com, prompts you to provide and confirm the Directory Services Restore Mode (DSRM) password, and specifies that a DNS server should also be installed during the forest installation process. When prompted, type "Pass@word1!?" for the DSRM password.

Once completed, the DC1 computer will reboot.

Configuring public DNS forwarders

The previous steps have resulted in configuring a DNS server on the computer for name resolution, instead of the default Azure-provided name resolution.

We thus must ensure that our DNS servers are configured to use the root hints if no forwarders are available, so that we can correctly resolve names over the Internet in our test lab environment.

Note    For more information on the root hints, see the Root Servers page.

To configure the DNS servers to use the root hints, proceed with the following steps:

  1. Open a remote desktop connection on the DC1 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.0.101" for the DC1 computer. Log on as the LITWARE369 domain administrator account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and run the following command to start the DNS Manager console:
PS C:\> dnsmgmt.msc

The DNS Manager console opens up.

  3. In the console tree, select DC1.
  4. On the Action menu, click Properties. The DC1 Properties dialog opens up.
  5. Select the Forwarders tab.

  6. Ensure that Use root hints if no forwarders are available is checked.
  7. Click OK and close the DNS Manager console.
  8. From the above Windows PowerShell command prompt, type the following command to validate the resolution with the root hints:
PS C:\> dnscmd /ipvalidate /roothints 192.5.5.241
. completed successfully.
Raw Flags ResultCode NoTcp RTT IP Address
---------------------------------------------------------------------------
00001000 0 Success 0 10 192.5.5.241
Command completed successfully.
PS C:\> _

You should see Success as the result code.

Creating DNS records

To create the appropriate DNS records for our test lab environment, proceed with the following steps:

  1. Updating DNS with the internal load balancer.
  2. Assigning a DNS label to the public IP of the Internet load balancer.
  3. Updating DNS with the Internet load balancer.
  4. Updating your external domain registrar.

The following subsections describe each of these steps in the context of our test lab environment.

Updating DNS with the internal load balancer

DNS records must be added for the AD FS farm, the enterpriseregistration endpoint for the device registration with AD FS, etc.

To create the required DNS records, proceed with the following steps:

Note    For more information on the DNS cmdlets, see the article Domain Name System (DNS) Server Cmdlets.

  • From the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt on DC1, run the following command to add an A record named adfs for the internal load balancer adfs-lb in front of the AD FS farm:
PS C:\> Add-DnsServerResourceRecord -ZoneName "litware369.com" -A -Name "adfs" -IPv4Address "10.0.0.200"

Important note    If the DNS resolution of the AD FS service endpoint is performed through CNAME record lookup instead of through an A record lookup, you will be repeatedly prompted for credentials later in this lab during sign-in.

  • Run the following command to add a CNAME record for enterpriseregistration:
PS C:\> Add-DnsServerResourceRecord -CName -Name "enterpriseregistration" -HostNameAlias "adfs.litware369.com" -ZoneName "litware369.com"
Assigning a DNS label to the public IP of the Internet load balancer

To assign a DNS label to the public IP of the Internet load balancer, proceed with the following steps:

  1. Open a browsing session and navigate to the Azure portal at https://portal.azure.com.
  2. Click Resource groups on the left pane. A new blade opens up. Click LITWARE369-RG in the list. An eponymous blade opens up.
  3. Click the public IP address wapLbPip. An eponymous blade opens up with the public IP and its settings.
  4. Click Configuration.

  5. In DNS name label, provide a DNS label, for example "litware369fs" in our illustration.

This will become the public DNS label that you can access from anywhere, for example litware369fs.northeurope.cloudapp.azure.com in our configuration. You can then add an entry at your external DNS domain registrar for the federation service (like adfs.litware369.com) that resolves to the DNS label of the external load balancer (litware369fs.northeurope.cloudapp.azure.com). See the section below.

  1. Click Save.
Updating DNS with the Internet load balancer

DNS records must be added for the Internet load balancer wap-lb in front of the WAP farm.

To create the required CNAME DNS records, proceed with the following steps:

  • From the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt on DC1, run the following command to add a CNAME record named wap for the WAP farm:
PS C:\> Add-DnsServerResourceRecord -CName -Name "wap" -HostNameAlias "litware369fs.northeurope.cloudapp.azure.com" `
-ZoneName "litware369.com"
  • Run the following command to add a CNAME record for www:
PS C:\> Add-DnsServerResourceRecord -CName -Name "www" -HostNameAlias "litware369fs.northeurope.cloudapp.azure.com" `
-ZoneName "litware369.com"
Updating your external domain registrar

Furthermore, to externally resolve the adfs.litware369.com, enterpriseregistration.litware369.com, and www.litware369.com FQDNs so that they point to the above adfs-infra-vnet VNET in Azure, you will need to create the following CNAME records in the DNS zone (e.g. litware369.com in our configuration) at your domain registrar. The exact method depends on the chosen domain registrar.

You will need to externally resolve these FQDN names for the Web Application Proxy (WAP) servers.

Name                     Type     Value                                          TTL
adfs                     CNAME    litware369fs.northeurope.cloudapp.azure.com    3 hours
enterpriseregistration   CNAME    litware369fs.northeurope.cloudapp.azure.com    3 hours
www                      CNAME    litware369fs.northeurope.cloudapp.azure.com    3 hours

Configuring the VNET to use DC1 as the DNS Server

To configure the VNET to use DC1 as the DNS server, proceed with the following steps:

  1. Open a browsing session and navigate to the Azure portal at https://portal.azure.com.
  2. Click Resource groups on the left pane. A new blade opens up. Click LITWARE369-RG in the list. An eponymous blade opens up.
  3. Click the adfs-infra-vnet VNET resource. An eponymous blade opens up.
  4. Click DNS Server. An eponymous blade opens up.
  5. Select Custom for DNS servers.

  6. Under DNS SERVER, add "10.0.0.101" for the DC1 computer.

  7. Click Save.
  8. Restart the six VMs of the test lab environment to reflect this change in their configuration.

Adding a second domain controller to the Active Directory forest

To add a second domain controller to the litware369.com forest, proceed with the following steps:

  1. Open a remote desktop connection on the DC2 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.0.102" for the DC2 computer. Log on as the local account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.
  3. Run the following command to add the binaries required for Active Directory Domain Services (AD DS):
PS C:\> Add-WindowsFeature -name AD-Domain-Services -IncludeManagementTools -Restart

Once completed, the DC2 computer will reboot. You are now ready to add a new domain controller in LITWARE369 forest.

  4. Repeat steps 1 and 2, and then run the following commands in order to set your Administrator credentials.
PS C:\> $domain = "litware369.com"
PS C:\> $password = "Pass@word1!?" | ConvertTo-SecureString -asPlainText -Force
PS C:\> $username = "$domain\AzureAdmin"
PS C:\> $cred = New-Object System.Management.Automation.PSCredential($username,$password)
  5. Run the following command to promote the DC2 computer to a domain controller in the existing LITWARE369 forest.
PS C:\> Install-ADDSDomainController -DomainName "litware369.com" -InstallDns -Credential $cred

This command installs a domain controller and DNS server in the LITWARE369 domain using your domain admin credentials and prompts you to provide and confirm the Directory Services Restore Mode (DSRM) password. When prompted, type "Pass@word1!?" as the DSRM password.

Once completed, the DC2 computer will reboot.
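
If you prefer a fully unattended promotion, the DSRM prompt can be avoided by passing the password directly via the -SafeModeAdministratorPassword parameter of Install-ADDSDomainController. As a sketch, reusing the $cred variable set above:
PS C:\> $dsrmPassword = ConvertTo-SecureString "Pass@word1!?" -AsPlainText -Force
PS C:\> Install-ADDSDomainController -DomainName "litware369.com" -InstallDns -Credential $cred `
-SafeModeAdministratorPassword $dsrmPassword
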

Configuring the VNET to also use DC2 as a DNS Server

To configure the VNET to also use DC2 as a DNS server, proceed with the following steps:

  1. Open a browsing session and navigate to the Azure portal at https://portal.azure.com.
  2. Click Resource groups on the left pane. A new blade opens up. Click LITWARE369-RG in the list. An eponymous blade opens up.
  3. Click the adfs-infra-vnet VNET resource. An eponymous blade opens up.
  4. Click DNS Server. An eponymous blade opens up.
  5. Under DNS SERVER, add "10.0.0.102" for the DC2 computer.

  6. Click Save.
  7. Restart the six VMs of the test lab environment to reflect this change in their configuration.

Joining the LITWARE369 domain

To join all the remaining computers to the LITWARE369 domain, proceed with the following steps:

  1. Open a remote desktop connection on the ADFS1 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.0.201" for the ADFS1 computer. Log on as the local administrator account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then run the following commands in order.
PS C:\> $domain = "litware369.com"
PS C:\> $password = "Pass@word1!?" | ConvertTo-SecureString -asPlainText -Force
PS C:\> $username = "$domain\AzureAdmin"
PS C:\> $cred = New-Object System.Management.Automation.PSCredential($username,$password)
PS C:\> Add-Computer -DomainName $domain -Credential $cred -Restart

Once completed, the ADFS1 computer will reboot.

  3. Repeat the above steps with the ADFS2, WAP1, and WAP2 computers.

Important note    A Web Application Proxy (WAP) server can be deployed in a workgroup or as part of an Active Directory domain. This is a configuration choice. We opt here for the second option.

Creating test accounts

We will now create a test group and test user accounts in our domain litware369.com and add one of the user accounts to the group. These accounts are used to complete the walkthroughs later in the other parts of this whitepaper.

To create the test user accounts, proceed with the following steps:

  1. From the previous elevated Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, run the following command to create the user Robert Hatley with the following credentials: User name: "RobertH" and password: "Pass@word1!?":
PS C:\> Import-Module -Name ActiveDirectory 
PS C:\> New-ADUser -Name "Robert Hatley" -SamAccountName "roberth" -DisplayName "Robert Hatley" `
-AccountPassword (ConvertTo-SecureString "Pass@word1!?" -AsPlainText -Force) -ChangePasswordAtLogon $false `
-PasswordNeverExpires $true -Enabled $true -UserPrincipalName "roberth@litware369.com" -GivenName "Robert" `
-Surname "Hatley"
  2. Run the following command to create the user Janet Schorr with the following credentials: User name: "JanetS" and password: "Pass@word1!?":
PS C:\> New-ADUser -Name "Janet Schorr" -SamAccountName "janets" -DisplayName "Janet Schorr" `
-AccountPassword (ConvertTo-SecureString "Pass@word1!?" -AsPlainText -Force) -ChangePasswordAtLogon $false `
-PasswordNeverExpires $true -Enabled $true -UserPrincipalName "janets@litware369.com" -GivenName "Janet" `
-Surname "Schorr"
  3. Run the following command to create the group Finance:
PS C:\> New-ADGroup -Name "Finance" -SamAccountName "Finance" -GroupCategory Security -GroupScope Global `
-DisplayName "Finance" -Description "Members of this group belong to the Litware369 Finance Division"
  4. Run the following command to add the Robert Hatley account to the Finance group:
PS C:\> Add-ADGroupMember -Identity "Finance" -Members "roberth"  
  5. Run the following command to retrieve the security identifier (SID) of the Finance group. This value will be needed later at the end of the document when configuring AD FS for multi-factor authentication.
PS C:\> Get-ADGroup -Identity "Finance"
DistinguishedName : CN=Finance,CN=Users,DC=litware369,DC=com
GroupCategory : Security
GroupScope : Global
Name : Finance
ObjectClass : group
ObjectGUID : e16f7126-61d7-4a16-b8d9-7512ad5bf516
SamAccountName : Finance
SID : S-1-5-21-1479725894-3138805608-1555037044-2112
PS C:\> _
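
Since only the SID value is needed later, you can also capture it directly in a variable for reuse. This is a convenience sketch, with $financeSid being a name of our own choosing:
PS C:\> $financeSid = (Get-ADGroup -Identity "Finance").SID.Value
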

Allowing test accounts to log on locally

By default, a domain user is not allowed to log on locally to a member server such as the WAP1 and WAP2 computers. Group policy can be modified so that a domain user account can log on locally to a member server. Though this is NOT recommended in a production environment, it can be quite handy for testing purposes in a lab setup like this one, where there are only a few computers.

To modify the group policy settings to allow a domain user to log on locally to a member server, proceed with the following steps:

  1. Open an elevated command prompt if none, and run the following command:
PS C:\> gpmc.msc

A Group Policy Management window opens up.

  2. Double-click the name of the forest, double-click Domains, and double-click the name of the domain.

  3. Right-click Default Domain Policy, and then click Edit. A Group Policy Management Editor window opens up.

  4. In the console tree, expand Computer Configuration, Policies, Windows Settings, Security Settings, and Local Policies, and then click User Rights Assignment.
  5. In the details pane, double-click Allow log on locally.

  6. Check Define these policy settings, and then click Add User or Group. An Add User or Group dialog opens up.

  7. Click Browse to locate the accounts with the Select Users, Computers, or Groups dialog.

  8. Under Enter the object names to select, type "janets; roberth; administrators", click Check Names, and then click OK.
  9. Click OK in the Add User or Group dialog, and then click OK in the Allow log on locally Properties dialog box.
  10. Close the Group Policy Management Editor window.
  11. Close the Group Policy Management window.

Configuring the Enterprise root Certificate Authority (CA)

To configure the root Enterprise CA, proceed with the following steps:

  1. Installing and configuring the AD CS role service.
  2. Configuring an appropriate certificate template for SSL/TLS certificate (optional).

The following subsections describe each of these steps in the context of our test lab environment.

Installing and configuring the AD CS role service

To install the Active Directory Certificate Services (AD CS) role service, proceed with the following steps:

  1. Open a remote desktop connection on the DC2 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.0.102" for the DC2 computer. Log on as the LITWARE369 domain administrator account AzureAdmin with "Pass@word1!?" as password.
  2. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt.
  3. Run the following command to add the binaries required for Active Directory Certificate Services (AD CS) role service:
PS C:\> Add-WindowsFeature -name Adcs-Cert-Authority -IncludeManagementTools
Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True No Success {Active Directory Certificate Services, Ce...
PS C:\> _
  4. Run the following command:
PS C:\> Install-AdcsCertificationAuthority -CAType EnterpriseRootCa `
-CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -KeyLength 4096 `
-HashAlgorithmName SHA256 -ValidityPeriod Years -ValidityPeriodUnits 3

This command installs on the DC2 computer an Enterprise root CA with the Microsoft Software Key Storage Provider using the RSA algorithm, a 4096-bit key length, the SHA-256 hash algorithm, and a validity period of 3 years.

  5. When prompted, type "Y" to confirm the operation to perform.
PS C:\> Install-AdcsCertificationAuthority -CAType EnterpriseRootCa `
-CryptoProviderName "RSA#Microsoft Software Key Storage Provider" -KeyLength 4096 `
-HashAlgorithmName SHA256 -ValidityPeriod Years -ValidityPeriodUnits 3

Confirm
Are you sure you want to perform this action?
Performing the operation "Install-AdcsCertificationAuthority" on target "DC2".
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"): Y

ErrorId ErrorString
------- -----------
0
PS C:\> _

An ErrorId value of 0 indicates that the Enterprise root CA has been successfully installed.

Configuring an appropriate certificate template for SSL/TLS certificate (optional)

Services on the ADFS1, ADFS2, WAP1, and WAP2 computers will require secure sockets layer (SSL)/transport layer security (TLS) certificates.

The Web Server certificate template is the one conventionally used to request such an SSL certificate for a domain-joined computer. Its settings are perfectly appropriate when the certificate must be installed on the server that requests it. However, for a test lab environment, it can be convenient to be able to export both the certificate and the private key. In such a situation, these default settings are not suitable because they do not allow the private key to be exported.

Consequently, we will configure a new certificate template that inherits from this template, and thus presents the same characteristics as the original template, but with the ability to export the private key.

To configure a certificate template for SSL/TLS certificate, proceed with the following steps:

  • While still connected to the DC2 computer, from the Server Manager, click Tools and then Certification Authority. The Certification Authority console opens up.

  • Expand the certification authority litware369-DC2-CA so that you can see Certificate Templates. The name of the certification authority may differ if you have chosen another NetBIOS domain name and another name for the DC2 computer.
  • Right-click Certificate Templates and then click Manage. The Certificate Templates console opens up.

  • In the details pane of the Certificate Templates console, right-click the Web Server template and then click Duplicate Template. A Properties of New Template dialog opens up.

  • Select the Request Handling tab.

  • Check Allow private key to be exported.
  • Select the Security tab.

  • We must ensure that the domain computer accounts have the ability to enroll for the template. To do so, click Add. A Select Users, Computers, Service Accounts, or Groups dialog opens up.

  • In Select Users, Computers, Service Accounts, or Groups, type "Domain computers". Click Check Names, and then click OK.
  • Ensure that the group is selected and then select the Allow checkbox that corresponds to the Enroll permission.
  • Select the General tab.

  • Under Template display name, type a name that you want to use for the template, for example, "SSL Certificates" in our configuration.
  • Click OK.
  • Close the Certificate Templates console and return to the Certification Authority console.
  • In the console tree of the Certification Authority console, right-click Certificate Templates, click New, and then click Certificate Template to Issue. An Enable Certificate Templates dialog opens up.

  • In the Enable Certificate Templates dialog, select the new certificate template that you have just configured and then click OK.

Updating the HOSTS file on the WAP farm

The WAP servers need to contact the AD FS servers, so you need to tell the WAP servers how to reach them. The simplest way of doing this consists in editing the local HOSTS file on the WAP1 computer. Keep in mind that we don't have connectivity or the ability to route to the internal IP addresses of the individual AD FS servers, so we need to route to the front-end IP of the internal load balancer that fronts the two AD FS computers ADFS1 and ADFS2.

To update the HOSTS file, proceed with the following steps:

  1. Open a remote desktop connection on the WAP1 computer. Follow the instructions as per section § Accessing the various machines of the test lab environment and specify in step 2 "10.0.1.101" for the WAP1 computer. Log on as the LITWARE369 domain administrator account AzureAdmin with "Pass@word1!?" as password.
  2. Launch the text editor of your choice, navigate to C:\Windows\System32\drivers\etc and open the HOSTS file.
  3. Add the following line at the end of the file. This ensures that all communication regarding adfs.litware369.com ends up at the internal load balancer (ILB) adfs-lb and is appropriately routed to the AD FS farm.
10.0.0.200       adfs.litware369.com
  4. Save the file and close the editor.
  5. Repeat all the above steps on the WAP2 computer.
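
Alternatively, the entry can be appended from an elevated PowerShell prompt on each WAP computer. This is a minimal sketch that assumes the HOSTS file is in its default location:
PS C:\> Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.200       adfs.litware369.com"
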

Adding a domain to your Azure AD/Office 365 tenant

To provide seamless sign-in experiences for the user accounts in your organization, you must have an Azure AD directory that is integrated with your on-premises directory (AD DS).

Signing up for Office 365 automatically created an Azure AD directory for your organization (e.g. litware369.onmicrosoft.com). The next and final setup step consists in adding your on-premises Active Directory to your Azure AD/Office 365 subscription.

This section thus walks you through the process of adding a verified vanity domain (e.g. litware369.com) in your Azure AD/Office 365 subscription. This domain will be later federated with your on-premises Active Directory.

To add a domain to your Azure AD/Office 365 tenant, proceed with the following steps:

Note    For more information, see the article Add a custom domain name to Azure Active Directory.

  1. Open a browsing session and navigate to the Office 365 portal at https://portal.office.com.
  2. Sign in with your administrative credentials to your Office 365 subscription, i.e. the credentials created when you signed up for your cloud subscription.
  3. Click the Admin tile. The Office 365 admin center opens up in a new tab.
  4. In Domains, click Add a domain to start the setup wizard.

  5. In Enter a domain you own, specify your domain name, for example "litware369.com" in our illustration. Click Next.

  6. In Select your registrar, select your registrar, for example GoDaddy in our illustration. The wizard will automatically provide a sign-in to that service by opening a new tab.
  7. Sign in with your credentials at your domain registrar, and then create the required TXT record.
  8. Once created, click Verify in the wizard.

  9. Click Next.

  10. Leave Add the DNS records for me checked and click Next.
  11. A popup window opens up for you to enter your credentials at your domain registrar and to accept the creation of the DNS settings. Accept to perform the operation.

  12. Scroll down and click Next.

  13. Click Finish.

At this stage, the base configuration for the evaluation environment is now complete.

To avoid spending your credit when you are not working on the test lab environment, you can shut down the six VMs (DC1, DC2, ADFS1, ADFS2, WAP1, and WAP2).

To shut down the VMs of the test lab environment, proceed with the following steps:

  1. Open a browsing session and navigate to the Azure portal at https://portal.azure.com.
  2. Sign in with your administrative credentials to your Azure subscription in which you've deployed the test lab environment.
  3. On the left pane of the Azure portal, click virtual machines.

  4. On the virtual machine page, select wap1. A new blade opens up.

  5. Click Stop. A dialog pops up.
  6. Click Yes to confirm the shutdown.
  7. Repeat steps 4 to 6 with wap2, adfs2, adfs1, dc2, and then dc1.
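
The same shutdown can be scripted with the AzureRM cmdlets used elsewhere in this document. As a sketch, and assuming you are connected to your subscription as per section § Connecting to your Azure subscription with Azure PowerShell, the following stops and deallocates the six VMs in order:
PS C:\> $vmNames = "wap1", "wap2", "adfs2", "adfs1", "dc2", "dc1"
PS C:\> Foreach ($vmName in $vmNames) { Stop-AzureRmVM -ResourceGroupName "LITWARE369-RG" -Name $vmName -Force }
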

To resume working on the test lab environment, you will then need to start the six VMs that constitute it.

To start the VMs of the test lab environment, proceed with the following steps:

  1. From the Azure portal, click virtual machines.
  2. On the virtual machine page, select dc1. A new blade opens up.
  3. Click Start.

  4. Repeat steps 2 to 3 with dc2, adfs1, adfs2, wap1, and then wap2.
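
Similarly, as a sketch under the same connection assumption, the six VMs can be started again in order with:
PS C:\> $vmNames = "dc1", "dc2", "adfs1", "adfs2", "wap1", "wap2"
PS C:\> Foreach ($vmName in $vmNames) { Start-AzureRmVM -ResourceGroupName "LITWARE369-RG" -Name $vmName }
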

Note    For more information, see the article Manage virtual machines using Azure Resource Manager and PowerShell.

This concludes the second part of this whitepaper.

Appendix A. Applying security recommendations

This appendix suggests some recommendations to additionally apply to the test lab environment. This is optional in the context of such a test lab environment but this definitely constitutes a best practice for a production environment.

Installing Microsoft Antimalware extension in the VMs

Microsoft Antimalware for Azure Virtual Machines is a real-time protection capability that monitors the VMs in the test lab environment to detect and block malware.

Note    For more information, see the article Microsoft Antimalware for Azure Cloud Services and Virtual Machines.

To provision Microsoft Antimalware via PowerShell, proceed with the following steps:

  1. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then navigate to the above folder.
  2. Connect to your Azure subscription as per section § Connecting to your Azure subscription with Azure PowerShell.
  3. Set some other variables for the cmdlets to run.
PS C:\> $location = "North Europe"
PS C:\> $resourceGroupName = "LITWARE369-RG"
  4. Set the required JSON configuration as per the article Set-AzureVMMicrosoftAntimalwareExtension:
PS C:\> $settingString = '{ "AntimalwareEnabled": true,"RealtimeProtectionEnabled": true }'
  5. Retrieve the current version string for the Antimalware extension.
PS C:\> $allVersions = (Get-AzureRmVMExtensionImage -Location $location -PublisherName "Microsoft.Azure.Security" `
-Type "IaaSAntimalware").Version
PS C:\> $versionString = $allVersions[($allVersions.count)-1].Split(".")[0] + "." `
+ $allVersions[($allVersions.count)-1].Split(".")[1]
  6. Get all the VMs you have deployed in the test lab environment.
PS C:\> $vMs = Get-AzureRmVM –ResourceGroupName "LITWARE369-RG" | select Name 
  7. Set the extension using the prepared values for each VM found in the test lab environment.
if ($vMs) { 
Foreach ($vM in $vMs) {
Set-AzureRmVMExtension -ResourceGroupName $resourceGroupName -Location $location -VMName $vM.Name `
-Name "IaaSAntimalware" -Publisher "Microsoft.Azure.Security" -ExtensionType "IaaSAntimalware" `
-TypeHandlerVersion $versionString -SettingString $settingString
     }
}

Note    After deploying the antimalware using the above Set-AzureRmVMExtension cmdlet, the Antimalware user interface (UI) will not be available for the end user. As part of its setup, the Azure Antimalware extension modifies the policy to explicitly turn off the UI within the VM.

This was an explicit design decision made for the Azure environment. The intent is to avoid modal dialogs and popups surfacing on unattended service machines. If you try to modify the antimalware settings via UI you will receive an error message. For more information, see blog post Update on Microsoft Antimalware and Azure Resource Manager (ARM) VMs.

Note    Changing the cleanuppolicy.xml file as per blog post Enabling Microsoft Antimalware User Interface on ARM VMs Post Deployment is NOT supported.

Encrypting the disks of the VMs with Azure Disk Encryption

Azure Disk Encryption is a new capability that lets you encrypt the disks of your Windows and Linux VMs. Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data volume disks.

The solution is integrated with Azure Key Vault to help you control and manage the disk encryption keys and secrets in your key vault subscription, while ensuring that all data in the VM volume disks are encrypted at rest in your Azure storage.

Note    For more information, see article Azure Disk Encryption for Windows and Linux IaaS VMs and whitepaper Azure Disk Encryption for Windows and Linux Azure Virtual Machines.

The encryption of the volumes of the VMs with Azure Disk Encryption comprises the following steps:

  1. Creating a key vault in Azure Key Vault.
  2. Generating a Key Encryption Key (KEK) (optional).
  3. Registering a service principal in Azure AD.
  4. Enabling disk encryption on the VMs.

The next sections describe all the related operations.

Creating a key vault in Azure Key Vault

Azure Disk Encryption securely stores the encryption secrets in a specified key vault. In order to make sure the encryption secrets don't cross regional boundaries, Azure Disk Encryption needs the key vault and the VMs to be co-located in the same region.

So let's start by creating a key vault in the same resource group as the six VMs to be encrypted for our test lab environment.

To create a key vault in Azure Key Vault via PowerShell, proceed with the following steps:

  1. Open a Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, and then navigate to the above folder.
  2. Connect to your Azure subscription as per section § Connecting to your Azure subscription with Azure PowerShell.
  3. Set some other variables for the cmdlets to run.
PS C:\> $location = "North Europe" 
PS C:\> $resourceGroupName = "LITWARE369-RG"
PS C:\> $keyVaultName = "Litware369KeyVault"
  4. Now create the key vault in the same resource group as our test lab environment. Run the following command.
PS C:\> New-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -Location $location  `
-EnabledForDeployment -EnabledForDiskEncryption

Vault Name : Litware369KeyVault
Resource Group Name : LITWARE369-RG
Location : North Europe
Resource ID : /subscriptions/3083c9bc-e5fb-4f4c-975e-0e6a6f410a35/resourceGroups/LITWARE369-RG/providers/Microsoft.KeyVault/vaults/Litware369KeyVault
Vault URI : https://Litware369KeyVault.vault.azure.net
Tenant ID : 9f2d8bc2-174a-4ba5-b4a3-d55014c08855
SKU : Standard
Enabled For Deployment? : True
Enabled For Template Deployment? : False
Enabled For Disk Encryption? : True
Access Policies :
Tenant ID : 9f2d8bc2-174a-4ba5-b4a3-d55014c08855
Object ID : df4e7f97-e32a-4391-a121-89f51bbc8d92
Application ID :
Display Name : Philippe Beraud
(admin@litware369.onmicrosoft.com)
Permissions to Keys : get, create, delete, list, update, import, backup,
restore
Permissions to Secrets : all
Permissions to Certificates : all
Tags :
PS C:\>

  5. The Azure Resource Manager needs to access the encryption secrets, i.e. the BitLocker keys in our configuration, in order to boot the encrypted VMs, if any, in the test lab environment. To set the Key Vault access policies that allow the Azure Resource Manager to access the encryption secrets secured in the vault, run the following command:
PS C:\> Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName `
-EnabledForDiskEncryption
  6. Run the following command to view the properties of the key vault:
PS C:\> Get-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName
Vault Name : Litware369KeyVault
Resource Group Name : LITWARE369-RG
Location : North Europe
Resource ID : /subscriptions/3083c9bc-e5fb-4f4c-975e-0e6a6f410a35/resourceGroups/LITWARE369-RG/providers/Microsoft.KeyVault/vaults/Litware369KeyVault
Vault URI : https://litware369keyvault.vault.azure.net/
Tenant ID : 9f2d8bc2-174a-4ba5-b4a3-d55014c08855
SKU : Standard
Enabled For Deployment? : True
Enabled For Template Deployment? : False
Enabled For Disk Encryption? : True
Access Policies :
Tenant ID : 9f2d8bc2-174a-4ba5-b4a3-d55014c08855
Object ID : df4e7f97-e32a-4391-a121-89f51bbc8d92
Application ID :
Display Name : Philippe Beraud
(admin@litware369.onmicrosoft.com)
Permissions to Keys : get, create, delete, list, update, import, backup, restore
Permissions to Secrets : all
Permissions to Certificates : all
Tags :
PS C:\>

Note    For more information on how to setup a key vault in Azure, see blog post Azure Key Vault – Step by Step.

Generating a Key Encryption Key (KEK) (optional)

The RFC 4949 Internet Security Glossary, Version 2 defines a key encryption key (KEK) as follows: "A cryptographic key that (a) is used to encrypt other keys (either DEKs or other TEKs) for transmission or storage but (b) (usually) is not used to encrypt application data. Usage: Sometimes called "key-encryption key"."

In order to use the key encryption key (KEK) feature of Azure Disk Encryption, a key needs to be created in the above key vault. This key will be used as the key encryption key to wrap the encryption secrets, i.e. the BitLocker keys of the encrypted volumes of the VMs in our configuration, to further secure them before writing to the above vault.

It's now time to generate a key encryption key (KEK) in the vault. The KEK must be created in the same vault where the encryption secrets will be placed.

To generate a Key Encryption Key (KEK) in the vault, proceed with the following steps:

  1. From the Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, run the following command:
PS C:\> $key = Add-AzureKeyVaultKey -VaultName $keyVaultName -Name 'KeyEncryptionKey' -Destination 'Software'
  2. Get the identifier of this version of the key with the following command:
PS C:\> $key.key.kid
https://litware369keyvault.vault.azure.net/keys/KeyEncryptionKey/924f3dac8ee04888abbb637832890e51
PS C:\>

Note    For more information, see article Get started with Azure Key Vault.

Registering a service principal in Azure AD

In order to write encryption secrets to the specified key vault, Azure Disk Encryption needs the credentials of an Azure AD application, i.e. the related service principal, that has permissions to write secrets to the specified Key Vault.

Rather than using the Client ID and the Client Secret of the Azure AD application, we will instead use a certificate for the credentials, so that we don't have to rely on any password-like information in our configuration to obtain, through the Azure AD application, an access token to the vault. Any security-conscious user will indeed not want client secrets to be hard coded in or leaked through our script files.

The registration of a service principal in Azure AD comprises the following steps:

  1. Creating a certificate to authenticate against Azure AD.
  2. Encoding the certificate for the authentication against Azure AD.
  3. Creating an application in Azure AD.
  4. Updating the vault access permission for the application.
  5. Adding the certificate as a secret to the vault.

The next sections describe these steps.

Creating a certificate to authenticate against Azure AD

You've previously downloaded and installed the Windows Software Development Kit (SDK) for Windows 10 to generate the Point-to-Site (P2S) certificate. We will reuse the makecert.exe tool to create a new client certificate.

To create a self-signed client certificate to authenticate against Azure AD, open a command prompt and run the following commands:

C:\> "C:\Program Files (x86)\Windows Kits\10\bin\x64\makecert.exe" -sv mykey.pvk -n "cn=Litware369KeyVault" Litware369KeyVault.cer -b 05/20/2016 -e 05/20/2017 -r
C:\> "C:\Program Files (x86)\Windows Kits\10\bin\x64\pvk2pfx.exe" -pvk mykey.pvk -spc Litware369KeyVault.cer -pfx Litware369KeyVault.pfx -po Pass@word1!?
Encoding the certificate for the authentication against Azure AD

As mentioned before, the certificate and its private key will be used as credentials for the Azure AD application to authenticate against Azure AD.

To properly encode the certificate, run the following commands in order from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> $pathToCertFile = "Litware369KeyVault.pfx"
PS C:\> $x509 = New-Object System.Security.Cryptography.X509Certificates.X509Certificate($pathToCertFile, "Pass@word1!?")
PS C:\> $certValue = [System.Convert]::ToBase64String($x509.GetRawCertData())
Creating an application in Azure AD

To create an application in Azure AD and associate it with the above certificate for the credentials, run the following commands in order from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> $now = Get-Date
PS C:\> $oneYearFromNow = $now.AddDays(364)
PS C:\> $identifierUri = "https://localhost:443/"
PS C:\> $homePage = "http://keyvault.litware369.com"
PS C:\> $app = New-AzureRmADApplication -DisplayName "KeyVault" -HomePage $homePage -IdentifierUris $identifierUri `
-CertValue $certValue -StartDate $now -EndDate $oneYearFromNow
PS C:\> $servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
Updating the vault access permission for the Azure AD application

In the process of enabling encryption on the VM, the generated encryption secrets, i.e. the BitLocker keys in our configuration, will be written to the specified key vault as already outlined before. The credentials initialized above for the Azure AD application will be used to authenticate against Azure AD, and obtain on that basis an access token so that the secrets can be written to the vault. So, to make that happen, the Azure AD application first needs to be authorized to write such secrets to the vault.

To set the appropriate access policies to allow the above created application to write secrets to the vault, run the following command from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ResourceGroupName $resourceGroupName -ServicePrincipalName $servicePrincipal.ApplicationId -PermissionsToKeys all -PermissionsToSecrets all
Adding the certificate as a secret to the vault

Once the certificate is associated with the Azure AD application, the .pfx file of the certificate needs to be uploaded as a secret to the key vault and also deployed to the machine's 'My' certificate store of the VMs to encrypt (see section § Importing the authentication certificate to the VM later in this document).

These steps are required so that the Azure Disk Encryption VM extension can consume the certificate deployed to the VM, authenticate to Azure AD, and write the encryption secrets, i.e. the BitLocker keys of the volumes, to the vault.

To add the certificate .pfx file as a secret to the vault, proceed with the following steps:

  1. From the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, run the following commands in order to "package" the .pfx file as a suitable secret for the vault.
PS C:\> $filename = $pathToCertFile
PS C:\> $certPassword = "Pass@word1!?"
PS C:\> $fileContentBytes = get-content $fileName -Encoding Byte
PS C:\> $fileContentEncoded = [System.Convert]::ToBase64String($fileContentBytes)
PS C:\> $jsonObject = @"
{
"data": "$fileContentEncoded",
"dataType" :"pfx",
"password": "$certPassword"
}
"@
PS C:\> $jsonObjectBytes = [System.Text.Encoding]::UTF8.GetBytes($jsonObject)
PS C:\> $jsonEncoded = [System.Convert]::ToBase64String($jsonObjectBytes)
PS C:\> $secretValue = ConvertTo-SecureString -String $jsonEncoded -AsPlainText -Force
  2. Add the secret, i.e. the .pfx file, to the vault with the following command:
PS C:\> $secret = Set-AzureKeyVaultSecret -VaultName $keyVaultName -Name AuthCert -SecretValue $secretValue
  3. Retrieve the URI of this version of the secret by running the following command:
PS C:\> $secret.Id
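The "packaging" performed in step 1 boils down to: read the .pfx bytes, Base64-encode them, wrap the result in a small JSON envelope together with the password, then Base64-encode that JSON a second time. The following Python sketch shows the same transformation; the file content and password are placeholders:

```python
import base64
import json

# Placeholders standing in for the .pfx file content and its password.
pfx_bytes = b"dummy-pfx-content"
cert_password = "Pass@word1!?"

# Base64-encode the raw .pfx bytes
# (get-content -Encoding Byte + [System.Convert]::ToBase64String).
file_content_encoded = base64.b64encode(pfx_bytes).decode("ascii")

# Wrap the encoded bytes in the JSON envelope used for a certificate secret.
json_object = json.dumps({
    "data": file_content_encoded,
    "dataType": "pfx",
    "password": cert_password,
})

# Base64-encode the UTF-8 JSON; this string is what is stored as the secret value.
secret_value = base64.b64encode(json_object.encode("utf-8")).decode("ascii")
print(secret_value)
```

Because the final value is plain Base64 text, it can be carried in a SecureString and later unpacked by the platform when the certificate is pushed to the VM.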

Enabling disk encryption on the VMs

The encryption of all the volumes of a VM is a three-step process:

  1. Importing the authentication certificate to the VM.
  2. Encrypting all the volumes of the VM.
  3. Verifying disk encryption on the VM.

These steps should be executed on all six VMs of the test lab environment. The next sections illustrate them on the ADFS1 computer; they should then be repeated on the ADFS2, DC1, DC2, WAP1, and WAP2 computers.

Importing the authentication certificate to the VM

As outlined before, the Azure Disk Encryption VM extension will upload encryption secrets corresponding to all the volumes, i.e. the BitLocker keys in our context, into the key vault specified when enabling encryption.

In order to be able to write these encryption secrets to the vault, the Azure Disk Encryption VM extension must first authenticate to Azure AD; it will indeed have to obtain an access token for the write operation. For that reason, we need to import at this stage the certificate generated above, i.e. the .pfx file, into the machine's 'My' certificate store of the VM, so that it can later be used with its private key to authenticate as the application previously created in Azure AD and obtain such an access token to the vault.

To import the .pfx file in the machine certificate store of the VM from the vault, run the following commands in order from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> $certUrl = (Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name 'AuthCert').Id
PS C:\> $sourceVaultId = (Get-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName).ResourceId
PS C:\> $vm = Get-AzureRmVM -ResourceGroupName $resourceGroupName -Name 'ADFS1'
PS C:\> $vm = Add-AzureRmVMSecret -VM $vm -SourceVaultId $sourceVaultId -CertificateStore 'My' -CertificateUrl $certUrl
PS C:\> Update-AzureRmVM -VM $vm -ResourceGroupName $resourceGroupName

Upon successful completion, you should see the following output, confirming that the certificate with its private key is now imported into the machine certificate store:

WARNING: Breaking change notice: In upcoming release, top level properties, DataDiskNames and NetworkInterfaceIDs, will be removed from VM object because they are also in StorageProfile and NetworkProfile, respectively.
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
True OK OK
Encrypting all the volumes of the VM

To encrypt all the volumes of the VM, proceed with the following steps:

  1. From the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt, set the Azure AD credentials, i.e. the certificate to use via its thumbprint, to obtain an access token to the vault. Run the following commands in order:
PS C:\> $aadClientID = $servicePrincipal.ApplicationId
PS C:\> $certPath = "Litware369KeyVault.cer"
PS C:\> $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
PS C:\> $cert.Import($certPath)
PS C:\> $aadClientCertThumbprint = $cert.Thumbprint
  2. Set the key vault information by executing in order the following commands:
PS C:\> $keyVault = Get-AzureRmKeyVault -VaultName $keyVaultName -ResourceGroupName $resourceGroupName
PS C:\> $diskEncryptionKeyVaultUrl = $keyVault.VaultUri
PS C:\> $KeyVaultResourceId = $keyVault.ResourceId
PS C:\> $keyEncryptionKeyUrl = (Get-AzureKeyVaultKey -VaultName $keyVaultName -Name 'KeyEncryptionKey').Key.kid
  3. We're all set to enable encryption on the VM, here the ADFS1 computer, using the Azure AD credentials, i.e. the client certificate to use via its thumbprint, and a key encryption key (KEK) to wrap the disk encryption secrets. Run the following command for that purpose:
PS C:\> Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName -VMName 'ADFS1' ` 
-AadClientID $aadClientID -AadClientCertThumbprint $aadClientCertThumbprint `
-DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $keyVaultResourceId `
-KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $keyVaultResourceId

This cmdlet uses the variables initialized above in steps 1 and 2. It prepares the VM for encryption, writes the encryption secrets, i.e. the BitLocker keys in our configuration, to the specified vault using the specified Azure AD client certificate credentials, and then starts encryption on the VM. This cmdlet is a long-running operation that may take more than 15 minutes and may need to reboot the VM. The encryption secrets in the key vault will be encrypted with the key encryption key (KEK).
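As an aside, the value passed to -AadClientCertThumbprint is not stored inside the certificate itself: a certificate thumbprint is computed as the SHA-1 hash of the DER-encoded certificate bytes, rendered as uppercase hexadecimal. The following Python sketch illustrates the computation; the byte string is a placeholder, not a real certificate:

```python
import hashlib

# Placeholder standing in for the DER-encoded certificate bytes
# (the .NET X509Certificate2.Thumbprint property hashes exactly these bytes).
der_bytes = b"dummy-der-encoded-certificate"

# Thumbprint = uppercase hex SHA-1 of the DER encoding: 40 hex characters.
thumbprint = hashlib.sha1(der_bytes).hexdigest().upper()
print(thumbprint)
```

This is why the thumbprint read from $cert.Thumbprint in step 1 uniquely identifies the client certificate installed in the VM's machine store.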

You can alternatively enable disk encryption without the key encryption key (KEK):

PS C:\> Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName $resourceGroupName -VMName 'ADFS1' ` 
-AadClientID $aadClientID -AadClientCertThumbprint $aadClientCertThumbprint `
-DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $keyVaultResourceId

Upon successful completion, you should see the following output, confirming that the VM encryption was successful:

RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
True OK OK
Verifying disk encryption on the VM

Once you have enabled the disk encryption capability and deployed it on the VM, it's time to verify the resulting encryption status.

To get the encryption status of the OS and data volumes of a VM, here the ADFS1 computer, run the following commands from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName $resourceGroupName -VMName 'ADFS1'
OsVolumeEncrypted : Encrypted
DataVolumesEncrypted : Encrypted
OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
ProgressMessage : OsVolume: Encrypted, DataVolumes: Encrypted

PS C:\>

Listing all the encrypted VMs with their secrets

To list the OS volume and data volume encryption status of all the VMs in the test lab environment, and thereby see which VMs are currently encrypted, run the following commands from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> $osVolEncrypted = {(Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName $_.ResourceGroupName -VMName $_.Name).OsVolumeEncrypted}
PS C:\> $dataVolEncrypted = {(Get-AzureRmVMDiskEncryptionStatus -ResourceGroupName $_.ResourceGroupName -VMName $_.Name).DataVolumesEncrypted}
PS C:\> Get-AzureRmVm | Format-Table @{Label="MachineName"; Expression={$_.Name}}, @{Label="OsVolumeEncrypted"; Expression=$osVolEncrypted}, @{Label="DataVolumesEncrypted"; Expression=$dataVolEncrypted}

WARNING: Breaking change notice: In upcoming release, top level properties, DataDiskNames and NetworkInterfaceIDs, will be removed from VM object because they are also in StorageProfile and NetworkProfile, respectively.

MachineName OsVolumeEncrypted DataVolumesEncrypted
----------- ----------------- --------------------
adfs1 Encrypted Encrypted
adfs2 NotEncrypted NotEncrypted
dc1 NotEncrypted NotEncrypted
dc2 NotEncrypted NotEncrypted
wap1 NotEncrypted NotEncrypted
wap2 NotEncrypted NotEncrypted
PS C:\>

Listing all the disk encryption secrets used for encrypting the VMs

To see all the encryption secrets, i.e. the BitLocker keys in the vault, run the following command from the previous Windows PowerShell or PowerShell Integrated Scripting Environment (ISE) prompt:

PS C:\> Get-AzureKeyVaultSecret -VaultName $keyVaultName | where {$_.Tags.ContainsKey('DiskEncryptionKeyFileName')} | format-table @{Label="MachineName"; Expression={$_.Tags['MachineName']}}, @{Label="VolumeLetter"; Expression={$_.Tags['VolumeLetter']}}, @{Label="EncryptionKeyURL"; Expression={$_.Id}}
MachineName VolumeLetter EncryptionKeyURL
----------- ------------ ----------------
ADFS1 C:\ https://litware369keyvault.vault.azure.net:443/secrets/75A66C26-1F30-4767-9DFD-C7589816BCA8
PS C:\>

This command returns the corresponding machine name(s) and volume letter(s), along with the URL(s) of the encryption secrets written by Azure Disk Encryption.

Note     For more information, see the blog posts Explore Azure Disk Encryption with Azure PowerShell and Explore Azure Disk Encryption with Azure PowerShell – Part 2.

Leveraging Azure Security Center

Azure Security Center helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources like the ones of your test lab environment. As such, it provides integrated security monitoring and policy management across your Azure subscription, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.

To quickly get started with Azure Security Center, follow the steps of the article Azure Security Center quick start guide. As its name indicates, this article guides you through the security monitoring and policy management components of Security Center so that you can further manage the security of your test lab environment.

This concludes this appendix on the security recommendations we wanted to outline.