Friday, August 22, 2014

Considerations for Deploying Active Directory Domain Controllers in Microsoft Azure

Azure is the new hotness. It seems like there are new features being lit up each week in the platform and it's gaining quite a bit of momentum. One thing that many organizations are starting to consider is extending their on-premises Active Directory Domain Services infrastructure out to Azure IaaS. Note that this is not the same as Azure Active Directory, which is an authentication solution for Office 365 and other cloud-based services.

One place where this is starting to gain momentum is with AD FS in support of Office 365. Organizations are starting to say "Hey, we've moved our messaging and collaboration to the cloud, why is our authentication infrastructure for those services still on-prem?" It's a fair question to ask, and I am finding that deploying AD FS in Azure in support of Office 365 is a great way for admins and engineers to get their feet wet in the platform.

This post is going to focus on deploying AD Domain Controllers to Azure. I may do a post on deploying AD FS to Azure as well, but step one is extending directory services to the cloud. The reason for this is twofold:
  1. There is no point in deploying AD FS in Azure without DCs there as well. If the site-to-site VPN between Azure and your datacenter fails, authentication fails. This defeats the purpose of putting AD FS in Azure in the first place.
  2. Azure bills for egress network traffic. In very busy environments with thousands of authentication attempts per minute, this can add up to a pretty penny. Keeping all authentication "self-contained" in Azure will greatly reduce this cost.
Once you have provisioned your VNet, assigned address space to various subnets, and spun up your VMs, you need to do a few things to prepare for DC promotion.
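If you're still at the provisioning stage, the classic Azure PowerShell module can handle the VM creation as well. This is just a sketch; the cloud service, VNet, subnet, image filter, and credentials below are all placeholder values you'd swap for your own:

```powershell
# Sketch only - service, VNet, subnet, and credential values are hypothetical.
# Assumes the Azure PowerShell module is installed and a subscription is selected.
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
    Select-Object -First 1

New-AzureVMConfig -Name "AZUREDC01" -InstanceSize Small -ImageName $image.ImageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "P@ssw0rd!" |
    Set-AzureSubnet -SubnetNames "AD-Subnet" |
    New-AzureVM -ServiceName "MyADCloudService" -VNetName "MyVNet"
```

Piping the VM config through Set-AzureSubnet before New-AzureVM is what lands the VM on the subnet you carved out for directory services.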


Set a DIP for each Domain Controller
Static IP addresses set on a VM's network adapter are not supported in Microsoft Azure. This is because various events can cause that network adapter (and really the whole VM config outside of the virtual disks) to be recreated. These events include a user-initiated deprovision, a self-healing event, or scheduled Azure maintenance. A DIP (a reserved dynamic IP address) is like a DHCP reservation on steroids. Even though the VMs themselves are configured to receive their network information via DHCP, the DIP will ensure that the same dynamic IP address is assigned to the same VM each time. To read more about DIPs, see this MSDN article.

To set a DIP on an existing VM, select an unused IP address on the appropriate Azure subnet and run the following command:
Get-AzureVM -ServiceName CloudServiceYourVMisIn -Name NameOfAzureVM |
Set-AzureStaticVNetIP -IPAddress 10.1.1.20 | Update-AzureVM 
This will cause a restart of the VM, so make sure you plan accordingly if the VM in question is already in production.

If you're not sure what DIPs are in use, you can test with this cmdlet:
Test-AzureStaticVNetIP -VNetName MyVNet -IPAddress 10.1.1.20
One item to note is that in scenarios where DIPs are in use for Azure IaaS VMs on the same subnet as PaaS offerings, such as Azure Web Sites, duplicate IP address assignments can occur. To avoid this, separate PaaS and IaaS services onto separate subnets within a VNet.

Provision a New Virtual Disk for NTDS and SYSVOL
This step is very important. By default, Azure VMs are provisioned with two disks - the OS volume, which has read/write caching enabled, and a temporary storage volume, whose contents can be lost when the VM moves to a new host, making it unsuitable for SYSVOL or NTDS data. I typically provision a new virtual disk between 10 and 15 GB for this, but the size ultimately depends on your environment.

When provisioning this disk, it is important to set the host caching policy to "NONE" to prevent potential data loss or corruption during a VM failure. This setting cannot be changed on Azure VM OS volumes, which is why an additional virtual disk must be configured. During promotion, you are prompted for a location for SYSVOL, the AD database, and the database logs. Place them on this disk with caching disabled.
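Attaching such a disk can also be done from PowerShell. This is a sketch; the service name, VM name, label, and 15 GB size are just examples:

```powershell
# Attach a new empty data disk with host caching disabled (names/sizes are examples)
Get-AzureVM -ServiceName "MyADCloudService" -Name "AZUREDC01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 15 -DiskLabel "NTDS" `
        -LUN 0 -HostCaching None |
    Update-AzureVM
```

After the disk is attached, initialize and format it from within the guest (Disk Management or the Storage cmdlets) before promotion.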

Avoid Deprovisioning DCs in Azure
There are two ways to "stop" or shut down a VM in Azure. You can do it from within the guest, or you can do it from the Azure management portal. Doing it from the portal will cause a deprovision of the VM, which normally isn't a big deal. In this case, it will reset the VM-GenerationID, which is not a good thing. If you're not sure what this means, check out this TechNet article on Domain Controller Virtualization basics.

Long story short, if you're going to shut down your Azure VM Domain Controllers, do it from inside of the guest, not from the Azure portal.
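If you do need to stop a DC from PowerShell rather than from inside the guest, the -StayProvisioned switch on Stop-AzureVM stops the VM without deallocating it, so the VM configuration is preserved. Keep in mind that a VM stopped this way continues to accrue compute charges. Service and VM names here are examples:

```powershell
# Stops the VM without deprovisioning it, preserving the VM config
# (the VM still accrues compute charges while in this state)
Stop-AzureVM -ServiceName "MyADCloudService" -Name "AZUREDC01" -StayProvisioned
```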

Configure DNS Servers in your Azure VNet
After DCs (presumably with the DNS Server role) have been deployed in Azure, you're going to want your Azure VMs to use them for name resolution. In your Azure VNet, set your DNS server order so that clients use your Azure-based DCs first, and then your on-prem DCs second. Make sure that you have both cloud and on-prem DCs listed (assuming you've got connectivity between the two) for redundancy.
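One way to set this, assuming the classic deployment model, is to export the VNet configuration, edit the DNS server entries so the Azure-based DCs are listed first, and re-import it. The file path below is an example:

```powershell
# Export the current VNet configuration, edit the <DnsServers> section so the
# Azure-based DCs are listed ahead of the on-prem ones, then re-import it
Get-AzureVNetConfig -ExportToFile "C:\temp\NetConfig.xml"
# ... edit NetConfig.xml: Azure DC IPs first, then on-prem DC IPs ...
Set-AzureVNetConfig -ConfigurationPath "C:\temp\NetConfig.xml"
```

Note that VMs pick up VNet DNS settings on their next DHCP lease renewal, so a reboot (or ipconfig /renew) may be needed for existing VMs.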

That's it! Go get your Install-ADDSDomainController on in the cloud!
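For completeness, a promotion from PowerShell that places the database, logs, and SYSVOL on the dedicated no-cache data disk might look like this sketch. The domain name is a placeholder, and F: is assumed to be the data disk you provisioned earlier:

```powershell
# Sketch: promote the VM to a DC, keeping NTDS and SYSVOL on the no-cache disk (F:)
Import-Module ADDSDeployment
Install-ADDSDomainController `
    -DomainName "corp.contoso.com" `
    -InstallDns `
    -Credential (Get-Credential) `
    -DatabasePath "F:\NTDS" `
    -LogPath "F:\NTDS\Logs" `
    -SysvolPath "F:\SYSVOL"
```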

2 comments:

  1. Some feedback on the size of the data disk for the NTDS database and log file. I would recommend a minimum size of 1TB. Azure doesn't currently allow for resize of VHD files.

    The VHDs used by OS disks and data disks are persisted as page blobs in Azure Storage. Page blobs are implemented as sparse storage which means that only pages that have actually been written to are stored and billed for. For example, a 1TB page blob which has never been written to occupies no space in Azure Storage and consequently incurs no charges. Azure Storage supports the ability to clear pages no longer needed which means that they are no longer billed.

  2. Thanks, nice article.
