DirectAccess warm standby without IPv6

DirectAccess has come a long way, and with Windows Server 2016 it is pretty easy to install as a single-site, single-server deployment.  What if you need additional resilience, though, and want a failover server?  You can of course go down the multi-site route, but that needs IPv6 deployed within and between the data centers, which is not exactly straightforward…  Alternatively you can use Windows Network Load Balancing or a hardware load balancer to do this within a site, but then what if connectivity to that site fails?

A simple, cheap alternative for DirectAccess failover

I’ve deployed two DirectAccess servers within a single forest, with the two servers in separate data centers.  Using DNS and firewall NATs we can fail over between the two with no other configuration changes needed.  So what’s the catch?  It isn’t documented anywhere that I can see, and it uses a form of anycast configuration where the same subnet exists in two places on the network at the same time.  Aside from that I’ve not come across any other catch.

DirectAccess Architecture

So how to set it up…

You need two Windows Server 2016 servers, each with a single NIC and with DirectAccess installed but not configured.  The key is ensuring that the NIC has the same static IP configured on both servers.  I ran the configuration once to see which IP it assigned, backed it out, and set that IP manually before running the configuration again; in my case it was fd6f:92e8:42cc:3333::1/128 (there is a sketch of this pre-configuration after the list below).  Running through the configuration I ensured that:

  • I used the same Security Group to nominate the computers that are in scope for getting DirectAccess enabled
  • I turned off the option ‘Enable DirectAccess for Mobile Computers Only’ as it is a performance drain and unnecessary
  • I used the default Network Connectivity Assistant and Probe address (and ensured that they were the same on both servers)
  • I used the same public DNS name for DirectAccess on both servers and used the same public certificate for both
  • I kept Network Location Service on both of the DirectAccess servers, and used the default name
  • I set the IPv6 prefix to fd6f:92e8:42cc::/48
  • I set the IPv6 prefix assigned to client computers to fd6f:92e8:42cc:1000::/64
  • I used different server and client GPOs for each DirectAccess server installation (and then ensured that each server GPO was linked only to its respective DirectAccess server, and that BOTH client GPOs were linked to all relevant DirectAccess client workstations)
  • I registered the private IPv4 addresses of both DirectAccess servers for the probe and NLS DNS names, and also registered the primary public IP address for the main DirectAccess DNS name
  • I chose to support only Windows 10 clients and only IP-HTTPS (the principle should work with Teredo or 6to4 as well, though that’s not something I’ve tried)
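
As a rough illustration of the pre-configuration mentioned above, this is the sort of PowerShell I ran on each DirectAccess server before re-running the wizard, plus the DNS registrations from the probe/NLS bullet.  The interface alias, zone name, host names and IPv4 addresses are placeholders for my environment rather than values DirectAccess mandates, so treat it as a sketch only.

    # Run on EACH DirectAccess server before re-running the Remote Access wizard.
    # "Ethernet" and all addresses below are placeholders for my environment.
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress "fd6f:92e8:42cc:3333::1" -PrefixLength 128

    # On a domain DNS server: register the probe and NLS names against the
    # private IPv4 address of BOTH DirectAccess servers (hypothetical zone/hosts).
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "directaccess-webprobehost" -IPv4Address "10.1.0.10"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "directaccess-webprobehost" -IPv4Address "10.2.0.10"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "directaccess-nls" -IPv4Address "10.1.0.10"
    Add-DnsServerResourceRecordA -ZoneName "corp.example.com" -Name "directaccess-nls" -IPv4Address "10.2.0.10"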

I did further lock down the DirectAccess servers and workstations to conform to the GPO baselines set out by the Center for Internet Security (CIS) and to fully manage the Windows firewalls for inbound communications on both sides (a small firewall-rule sketch follows below).
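
For the inbound side of that firewall management, the rule that matters on a single-NIC, IP-HTTPS-only deployment is TCP 443 to the DirectAccess server.  A minimal sketch follows; the display name and domain-profile scoping are my own choices, not something CIS or DirectAccess prescribes.

    # Allow inbound IP-HTTPS (TLS over TCP 443) to the DirectAccess server.
    # Display name and profile scoping are illustrative only.
    New-NetFirewallRule -DisplayName "DirectAccess IP-HTTPS inbound" -Direction Inbound -Protocol TCP -LocalPort 443 -Profile Domain -Action Allow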

Once that was done, simply changing the firewall NAT allows me to fail over from one server to the other.  A total site outage at the primary data center can also be survived by updating the DirectAccess DNS name with the failover public IP that routes to the secondary data center.
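
To give a feel for the DNS side of a site failover, this is roughly what the change looks like if the public DirectAccess name happens to sit in a Windows DNS zone you control.  The zone, record name and IP addresses below are hypothetical; with an external DNS provider you would make the equivalent change through their portal or API.

    # Point the public DirectAccess name at the failover public IP.
    # Zone, record name and addresses are placeholders.
    $zone  = "example.com"
    $name  = "da"                 # da.example.com = public DirectAccess name
    $oldIp = "203.0.113.10"       # primary data center public IP
    $newIp = "198.51.100.10"      # failover public IP (secondary data center)

    Remove-DnsServerResourceRecord -ZoneName $zone -RRType "A" -Name $name -RecordData $oldIp -Force
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $name -IPv4Address $newIp -TimeToLive ([TimeSpan]::FromMinutes(5))

    # Confirm what clients will now resolve.
    Resolve-DnsName "$name.$zone" -Type A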

All in all it turned out to be easier than I thought and has been very robust.  Hopefully this helps some of you who do want an additional level of resilience but don’t want to deploy IPv6.

Enjoy!

Twan van Beers

Twan is a senior consultant with over 20 years of experience. He has a wide range of skills including Messaging, Active Directory, SQL, Networking and Firewalls. Twan loves to write scripts and get deep and dirty into debugging code, in order to understand and resolve the most complex of problems.

This Post Has 2 Comments

  1. Great article and great confirmation that this does actually work.

    I was able to do something somewhat similar with ADFS that I have never found documented as viable. I set up two warm standby ADFS servers at our secondary data center (proxy and internal ADFS farm member) with a NATed public IP at that data center. So in the event of a failure or other need, the DNS record can simply be updated both internally and externally (respectively, to the warm standbys).

    Your DA setup sounds very nice, and something you/others may consider is adding even more bling to the solution by having DNS fail over automatically.

    In our circumstances, we required ADFS to fail over automatically: if the primary servers failed in the night, the on-call person had no idea whether they should fail it over (or whatever the cause of the outage might be), and the outage could last past 10 minutes. So I delegated external DNS to CloudFloorDNS, which offers a simple netmon DNS failover “test”, as they refer to it. Simply put, if the main public IP at the primary data center becomes unavailable (with custom conditions that must exist before the failover is declared), the service updates the DNS record to point at the secondary data center – within minutes of the deemed failover the service is available and working (keeping the DNS record TTL at 60 seconds).
    So internally I did something similar, with its own set of conditions, via a PowerShell script running on a schedule every 3 minutes in the secondary data center. Once the conditions for failover are met, the DNS record is updated by the script.
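
    A very rough sketch of the kind of scheduled check this describes – the host names, zone, IP and failure conditions are entirely hypothetical, not the real script's:

        # Hypothetical scheduled check: if the primary stops answering on 443,
        # repoint the internal DNS record at the secondary data center.
        $primary    = "adfs-primary.corp.example.com"
        $zone       = "corp.example.com"
        $recordName = "adfs"
        $failoverIp = "10.2.0.20"

        # Treat three consecutive failed probes as "primary is down".
        $failures = 0
        for ($i = 0; $i -lt 3; $i++) {
            if (-not (Test-NetConnection -ComputerName $primary -Port 443 -InformationLevel Quiet)) {
                $failures++
            }
            Start-Sleep -Seconds 10
        }

        if ($failures -eq 3) {
            Get-DnsServerResourceRecord -ZoneName $zone -Name $recordName -RRType "A" |
                Remove-DnsServerResourceRecord -ZoneName $zone -Force
            Add-DnsServerResourceRecordA -ZoneName $zone -Name $recordName -IPv4Address $failoverIp
        }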

    Also, to take advantage of these tests, I have other conditions on which it alerts our teams – such as “server #X is experiencing XYZ issues with ADFS”.

    In the case where only the external Internet connection becomes unavailable but internal connectivity is fine, the external proxy at the secondary data center serves external clients and proxies them to the primary data center's internal ADFS farm across the WAN.

    This has been set up and working without a single flaw for 8 years now with a few upgrades to ADFS since then.

    I will be looking further at doing this same thing with DirectAccess using your setup. Did you find that the DA servers must be NATed? In our environment, we chose not to NAT and have been using the two-NIC configuration, even when we migrated from UAG DA to DA 2012.

    Corey

    1. Hi Corey,

      Thanks for reading and for the great comment!

      Interesting idea regarding ADFS! For DirectAccess we are using NAT in this case, although for larger orgs having a pair of public IPs would be preferred, I think.

      Cheers
      Twan
