Introduction
Increasing availability is a key concern for computer systems. With all the consolidation and virtualization efforts under way, you need to make sure your services are always up and running, even when some components fail. However, it’s usually hard to understand the details of what it takes to make systems highly available (or continuously available). And there are so many options…
In this blog post, I will describe four principles that cover the different requirements for Availability: Redundancy, Entanglement, Awareness and Persistence. They apply to different types of services and I’ll provide some examples related to the most common server roles, including DHCP, DNS, Active Directory, Hyper-V, IIS, Remote Desktop Services, SQL Server, Exchange Server, and obviously File Services (I am in the “File Server and Clustering” team, after all). Every service employs different strategies to implement these “REAP Principles” but they all must implement them in some fashion to increase availability.
Note: A certain familiarity with common Windows Server roles and services is assumed here. If you are not familiar with the meaning of DHCP, DNS or Active Directory, this post is not intended for you. If that’s the case, you might want to do some reading on those topics before moving forward here.
Redundancy – There is more than one of everything
Availability starts with redundancy. In order to survive failures, you must have multiple instances of everything that can possibly fail in the system. That means multiple servers, multiple networks, multiple power supplies, multiple storage devices. You should be seeing everything (at least) doubled in your configuration. Whatever is not redundant is commonly labeled a “Single Point of Failure”.
Redundancy is not cheap, though. By definition, it will increase the cost of your infrastructure. So it’s an investment that can only be justified when the risks and needs associated with service disruption are understood and balanced against the cost of higher availability. Sadly, that understanding sometimes only comes after a catastrophic event (such as data loss or an extended outage).
Ideally, you would have a redundant instance that is as capable as your primary one. That would make your system work as well after the failure as it did before. It might be acceptable, though, to have a redundant component that is less capable. In that case, you’ll be in a degraded (although functional) state after a failure, while the original part is being replaced. Also keep in mind that, these days, redundancy in the cloud might be a viable option.
For this principle, there’s really not much variance per type of Windows Server role. You basically need to make sure that multiple servers provide the service, and then apply the other principles.
Entanglement – Achieving shared state via spooky action at a distance
Having redundant equipment is required but certainly not sufficient to provide increased availability. Once any meaningful computer system is up and running, it is constantly gathering information and keeping track of it. If you have multiple instances running, they must be “entangled” somehow. That means that the current state of the system should be shared across the multiple instances so it can survive the loss of any individual component without losing that state. It will typically include some complex “spooky action at a distance”, as Einstein famously said of Quantum Mechanics.
A common way to do this is to use a database (like SQL Server) to store your state. Every transaction performed by a set of web servers, for instance, could be stored in a common database, and any web server can be quickly reprovisioned and connected to the database again. In a similar fashion, you can use Active Directory as a data store, as is done by services like DFS Namespaces and Exchange Server (for user mailbox information). Even a file server could serve a similar purpose, providing a location to store files that can be changed at any time and accessed by a set of web servers. If you lose a web server, you can quickly reprovision it and point it to the shared file server.
If you use SQL Server to store the shared state, you must also abide by the Redundancy principle by deploying multiple SQL Servers, which must be entangled as well. One common way to do that is shared storage: you can wire these servers to a Fibre Channel SAN, an iSCSI SAN or even a file server to store the data. Failover clustering in Windows Server (used by certain deployments of Hyper-V, File Servers and SQL Server, just to name a few) leverages shared storage as a common mechanism for entanglement.
Peeling the onion further, you will need multiple heads on those storage systems, and they must also be entangled. Redundancy at the storage layer is commonly achieved by sharing physical disks and writing the data to multiple places. Most SANs offer the option of dual controllers connected to a shared set of disks. Every piece of data is written synchronously to at least two disks (sometimes more). These SANs can tolerate the failure of individual controllers or disks, preserving their shared state without any disruption. In Windows Server 2012, Clustered Storage Spaces provides a simple solution for shared storage for a set of Windows Servers using only shared SAS disks, without the need for a SAN.
There are other strategies for Entanglement that do not require shared storage, depending on how much and how frequently the state changes. If you have a web site with only static files, you could maintain shared state by simply provisioning multiple IIS servers with the same files. Whenever you lose one, simply replace it. For instance, Windows Azure and Virtual Machine Manager provide mechanisms to quickly add/remove instances of web servers in this fashion through the use of a service template.
If the shared state changes, as is the case for most web sites, you can go up a notch by regularly copying updated files to the servers. You could have a central location with the current version of the shared state (a remote file server, for instance) plus a process that regularly sends full updates to every node (either pushed from the central store or pulled by the servers). This is not very efficient for large amounts of frequently updated data, but it can be enough if the total amount of data is small or changes infrequently. Examples of this strategy include SQL Server Snapshot Replication, DNS full zone transfers, or a simple script using ROBOCOPY to copy files on a daily schedule.
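For illustration, here is a minimal Python sketch of the full-copy approach, assuming a hypothetical central share and local content folder (the paths are placeholders); you would schedule something like this to run daily, much like the ROBOCOPY script mentioned above:

```python
import shutil

# Hypothetical locations: a central share holding the current state and
# the local content folder of this web server. Adjust to your environment.
SOURCE = r"\\CENTRAL1\siteroot"
DEST = r"C:\inetpub\wwwroot"

def full_sync():
    """Pull a complete copy of the shared state from the central store.
    Simple and robust, but transfers everything on every run."""
    shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)  # requires Python 3.8+

if __name__ == "__main__":
    full_sync()
```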
In most cases, however, it’s best to employ a mechanism that can cope with more frequently changing state. Going up the scale, you could have a system that sends data to its peers every hour or every few minutes, being careful to send only the data that has changed instead of the full set. That is the case for DNS incremental zone transfers, Active Directory Replication, many types of SQL Server Replication, SQL Server Log Shipping, asynchronous SQL Server Mirroring (High-Performance Mode), SQL Server AlwaysOn Availability Groups (asynchronous-commit mode), DFS Replication and Hyper-V Replica. These models provide systems that are loosely converging but do not achieve up-to-the-second coherent shared state. However, that is good enough for some scenarios.
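A rough Python sketch of the incremental idea: compare size and modification time and copy only what changed. This is a deliberate simplification of what the replication technologies above do, not how any of them actually works:

```python
import os
import shutil

def incremental_sync(source, dest):
    """Copy only files whose size or modification time differs from the
    copy at the destination, approximating an incremental transfer."""
    for root, _dirs, files in os.walk(source):
        target_dir = os.path.join(dest, os.path.relpath(root, source))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(target_dir, name)
            stat = os.stat(src)
            if (not os.path.exists(dst)
                    or os.path.getsize(dst) != stat.st_size
                    or os.path.getmtime(dst) != stat.st_mtime):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
```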
At the high end of replication, right before actual shared storage, you have synchronous replication. This provides the ability to update the information on every entangled system before considering the shared state actually changed. It might slow down the overall performance of the system, especially when the connectivity between the peers suffers from latency. However, there’s something to be said for a set of nodes with local storage that achieves a coherent shared state using only software. Common examples here include a few types of SAN replication, Exchange Server (Database Availability Groups), Synchronous SQL Mirroring (High-Safety Mode) and SQL Server AlwaysOn Availability Groups (synchronous-commit mode).
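Conceptually, synchronous replication means a write is acknowledged only after every replica has persisted it. A toy Python sketch (in-memory replicas, no rollback logic) makes the ordering explicit:

```python
class Replica:
    """Toy replica that keeps its state in a local dictionary."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

def synchronous_write(replicas, key, value):
    """Apply the write to every replica before acknowledging it. If any
    replica fails, no acknowledgement is sent (a real system would also
    roll back or retry the partial write)."""
    for replica in replicas:
        replica.write(key, value)
    return True  # acknowledged only after all replicas persisted the change

nodes = [Replica("NODE1"), Replica("NODE2")]
assert synchronous_write(nodes, "order-42", "shipped")
```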
As you can see, the Entanglement principle can be addressed in a number of different ways depending on the service. Many services, like File Server and SQL Server, provide multiple mechanisms to deal with it, with varying degrees of cost, complexity, performance and coherence.
Awareness – Telling if Schrödinger's servers are alive or not
Your work is not done once you have a redundant, entangled system. In order to provide clients with seamless access to your service, you must implement some method for them to find one of the many sources of the service. The Awareness principle refers to how your clients discover the location of the access points for your service, ideally with a mechanism to do it quickly while avoiding any failed instances. There are a few different ways to achieve it, including manual configuration, broadcast, DNS, load balancers, or a service-specific method.
One simple method is to statically configure each client with the name or IP address of two or more instances of the service. This method is effective if the configuration of the service is not expected to change; if it ever does, you would need to reconfigure each client. A common example here is how static DNS is configured: you simply specify the IP address of your preferred DNS server and also the IP address of an alternate DNS server in case the preferred one fails.
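A minimal Python sketch of this preferred/alternate pattern, using a plain TCP connection as a stand-in for the real protocol (actual DNS queries use UDP); the addresses are placeholders:

```python
import socket

# Hypothetical static client configuration: preferred server first,
# then the alternate.
SERVERS = [("10.0.0.10", 53), ("10.0.0.11", 53)]

def connect_with_fallback(servers, timeout=2.0):
    """Try the preferred server first; fall back to the alternate on failure."""
    for host, port in servers:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # unreachable or not answering; try the next one
    raise ConnectionError("no configured server responded")
```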
Another common mechanism is to broadcast a request for the service and wait for a response. This works only if there’s someone on your local network capable of providing an answer. There’s also a concern about the legitimacy of the response, since a rogue system on the network might provide a malicious version of the service. Common examples here include DHCP service requests and Wireless Access Point discovery. It is fairly common to use one service to provide awareness for others: once you access your Wireless Access Point, you get DHCP service; once you get DHCP service, you get your DNS configuration from it.
As you know, the most common use for a DNS server is to map a network name to an IP address (using an A or AAAA DNS record). That in itself implements a certain level of this awareness principle. DNS can also associate multiple IP addresses with a single name, effectively providing a mechanism to give you a list of servers that provide a specific service. That list is provided by the DNS server in a round robin fashion, so it even includes a certain level of load balancing as part of it. Clients looking for Web Servers and File Servers commonly use this mechanism alone for finding the many servers providing a service.
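This is the pattern most clients implement when a name resolves to several addresses: try each one until a connection succeeds. A Python sketch:

```python
import socket

def connect_any(hostname, port, timeout=2.0):
    """Resolve every address published for a name and connect to the
    first one that answers, skipping failed instances."""
    for family, socktype, proto, _canonname, address in socket.getaddrinfo(
            hostname, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
        except OSError:
            continue
        try:
            sock.settimeout(timeout)
            sock.connect(address)
            return sock  # caller is responsible for closing the socket
        except OSError:
            sock.close()
    raise ConnectionError(f"no address for {hostname} accepted a connection")
```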
DNS also provides a different type of record specifically designed for providing service awareness. This is implemented as SRV (Service) records, which not only offer the name and IP address of a host providing a service, but can decorate it with information about priority, weight and port number where the service is provided. This is a simple but remarkably effective way to provide service awareness through DNS, which is effectively a mandatory infrastructure service these days. Active Directory, for instance, relies entirely on DNS Service records to allow clients to learn information about the location of Domain Controllers and services provided by them, including details about Active Directory site topology.
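For example, with the third-party dnspython package you can query the SRV records that Active Directory publishes and order them by priority and weight (the domain name below is a placeholder):

```python
import dns.resolver  # third-party: pip install dnspython (2.x API shown)

# Look up the LDAP service records published by a hypothetical AD domain.
answers = dns.resolver.resolve("_ldap._tcp.contoso.com", "SRV")

# Lower priority is preferred; among equal priorities, higher weight
# should receive proportionally more clients.
for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(record.target, record.port, record.priority, record.weight)
```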
Windows Server failover clustering includes the ability to perform dynamic DNS registrations when creating clustered services. Each cluster role (formerly known as a cluster group) can include a Network Name resource which is registered with DNS when the service is started. Multiple IP addresses can be registered for a given cluster Network Name if the server has multiple interfaces. In Windows Server 2012, a single cluster role can be active on multiple nodes (that’s the case of a Scale-Out File Server) and the new Distributed Network Name implements this as a DNS name with multiple IP addresses (at least one from each node).
DNS does have a few limitations. The main one is the fact that clients will cache the name/IP information for some time, as specified in the TTL (time to live) of the record. If the service is reconfigured and new address or service records are published, DNS clients might take some time to become aware of the change. You can reduce the TTL, but that has a performance impact. There is no mechanism in DNS for a server to proactively tell a client that a published record has changed. Another issue with DNS is that it provides no method to tell whether the service is actually being provided at the moment, or even whether the server ever functioned properly. It is up to the client to attempt communication and handle failures. Last but not least, DNS cannot help with intelligently balancing clients based on the current load of a server.
Load balancers are the next step in providing awareness. These are network devices that function as intelligent routers of traffic based on a set of rules. If you point your clients to the IP address of the load balancer, that device can intelligently forward the requests to a set of servers. As the name implies, load balancers typically distribute clients across the servers and can even detect that a certain server is unresponsive, dynamically taking it out of the rotation. Another concern here is affinity, an optimization that consistently forwards a given client to the same server. Since these devices can become a single point of failure, the redundancy principle must be applied here as well. The most common solution is to have two load balancers in combination with two records in DNS.
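A simplified Python sketch of what a load balancer does internally: probe back ends for health and hand out the next healthy one in round-robin order (the back-end pool is hypothetical):

```python
import socket
from itertools import cycle

# Hypothetical back-end pool sitting behind the load balancer.
BACKENDS = [("10.0.0.21", 80), ("10.0.0.22", 80), ("10.0.0.23", 80)]
_pool = cycle(BACKENDS)  # module-level iterator keeps the round-robin position

def healthy(backend, timeout=1.0):
    """Basic health probe: the back end must accept a TCP connection."""
    try:
        with socket.create_connection(backend, timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Hand out the next back end in round-robin order, skipping any
    that fail the health probe; give up after one full pass."""
    for _ in range(len(BACKENDS)):
        backend = next(_pool)
        if healthy(backend):
            return backend
    raise RuntimeError("no healthy back end available")
```

Real load balancers add affinity on top of this, for example by hashing the client address so the same client keeps landing on the same back end.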
SQL Server again uses multiple mechanisms to implement this principle. DNS name resolution is common, either statically or dynamically using failover clustering Network Name resources. That name is part of the client configuration known as a “Connection String”. Typically, this string will provide the name of a single server providing the SQL service, along with the database name and your credentials. For instance: "Server=SQLSERV1A; Database=DB301; Integrated Security=True;". For SQL Mirroring, there is a mechanism to provide a second server name in the connection string itself. Here’s an example: "Server=SQLSERV1A; Failover_Partner=SQLSRV1B; Database=DB301; Integrated Security=True;".
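As an illustration, here is roughly the same connection string used from Python through the third-party pyodbc package; the driver and server names are examples, and keyword spellings vary slightly between SQL Server client libraries:

```python
import pyodbc  # third-party: pip install pyodbc

# ODBC flavor of the connection string from the text above.
conn_str = (
    "Driver={SQL Server Native Client 11.0};"
    "Server=SQLSERV1A;"
    "Failover_Partner=SQLSRV1B;"
    "Database=DB301;"
    "Trusted_Connection=yes;"  # ODBC equivalent of Integrated Security=True
)
connection = pyodbc.connect(conn_str, timeout=5)  # timeout = login timeout
```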
Other services provide a specific layer of Awareness, implementing a broker service or client access layer. This is the case with DFS (Distributed File System), which simplifies access to multiple file servers using a unified namespace mechanism. In a similar way, SharePoint web front end servers abstract the fact that multiple content databases live behind a specific site collection. Exchange Server Client Access Servers query Active Directory to find which Mailbox Server or Database Availability Group contains the mailbox for an incoming client. Remote Desktop Connection Broker (formerly known as Terminal Services Session Broker) provides users with access to Remote Desktop services across a set of servers. All these broker services can typically handle a fair amount of load balancing and be aware of the state of the services behind them. Since they can become single points of failure, they are typically placed behind DNS round robin and/or load balancers.
Persistence – The one that is the most adaptable to change will survive
Now that you have redundant entangled services and clients that are aware of them, here comes the greatest challenge in availability: persisting the service in the event of a failure. There are three basic steps to make it happen: server failure detection, failing over to a surviving server (if required) and client reconnection (if required).
Detecting the failure is the first step. It requires a mechanism for aliveness checks, which can be performed by the servers themselves, by a witness service, by the clients accessing the services or a combination of these. Failover clustering makes cluster nodes check each other (through network checks), in an effort to determine when a node becomes unresponsive.
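A bare-bones Python sketch of such an aliveness check: probe a peer over TCP and declare it failed only after several consecutive misses, so a single dropped probe does not trigger an unnecessary failover:

```python
import socket
import time

def is_alive(host, port, timeout=2.0):
    """One aliveness check: does the peer still accept a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor(host, port, allowed_misses=3, interval=5.0):
    """Declare the peer failed only after several consecutive missed checks."""
    missed = 0
    while missed < allowed_misses:
        missed = 0 if is_alive(host, port) else missed + 1
        time.sleep(interval)
    print(f"{host}:{port} is unresponsive; a failover should be initiated")
```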
Once a failure is detected, for services that work in an active/passive fashion (only one server provides the service while the other remains on standby), a failover is required. This can only be safely automated if the entanglement is done via shared storage or synchronous replication, which guarantees that the data from the lost server was properly persisted. With other entanglement methods (like backups or asynchronous replication), an IT Administrator typically has to intervene manually to make sure the proper state is restored before failing over the service. For active/active solutions, with multiple servers providing the same service all the time, a failover is not required.
Finally, the client might need to reconnect to the service. If the server being used by the client has failed, many services will lose their connections and require intervention. In an ideal scenario, the client will automatically detect (or be notified of) the server failure, will be aware of other instances of the service and will automatically connect to a surviving instance, restoring the exact same client state as before the failure. This is how Windows Server 2012 implements failover of File Servers through a process called SMB 3.0 Continuous Availability, available for both Classic and Scale-Out File Server Clusters. The File Server Cluster goes one step further, providing a Witness Service that will notify SMB 3.0 clients of a server failure and point them to an alternate server.
File servers might also leverage a combination of DFS Namespaces and DFS Replication that will automatically recover from the situation, with some potential side effects. While the file client will find an alternative file server via DFS-N, the connection state will be lost and will need to be reestablished. Another persistence mechanism in the file server is the Offline Files option in the Folder Redirection feature. This allows you to keep working on local storage while your file server is unavailable, synchronizing again when the server comes back.
For other services, like SQL Server, the client will surface an error to the application indicating that a failover has occurred and the connection has been lost. If the application is properly coded to handle that situation, the end user will be shielded from error messages because the application will simply reconnect to the SQL Server using either the same name (in the case of another server taking over the name), a Failover Partner name (in the case of SQL Server Mirroring) or another instance of SQL Server (in more complex log shipping or replication scenarios).
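The reconnection logic such an application needs boils down to something like this generic Python sketch (a real client would catch the database driver’s specific exception types, and the query function and server names are placeholders):

```python
def run_with_failover(run_query, servers):
    """Try the query against each server in turn; the first server that
    completes it wins. 'servers' lists the primary first, then partners."""
    last_error = None
    for server in servers:
        try:
            return run_query(server)
        except ConnectionError as error:
            # A real application would catch the driver's exception type here.
            last_error = error
    raise RuntimeError("all servers failed") from last_error

# Usage sketch: run_with_failover(my_query, ["SQLSERV1A", "SQLSRV1B"])
```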
Clients of Web Servers and other load balanced workloads without any persistent state might be able to simply retry an operation in case of a failure. This might happen automatically or require the end-user to retry the operation manually.
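For stateless, idempotent requests, the retry logic can be as simple as this Python sketch with exponential backoff; DNS round robin or the load balancer may route the retry to a surviving server:

```python
import time
import urllib.request
import urllib.error

def fetch_with_retry(url, attempts=3, backoff=1.0):
    """Retry an idempotent HTTP request, backing off between attempts."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure to the caller
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
```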
Another interesting example of client persistence is provided by Outlook connecting to Exchange Server. As mentioned, Exchange Server implements a sophisticated method of synchronous replication of mailbox databases between servers, plus a Client Access layer that brokers connections to the right set of mailbox servers. On top of that, the Outlook client will simply continue to work in cached mode (using only local storage) if for any reason the server becomes unavailable. Whenever the server comes back online, the client transparently reconnects and synchronizes. The entire process is automated, without any action required during or after the failure from either end users or IT Administrators.
Samples of how services implement the REAP principles
Now that you have the principles down, let’s look at how the main services we mentioned implement them.
| Service | Redundancy | Entanglement | Awareness | Persistence |
|---|---|---|---|---|
| DHCP, using split scopes | Multiple standalone DHCP Servers | Each server uses its own set of scopes, no replication | Active/Active. Clients find DHCP servers via broadcast (whichever responds first) | DHCP responses are cached. Upon failure, only surviving servers respond to the broadcast |
| DHCP, using failover cluster | Multiple DHCP Servers in a failover cluster | Shared block storage (FC, iSCSI, SAS) | Active/Passive. Clients find DHCP servers via broadcast | DHCP responses are cached. Upon failure, failover occurs and a new server responds to broadcasts |
| DNS, using zone transfers | Multiple standalone DNS Servers | Zone transfers between DNS Servers at regular intervals | Active/Active. Clients configured with IP addresses of preferred and alternate servers (static or via DHCP) | DNS responses are cached. If a query to the preferred DNS server fails, the alternate DNS server is used |
| DNS, using Active Directory integration | Multiple DNS Servers in a Domain | Active Directory Replication | Active/Active. Clients configured with IP addresses of preferred and alternate servers (static or via DHCP) | DNS responses are cached. If a query to the preferred DNS server fails, the alternate DNS server is used |
| Active Directory | Multiple Domain Controllers in a Domain | Active Directory Replication | Active/Active. DC Locator service finds the closest Domain Controller using DNS service records | Upon failure, DC Locator service finds a new Domain Controller |
| File Server, using DFS (Distributed File System) | Multiple file servers, linked through DFS. Multiple DFS Namespace servers | DFS Replication keeps file server data consistent. DFS Namespace links stored in Active Directory | Active/Active. DFS Namespace translates namespace paths into the closest file server target | Upon file server failure, the client uses an alternate file server target. Upon DFS Namespace server failure, an alternate is used |
| File Server for general use, using failover cluster | Multiple File Servers in a failover cluster | Shared storage (FC, iSCSI, SAS) | Active/Passive. Name and IP address resources, published to DNS | Failover, SMB Continuous Availability, Witness Service |
| File Server, using Scale-Out Cluster | Multiple File Servers in a failover cluster | Shared storage, Cluster Shared Volumes (FC, iSCSI, SAS) | Active/Active. Name resource published to DNS (Distributed Network Name) | SMB Continuous Availability, Witness Service |
| Web Server, static content | Multiple Web Servers | Initial copy only | Active/Active. DNS round robin, load balancer or a combination | Client retry |
| Web Server, file server back end | Multiple Web Servers | Shared file server back end | Active/Active. DNS round robin, load balancer or a combination | Client retry |
| Web Server, SQL Server back end | Multiple Web Servers | SQL Server database | Active/Active. DNS round robin, load balancer or a combination | Client retry |
| Hyper-V, failover cluster | Multiple servers in a cluster | Shared storage (FC, iSCSI, SAS, SMB file share) | Active/Passive. Clients connect to the IP address exposed by the VM | VM restarted upon failure |
| Hyper-V, Replica | Multiple servers | Replication, per VM | Active/Passive. Clients connect to the IP address exposed by the VM | Manual failover (test option available) |
| SQL Server, Replication | Multiple servers | Replication, per database (several methods) | Active/Active. Clients connect by server name | Application may detect failures and switch servers |
| SQL Server, Log Shipping | Multiple servers | Log shipping, per database | Active/Passive. Clients connect by server name | Manual failover |
| SQL Server, Mirroring | Multiple servers, optional witness | Mirroring, per database | Active/Passive. Failover Partner specified in the connection string | Automatic failover if synchronous, with witness. Application needs to reconnect |
| SQL Server, AlwaysOn Failover Cluster Instances | Multiple servers in a cluster | Shared storage (FC, iSCSI, SAS, SMB file share) | Active/Passive. Name and IP address resources, published to DNS | Automatic failover. Application needs to reconnect |
| SQL Server, AlwaysOn Availability Groups | Multiple servers in a cluster | Mirroring, per availability group | Active/Passive. Availability Group listener with a name and IP address, published to DNS | Automatic failover if using synchronous-commit mode. Application needs to reconnect |
| SharePoint Server (front end) | Multiple Servers | SQL Server storage | Active/Active. DNS round robin, load balancer or a combination | Client retry |
| Exchange Server (DAG) with Outlook | Multiple Servers in a Cluster | Database Availability Groups (synchronous replication) | Active/Active. Client Access layer (uses AD for mailbox/DAG information). Names published to DNS | Outlook client goes into cached mode, reconnects |
Conclusion
I hope this post helped you understand the principles behind increasing server availability.
As a final note, please take into consideration that not all services require the highest possible level of availability. This might be an easy decision for certain services like DHCP, DNS and Active Directory, where the additional cost is relatively small and the benefits are sizable. You might want to think twice before increasing the availability of a large backup server, where a few hours of downtime might be acceptable and the cost of duplicating the infrastructure is significantly higher.
Depending on how much availability your service level agreement states, you might need different types of solutions. We generally measure availability in “nines”, as described in the table below:
| Nines | Availability | Downtime per year | Downtime per week |
|---|---|---|---|
| 1 | 90% | ~ 36 days | ~ 17 hours |
| 2 | 99% | ~ 3.6 days | ~ 100 minutes |
| 3 | 99.9% | ~ 9 hours | ~ 10 minutes |
| 4 | 99.99% | ~ 52 minutes | ~ 1 minute |
| 5 | 99.999% | ~ 5 minutes | ~ 6 seconds |
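These figures follow directly from the percentages; a quick Python check reproduces them:

```python
# Reproduce the table: allowed downtime per year and per week for n nines.
MIN_PER_YEAR = 365 * 24 * 60
MIN_PER_WEEK = 7 * 24 * 60

for nines in range(1, 6):
    unavailability = 10 ** -nines  # e.g. 3 nines -> 0.001
    print(f"{nines} nines: "
          f"{unavailability * MIN_PER_YEAR:9.1f} min/year, "
          f"{unavailability * MIN_PER_WEEK:7.2f} min/week")
```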
You should consider your overall requirements and the related infrastructure investments that would give you the most “nines” per dollar.