HyperSwap® configuration by using interswitch links

You can use interswitch links (ISLs) in paths between nodes to configure a HyperSwap® topology system. If the cable distance between the two production sites exceeds 100 km, performance can be affected.

Using ISLs for node-to-node communication requires configuring two separate SANs, each of which consists of two separate redundant fabrics:
  1. Configure one SAN with two separate fabrics so that it is dedicated for node-to-node communication. This SAN is referred to as a private SAN. This private SAN can be used by more than one HyperSwap® topology system if it has enough bandwidth for all these systems. For more information, see Additional bandwidth requirements.
  2. Configure one SAN with two separate fabrics so that it is dedicated for host attachment; storage system attachment; and Global Mirror, Metro Mirror, or HyperSwap® operations. This SAN is referred to as a public SAN.

The network hardware that carries the ISLs must maintain the physical separation and independence of the two redundant fabrics. For example, do not connect the two fabrics into a single dark fibre link. If there are two dark fibre links, dedicate one link for each fabric. Do not cross-connect the two fabrics onto each of the two links.
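The layout rules above can be expressed as a simple validation sketch. This is a hypothetical Python model written for illustration only, not an IBM product API; all class and function names are assumptions. It checks the simplest case in which each fabric rides a dedicated physical link, and deliberately ignores the FCIP tunnel and DWDM multiplexing exceptions described later in this topic.

```python
# Hypothetical checker for the SAN layout rules in this topic:
# exactly two SANs (private and public), two redundant fabrics per SAN,
# and no physical link (for example, a dark fibre link) shared between
# fabrics or between the private and public SANs.
from dataclasses import dataclass

@dataclass
class Fabric:
    name: str
    isls: set        # physical links (e.g. dark fibre) that carry this fabric

@dataclass
class San:
    name: str        # "private" or "public"
    fabrics: list

def validate_layout(sans):
    errors = []
    if len(sans) != 2:
        errors.append("exactly two SANs (private and public) are required")
    for san in sans:
        if len(san.fabrics) != 2:
            errors.append(f"SAN {san.name!r} needs two redundant fabrics")
    # In this simplified model, no link may carry more than one fabric.
    # This covers both rules at once: the two fabrics of a SAN stay
    # physically independent, and ISLs are not shared between the
    # private and public SANs.
    seen = {}
    for san in sans:
        for fabric in san.fabrics:
            for link in fabric.isls:
                owner = f"{san.name}/{fabric.name}"
                if link in seen:
                    errors.append(
                        f"link {link!r} is shared by {seen[link]} and {owner}")
                else:
                    seen[link] = owner
    return errors

# Example: a compliant layout with one dedicated dark fibre link per fabric.
layout = [
    San("private", [Fabric("A", {"fibre-1"}), Fabric("B", {"fibre-2"})]),
    San("public",  [Fabric("C", {"fibre-3"}), Fabric("D", {"fibre-4"})]),
]
print(validate_layout(layout))   # → []
```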

Rules for HyperSwap® configurations that use ISLs

In a HyperSwap® configuration, a site is defined as an independent failure domain, so a single fault affects at most one site. If the system is configured properly, it continues to operate after the loss of one failure domain.

However, the system does not guarantee that it can survive the failure of two sites.

Each SAN consists of at least one fabric that spans both production sites. At least one fabric of the public SAN also includes the quorum site. You can use different approaches to configure private and public SANs.
  • Use dedicated Fibre Channel switches for each SAN.
  • Use separate virtual fabrics or virtual SANs for each SAN.
    Note: ISLs must not be shared between private and public virtual fabrics.

To implement private and public SANs with dedicated switches, any combination of supported switches can be used. For the list of supported switches and for supported switch partitioning and virtual fabric options, see the interoperability website.

As with every managed disk, all control enclosures must access the quorum disk through the same storage system ports. If a storage system with active/passive controllers (such as IBM® DS3000, IBM® DS4000™, IBM® DS5000, or IBM® FAStT) is attached to a fabric, the storage system must be connected to this fabric with both internal controllers.

You can extend quorum site connectivity by using FCIP, passive WDM, or active WDM. The connections must be reliable, and the links from both production sites to the quorum site must be independent and must not share any long-distance equipment. FCIP links are also supported for ISLs between the two production sites in public and private SANs. A private SAN and a public SAN can be routed across the same FCIP link. However, to guarantee bandwidth to the private SAN, it is typically necessary to configure separate FCIP tunnels. Similarly, multiple ISLs can be multiplexed across a single DWDM link.

Note: It is not required to UPS-protect FCIP routers or active WDM devices that are used only for the control enclosure-to-quorum communication.

A HyperSwap® configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although the system can use other types of storage systems for providing quorum disks, access to these quorum disks is always through a single path.

For quorum disk configuration requirements, see the Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates technote at the following website:

Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates

Additional bandwidth requirements

A bandwidth equal to the peak write bandwidth, summed across all hosts, is required for intersite communication between I/O groups. This bandwidth must be available in the private SAN. Additionally, you need intersite bandwidth in the public SAN for host-to-node communication if a host accesses nodes in the other site, for example after a failure of the local I/O group of the host, or to access volumes that do not use the HyperSwap® function. The guideline of a bandwidth equal to the peak write bandwidth for private SANs gives the minimum bandwidth that is supported for HyperSwap operations. In some non-optimal configurations, additional bandwidth is required to avoid performance issues. For example, if hosts at different sites share a volume, the private SAN needs bandwidth equal to two times the peak write bandwidth plus the peak read bandwidth.
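The sizing guideline above can be sketched as a small helper. The function name and its inputs are illustrative assumptions, not part of any IBM tool; the two rules it encodes come directly from this topic: the minimum is the peak write bandwidth summed across all hosts, and a volume shared by hosts at both sites raises the requirement to two times the peak write bandwidth plus the peak read bandwidth.

```python
# Hypothetical sizing helper for the private-SAN intersite bandwidth
# guideline in this topic. Units are arbitrary as long as they are
# consistent (MB/s is used in the examples below).
def private_san_bandwidth(peak_write_mbps, peak_read_mbps=0,
                          volume_shared_across_sites=False):
    """Return the minimum private-SAN intersite bandwidth.

    Baseline rule: peak write bandwidth, summed across all hosts.
    Shared-volume rule: if hosts at different sites share a volume,
    2 x peak write bandwidth + peak read bandwidth.
    """
    if volume_shared_across_sites:
        return 2 * peak_write_mbps + peak_read_mbps
    return peak_write_mbps

# 400 MB/s peak writes, no volume shared across sites: 400 MB/s minimum.
print(private_san_bandwidth(400))              # → 400
# Same writes plus 300 MB/s peak reads on a volume shared by both sites.
print(private_san_bandwidth(400, 300, True))   # → 1100
```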