You can use interswitch links (ISLs) in paths between nodes to configure an IBM HyperSwap® topology system. If the cable distance between the two production sites exceeds 100 km, performance might be impacted.
The network hardware that carries the ISLs must maintain the physical separation and independence of the two redundant fabrics. For example, do not connect the two fabrics into a single dark fibre link. If there are two dark fibre links, dedicate one link for each fabric. Do not cross-connect the two fabrics onto each of the two links.
In a HyperSwap configuration, a site is defined as an independent failure domain. Separating the sites protects against different types of faults; for example, if the system is configured correctly, it continues to operate after the loss of one failure domain.
However, the system does not guarantee that it can survive the failure of two sites.
Storage systems that are assigned to one of the main sites (1 or 2) need to be zoned to be visible only to the control enclosures in that site. Storage systems in site 3, or storage systems that have no site defined, must be zoned to all control enclosures.
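The zoning-visibility rule above can be sketched as a simple check. The numeric site labels and the function are illustrative assumptions only; actual zoning is performed on the SAN switches, not through any product API:

```python
# Sketch of the HyperSwap storage-zoning visibility rule.
# Site labels (1, 2, 3, None) and the function name are illustrative
# assumptions; they are not part of any product CLI or API.

def required_control_enclosure_sites(storage_site):
    """Return which sites' control enclosures must see this storage system.

    Storage at a main site (1 or 2) needs zoning only to that site's
    control enclosures; storage at site 3, or with no site defined,
    must be zoned to the control enclosures in both main sites.
    """
    if storage_site in (1, 2):
        return {storage_site}
    return {1, 2}  # site 3 or undefined site: visible to all control enclosures

print(required_control_enclosure_sites(1))     # {1}
print(required_control_enclosure_sites(3))     # {1, 2}
print(required_control_enclosure_sites(None))  # {1, 2}
```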
To implement private and public SANs with dedicated switches, any combination of supported switches can be used. For the list of supported switches and for supported switch partitioning and virtual fabric options, see the interoperability website:
http://support.lenovo.com/us/en/products/servers/lenovo-storage
As with every managed disk, all control enclosures must access the quorum disk through the same storage system ports. If a storage system with active/passive controllers (such as IBM® DS3000, IBM DS4000®, IBM DS5000, or IBM FAStT) is attached to a fabric, both internal controllers of the storage system must be connected to that fabric.
You can extend the distance to the quorum site by using FCIP, passive WDM, or active WDM for quorum site connectivity. The connections must be reliable, and the links from both production sites to the quorum site must be independent and must not share any long-distance equipment. FCIP links are also supported for ISLs between the two production sites in both public and private SANs. A private SAN and a public SAN can be routed across the same FCIP link; however, to guarantee bandwidth to the private SAN, it is typically necessary to configure dedicated FCIP tunnels. Similarly, it is permissible to multiplex multiple ISLs across a single DWDM link.
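When a private and a public SAN share one FCIP link, the capacity reserved for each tunnel must be planned. A minimal sizing sketch follows; the link capacity and the split fraction are made-up values for illustration, not product recommendations:

```python
# Illustrative split of one shared FCIP link between a private-SAN tunnel
# (node-to-node traffic) and a public-SAN tunnel (host and storage traffic).
# All numbers here are assumptions for the example.

link_capacity_gbps = 10.0   # assumed total FCIP link capacity
private_fraction = 0.7      # assumed share reserved for the private SAN

private_gbps = link_capacity_gbps * private_fraction
public_gbps = link_capacity_gbps - private_gbps

print(f"Private SAN tunnel: {private_gbps} Gbps")  # guaranteed to node traffic
print(f"Public SAN tunnel: {public_gbps} Gbps")
```

The point of dedicating tunnels is that private-SAN (node-to-node) traffic keeps its reserved share even when public-SAN traffic peaks.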
A HyperSwap configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although the system can use other types of storage systems to provide quorum disks, access to those quorum disks is always through a single path.
For quorum disk configuration requirements, see the Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates technote at the following website:
Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates
Intersite communication between I/O groups requires a bandwidth equal to the peak write bandwidth, summed across all hosts. This bandwidth must be available in the private SAN. Additionally, you need intersite bandwidth in the public SAN for host-to-node communication if a host accesses nodes in the other site, for example after a failure of the host's local I/O group, or to access volumes that do not use the HyperSwap function.
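The private-SAN sizing rule above can be expressed as a short calculation. The host names and per-host write rates are invented values for illustration:

```python
# Rough sizing sketch for the private-SAN intersite link: the required
# bandwidth equals the sum of the peak write bandwidth of all hosts,
# because every write must be replicated to the remote site while
# reads are served locally. Hosts and numbers are assumed.

peak_write_mbps = {
    "host_a": 400,  # peak write bandwidth per host, in MB/s (assumed)
    "host_b": 250,
    "host_c": 150,
}

required_private_san_mbps = sum(peak_write_mbps.values())
print(f"Private SAN intersite bandwidth needed: {required_private_san_mbps} MB/s")
```

Any public-SAN intersite bandwidth for host-to-node traffic (for example, after a local I/O group failure) must be budgeted on top of this figure.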