The
links between clustered
system pairs that perform remote mirroring must meet specific configuration, latency, and distance
requirements.
Figure 1 shows an example of a configuration that uses dual redundant fabrics for Fibre Channel
connections. Part of each fabric is located at the local system and at the remote system, and there
is no direct connection between the two fabrics.
Figure 1. Redundant fabrics
You can use Fibre Channel extenders or SAN routers to increase the distance
between two systems. Fibre Channel extenders transmit Fibre
Channel packets across long links without changing the contents of the packets. SAN
routers provide virtual N_ports on two or more SANs to extend the scope of the SAN. The SAN router
distributes the traffic from one virtual N_port to the other virtual N_port. The two Fibre
Channel fabrics are independent of each other. Therefore, N_ports on each of the fabrics
cannot directly log in to each other. See the following website for specific firmware levels and the
latest supported hardware:
https://datacentersupport.lenovo.com/
If you use Fibre Channel extenders or SAN routers, you must meet the following
requirements:
- The maximum supported round-trip latency between sites depends on the type of
partnership between systems, the software version, and the system hardware that is used. Table 1 lists the maximum round-trip
latency. This restriction applies to all variants of remote mirroring. More configuration
requirements and guidelines apply to systems that perform remote mirroring over extended distances,
where the round-trip time is greater than 80 ms.
Table 1. Maximum supported round-trip latency between sites

| Software version   | System node hardware | FC partnership | 1 Gbps IP partnership | 10 Gbps IP partnership |
|--------------------|----------------------|----------------|-----------------------|------------------------|
| 7.3.0 and earlier  | All                  | 80 ms          | 80 ms                 | 10 ms                  |
| 7.4.0 and later    |                      | 250 ms         |                       |                        |
| 7.4.0 and later    | All other models     | 80 ms          |                       |                        |
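The limits in Table 1 can be expressed as a simple lookup. The following is a minimal sketch in Python; the version bands and partnership labels mirror the table, but the dictionary and function are illustrative, not part of any product API, and the hardware-model distinction for 7.4.0 and later is simplified.

```python
# Illustrative lookup of the Table 1 latency limits (ms).
# Not a product API. The hardware-model split for "7.4.0 and later"
# is simplified here: 250 ms applies only to the models the table
# lists, and 80 ms applies to all other models.
LIMITS_MS = {
    ("7.3.0 and earlier", "FC"): 80,
    ("7.3.0 and earlier", "1 Gbps IP"): 80,
    ("7.3.0 and earlier", "10 Gbps IP"): 10,
    ("7.4.0 and later", "FC"): 250,
}

def within_limit(version_band: str, partnership: str, rtt_ms: float) -> bool:
    """Return True if the measured round-trip time is supported."""
    limit = LIMITS_MS.get((version_band, partnership))
    if limit is None:
        raise KeyError("combination not listed in Table 1")
    return rtt_ms <= limit
```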
- The round-trip latency between sites cannot exceed 80 ms for either
Fibre Channel extenders or SAN routers. This maximum round-trip latency applies
to all variants of remote mirroring, including Global Mirror with change volumes and IP
partnership.
- Metro Mirror
and Global Mirror require 2.6 Mbps of
bandwidth for intersystem heartbeat traffic.
- If the link between two sites is configured with redundancy so that it can tolerate single
failures, the link must be sized so that the bandwidth and latency requirements are still met
during single-failure conditions.
- The configuration must be tested to confirm that any failover mechanisms in the intersystem links
interoperate satisfactorily with the systems.
- All other configuration requirements are met.
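The bandwidth and single-failure requirements above can be checked with a short calculation. The sketch below assumes the 2.6 Mbps heartbeat figure from the text; the link parameters and function name are illustrative assumptions, not product terminology.

```python
# Sketch of intersystem link sizing under a single-failure condition.
# HEARTBEAT_MBPS comes from the text (Metro Mirror / Global Mirror
# intersystem heartbeat traffic); the other names are assumptions.
HEARTBEAT_MBPS = 2.6

def link_sized_ok(peak_replication_mbps: float,
                  per_link_mbps: float,
                  link_count: int) -> bool:
    """Check that replication traffic plus heartbeat traffic still
    fits after one of the redundant links has failed."""
    surviving_capacity = max(link_count - 1, 0) * per_link_mbps
    return peak_replication_mbps + HEARTBEAT_MBPS <= surviving_capacity
```

For example, two 1000 Mbps links carrying 500 Mbps of peak replication traffic pass the check, because a single surviving link still provides 1000 Mbps of capacity.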
Configuration requirements for systems that perform remote mirroring over extended distances
(greater than 80 ms round-trip latency between sites)
If you use remote mirroring between systems with 80 to 250 ms round-trip latency, you must meet
the following additional requirements:
In addition to the preceding list of requirements, the following
guidelines are provided for optimizing performance for remote mirroring
by using Global Mirror:
- Partnered systems should use the same number of nodes in each
system for replication.
- For maximum throughput, all nodes in each system should be used
for replication, both in terms of balancing the preferred node assignment
for volumes and for providing intersystem Fibre Channel connectivity.
- On the system, provisioning dedicated node ports for local node-to-node traffic (by using
port masking) isolates Global Mirror node-to-node traffic between the local nodes from other local
SAN traffic, which achieves optimal response times. This local node port masking is less of a
requirement on Storwize family systems, where traffic between node canisters in an I/O group is
serviced by the dedicated inter-canister link in the enclosure.
- Where possible, use the minimum number of partnerships between systems. For example, assume site
A contains systems A1 and A2, and site B contains systems B1 and B2. In this scenario, creating
separate partnerships between pairs of systems (such as A1-B1 and A2-B2) offers greater performance
for Global Mirror replication between sites than a configuration with partnerships that are defined
between all four systems.
Limitations on host-to-system distances
There is no limit on the Fibre Channel optical distance between
the system nodes and host servers. You can attach a server to an edge switch in a
core-edge configuration with the
system at the core.
The system can support up to three ISL hops in the fabric. Therefore, the host server
and the
system can be separated by up to five Fibre Channel links. If you use longwave small form-factor pluggable (SFP) transceivers, four of the Fibre
Channel links can be up to 10 km long.
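Although there is no stated limit on optical distance, distance still contributes round-trip latency. The following rough estimate assumes light propagates through fiber at about 200,000 km/s (roughly 5 µs per km one way); this is a physics approximation, not a product figure.

```python
# Round-trip propagation delay over optical fiber.
# ~5 us per km one way is an approximation for light in fiber,
# not a value from the product documentation.
US_PER_KM_ONE_WAY = 5.0

def fiber_round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay in microseconds."""
    return 2.0 * distance_km * US_PER_KM_ONE_WAY
```

By this estimate, a 10 km longwave SFP link contributes only about 0.1 ms of round-trip delay, a small fraction of an 80 ms latency budget; at long distances, propagation delay becomes a significant part of the budget.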