Zoning details

Ensure that you are familiar with these zoning details, which describe the requirements for external storage system zones and host zones. More details are included in the SAN configuration and zoning rules summary.

Paths to hosts

The number of paths through the network from the node canisters to a host must not exceed eight. Configurations in which this number is exceeded are not supported.

To find the worldwide port names (WWPNs) that are required to set up Fibre Channel zoning with hosts, use the lstargetportfc command. This command also displays the current failover status of host I/O ports.
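As an illustration (the system address and user name are placeholders, and the exact output columns vary by code level), the command can be run from the system CLI over SSH:

```shell
# Connect to the system CLI (placeholder user and address) and list the
# Fibre Channel target ports, their WWPNs, and failover status.
ssh superuser@cluster_ip "lstargetportfc"

# For scripted processing, the -delim option that the CLI list commands
# accept produces delimiter-separated fields:
ssh superuser@cluster_ip "lstargetportfc -delim :"
```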

To restrict the number of paths to a host, zone the switches so that each host bus adapter (HBA) port is zoned with one port from each node canister in each I/O group from which it accesses volumes. If a host has multiple HBA ports, zone each port to a different set of node canister ports to maximize performance and redundancy. This rule also applies to a host with a Converged Network Adapter (CNA) that accesses volumes by using FCoE.
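The path count that results from this zoning rule can be checked with simple arithmetic. The following sketch is illustrative only (the function and its parameters are not part of the product): the number of paths a host sees is the number of host HBA ports multiplied by the number of node canister ports each HBA port is zoned to.

```python
def paths_to_host(hba_ports: int, nodes_per_io_group: int, io_groups: int,
                  node_ports_per_node: int = 1) -> int:
    """Number of paths a host sees when each HBA port is zoned to
    node_ports_per_node ports on every node canister it uses."""
    return hba_ports * nodes_per_io_group * io_groups * node_ports_per_node

# A host with 4 HBA ports that uses one 2-node I/O group, with each HBA
# port zoned to exactly one port per node canister, sits exactly at the
# supported maximum of eight paths:
paths = paths_to_host(hba_ports=4, nodes_per_io_group=2, io_groups=1)
assert paths == 8
```

Zoning each HBA port to more than one port per node canister, or to a second I/O group, doubles the path count and can exceed the supported maximum of eight.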

External storage system zones

Switch zones that contain storage system ports must not have more than 40 ports. A configuration that exceeds 40 ports is not supported.

Zones

The switch fabric must be zoned so that the nodes can detect the back-end storage systems and the front-end host HBAs. Typically, the front-end host HBAs and the back-end storage systems are not in the same zone. The exception is when a split host and split storage system configuration is in use.

All nodes in a system must be able to detect the same ports on each back-end storage system. Operation in a mode where two nodes detect a different set of ports on the same storage system is degraded, and the system logs errors that request a repair action. This can occur if inappropriate zoning is applied to the fabric or if inappropriate LUN masking is used. This rule has important implications for back-end storage, such as IBM DS4000 storage systems, which impose exclusive rules for mappings between HBA worldwide node names (WWNNs) and storage partitions.

Each port must be zoned so that it can be used for internode communications. When configuring switch zoning, you can zone some node ports to a host or to back-end storage systems.

When you configure zones for communication between nodes in the same system, the minimum configuration requires that all Fibre Channel ports on a node detect at least one Fibre Channel port on each other node in the same system. This configuration cannot be reduced.

It is critical that you configure storage systems and the SAN so that a system cannot access logical units (LUs) that a host or another system can also access. You can achieve this configuration with storage system logical unit number (LUN) mapping and masking.

If a node can detect a storage system through multiple paths, use zoning to restrict communication to those paths that do not travel over ISLs.

With Metro Mirror and Global Mirror configurations, additional zones are required that contain only the local nodes and the remote nodes. It is valid for the local hosts to see the remote nodes or for the remote hosts to see the local nodes. Any zone that contains the local and the remote back-end storage systems and local nodes or remote nodes, or both, is not valid.

For best results in Metro Mirror and Global Mirror configurations where the round-trip latency between systems is less than 80 milliseconds, zone each node so that it can communicate with at least one Fibre Channel port on each node in each remote system. This configuration maintains fault tolerance against port and node failures within both the local and remote systems.

However, to accommodate the limitations of some switch vendors on the number of ports or worldwide node names (WWNNs) that are allowed in a zone, you can further reduce the number of ports or WWNNs in a zone. Such a reduction can result in reduced redundancy and additional workload being placed on other system nodes and the Fibre Channel links between the nodes of a system.

If the round-trip latency between systems is greater than 80 milliseconds, stricter configuration requirements apply:
  • Use SAN zoning and port masking to ensure that two Fibre Channel ports on each node that is used for replication are dedicated for replication traffic.
  • Apply SAN zoning to provide separate intersystem zones for each local-to-remote I/O group pair that is used for replication. See the information about long-distance links for Metro Mirror and Global Mirror partnerships for further details.

The minimum configuration requirement is to zone both nodes in one I/O group to both nodes in one I/O group at the secondary site. This zoning maintains fault tolerance against a node or port failure at either the local or the remote site. It does not matter which I/O groups at either site are zoned because I/O traffic can be routed through other nodes to reach the destination. However, if the I/O group that does the routing contains the nodes that service the host I/O, there is no additional burden or latency for those I/O groups because their nodes are directly connected to the remote system.
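As a sketch of that minimum configuration (the WWPNs are invented placeholders, not real port names), the intersystem zone pairs both nodes of one local I/O group with both nodes of one remote I/O group:

```python
# Hypothetical WWPNs for one replication port on each node canister.
local_io_group = ["50:05:07:68:01:10:00:01", "50:05:07:68:01:10:00:02"]
remote_io_group = ["50:05:07:68:01:20:00:01", "50:05:07:68:01:20:00:02"]

# The minimum intersystem zone contains both local nodes and both
# remote nodes: four members in total.
intersystem_zone = local_io_group + remote_io_group
assert len(intersystem_zone) == 4

# Every local node can reach every remote node, so a single node or
# port failure on either side still leaves a working path.
pairs = [(l, r) for l in local_io_group for r in remote_io_group]
assert len(pairs) == 4
```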

If only a subset of the I/O groups within a system uses Metro Mirror and Global Mirror, you can restrict the zoning so that only those nodes can communicate with nodes in remote systems. You can also zone nodes that are not members of any system so that they detect all the systems; such a node can then be added to a system if you must replace a node.

Host zones

The configuration rules for host zones are different depending upon the number of hosts that access the system. For configurations of fewer than 64 hosts per system, the system supports a simple set of zoning rules that enable a small set of host zones to be created for different environments. For configurations of more than 64 hosts per system, the system supports a more restrictive set of host zoning rules. These rules apply for both Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) connectivity.

Zones that contain host HBAs must ensure that HBAs in dissimilar hosts, or dissimilar HBAs in the same host, are in separate zones. Hosts are dissimilar if they run different operating systems or are different hardware platforms; different levels of the same operating system are regarded as similar.

To obtain the best overall performance of the system and to prevent overloading, the workload to each port must be equal. This can typically involve zoning approximately the same number of host Fibre Channel ports to each Fibre Channel port.

Systems with fewer than 64 hosts:

For systems with fewer than 64 attached hosts, zones that contain host HBAs must contain no more than 40 initiators, including the system ports that act as initiators. A configuration that exceeds 40 initiators is not supported. For example, a valid zone can contain 32 host ports plus 8 system ports. When possible, place each HBA port in a host that connects to a node into a separate zone. Include exactly one port from each node in the I/O groups that are associated with this host. This type of host zoning is not mandatory, but is preferred for smaller configurations.
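The 40-initiator limit can be illustrated with the example counts from the text. This check is a hypothetical sketch, not a product tool:

```python
MAX_INITIATORS = 40  # supported maximum number of initiators per host zone

def zone_is_supported(host_ports: int, system_ports: int) -> bool:
    """A host zone is supported if the total number of initiators,
    including the system ports that act as initiators, is at most 40."""
    return host_ports + system_ports <= MAX_INITIATORS

assert zone_is_supported(32, 8)       # the valid example: 32 + 8 = 40
assert not zone_is_supported(36, 8)   # 44 initiators: not supported
```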
Note: If the switch vendor recommends fewer ports per zone for a particular SAN, the rules that are imposed by the vendor take precedence over system rules.

To obtain the best performance from a host with multiple Fibre Channel ports, the zoning must ensure that each Fibre Channel port of a host is zoned with a different group of Lenovo Storage V series system ports.

Systems with more than 64 hosts:

Each HBA port must be in a separate zone and each zone must contain exactly one port from each node in each I/O group that the host accesses.
Note: A host can be associated with more than one I/O group and therefore access volumes from different I/O groups in a SAN. However, this reduces the maximum number of hosts that can be used in the SAN. For example, if the same host uses volumes in two different I/O groups, this consumes one of the 256 hosts in each I/O group. If each host accesses volumes in every I/O group, there can be only 256 hosts in the configuration.
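The host-count arithmetic in the note can be made concrete with a small sketch (the number of I/O groups here is an assumed example):

```python
HOSTS_PER_IO_GROUP = 256  # maximum host objects per I/O group

# Each I/O group that a host accesses consumes one of that I/O group's
# 256 host objects. If every host accesses volumes in all 4 I/O groups,
# the whole configuration is limited to 256 hosts:
io_groups = 4
io_groups_per_host = 4
max_hosts = HOSTS_PER_IO_GROUP * io_groups // io_groups_per_host
assert max_hosts == 256

# If each host instead uses only one I/O group, the configuration can
# hold up to 256 hosts per I/O group, 1024 in total:
assert HOSTS_PER_IO_GROUP * io_groups // 1 == 1024
```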