To plan for using the Global Mirror feature
in noncycling mode, consider these requirements.
All components in the SAN must be able to sustain the workload that is generated by application hosts and by the Global Mirror background copy process. If any component cannot sustain the workload, the Global Mirror relationships are automatically stopped to protect your application hosts from increased response times.
Note: Errors about Global Mirror operations
are not logged.
When you use the noncycling
Global Mirror feature, follow these
best practices:
- Use IBM Spectrum Control or an equivalent SAN performance analysis tool to monitor your SAN environment. IBM Spectrum Control provides an easy way to analyze the system's performance statistics. For more information, refer to the IBM Spectrum Control documentation.
- Analyze the
system's
performance statistics to determine the peak application write workload that the link must support.
Gather statistics over a typical application I/O workload cycle.
- Set the background copy rate to a value that can be supported by the intersystem link and by the back-end storage systems at the remote system. See the background copy rate example after this list.
- Do not use cache-disabled volumes in noncycling Global Mirror relationships. See the cache-mode check after this list.
- Set the gmlinktolerance parameter to an appropriate value. The default value is 300 seconds (5 minutes). See the gmlinktolerance example after this list.
- When you perform SAN maintenance tasks, take one of the following actions (see the maintenance example after this list):
- Reduce the application I/O workload during the maintenance task.
- Disable the gmlinktolerance feature or increase the value of the
gmlinktolerance parameter.
Note: If you increase the value of the
gmlinktolerance parameter during the maintenance task, do not set it to the
normal value until the maintenance task is complete. If the gmlinktolerance feature is disabled
while maintenance is performed, enable it after the maintenance task is complete.
- Stop the Global Mirror relationships.
- Evenly distribute the preferred nodes for the noncycling Global Mirror volumes between the nodes in the systems. Each volume in an I/O group has a preferred node property that can be used to balance the I/O load between the nodes in the I/O group. The preferred node property is also used by the Global Mirror feature to route I/O operations between systems. The node that receives a write operation for a volume is normally the preferred node for that volume. If the volume is in a Global Mirror relationship, that node is responsible for sending the write operation to the preferred node of the secondary volume. By default, the preferred node of a new volume is the node that owns the fewest volumes of the two nodes in the I/O group. Each node in the remote system has a fixed pool of Global Mirror system resources for each node in the local system. To maximize Global Mirror performance, set the preferred nodes for the volumes of the remote system so that every combination of primary node and secondary node is used. See the preferred node example after this list.
- Do not issue the rmvdisk -force command for a secondary volume that is in a
running relationship.
- Stop all relationships before you upgrade a cluster that contains secondary volumes. See the example of stopping relationships after this list.
- If the secondary volume is the source of a FlashCopy mapping, stop the relationship before you start the FlashCopy mapping. See the FlashCopy example after this list.
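The following examples illustrate several of these practices. They are sketches only: all object names (remote_sys, vol1, gm_vol2, gm_rel1, gm_group1, fc_map1) are hypothetical, the numeric values are illustrative, and the exact parameters can vary by code level, so check the CLI reference for your system before you use them.
To limit the background copy rate, assuming your code level supports the -backgroundcopyrate parameter of the chpartnership command, the following command restricts the Global Mirror background copy to 25% of the configured partnership bandwidth:
    chpartnership -backgroundcopyrate 25 remote_sys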
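To check whether a volume is cache-disabled, display its details and review the cache attribute (shown in the detailed lsvdisk view on most code levels):
    lsvdisk vol1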
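To set the gmlinktolerance parameter to its default value of 300 seconds, use the chsystem command:
    chsystem -gmlinktolerance 300
You can confirm the current setting in the lssystem output (the gm_link_tolerance field on many code levels).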
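For a SAN maintenance window, a gmlinktolerance value of 0 disables the feature. Disable it before the maintenance task starts and restore the normal value after the task is complete:
    chsystem -gmlinktolerance 0
    chsystem -gmlinktolerance 300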
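To review and rebalance preferred nodes, display the preferred_node_id attribute of each volume and, on code levels that provide the movevdisk command, change the preferred node of a volume without disrupting host I/O:
    lsvdisk gm_vol2
    movevdisk -node 2 gm_vol2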
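To stop relationships before an upgrade, or before you remove a secondary volume, stop them individually or as a consistency group:
    stoprcrelationship gm_rel1
    stoprcconsistgrp gm_group1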
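If the secondary volume of relationship gm_rel1 is the source of FlashCopy mapping fc_map1, stop the relationship first and then start the mapping:
    stoprcrelationship gm_rel1
    startfcmap -prep fc_map1
The -prep parameter prepares the mapping as part of the start; omit it if you prepare the mapping separately with prestartfcmap.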