You can use the command-line interface (CLI) to remove
a node from a system.
After the node is deleted, the partner node enters
write-through mode until another node is added back into the I/O group.
By
default, the rmnode command flushes the cache on
the specified node before taking the node offline. When operating
in a degraded state, the system ensures that data loss does not occur
as a result of deleting the only node with the cache data.
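Before you remove a node, it can help to confirm which I/O group the node belongs to and whether its partner node is online. A minimal check, assuming a node that is named node2 (the name is used here only as an example), might look like this:
lsnode
lsnode node2
The first command lists the nodes in the system together with their I/O groups and status; the second command shows the detailed view for node2.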
Attention: - If you are removing a single node and the remaining node in the
I/O group is online, the data can be exposed to a single point of
failure if the remaining node fails.
- If both nodes in the I/O group are online and the
volumes are already degraded before you delete the node, redundancy
to the volumes is already reduced. Removing a node might result in
loss of access to data, and data loss might occur if the force option
is used.
- Removing the last node destroys the system. Before you delete
the last node in the system, ensure that you want to destroy the system.
- When you delete a node, you remove all redundancy
from the I/O group. As a result, new or existing failures can cause
I/O errors on the hosts. These failures can occur:
- Host configuration errors
- Zoning errors
- Multipathing software configuration errors
- If you are deleting the last node in an I/O group
and there are volumes assigned to the I/O group, you cannot delete
the node from the system if the node is online. You must back up or
migrate all data that you want to save before you delete the node.
If the node is offline, you can delete the node.
- To take the specified node offline immediately without flushing
the cache or ensuring that data loss does not occur, run the rmnode command
with the force parameter. The force parameter forces continuation of the command
even though any node-dependent volumes will be taken offline. Use
the force parameter with caution; access
to data on node-dependent volumes will be lost.
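For example, to take a node offline immediately even though node-dependent volumes are taken offline, you might run a command similar to the following, assuming a node that is named node2 (the name is used here only as an example):
rmnode -force node2
Use this form only when you have confirmed that loss of access to the data on node-dependent volumes is acceptable.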
Complete these steps to delete a node:
- If you are deleting the last node in an I/O group, determine
the volumes that are still assigned to this I/O group (see the example after this step):
- Issue this CLI command to request a filtered view of
the volumes:
lsvdisk -filtervalue IO_group_name=name
Where name is
the name of the I/O group.
- Issue this CLI command to list the hosts that this volume
is mapped to:
lsvdiskhostmap vdiskname/identification
Where vdiskname/identification is
the name or identification of the volume.
Note: If volumes are assigned to this I/O group
that contain data that you want to continue to access, back up the
data or migrate the volumes to a different (online) I/O group.
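For example, assuming an I/O group that is named io_grp0 and a volume that is named vdisk1 (both names are used here only as examples), the commands in this step might look like this:
lsvdisk -filtervalue IO_group_name=io_grp0
lsvdiskhostmap vdisk1
The first command lists the volumes that are still assigned to io_grp0; the second command lists the hosts that vdisk1 is mapped to.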
- If this node is not the last node in the clustered system, turn off the power to the node
that you intend to remove. This step ensures that the multipathing
device driver, such as the subsystem device driver (SDD),
does not rediscover the paths that are manually removed before you
issue the delete node request.
Attention: - If you are removing the configuration node, the rmnode command
causes the configuration node to move to a different node within the clustered system. This process might take a short
time, typically less than a minute. The system IP address remains
unchanged, but any SSH client attached to the configuration node might
need to reestablish a connection.
- If you turn on the power to the node that has been removed and
it is still connected to the same fabric or zone, it attempts to rejoin
the system. The system causes the node to remove itself from the system
and the node becomes a candidate to add to this system or another
system.
- If you are adding this node into the system, ensure that you add
it to the same I/O group that it was previously a member of. Failure
to do so can result in data corruption.
- In a service
situation, a node should normally be added back into a system by using the original node name. As long as the partner node in the
I/O group has not also been deleted, this is the default name that is used
if -name is not specified.
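If you later add the node back, the command might look like the following example, assuming that the node was originally named node2 and belonged to io_grp0 (both names are used here only as examples; the parameter that identifies the candidate node, such as a panel name or a worldwide node name, depends on your platform):
addnode -wwnodename wwnn -iogrp io_grp0 -name node2
Where wwnn is the worldwide node name of the candidate node. Specifying -iogrp io_grp0 and -name node2 returns the node to its original I/O group and name.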
- Before you delete the node, update the multipathing device
driver configuration on the host to remove all device identifiers
that are presented by the volumes that you intend to remove. If you
are using the subsystem device driver,
the device identifiers are referred to as virtual paths (vpaths).
Attention: Failure to perform this step can result
in data corruption.
See the Lenovo System Storage Multipath Subsystem Device Driver User's Guide for
details about how to dynamically reconfigure SDD for
the given host operating system.
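For example, on hosts that use SDD, you can typically list the current vpaths with the following command before you remove their device identifiers (the exact reconfiguration procedure depends on the host operating system and is described in the guide):
datapath query device
Use the output to identify the vpaths that correspond to the volumes that you intend to remove.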
- Issue this CLI command to delete a node from the clustered system:
Attention: Before
you delete the node, be aware that the
rmnode command checks
for node-dependent volumes, which are volumes that are not mirrored at the time that
the command is run. If any node-dependent volumes are found, the command
stops and returns a message. To continue removing the node despite
the potential loss of data, run the rmnode command with the
force parameter.
Alternatively, follow these steps before you remove the node to ensure
that all volumes are mirrored:
- Run the lsdependentvdisks command.
- For each node-dependent volume that is returned, run the lsvdisk command.
- Ensure that each volume returns in-sync status.
rmnode node_name_or_identification
Where node_name_or_identification is
the name or identification of the node.
Note: Before removing a
node, the command checks for any node-dependent volumes that would
go offline. If the node that you selected to delete contains flash drives that
have dependent volumes, the volumes that use those flash drives go
offline and become unavailable if the node is deleted. To maintain
access to volume data, mirror these volumes before removing the node.
To continue removing the node without mirroring the volumes, specify
the force parameter.
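For example, a complete check-and-delete sequence might look like this, assuming a node that is named node2 and that the -node parameter is used to restrict the dependency check to that node (the name is used here only as an example):
lsdependentvdisks -node node2
rmnode node2
If the first command returns no volumes, the node can be deleted without taking node-dependent volumes offline; otherwise, mirror the listed volumes first or specify the force parameter and accept the potential loss of access to data.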