rmnode

The rmnode command deletes a node from the clustered system. You can enter this command any time after a clustered system has been created.

Syntax

rmnode [ -force ] { object_id | object_name }

Parameters

-force
(Optional) Overrides the checks that this command runs. The -force parameter overrides the following two checks:
  • If the command results in volumes going offline, the command fails unless the -force parameter is used.
  • If the command results in a loss of data because there is unwritten data in the write cache that is contained only within the node to be removed, the command fails unless the -force parameter is used.
If you use the -force parameter in response to an error about volumes going offline, you force the node removal and risk losing data from the write cache. Always use the -force parameter with caution.
object_id | object_name
(Required) Specifies the object name or ID of the node that you want to remove. The variable that follows the parameter is either:
  • The object name that you assigned when you added the node to the clustered system
  • The object ID that is assigned to the node (not the worldwide node name)
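
For example, assuming the node was assigned the name node3 and the object ID 3 when it was added to the clustered system (both values are hypothetical), either of the following commands removes it:

rmnode node3
rmnode 3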

Description

This command removes a node from the clustered system. This makes the node a candidate to be added back into this clustered system or into another system. After the node is deleted, the other node in the I/O group enters write-through mode until another node is added back into the I/O group.
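
For example, after the node is removed, you can issue the lsnodecandidate command to verify that the node is listed as a candidate that can be added back:

lsnodecandidate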

Attention: When you run the rmnode command to remove the configured hardware for a node:
  • Small Computer System Interface-3 (SCSI-3) reservations (through that node) are removed
  • Small Computer System Interface-3 (SCSI-3) registrations (through that node) are removed
The reservations and registrations are removed because the node no longer provides a path to the volumes.

By default, the rmnode command flushes the cache on the specified node before the node is taken offline. In some circumstances, such as when the system is already degraded (for example, when both node canisters in the I/O group are online and the virtual disks within the I/O group are degraded), the system ensures that data loss does not occur as a result of deleting the only node with the cache data.

The cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group.

To take the specified node offline immediately, without flushing the cache and without ensuring that data loss does not occur, run the rmnode command with the -force parameter.
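
For example, the following command removes the node with object ID 2 immediately, without flushing the cache (the node ID shown is hypothetical):

rmnode -force 2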

Prerequisites:

Before you issue the rmnode command, perform the following tasks and read the following Attention notices to avoid losing access to data:

  1. Determine which virtual disks (VDisks, or volumes) are still assigned to this I/O group by issuing the following command. The command requests a filtered view of the volumes, where the filter attribute is the I/O group.
    lsvdisk -filtervalue IO_group_name=name
    where name is the name of the I/O group.
  2. Determine the hosts that the volumes are mapped to by issuing the lsvdiskhostmap command (a combined example of steps 1 and 2 follows this list).
  3. Determine whether any of the volumes that are assigned to this I/O group contain data that you need to access:
    • If you do not want to maintain access to these volumes, go to step 5.
    • If you do want to maintain access to some or all of the volumes, back up the data or migrate the data to a different (online) I/O group.
  4. Determine if you need to turn the power off to the node:
    • If this is the last node in the clustered system, you do not need to turn the power off to the node. Go to step 5.
    • If this is not the last node in the clustered system, turn the power off to the node that you intend to remove. This step ensures that the Subsystem Device Driver (SDD) does not rediscover the paths that are manually removed before you issue the delete node request.
  5. Update the SDD configuration for each virtual path (vpath) that is presented by the volumes that you intend to remove. Updating the SDD configuration removes the vpaths from the volumes. Failure to update the configuration can result in data corruption. See the Multipath Subsystem Device Driver: User's Guide for details about how to dynamically reconfigure SDD for the given host operating system.
  6. Quiesce all I/O operations that are destined for the node that you are deleting. Failure to quiesce the operations can result in failed I/O operations being reported to your host operating systems.
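
For example, assuming the I/O group is named io_grp0 and it presents a volume named vdisk7 (both names are hypothetical), steps 1 and 2 might look like this:

lsvdisk -filtervalue IO_group_name=io_grp0
lsvdiskhostmap vdisk7
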
Attention:
  1. Removing the last node destroys the clustered system. Before you delete the last node, ensure that you want to destroy the clustered system.
  2. If you are removing a single node and the remaining node in the I/O group is online, the data can be exposed to a single point of failure if the remaining node fails.
  3. This command might take some time to complete because the cache in the I/O group for that node is flushed before the node is removed. If the -force parameter is used, the cache is not flushed and the command completes more quickly. However, if the deleted node is the last node in the I/O group, using the -force parameter results in the write cache for that node being discarded rather than flushed, and data loss can occur. Use the -force parameter with caution.
  4. If both nodes in the I/O group are online and the volumes are already degraded before you delete the node, redundancy to the volumes is already reduced, and loss of access to data or loss of data might occur if the -force parameter is used.
Notes:
  1. If you are removing the configuration node, the rmnode command causes the configuration node to move to a different node within the clustered system. This process might take a short time: typically less than a minute. The clustered system IP address remains unchanged, but any SSH client attached to the configuration node might need to reestablish a connection. The management GUI reattaches to the new configuration node transparently.
  2. If this is the last node in the clustered system or if it is currently assigned as the configuration node, all connections to the system are lost. The user interface and any open CLI sessions are lost if the last node in the clustered system is deleted. A time-out might occur if a command cannot be completed before the node is deleted.
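
For example, before you remove a node, you can issue the lsnode command to list the nodes in the clustered system; the output indicates which node is currently the configuration node:

lsnode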

An invocation example for rmnode

rmnode 1

The resulting output:

No feedback