Use the rmmdiskgrp command to delete a storage pool. A deleted storage pool cannot be recovered.
Syntax
rmmdiskgrp [ -force ] { mdisk_group_id | mdisk_group_name }
Parameters
- -force
- (Optional) Specifies that all volumes and host mappings be deleted. When you use this
parameter, all managed disks in the storage pool are removed and the storage pool itself is
deleted.
Remember:
- You must specify -force to delete a child pool if it contains volumes.
- You cannot specify -force to delete a parent pool that has child pools.
Note: The command fails if -force is used to delete an MDisk group when all of the following conditions are true:
- Any of the VDisks in the MDisk group are mirrored across multiple MDisk groups (other than the one that is being deleted).
- Any of the VDisk mirrors are out of sync.
- An attempt is made to delete the in-sync copy. (Deleting the only in-sync copy requires -force; -force is not needed if the VDisk has another in-sync copy.)
- The out-of-sync copy is a thin-provisioned or compressed copy in a data reduction pool.
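Before you delete a pool that contains mirrored VDisks, you can check whether each copy is synchronized by using the lsvdiskcopy command, whose output includes the sync status of each volume copy. In this sketch, the volume name is illustrative:
lsvdiskcopy volume1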
- mdisk_group_id | mdisk_group_name
- (Required) Specifies the ID or name of the storage pool that is to be deleted.
Note: You cannot delete a parent pool that has child pools. You must first delete the child pools.
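For example, a parent pool that has child pools might be deleted with a sequence like the following (the pool names are illustrative):
rmmdiskgrp -force childpool0
rmmdiskgrp -force childpool1
rmmdiskgrp parentpool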
Description
Important: Before you issue the command, ensure that you want to delete all mapping information. Data that is contained on the volumes cannot be recovered after the storage pool is deleted.
The rmmdiskgrp command deletes the specified storage pool. The -force parameter is required if volumes were created from this storage pool or if there are managed disks in the storage pool. Otherwise, the command fails.
Note: This command also removes any associated storage pool throttling.
Deleting a storage pool is essentially the same as deleting a clustered system (system) or part
of a system because the storage pool is the central point of control of virtualization. Because
volumes are created by using available extents in the storage pool, mapping between volume
extents and managed disk extents is controlled based on the storage pool.
The command deletes all volume copies in the specified storage pool. If the volume
has no remaining synchronized copies in other storage pools, the volume is also deleted.
Remember: This command is unsuccessful if:
- Volume protection is enabled (by using the chsystem command).
- The storage pool contains a volume that received I/O within the defined volume protection time period.
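If the command is blocked by volume protection, you can disable protection systemwide with the chsystem command. The following invocation is a sketch; verify the parameter name on your code level before use:
chsystem -vdiskprotectionenabled no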
Remember: This command partially completes asynchronously. All volumes, host mappings, and Copy Services relationships are deleted before the command completes. The deletion of the storage pool then completes asynchronously.
In detail, if you specify the -force parameter and volumes are still using extents in this storage pool, the following actions are initiated or occur:
- The mappings between that disk and any host objects and the associated Copy Services
relationships are deleted.
- If the volume is a part of a FlashCopy mapping, the mapping is deleted.
Note: If the mapping is not in the idle_or_copied or stopped state, the mapping is force-stopped and then deleted. Force-stopping the mapping might cause other FlashCopy mappings in the system to also be stopped. For more information, see the description of the -force parameter in the stopfcmap command.
- Any volume that is in the process of being migrated into or out of the storage pool is deleted, which frees any extents that the volume was using in another storage pool.
- Volumes are deleted without first flushing the cache. Therefore, the storage controller
LUNs that underlie any image mode MDisks might not contain the same data as the image mode
volume before the deletion.
- If managed disks exist in the storage pool, all disks are deleted from the storage pool.
They are returned to the unmanaged state.
- The storage pool is deleted.
Attention: If you use the -force parameter to delete all the storage pools in your system, the system is returned to the state it was in after you added nodes to the system. All data that is contained on the volumes is lost and cannot be recovered.
An invocation example
rmmdiskgrp -force Group3
The resulting output:
No feedback