Storage pools

In general, a pool or storage pool is an allocated amount of capacity that jointly contains all of the data for a specified set of volumes. The system supports standard pools (parent pools and child pools) and data reduction pools.

Figure 1 shows a basic parent pool with associated child pools. In this graphic, the physical capacity of the parent pool is divided between two child pools. Volumes can then be created either from the capacity of the parent pool (that is, from its MDisks) or from a child pool.
Figure 1. Storage pool
This figure is described in the surrounding text

Parent pools

Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of the same size. Volumes are created from the extents that are available in the pool. You can add MDisks to a pool at any time either to increase the number of extents that are available for new volume copies or to expand existing volume copies. The system automatically balances volume extents between the MDisks to provide the best performance to the volumes.

To track the space that is available on an MDisk, the system divides each MDisk into chunks of equal size. These chunks are called extents and are indexed internally. Extent sizes can be 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, or 8192 MB. The choice of extent size affects the total amount of storage that is managed by the system.

You specify the extent size when you create a new parent pool. You cannot change the extent size later; it must remain constant throughout the lifetime of the parent pool.

You cannot use the data migration function to migrate volumes between parent pools that have different extent sizes. However, you can use volume mirroring to move data to a parent pool that has a different extent size.

Use volume mirroring to add a copy of the volume in the destination pool. After the copies are synchronized, you can free up extents by deleting the copy of the data in the source pool. You can also use the FlashCopy function to create a copy of a volume in a different pool.

A system can manage 2^22 extents. For example, with a 16 MB extent size, the system can manage up to 16 MB x 4,194,304 = 64 TB of storage.

When you choose an extent size, consider your future needs. For example, if you currently have 40 TB of storage and you specify an extent size of 16 MB for all parent pools, the capacity of the system is limited to 64 TB of storage in the future. If you select an extent size of 64 MB for all parent pools, the capacity of the system can grow to 256 TB.
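The extent arithmetic above can be checked with a short calculation. This is a standalone sketch; the 2^22 extent limit and the example extent sizes come from the text, and the function name is illustrative:

```python
# Maximum manageable capacity = extent size x 2^22 extents (per the text).
MAX_EXTENTS = 2 ** 22  # 4,194,304 extents per system

def max_capacity_tb(extent_size_mb):
    """Maximum managed capacity in TB for a given extent size in MB."""
    return extent_size_mb * MAX_EXTENTS / (1024 * 1024)  # MB -> TB

print(max_capacity_tb(16))  # 64.0
print(max_capacity_tb(64))  # 256.0
```

This is why a 16 MB extent size caps the system at 64 TB, while 64 MB extents allow growth to 256 TB.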

Using a larger extent size can waste storage. When a volume is created, the storage capacity for the volume is rounded to a whole number of extents. If you configure the system to have many small volumes and you use a large extent size, storage can be wasted at the end of each volume.
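The rounding behavior described above can be illustrated with a small sketch (the function name and volume sizes are illustrative, not part of the product):

```python
# Volume capacity is rounded up to a whole number of extents, so a large
# extent size can waste space at the end of each small volume.
import math

def allocated_mb(volume_mb, extent_mb):
    """Capacity actually consumed: volume size rounded up to whole extents."""
    return math.ceil(volume_mb / extent_mb) * extent_mb

# A 100 MB volume with 8192 MB extents consumes one full extent:
print(allocated_mb(100, 8192))  # 8192
# The same volume with 16 MB extents wastes far less:
print(allocated_mb(100, 16))    # 112
```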


When you create or manage a parent pool, consider the following general guidelines:

  • Ensure that all MDisks that are allocated to the same tier of a parent pool are the same RAID type. Using the same RAID type within a tier ensures that a single failure of a physical disk does not take the entire pool offline. For example, if you have three RAID-5 arrays in one pool and add a non-RAID disk to this pool, you lose access to all the data that is striped across the pool if the non-RAID disk fails. Similarly, for performance reasons, do not mix RAID types within a tier; the performance of all volumes is reduced to that of the slowest MDisk in the tier.
  • An MDisk can be associated with just one parent pool.
  • You can specify a warning capacity for a pool. A warning event is generated when the amount of space that is used in the pool exceeds the warning capacity. The warning threshold is especially useful with thin-provisioned volumes that are configured to automatically use space from the pool.
  • Volumes are associated with just one pool, except when you migrate between parent pools.
  • Volumes that are allocated from a parent pool are striped across all the storage that is placed into that parent pool. This also enables nondisruptive migration of data from one storage system to another storage system and helps simplify the decommissioning process if you want to decommission a storage system later.
  • You can only add MDisks that are in unmanaged mode. When MDisks are added to a parent pool, their mode changes from unmanaged to managed.
  • You can delete an MDisk, including an array MDisk, from a parent pool under the following conditions:
    • Volumes are not using any of the extents that are on the MDisk.
    • Enough free extents are available elsewhere in the pool to move any extents that are in use from this MDisk.
    If extents on the MDisk are used by volumes in a child pool, the system migrates those extents to other MDisks in the parent pool to ensure that data is not lost. Before you remove MDisks from a parent pool, ensure that the parent pool retains enough capacity for any child pools that are associated with it.
  • If you delete a parent pool, you cannot recover the mapping between the extents in the pool and the volumes that used them. If the parent pool has associated child pools, you must delete the child pools first, which returns their extents to the parent pool. After the child pools are deleted, you can delete the parent pool. The MDisks that were in the parent pool are returned to unmanaged mode and can be added to other parent pools. Because deleting a parent pool can cause a loss of data, you must force the deletion if any volumes are associated with it.
  • If the volume is mirrored and the synchronized copies of the volume are all in one pool, the mirrored volume is destroyed when the storage pool is deleted. If the volume is mirrored and there is a synchronized copy in another pool, the volume remains after the pool is deleted.

Child pools

Instead of being created directly from MDisks, child pools are created from existing capacity that is allocated to a parent pool. As with parent pools, volumes can be created that specifically use the capacity that is allocated to the child pool. Child pools are similar to parent pools, with similar properties, and can be used for volume copy operations.

Child pools are created with fully allocated physical capacity. The capacity of the child pool must be smaller than the free capacity that is available to the parent pool. The allocated capacity of the child pool is no longer reported as the free space of its parent pool.
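The capacity accounting described above can be sketched as follows (the variable names and GiB values are illustrative, not CLI output fields):

```python
# Sketch: a child pool's allocated capacity is carved out of the parent
# pool's free capacity and is no longer reported as free in the parent.
parent_total = 1000   # GiB
parent_used = 300
parent_free = parent_total - parent_used   # 700 GiB free in the parent

child_capacity = 200
# The child pool must be smaller than the parent's free capacity:
assert child_capacity < parent_free

# After creation, the parent's reported free space shrinks accordingly:
parent_free_after = parent_free - child_capacity
print(parent_free_after)  # 500
```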

When you create or work with a child pool, consider the following general guidelines:
  • Child pools can be created and changed with the command-line interface or through IBM Spectrum Control when you create VMware vSphere Virtual Volumes. You can use the management GUI to view child pools and their properties.
  • As with parent pools, you can specify a warning threshold that alerts you when the capacity of the child pool is reaching its upper limit. Use this threshold to ensure that access is not lost when the capacity of the child pool is close to its allocated capacity.
  • On systems with encryption enabled, child pools can be created to migrate existing volumes in a non-encrypted pool to encrypted child pools. When you create a child pool after encryption is enabled, an encryption key is created for the child pool even when the parent pool is not encrypted. You can then use volume mirroring to migrate the volumes from the non-encrypted parent pool to the encrypted child pool.
  • Ensure that any child pools that are associated with a parent pool have enough capacity for the volumes that are in the child pool before removing MDisks from a parent pool. The system automatically migrates all extents that are used by volumes to other MDisks in the parent pool to ensure data is not lost.
  • You cannot shrink the capacity of a child pool below its real capacity. Capacity for a child pool is reserved from the parent pool in whole extents, so a child pool can span multiple extents. When a child pool is shrunk, the system resets the warning level and issues a warning if the new capacity reaches that level.
  • The system supports migrating a copy of volumes between child pools within the same parent pool or migrating a copy of a volume between a child pool and its parent pool. Migrations between a source and target child pool with different parent pools are not supported. However, you can migrate a copy of the volume from the source child pool to its parent pool. The volume copy can then be migrated from the parent pool to the parent pool of the target child pool. Finally, the volume copy can be migrated from the target parent pool to the target child pool.
  • A child pool cannot be created from a data reduction pool.
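The migration restrictions in the list above can be summarized in a small helper function. This is a sketch of the rules only; the pool names and the function are illustrative, not part of the product API:

```python
# Sketch of the child-pool migration rules: a volume copy can move between
# a child pool and its parent, or between child pools of the same parent.
def direct_migration_allowed(source, target, parent_of):
    """source/target are pool names; parent_of maps each pool to its
    parent (parent pools map to themselves)."""
    same_parent = parent_of[source] == parent_of[target]
    parent_child = parent_of[source] == target or parent_of[target] == source
    return same_parent or parent_child

parents = {"PoolA": "PoolA", "ChildA1": "PoolA", "ChildA2": "PoolA",
           "PoolB": "PoolB", "ChildB1": "PoolB"}

print(direct_migration_allowed("ChildA1", "ChildA2", parents))  # True
# Different parents: not allowed directly; route via PoolA, then PoolB.
print(direct_migration_allowed("ChildA1", "ChildB1", parents))  # False
```

For the disallowed case, the text describes the three-hop route: child pool to its parent, parent to the target's parent, then target parent to the target child pool.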

Data reduction pools

To use data reduction technologies on the system, you need to create a data reduction pool, create thin-provisioned or compressed volumes, and map these volumes to hosts that support SCSI unmap commands.

Data reduction can increase storage efficiency and performance and reduce storage costs, especially for flash storage. Data reduction reduces the amount of data that is stored on external storage systems and internal drives by reclaiming previously used storage resources that are no longer needed by host systems. To estimate the potential capacity savings that data reduction technologies can provide on the system, use the Data Reduction Estimator Tool (DRET). This tool analyzes existing user workloads that are being migrated to a new system. The tool scans target workloads on all attached storage arrays, consolidates these results, and generates an estimate of the potential data reduction savings for the entire system.

Go to https://www-945.ibm.com/support/fixcentral/ to find the tool and its readme. Data reduction is supported only on Lenovo Storage V5030 and Lenovo Storage V5030F systems.

Note: The Data Reduction Estimator Tool also provides some analysis of potential compression savings for volumes. However, it is recommended that you also use the management GUI or the command-line interface to run the integrated Comprestimator utility to gather data about potential compression savings for volumes in data reduction pools.

The system supports data reduction pools, which can contain thin-provisioned or compressed volumes. Data reduction pools also support additional capacity savings on thin-provisioned and compressed volumes through data deduplication. When deduplication is specified for a thin-provisioned or compressed volume, duplicate versions of data are eliminated and not written to storage, which saves additional capacity.

Data reduction pools also contain specific volumes that track when space is freed from hosts and unused capacity that can be collected and reused within the storage pool. When space is freed from hosts, the process is called unmapping. Unmap is a set of SCSI commands that hosts use to indicate that allocated capacity is no longer required on a target volume. The freed space can be collected and reused on the system without reallocating capacity on the storage.

The pool can also reclaim unused capacity in a data reduction pool and redistribute it to free extents. Reclaimable capacity is unused capacity that is created when data is overwritten, volumes are deleted, or data is marked as unneeded by a host with the SCSI unmap command. When a host no longer needs the data that is stored on a volume, it uses SCSI unmap commands to release that storage from the volume. When these volumes are in data reduction pools, that space becomes reclaimable capacity, which is monitored, collected, and eventually redistributed back to the pool for use by the system.

In the management GUI, reclaimable capacity is added to the available capacity for the data reduction pool. For standard pools, available capacity does not include any reclaimable capacity. In the command-line interface, the lsmdiskgrp command displays the different values that apply to data reduction and standard pools. For data reduction pools, the reclaimable_capacity value indicates the amount of unused capacity that is available after data is reduced in the pool. Unlike in the management GUI, reclaimable_capacity is not included in the free_capacity value that lsmdiskgrp displays.

Reclaimable capacity is tracked as metadata that is also stored in the data reduction pool, and therefore uses storage on the external storage system. The system periodically returns this capacity to the pool; however, the system can use up to 85% of the available logical capacity with reclaimable data, which can incorrectly generate out-of-space warnings on the external storage system. When you create data reduction pools, ensure that 15% of the total allocated capacity is reserved for these operations. Reclaimable capacity can be reused for other volumes, which makes more efficient use of existing storage resources.

To monitor the physical capacity of data reduction pools in the management GUI, select Pools > Pools. In the command-line interface, use the lsmdiskgrp command to display the physical capacity of a data reduction pool.
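The difference between GUI and CLI capacity reporting can be sketched as follows. The field names free_capacity and reclaimable_capacity come from the lsmdiskgrp output described above; the GiB values are illustrative:

```python
# Sketch: available capacity for a data reduction pool as reported by the
# management GUI versus the CLI, using illustrative numbers (GiB).
free_capacity = 800          # lsmdiskgrp free_capacity (excludes reclaimable)
reclaimable_capacity = 150   # lsmdiskgrp reclaimable_capacity

# The management GUI adds reclaimable capacity to the available capacity:
gui_available = free_capacity + reclaimable_capacity
# The CLI free_capacity value does not include reclaimable capacity:
cli_free = free_capacity

print(gui_available)  # 950
print(cli_free)       # 800
```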

Support for the host SCSI unmap command is disabled by default. To enable support for a host to use SCSI unmap commands, enter the following command:
chsystem -hostunmap on

Verify whether the storage system supports data reduction technologies, such as compression. If your storage systems support data reduction technologies, you can also configure data reduction on those systems, and the storage system can reclaim freed storage and reorganize data on other volumes to use capacity more efficiently. For volumes that are fully allocated on storage, the system fully controls the storage on these storage systems: when a volume is deleted, capacity is freed on the system and can be reallocated, but the storage system is not aware of this freed space. However, if the storage system uses compression, thin provisioning, or deduplication, the storage system controls the use of the physical capacity. In this configuration, when capacity is freed, the system notifies the storage system that the capacity is no longer needed. The storage system can then reuse that capacity or free it as reclaimable capacity. The system also supports reclaimable capacity on certain internal drives, such as the 15 TB tier 1 flash drives, which can improve performance on these types of drives.

When you create a data reduction pool, ensure that the size of the pool can accommodate the capacity that is needed to track unmap and reclaim operations within the pool. A general guideline is to ensure that the volume capacity within the data reduction pool does not exceed 85% of the total capacity of the data reduction pool. Table 1 lists the minimum data reduction pool capacity that is required to create a volume within the pool.
Table 1. Minimum capacity requirements for data reduction pools

  Extent size        Minimum capacity requirement¹
  1 GB or smaller    1.1 TB
  2 GB               2.1 TB
  4 GB               4.2 TB
  8 GB               8.5 TB

¹ Fully allocated volumes are not included in the minimum capacity values. When you plan capacity for data reduction pools, first determine the capacity that is needed for any fully allocated volumes, then ensure that the minimum capacity values for the data reduction pool are included.
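The minimum-capacity values and the 85% guideline can be combined into a simple planning check. This is a sketch; the function and the example pool sizes are illustrative, not part of the product:

```python
# Sketch: check a planned data reduction pool against the Table 1 minimum
# and the guideline that volume capacity stay at or below 85% of the pool.

# Minimum pool capacity (TB) by extent size (GB), from Table 1.
MIN_POOL_CAPACITY_TB = {1: 1.1, 2: 2.1, 4: 4.2, 8: 8.5}

def drp_plan_ok(extent_size_gb, pool_capacity_tb, volume_capacity_tb):
    """True if the pool meets the Table 1 minimum and planned volume
    capacity does not exceed 85% of total pool capacity."""
    key = max(1, extent_size_gb)  # extents of 1 GB or smaller share one row
    minimum = MIN_POOL_CAPACITY_TB[key]
    return (pool_capacity_tb >= minimum
            and volume_capacity_tb <= 0.85 * pool_capacity_tb)

print(drp_plan_ok(4, 10.0, 8.0))  # True: above the 4.2 TB minimum, 80% used
print(drp_plan_ok(8, 8.0, 2.0))   # False: below the 8.5 TB minimum
```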

Pool states

Table 2 describes the operational states of a pool. Child pools adopt the state of the parent pool. States that indicate an error must be resolved on the parent pool.
Table 2. Pool states
Online
  The pool is online and available. All the MDisks in the pool are available.

Degraded paths
  One or more nodes in the system cannot access all the MDisks in the pool. A degraded path state is most likely the result of incorrect configuration of either the storage system or the Fibre Channel fabric. However, hardware failures in the storage system, Fibre Channel fabric, or node might also contribute to this state. To recover from this state, follow these steps:
  1. Verify that the fabric configuration rules for storage systems are correct.
  2. Ensure that the storage system is configured properly.
  3. Correct any errors in the event log.

Degraded ports
  One or more 1220 errors were logged against the MDisks in the pool. The 1220 error indicates that the remote Fibre Channel port was excluded from the MDisk. This error might cause reduced performance on the storage system and usually indicates a hardware problem with the storage system. To fix this problem, resolve any hardware problems on the storage system and fix the 1220 errors in the event log. To resolve these errors, click Monitor > Events in the management GUI to display a list of unfixed errors that are currently in the event log. For each unfixed error, select the error name to begin a guided maintenance procedure that resolves it. Errors are listed in descending order, with the highest priority error first; resolve the highest priority errors first.

Offline
  The pool is offline and unavailable. No nodes in the system can access the MDisks. The most likely cause is that one or more MDisks are offline or excluded.
  Attention: If a single array MDisk in a pool is offline and cannot be seen by any of the online nodes in the system, the pool of which this MDisk is a member goes offline, and all volume copies that are presented by this pool go offline. Take care when you create pools to ensure an optimal configuration.

Easy Tier

To use the Easy Tier function, you must purchase the license on your system.

Easy Tier eliminates manual intervention when you assign highly active data on volumes to faster-responding storage. In this dynamically tiered environment, data movement is seamless to the host application, regardless of the storage tier in which the data resides. However, you can manually change the default behavior. For example, you can turn off Easy Tier on pools that contain any combination of the four types of MDisks.

The system supports these tiers:
Tier 0 flash
Tier 0 flash tier exists when the pool contains high performance flash drives.
Tier 1 flash
Tier 1 flash tier exists when the pool contains tier 1 flash drives. Tier 1 flash drives typically offer larger capacities, but slightly lower performance and write endurance characteristics.
Enterprise tier
Enterprise tier exists when the pool contains enterprise-class MDisks, which are disk drives that are optimized for performance.
Nearline tier
Nearline tier exists when the pool contains nearline-class MDisks, which are disk drives that are optimized for capacity.

All MDisks belong to one of the tiers, which includes MDisks that are not yet part of a pool.

A child pool inherits the Easy Tier settings from its parent pool. You cannot change the Easy Tier settings on a child pool. You can only change them on a parent pool.