Data migration on partitioned IBM® DS5000, IBM DS4000, and IBM DS3000 systems

You can migrate data on partitioned IBM® DS5000, IBM® DS4000™, and IBM® DS3000 systems.

You can introduce the system into an existing SAN environment and use image mode LUNs to import the existing data into the virtualization environment without requiring a backup and restore cycle. Each partition can be accessed only through a unique set of HBA ports, as defined by their worldwide port names (WWPNs). For a single host to access multiple partitions, unique host fibre ports (WWPNs) must be assigned to each partition. All LUNs within a partition are presented to the assigned host fibre ports (there is no subpartition LUN mapping).

To allow host A to access the LUNs in partition 1, you must remove one of its HBA ports (for example, A1) from the access list for partition 0 and add it to the access list for partition 1. A1 cannot be on the access list of more than one partition.
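The access-list rule described above can be modeled in a short sketch. The class and method names below are illustrative only, not a real storage API: each partition owns a set of host port WWPNs, and adding a WWPN that is already on another partition's access list is rejected until it is removed first.

```python
# Hypothetical model of DS-series storage partitioning (illustrative
# names, not a real API). Each partition has an access list of host
# port WWPNs; a WWPN may appear on at most one partition's list.

class PartitionedArray:
    def __init__(self):
        self.partitions = {}  # partition id -> set of host port WWPNs

    def add_port(self, partition, wwpn):
        # Enforce the rule: a WWPN cannot be on the access list of
        # more than one partition.
        for pid, ports in self.partitions.items():
            if wwpn in ports and pid != partition:
                raise ValueError(
                    f"{wwpn} is already mapped to partition {pid}")
        self.partitions.setdefault(partition, set()).add(wwpn)

    def remove_port(self, partition, wwpn):
        self.partitions.get(partition, set()).discard(wwpn)

array = PartitionedArray()
array.add_port(0, "A1")
array.add_port(0, "A2")

# Adding A1 to partition 1 while it is still in partition 0 fails...
try:
    array.add_port(1, "A1")
except ValueError:
    pass  # rejected, as expected

# ...so remove it from partition 0 first, then add it to partition 1.
array.remove_port(0, "A1")
array.add_port(1, "A1")
```

This mirrors the two-step change in the text: the port must be removed from partition 0 before it can be granted access to partition 1.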

Adding a system into this configuration without backup and restore cycles requires a set of unique system HBA port WWPNs for each partition. The IBM DS5000, DS4000, or DS3000 system can then make the LUNs known to the system, which configures these LUNs as image-mode LUNs and presents them to the required hosts. However, mapping each partition to a different subset of ports violates the requirement that all Lenovo Storage V series nodes must be able to see all back-end storage. For example, to resolve this problem on an IBM DS4000 system, change the configuration to allow more than 32 LUNs in one storage partition, move all the LUNs from the other partitions into that partition, and map it to the Lenovo Storage V series clustered system.

Scenario: the Lenovo Storage V series nodes cannot see all back-end storage

The IBM® DS4000™ series has eight partitions with 30 LUNs in each.

Complete the following steps to allow the system nodes to see all back-end storage.

  1. Change the mappings for the first four partitions on the IBM DS4000 system so that each partition is mapped to one port on each node. This maintains redundancy across the system.
  2. Create a new partition on the system that is mapped to all four ports on all the nodes.
  3. Gradually migrate the data into the managed disks (MDisks) in the target partition. As storage is freed from the source partitions, it can be reused as new storage in the target partition. As partitions are deleted, new partitions that must be migrated can be mapped and migrated in the same way. The host side data access and integrity is maintained throughout this process.
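The steps above can be sketched as a small simulation. This is an illustrative model only, with assumed names and a simplified view of capacity: eight source partitions of 30 LUNs each are drained, one partition at a time, into a target partition mapped to all node ports, and each emptied source partition is deleted so its storage can be reused for the next batch.

```python
# Illustrative simulation of the migration procedure (not a real
# storage API). Eight source partitions of 30 LUNs each are migrated
# into one target partition that is mapped to all node ports.

# Source partitions as found on the DS4000 series system.
source_partitions = {p: [f"lun{p}_{i}" for i in range(30)]
                     for p in range(8)}

# Step 2: a new, empty target partition mapped to all four ports on
# all nodes (the port mapping itself is not modeled here).
target_partition = []

# Step 3: gradually migrate the data, one source partition at a time.
for pid in sorted(source_partitions):
    while source_partitions[pid]:
        lun = source_partitions[pid].pop()
        target_partition.append(lun)  # data moved into target MDisks
    del source_partitions[pid]        # freed partition is reusable

# All 240 LUNs end up in the single target partition, and every
# source partition has been deleted along the way.
assert len(target_partition) == 8 * 30
assert not source_partitions
```

The point of draining partitions one at a time, rather than all at once, is that the capacity freed by each deleted source partition becomes available to provision the target partition for the next batch, so the migration never needs more than a modest amount of spare storage.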