You can use Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE) connections to migrate data from a supported source system to a Lenovo Storage V series system.
- Install and configure the systems.
Lenovo Storage V series supports FC and FCoE connections to migrate data from the following systems:
- Storwize V3500 for Lenovo
- Storwize V3700 for Lenovo
- Lenovo Storage V series
- Storwize V7000 for Lenovo, Storwize V7000 Gen2 for Lenovo, and Storwize V7000 Gen2+ for Lenovo
- Ensure that all systems are running a level of software that enables them to recognize the other nodes in the cluster. For example, software level 7.7.1 is required to support and recognize nodes.
- Ensure that all systems are running a level of software that supports the type of Fibre Channel adapters that are being used. For example, software level 7.7.0 or later is required to support and recognize systems that have a 4-port 16 Gbps Fibre Channel adapter installed.
- Ensure that the systems use Fibre Channel adapters at the same
speed. To avoid performance bottlenecks, do not use a combination
of 8 Gbps and 16 Gbps links.
- Verify that the host attachment ports are available.
- Ensure that you have the appropriate number of cables to connect
to each system or switch, as needed.
- Stop all host I/O operations.
- Unmap the logical drives that contain the data from the hosts.
- Verify that the destination Lenovo Storage V series system is configured as a replication layer system. To do so, enter the following command.
svcinfo lssystem
If the system is not configured as a replication layer system, enter the following command.
svctask chsystem -layer replication
- Verify that the source Lenovo Storage V series system is configured as a storage layer system. To do so, enter the following command.
svcinfo lssystem
If the system is not configured as a storage layer system, enter the following command.
svctask chsystem -layer storage
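The two layer checks above can be scripted. The following is a minimal sketch only: the `layer ...` line is a simulated fragment of lssystem output (on a real system it would be retrieved over the CLI), and the svctask command is printed rather than executed.

```shell
# Minimal sketch: decide whether the system layer needs changing.
# layer_line stands in for the "layer" line of: svcinfo lssystem
layer_line="layer storage"            # simulated lssystem output line
current_layer=${layer_line#layer }    # strip the "layer " prefix
if [ "$current_layer" != "replication" ]; then
  # command that would be run on the destination system
  echo "svctask chsystem -layer replication"
fi
```

The same pattern applies to the storage layer check on the source system, with the comparison and chsystem argument changed to storage.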
Hardware configuration
- Unplug the FC cables between the source system and the hosts. Ensure that no host ports on the source system are occupied.
- Connect four FC host ports on the source system to the FC host ports on the destination Lenovo Storage V series system.
The cabling for the port connections can vary, depending on whether the systems are connected directly or through a switch. To provide extra redundancy and ensure that the detected MDisks are not degraded, ensure that each controller is connected to both node canisters or switches, if applicable.
Figure 1 shows
Fibre Channel connections that use direct cabling between the systems.
To enable migration, a 4-port FC host interface adapter is installed in both systems. Any two ports on the host interface adapter can be used; in this example, ports 1 and 2 are used, and data is being migrated from a Lenovo Storage V series system to a Lenovo Storage V5030 system.
Figure 1. Direct Fibre Channel connections between systems
Figure 2 also shows a Fibre Channel connection between two systems.
However, in this example, two switches are used and each system is
connected to each switch.
Figure 2. Fibre Channel connections using switches between the systems
- Connect the remaining host ports on the destination Lenovo Storage V series system to the host server ports.
Software configuration
- On the source system, enter the following command to get the worldwide port names (WWPNs) of the FC ports on the destination Lenovo Storage V series system.
svcinfo lsfcportcandidate
- On the source system, define a new host by using the WWPNs that were detected in step 4.
Enter the following command, where wwpn_list is a colon-separated list of WWPNs of the FC ports on the destination system. If you are using the management GUI, set the Host type (operating system) to Generic.
svctask mkhost -fcwwpn wwpn_list
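The wwpn_list argument is a single colon-separated string. A quick way to build it from individual WWPNs is sketched below; the WWPN values are invented for illustration, and the mkhost command is only printed, not executed.

```shell
# Join hypothetical WWPNs with colons to form wwpn_list.
wwpns="500507680140A288 500507680140A289"
wwpn_list=$(echo "$wwpns" | tr ' ' ':')
echo "svctask mkhost -fcwwpn $wwpn_list"
```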
- On the source system, map the logical drives to the newly created host as logical units.
- On the destination Lenovo Storage V series system, complete the following steps to manage the logical unit.
- To create one empty storage pool,
enter the following command.
svctask mkmdiskgrp -ext extent_size
The logical unit that is mapped from the source system appears as an unmanaged-mode MDisk to the destination system.
- To list the unmanaged-mode MDisks, enter the following command.
svcinfo lsmdisk
- If the new unmanaged-mode MDisk is not listed, perform
a fabric-level discovery.
Enter the following command to scan the network for the unmanaged-mode
MDisks.
svctask detectmdisk
- To convert the unmanaged-mode MDisk
to an image mode volume disk, enter the following command.
svctask mkvdisk -vtype image -iogrp iogrp_name -mdiskgrp mdiskgrp_name -mdisk mdisk_name -mirrorwritepriority redundancy
- iogrp_name
- Name or ID of the
I/O group.
- mdiskgrp_name
- Name or ID of
the storage pool that you created in step 7.a.
- mdisk_name
- Name or ID of the
unmanaged-mode MDisk.
- To list the WWPNs of the hosts that were previously using the data that the MDisk now contains, enter the following command.
svcinfo lsfcportcandidate
- If the host does not exist on
the system, enter the following command, where wwpn_list is the colon-separated list of the WWPNs of the FC ports on the
host server.
svctask mkhost -fcwwpn wwpn_list
- Map the new volume to the hosts.
Enter the following command to create a new mapping between
a volume and a host. The image mode volume becomes accessible for
I/O operations to the host.
svctask mkvdiskhostmap -host hostname diskname
- hostname
- Name or ID of the
host you created in step 7.f.
- diskname
- Name or ID of the
virtual disk you created in step 7.d.
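The substeps above can be collected into one sequence. This is a sketch only: the pool, extent size, I/O group, MDisk, WWPN, host, and volume names are all invented for illustration, and the `run` helper simply prints each command rather than executing it on a system.

```shell
# Print the CLI sequence for importing a mapped LUN as an image-mode volume.
run() { echo "$@"; }   # stand-in for running the command on the system

run svctask mkmdiskgrp -name migration_pool -ext 256       # empty storage pool (step 7.a)
run svctask detectmdisk                                    # fabric-level discovery (step 7.c)
run svctask mkvdisk -vtype image -iogrp io_grp0 -mdiskgrp migration_pool -mdisk mdisk0   # image-mode volume (step 7.d)
run svctask mkhost -name host0 -fcwwpn 10000000C9609A22    # host object (step 7.f)
run svctask mkvdiskhostmap -host host0 vdisk0              # map volume to host (step 7.g)
```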
- On the host server, detect the new volume that is presented from the destination Lenovo Storage V series system and start I/O operations to it.
- If you have more than one logical drive to migrate, repeat
steps 6 through 8.
- On the destination Lenovo Storage V series system, complete the following steps to start migration for each image-mode volume. For more information about migrating data to volumes, see Managing volumes.
- To create one empty internal storage
pool, enter the following command.
svctask mkmdiskgrp -ext extent_size
- To create arrays with internal drives and add them to
the internal storage pool, enter the following command.
svctask mkarray -level raidtype -drive drivelist mdiskgrpname
- raidtype
- Type of RAID array
to be created.
- drivelist
- List of drive IDs.
- mdiskgrpname
- Name or ID of the internal storage pool that you created in the previous step.
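Like wwpn_list, the drivelist is a colon-separated string. The following sketch builds one from individual drive IDs; the drive IDs, RAID level, and pool name are hypothetical, and the mkarray command is only printed.

```shell
# Join hypothetical drive IDs with colons for the -drive parameter.
drives="0 1 2 3 4"
drivelist=$(echo "$drives" | tr ' ' ':')
echo "svctask mkarray -level raid5 -drive $drivelist internal_pool"
```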
- After you determine the volume that you want to migrate
and the new storage pool that you want to migrate it to, enter the
following command.
svctask addvdiskcopy -mdiskgrp newmdiskgrpname vdiskname
The copy ID of the new copy is returned. The two copies are then synchronized, after which the data is stored in both storage pools.
- To check the progress of the synchronization, enter
the following command.
svcinfo lsvdisksyncprogress
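In a script, the progress check can be polled in a loop until it reaches 100. The sketch below simulates the percentage (advancing 50% per poll); on a real system the value would be parsed from the lsvdisksyncprogress output instead.

```shell
# Sketch: poll until synchronization of the volume copy completes.
progress=0
poll_sync_progress() {
  # stand-in for parsing the percentage from: svcinfo lsvdisksyncprogress
  progress=$((progress + 50))
}
poll_sync_progress
while [ "$progress" -lt 100 ]; do
  # in practice, wait between polls (for example: sleep 30)
  poll_sync_progress
done
echo "copy synchronized at ${progress}%"
```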
- After the volume reports that it is fully synchronized and you are ready to stop using the external storage system, enter the following command, where copy_id is the ID of the image-mode copy of the volume.
svctask rmvdiskcopy -copy copy_id vdiskname
The image-mode copy is deleted, and its associated MDisk becomes unmanaged.
The data on the logical drives from the source system is migrated to the destination Lenovo Storage V series system, and host I/O is switched to the destination system. You can now disconnect the cabling between the switches (if applicable), the source system, and the destination system.