Upgrading system hardware concurrently

You can upgrade a system to a Lenovo Storage V3700 V2 XP or Lenovo Storage V5030 system without taking the system offline.

  1. Table 1 shows the upgrade paths that are allowed for the 2077 machine type. The same upgrade paths are also available for the 2078 machine type, which is the identical product with an extended warranty.
    Table 1. Concurrent upgrade paths for Lenovo Storage V series
    Upgrade from:                        | Upgrade to:
    Lenovo Storage V3700 V2 2077-112     | Lenovo Storage V3700 V2 XP 2077-212
    Lenovo Storage V3700 V2 2077-112     | Lenovo Storage V5030 2077-312
    Lenovo Storage V3700 V2 XP 2077-212  | Lenovo Storage V5030 2077-312
    Lenovo Storage V3700 V2 2077-124     | Lenovo Storage V3700 V2 XP 2077-224
    Lenovo Storage V3700 V2 2077-124     | Lenovo Storage V5030 2077-324
    Lenovo Storage V3700 V2 XP 2077-224  | Lenovo Storage V5030 2077-324
  2. Extra onboard SAS host ports cannot be used during the upgrade.
  3. Before you upgrade a Lenovo Storage V3700 V2 XP system to a Lenovo Storage V5030 system, disconnect any SAS hosts that are directly attached to the onboard ports.
  4. Do not change Host Interface Adapters during the upgrade. Table 2 shows the possible consequences of changing them.
    Table 2. Consequences of changing Host Interface Adapters
    From                               | To                                | Consequence
    SAS                                | FC, or 1 Gb iSCSI, or 10 Gb iSCSI | All SAS hosts are lost.
    FC, or 1 Gb iSCSI, or 10 Gb iSCSI  | SAS                               | All non-SAS hosts are lost except those hosts that use onboard iSCSI ports.
    FC                                 | 1 Gb iSCSI                        | All FC hosts are lost.
    FC                                 | 10 Gb iSCSI                       | All FC hosts are lost unless your switches have FCoE capability.
    1 Gb iSCSI or 10 Gb iSCSI          | FC                                | All non-FC hosts are lost except those hosts that use onboard iSCSI ports.
    1 Gb iSCSI                         | 10 Gb iSCSI                       | All iSCSI hosts are lost unless the hosts use onboard iSCSI ports or your switches have both RJ45 and optical ports.
    10 Gb iSCSI                        | 1 Gb iSCSI                        | All FCoE and iSCSI hosts are lost unless the hosts use onboard iSCSI ports or your switches have both RJ45 and optical ports.
  5. Do not add expansion enclosures during the upgrade.
  6. License keys are seeded from the machine type and model (MTM) and serial number of the system. Because the upgraded system will have a different MTM than the one you began with, the system will require a new license key after the upgrade.
  7. Make sure that cluster software is version 7.8 or later before you start the upgrade.
  8. Use the lsservicestatus command to determine the software version on the replacement canisters. This version must also be 7.8 or later; a command-line sketch of this check follows this list. Canisters with a lower software level require a service mode upgrade: you must place the node into service state and apply an update package, as described in Updating all nodes except the configuration node.
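The following is a minimal sketch of one way to check the code level of a replacement canister from the service assistant CLI before you use it in the upgrade. The panel name 01-2 is an example only, and the exact output fields can vary by code level.

    sainfo lsservicenodes        # list the node canisters that the service assistant can see
    sainfo lsservicestatus 01-2  # show detailed status for the canister with panel name 01-2

In the lsservicestatus output, confirm that the reported code level is 7.8 or later before you continue.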
  1. Determine the ID of the node canister that is not the configuration node by using the management GUI or the lsnodecanister CLI command. Note its slot number and node ID.
  2. Use the management GUI to select the non-configuration node that you identified in the previous step and select Show Dependent Volumes. Alternatively, run the lsdependentvdisks -node node_ID command, where node_ID is the ID of the node to be removed. Resolve any node dependencies before you continue. A command-line sketch of steps 1 through 3 and step 8 follows this procedure.
  3. Remove the identified node canister from the system configuration by using the rmnodecanister command or the management GUI.
    Wait for this operation to complete and for the canister to be in the Service state before you continue to the next step. The canister is in Service state when the system status LED is flashing and the fault LED is on.
  4. Physically remove the existing canister from the enclosure.
  5. If you are reusing the adapter from the old canister, transfer the adapter to the upgrade canister.
  6. Install the upgrade canister in place of the old one.
  7. Wait 25 minutes for the system to process the addition of the new canister.
    When the new canister is powered on, it automatically attempts to join the cluster.
    When the upgrade is complete, the system status LED changes to solid green.
  8. Confirm that the new canister joined the cluster by running the lsnodecanister command or by using the management GUI.
    When you have canisters with different MTMs in the same system, the system is considered to be in a mixed state. In a mixed-state system, all of the advanced features that are unique to the higher-level canister are disabled.
  9. If any I/O is taking place on the system, wait 30 minutes to ensure that the multipathing drivers are fully redundant, with every path available and online.
  10. Repeat steps 1 through 8 for the second canister in the enclosure.
    These changes are not committed to the system until both canisters are upgraded to the same level. At that point, the MTM for the system is updated and any new features that are associated with the upgraded hardware are made available.
  11. Update the license key for the system. (A CLI sketch of this step follows the procedure.)
    1. Download a license key for the upgraded system from the data storage feature activation (DSFA) server.
    2. Use the activatefeature command to activate the license.
    3. Use the deactivatefeature command to remove the old license key after the activation completes.
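As a reference for steps 1 through 3 and step 8 of the procedure above, the following is a minimal command-line sketch. The node ID 2 is an example only; substitute the ID that is reported on your system, and note that command parameters can vary by code level.

    lsnodecanister               # identify the canister that is not the configuration node (config_node = no)
    lsdependentvdisks -node 2    # list any volumes that depend on node 2; resolve dependencies before removal
    rmnodecanister 2             # remove node 2 from the system configuration
    lsnodecanister               # after the replacement canister is installed, confirm that it joined the cluster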
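The license key update in step 11 can also be done from the CLI. This is a minimal sketch; the license key and feature ID values are placeholders, so replace them with the key that you downloaded from DSFA and the feature IDs that lsfeature reports on your system.

    lsfeature                                         # list features and their current license state
    activatefeature -licensekey 0123-4567-89AB-CDEF   # activate the new license key for the upgraded MTM
    deactivatefeature old_feature_id                  # remove the old license key after activation completes
    lsfeature                                         # verify that the new license is active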