Use this procedure to restore the system configuration only in the
following situations: the recover system procedure failed, or the data that is stored on
the volumes is not required.
This configuration restore procedure is designed to restore information about your configuration,
such as volumes, local Metro Mirror information, local Global Mirror
information, storage pools, and nodes. The data that you wrote to the volumes is not restored.
To restore the data on the volumes, you must separately restore application data from any application that uses
the volumes on the clustered system as storage. Therefore, you must have a backup of this data before you follow the
configuration recovery process.
If USB encryption was enabled on the system when its configuration was backed
up, at least 3 USB flash drives must be present in the node canister USB ports for the configuration restore to work. The 3 USB flash drives
must be inserted into the single node from which the configuration restore commands are run; any USB
flash drives in other nodes (that might become part of the system) are ignored. If
you are not recovering a cloud backup configuration, the USB flash drives do not need to contain any
keys; they are used to generate new keys as part of the restore process. If you are recovering a
cloud backup configuration, the USB flash drives must contain the previous set of keys so that the
current encrypted data can be unlocked and reencrypted with the new keys.
During T4 recovery, a new system is created with a new
certificate. If the system has key server encryption, the new certificate must be exported by using
the chsystemcert -export command, and then installed on all key servers in the correct device
group before you run the T4 recovery. The device group that is used is the one in which the previous
system was defined. It might also be necessary to get the new system's certificate signed. In a
T4 recovery, inform the key server administrator that the active keys are considered
compromised.
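For example, the following commands are a minimal sketch of exporting the new certificate and copying it to a server from which it can be installed on the key servers. On recent code levels the exported file is certificate.pem in the /dumps directory; verify the file name and location on your system, and replace cluster_ip and the destination path with your own values.
chsystemcert -export
pscp superuser@cluster_ip:/dumps/certificate.pem full_path_on_local_server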
You must regularly back up your configuration data and
your application data to avoid data loss. If a system is lost after
a severe failure occurs, both the configuration for the system and the application
data are lost. You must restore the system to the exact state it was
in before the failure, and then recover the application data.
During the restore process, the nodes and the storage enclosure are restored to the system, and
then the MDisks and the array are re-created and configured. If multiple storage enclosures are
involved, the arrays and MDisks are restored on the proper enclosures based on the enclosure
IDs.
Important: - There are two phases during the restore process: prepare and execute. You must not change the
fabric or system between these two phases.
- For systems that contain nodes that are attached to external controllers
virtualized by iSCSI, all nodes must be added into the system before
you restore your data. Additionally, the system cfgportip settings and iSCSI storage ports must be manually reapplied before
you restore your data. See the step for iSCSI storage controllers later in this procedure.
- For VMware vSphere Virtual Volumes (sometimes referred to as VVols) environments, after a T4 restoration,
some of the Virtual Volumes configuration steps are already completed: metadatavdisk created,
usergroup and user created, adminlun hosts created. However, the user
must then complete the last two configuration steps manually (creating
a storage container on Spectrum Control Base Edition and
creating virtual machines on VMware vCenter). See Configuring Virtual Volumes.
- Restore the system configuration from one of the nodes that was previously
in I/O group zero (for example, a node whose entry in the configuration backup file shows IO_group_id with a value of 0). The remaining
enclosures should be added, as required, in the appropriate order
based on the previous IO_group_id of their node
canisters.
- If the system has USB encryption, run the recovery from any node in
the system that has a USB flash drive inserted
which contains the encryption key.
- If the system has key server encryption, run the recovery on a node
that is attached to the key server. The keys are fetched remotely from the key server.
- If the system uses both USB and key server
encryption, providing either a USB flash drive
or a connection to the key server unlocks the
system (only one is needed, but both also work).
- For systems with a cloud backup configuration, during a T4 recovery
the USB key that contained the system master key from the original
system must be inserted into the configuration node of the new system.
Alternatively, if a key server is used, the key server must contain
the system master key from the original system. If the original system
master key is not available, and the system data is encrypted in the
cloud provider, then the data in the cloud is not accessible.
- If the system contains an encrypted cloud
account that is configured with both USB and key server encryption, the master keys from both need
to be available at the time of a T4 recovery.
- After a T4 recovery,
cloud accounts are in an offline state. It is necessary to re-enter
the authentication information to bring the accounts back online.
- If you use USB flash drives to manage encryption keys, the T4 recovery causes the connection
to a cloud service provider to go offline if the USB flash drive is not inserted into the system. To fix this issue, insert the USB flash drive with the current keys into the system.
- If you use key servers
to manage encryption keys, the T4 recovery causes the connection to
a cloud service provider to go offline if the key server is offline.
To fix this issue, ensure that the key server is online and available
during T4 recovery.
- If you use both key servers
and USB flash drives to manage encryption keys, the T4 recovery causes the connection
to a cloud service provider to go offline if the key server is offline.
To fix this issue, ensure that both the key server is online and a USB flash drive is inserted into the system during T4 recovery.
If you are not familiar with how to run the
CLI commands, see the command-line interface reference information.
To
restore your configuration data, follow these steps:
- Verify that all nodes are available as candidate nodes
before you run this recovery procedure. You must remove errors 550
or 578 to put the node in candidate state. For all nodes that display these errors,
follow these steps:
- Point your browser to the service IP address
of one of the nodes (for example, https://node_service_ip_address/service/).
- Log on to the service assistant.
- From the Home page,
put the node canister into service state if it is not already in that
state.
- Select Manage System.
- Click Remove System Data.
- Confirm that you want to remove the
system data when prompted.
- Exit service state from the Home page. The 550 or 578 errors are removed, and the
node appears as a candidate node.
- Remove the system data for the other nodes
that display a 550 or a 578 error.
All nodes
previously in this system must have a node status of Candidate and have no errors listed against them.
Note: A node that is powered off might not
show up in this list of nodes for the system. Diagnose hardware problems
directly on the node by using the service assistant IP address and by
physically verifying the LEDs for the hardware components.
Warning: If you use the management GUI for
the initial setup to restore the system configuration, check whether a
default call home email user was created. If it was, delete
the default call home email user so that the T4 system recovery
can proceed successfully.
- Verify that all nodes are available
as candidate nodes with blank system fields. Perform the following
steps on one node in each control enclosure:
- Connect to the service assistant on either of the nodes
in the control enclosure.
- Select Configure Enclosure.
- Select the Reset the system ID option.
Do not make any other changes on the panel.
- Click Modify.
- Create a system.
- If your system is a Lenovo Storage V series system, use the technician
port.
- In a supported browser, enter the IP address that you used to initialize the system and the
default superuser password (passw0rd).
- The setup wizard is shown.
Be aware of the following items:
- Accept the license agreements.
- Set the values for the system name, date and time settings,
and the system licensing. The original settings are restored during
the configuration restore process.
- Verify the hardware. Only the control enclosure on which
the clustered system was created, and any directly attached expansion enclosures,
are displayed. Any other control enclosures and expansion enclosures
in other I/O groups are added to the system later.
Once the setup wizard finishes, make no other configuration changes.
- If you set up email notification
in the setup wizard, you must now remove that email user and server
so that the original configuration can be restored.
Issue
the following CLI command to remove the new email user:
rmemailuser 0
Issue the following CLI command to remove the new email server:
rmemailserver 0
- From the management GUI, configure an SSH key for the superuser.
- By default, the newly initialized system is created in the storage layer. The layer of the
system is not restored automatically from the configuration backup XML file. If the system you are
restoring was previously configured in the replication layer, you must change the layer manually
now. For more information about the replication layer and
storage layer, see the System layers topic in the Related concepts section at the end
of the page.
- If the clustered system was previously configured in the replication layer, use the
chsystem command to change the layer setting, as shown in the example that follows.
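For example, if the backup file shows the previous system in the replication layer, the following command is a minimal sketch of the layer change. Run it at this point in the recovery, while the new system is still empty, because the layer cannot be changed after certain objects (such as remote partnerships) exist.
svctask chsystem -layer replication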
- For configurations with more than one I/O group, add the rest of the control enclosures into
the clustered system by using the addcontrolenclosure CLI command.
The remaining enclosures are added in the appropriate order based on the previous
IO_group_id of their node canisters. The following example shows the command to
add a control enclosure to I/O group 2.
svctask addcontrolenclosure -sernum SVT5M48 -iogrp 2
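If you need to identify the serial numbers of the control enclosures that can be added, you can first list the candidates (a minimal sketch):
svcinfo lscontrolenclosurecandidate
The output shows the serial number of each control enclosure that is not yet part of a system.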
- Identify the
configuration backup file from which you want to restore.
The file can be either a local copy of the configuration backup XML file
that you saved when you backed up the configuration, or an up-to-date
file on one of the nodes.
Configuration data is automatically
backed up daily at 01:00 system time on the configuration node.
Download and check the configuration backup files on all nodes that were previously
in the system to identify the one that contains the most recent complete
backup.
- From the management GUI, navigate to the support page.
- Expand Manual Upload Instructions and select Download
Support Package.
- On the Download New Support Package or Log File page, select
Download Existing Package.
- For each node (canister) in the system, complete the following steps:
- Select the node to operate on from the selection box at the top of the table.
- Find all the files with names that match the pattern
svc.config.*.xml*.
- Select the files and click Download to download them to your
computer.
The XML files contain a date and time that can be used to identify
the most recent backup. After you identify the backup XML file that
is to be used when you restore the system, rename the file to svc.config.backup.xml.
- Copy the XML backup file from which you want to restore onto the system.
pscp full_path_to_identified_svc.config.file superuser@cluster_ip:/tmp/svc.config.backup.xml
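For example, a hypothetical invocation from a Windows workstation, where the local path and cluster IP address are placeholders:
pscp C:\backups\svc.config.backup.xml superuser@192.168.1.50:/tmp/svc.config.backup.xml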
- If the system contains any iSCSI storage controllers, these controllers must be detected
manually now. The nodes that are connected to these controllers, the iSCSI port IP addresses, and
the iSCSI storage ports must be added to the system before you restore your data.
- To add these nodes, determine the panel name, node name, and I/O groups of any such nodes from
the configuration backup file. To add the nodes to the system, run the following command:
svctask addnode -panelname panel_name -iogrp iogrp_name_or_id -name node_name
Where panel_name is the name that is
displayed on the panel, iogrp_name_or_id is the name or ID of the I/O group to
which you want to add this node, and node_name is the name of the node.
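For example, a hypothetical invocation in which the panel name, I/O group, and node name are placeholder values read from the backup file:
svctask addnode -panelname 112233 -iogrp io_grp1 -name node3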
- To restore iSCSI port IP addresses, use the cfgportip command.
- To restore an IPv4 address, determine id (port_id), node_id, node_name, IP_address, mask, gateway,
host (0/1 stands for no/yes), remote_copy (0/1 stands for no/yes), and storage (0/1 stands for
no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip ipv4_address -gw ipv4_gw -host yes|no -remotecopy remote_copy_port_group_id -storage yes|no port_id
Where node_name_or_id is the name or ID of the node,
ipv4_address is the IPv4 address of the port, and
ipv4_gw is the IPv4 gateway address for the port.
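For example, a hypothetical invocation with placeholder address values; the -mask parameter carries the subnet mask value that is read from the backup file, and optional parameters that are blank in the backup file can be omitted:
svctask cfgportip -node node1 -ip 192.168.10.21 -mask 255.255.255.0 -gw 192.168.10.1 -host no -storage yes 1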
- To restore an IPv6 address, determine id (port_id), node_id, node_name, IP_address_6, mask,
gateway_6, prefix_6, host_6 (0/1 stands for no/yes), remote_copy_6 (0/1 stands for no/yes), and
storage_6 (0/1 stands for no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip_6 ipv6_address -gw_6 ipv6_gw -prefix_6 prefix -host_6 yes|no -remotecopy_6 remote_copy_port_group_id -storage_6 yes|no port_id
Where node_name_or_id is the name or ID of the node,
ipv6_address is the IPv6 address of the port,
ipv6_gw is the IPv6 gateway address for the port, and prefix
is the IPv6 prefix.
Complete steps b.i and b.ii for all previously configured IP ports in the
node_ethernet_portip_ip sections of the backup configuration file.
- Next, detect and add the iSCSI storage port candidates by using the
detectiscsistorageportcandidate and addiscsistorageport
commands. Make sure that you detect the iSCSI storage ports and add these ports in the same order as
you see them in the configuration backup file. If you do not follow the correct order, it might
result in a T4 failure. Step c.i must be followed by steps c.ii and c.iii. You must repeat these
steps for all the iSCSI sessions that are listed in the backup configuration file, in exactly the
same order.
- To detect iSCSI storage ports, determine src_port_id,
IO_group_id (optional; not required if the value is 255), target_ipv4/target_ipv6 (the target IP that is not blank is required),
iscsi_user_name (not required if blank), iscsi_chap_secret
(not required if blank), and site (not required if blank) from the configuration
backup file, and then run the following command:
svctask detectiscsistorageportcandidate -srcportid src_port_id -iogrp IO_group_id -targetip|-targetip6 target_ipv4|target_ipv6 -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name
Where src_port_id is the source Ethernet port ID of the configured port,
IO_group_id is the I/O group ID or name being detected,
target_ipv4/target_ipv6 is the IPv4/IPv6 address of the target iSCSI controller,
iscsi_user_name is the target controller user name being detected,
iscsi_chap_secret is the target controller CHAP secret being detected, and
site_id_or_name is the ID or name of the site being detected.
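For example, a hypothetical invocation with placeholder values, in which the optional I/O group, CHAP secret, and site parameters are omitted because they are blank in the backup file:
svctask detectiscsistorageportcandidate -srcportid 1 -targetip 192.168.20.5 -username iscsiuser1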
- Match the discovered target_iscsiname with the
target_iscsiname for this particular session in the backup configuration file by
running the lsiscsistorageportcandidate command, and use the matching index to
add iSCSI storage ports in step c.iii.
Run the svcinfo
lsiscsistorageportcandidate command and determine the id field of the row whose
target_iscsiname matches the target_iscsiname from the
configuration backup file. This is your candidate_id to be used in step
c.iii.
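For example, a minimal sketch of the matching step, where the IQN shown is illustrative only:
svcinfo lsiscsistorageportcandidate
If the backup file shows target_iscsiname iqn.1992-08.com.example:target01, find the row with that target_iscsiname in the command output; the value in its id column is the candidate_id to use in step c.iii.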
- To add the iSCSI storage port, determine IO_group_id (optional, not required
if the value is 255), site (not required if blank),
iscsi_user_name (not required if blank in backup file), and
iscsi_chap_secret (not required if blank) from the configuration backup file,
provide the target_iscsiname_index matched in step c.ii, and then run the
following command:
addiscsistorageport -iogrp iogrp_id -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name candidate_id
Where iogrp_id is the I/O group ID or name that is added,
iscsi_user_name is the target controller user name that is being added,
iscsi_chap_secret is the target controller CHAP secret that is being added, and
site_id_or_name specifies the ID or name of the site that is being
added.
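For example, a hypothetical invocation that adds candidate_id 0 to I/O group 0, with the optional user name, CHAP secret, and site parameters omitted:
svctask addiscsistorageport -iogrp 0 0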
- If the configuration is a HyperSwap or stretched system, the controller name and site need to be restored. To restore
them, determine controller_name and the controller
site_id/name from the backup xml file by matching the inter_WWPN field with the
newly added iSCSI controller, and then run the following command:
chcontroller -name controller_name -site site_id/name controller_id/name
Where
controller_name is the name of the controller from the backup xml file,
site_id/name is the ID or name of the site of iSCSI controller from the backup
xml file, and controller_id/name is the ID or current name of the
controller.
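For example, a hypothetical invocation with placeholder values for the controller name, site, and controller ID:
chcontroller -name controller0 -site site1 5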
- Configure encryption for systems where USB encryption was previously enabled.
For Lenovo Storage V series
systems with fewer than 3 USB ports, you must manually configure encryption. See Activating encryption license and Enabling encryption with USB flash drives.
For Lenovo Storage V series
systems with 3 or more USB ports where USB encryption was previously enabled, insert 3 USB flash
drives and run svcconfig restore -prepare as usual.
Important: If you are manually configuring encryption, follow the encryption setup
procedure only to the point where the new keys are created and committed. Do not create any
encrypted system objects (including arrays, storage pools, and
managed disks) because system objects are recovered automatically
by the recovery procedure.
- Issue the following CLI command to compare the current
configuration with the backup configuration data file:
svcconfig restore -prepare
This
CLI command creates a log file in the
/tmp directory of the
configuration node. The name of the log file is
svc.config.restore.prepare.log.
Note: It
can take up to a minute for each 256-MDisk batch to be discovered. If you receive error message
CMMVC6200W for an MDisk after you enter this command, all the managed
disks (MDisks) might not be discovered yet. Allow a suitable time to elapse and try the
svcconfig restore -prepare command again.
- Issue the following command to copy the log file to another
server that is accessible to the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.prepare.log
full_path_for_where_to_copy_log_files
- Open the log file from the server where the copy is now
stored.
- Check the log file for errors.
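For example, on a Linux workstation you can scan the copied log for problems (a sketch; adjust the path to wherever you copied the file):
grep -i -E 'error|warning' /local/path/svc.config.restore.prepare.log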
- Issue the following CLI command to
restore the configuration:
svcconfig restore -execute
This
CLI command creates a log file in the /tmp directory
of the configuration node. The name of the log file is svc.config.restore.execute.log.
- Issue the following command to copy the log file to another
server that is accessible to the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.execute.log
full_path_for_where_to_copy_log_files
- Open the log file from the server where the copy is now
stored.
- Check the log file to ensure that no errors or warnings
occurred.
Note: You might receive a warning that states that a licensed feature is not enabled. This message
means that after the recovery process, the current license settings do not match the previous
license settings. The recovery process continues normally and you can enter the correct license
settings in the management GUI later.
When you log in to the CLI again over SSH,
you see this output:
IBM_Storwize:your_cluster_name:superuser>
- After the configuration
is restored, verify that the quorum disks are restored to the MDisks
that you want by using the lsquorum command. To
restore the quorum disks to the correct MDisks, issue the appropriate chquorum CLI
commands.
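For example, a minimal sketch that lists the current quorum assignments and then assigns quorum index 2 to MDisk 5 (the quorum index and MDisk ID are placeholders):
lsquorum
chquorum -mdisk 5 2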
Note: If IP Quorum was enabled on the system, it is not recovered automatically because the system certificate is regenerated. It is necessary to manually re-enable IP
Quorum by downloading the Java application from the management GUI, and then installing the application on the host server.
You can remove any unwanted configuration backup and restore files from the
/tmp directory on your configuration node by issuing the following CLI
command:
svcconfig clear -all