You can use the command-line interface (CLI) to install software updates.
Follow these steps to update to version 8.1.0 or
later from version 7.7.0 or later.
The CLI can also help you avoid multipathing issues when nodes go offline for
updates. You can override the default 30-minute
mid-point delay, pause an update, and resume a stalled update by using
the following commands:
- To start an update but pause at the halfway point, enter the following
command:
applysoftware -file filename -pause
- To start an update but then pause before you take the node offline
for an update, enter the following command:
applysoftware -file filename -pause -all
- To resume a stalled update and pause at the halfway point, enter
the following command:
applysoftware -resume -pause
- To resume a stalled update and pause before you take the remaining
nodes offline for an update, enter the following command:
applysoftware -resume -pause -all
Note:
The -all parameter enables the update to pause indefinitely
before each node goes offline for an update. This pause happens before
the check for dependent volumes is carried out. The -resume parameter enables you to continue the update.
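The pause-and-resume sequence above can be sketched as a short shell session. Because applysoftware runs only on the system's own CLI, it is stubbed here so the order of invocations is visible; update_file is a placeholder name, not a real file.

```shell
#!/bin/sh
# Sketch of the pause/resume flow. The real applysoftware runs on the
# system CLI; this stub only echoes its arguments to show the sequence.
applysoftware() { echo "applysoftware $*"; }

# Start the update, pausing at the halfway point.
applysoftware -file update_file -pause

# Later, resume the stalled update and pause again before the
# remaining nodes go offline.
applysoftware -resume -pause -all
```

On a real system, each invocation is entered at the CLI prompt rather than run as a script.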
To update the system, follow these steps.
- You must download, install, and run the latest version
of the test utility to verify that no issues exist with the current
system.
- Download the latest code from the https://datacentersupport.lenovo.com/ site.
- If you want to write the code to a CD, you must download the
CD image.
- If you do not want to write the code to a CD, you must download
the installation image.
- Use PuTTY scp (pscp) to copy the update files to
the node.
- Ensure that the update file was successfully copied.
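The copy-and-verify steps can be sketched as follows. The IP address, user name, file name, and target path are placeholders; the pscp line is commented out because it requires PuTTY and a reachable system, and the remote checksum read-back is stubbed for the same reason.

```shell
#!/bin/sh
# Sketch: copy the update file to the node and confirm the copy succeeded.
# All names below (file, user, IP, path) are placeholders.
UPDATE_FILE=update_file.tgz
printf 'example payload' > "$UPDATE_FILE"    # stand-in for the downloaded file

# pscp "$UPDATE_FILE" superuser@192.168.70.121:/home/admin/update/
# (commented out: requires PuTTY's pscp and a live cluster)

# Verify the transfer by comparing checksums on both ends.
local_sum=$(md5sum "$UPDATE_FILE" | awk '{print $1}')
remote_sum=$local_sum    # in practice, read this back from the node
if [ "$local_sum" = "$remote_sum" ]; then
  echo "copy verified"
else
  echo "checksum mismatch"
fi
rm -f "$UPDATE_FILE"
```

Comparing file sizes on both ends is a lighter-weight alternative when a checksum tool is not available on the node.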
Before you begin the update, you must be aware of the following
situations:
- The installation process fails under the following conditions:
- If the code that is installed on the remote
system is not compatible with the new code or if an intersystem communication
error does not allow the system to check that the code is compatible.
- If any node in the system has a hardware
type that is not supported by the new code.
- If the system determines that one or
more volumes in the system would be taken
offline by rebooting the nodes as part of the update process. You can find details about which volumes would be affected by using the lsdependentvdisks command. If you are prepared to lose access to data during the update,
you can use the force flag to override this restriction.
- The update is distributed to all the nodes
in the system by using internal connections between the nodes.
- Nodes are updated one at a time.
- Nodes run the new code concurrently with
normal system activity.
- While the node is updated, it does not participate
in I/O activity in the I/O group. As a result, all I/O activity for the volumes in the I/O group is directed to the other node in
the I/O group by the host multipathing software.
- There is a 30-minute delay between node
updates. The delay allows time for the host multipathing software
to rediscover paths to the nodes that are updated. There is no loss
of access when another node in the I/O group is updated.
- The update is not committed until all nodes
in the system are successfully updated to the new code level. If all
nodes are successfully restarted with the new code level, the new
level is committed. When the new level is committed, the system vital
product data (VPD) is updated to reflect the new code level.
- Wait until all member nodes are updated and the update is committed before
you invoke the new functions of the updated code.
- Because the update process takes some time, the installation command
completes as soon as the code level is verified by the system. To
determine when the update is completed, you must either display the
code level in the system VPD or look for the Software update
complete event in the error/event log. If any node fails
to restart with the new code level or fails at any other time during
the process, the code level is backed off.
- During an update, the version number of each node is updated when
the code is installed and the node is restarted. The system code version
number is updated when the new code level is committed.
- When the update starts, an entry is made in the error or event
log and another entry is made when the update completes or fails.
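The dependent-volume condition described above can be checked before you start. The sketch below stubs lsdependentvdisks, which on the real system lists the volumes that would go offline when nodes reboot; here the stub reports none.

```shell
#!/bin/sh
# Pre-update safety check sketch. lsdependentvdisks is stubbed; on the
# real system it lists volumes that would go offline during node reboots.
lsdependentvdisks() { :; }   # stub: prints nothing, i.e. no dependent volumes

deps=$(lsdependentvdisks)
if [ -n "$deps" ]; then
  echo "dependent volumes found; resolve them before updating"
else
  echo "no dependent volumes; safe to proceed"
fi
```

If dependent volumes are listed, resolve them first; using the force flag instead means accepting loss of access to data during the update.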
- Issue this CLI command to start the update process:
applysoftware -file software_update_file
Where software_update_file is the name
of the code update file in the directory you copied the file to in
step 3.
If the system identifies
any volumes that would go offline as a result of
rebooting the nodes as part of the system update, the code update
does not start. You can use the optional force parameter
to continue the update regardless of the identified problem. If you use
the force parameter, you are prompted to confirm
that you want to continue. The force parameter is no longer required when you apply an
update to a system with errors in the event log.
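Starting the update can be sketched as below. applysoftware is stubbed because the real command runs only on the system CLI, and software_update_file is the placeholder name from the step above.

```shell
#!/bin/sh
# Sketch of starting the update. The stub echoes its arguments; on the
# real system this command begins the rolling node-by-node update.
applysoftware() { echo "update started: $*"; }

applysoftware -file software_update_file

# If dependent volumes were reported and you accept losing access to
# data during the update, the force variant would be:
# applysoftware -file software_update_file -force
```

Remember that the command returns as soon as the code level is verified; the update itself continues in the background.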
- Issue the following CLI command
to check the status of the code update process:
lsupdate
This command displays success when the update is complete.
Note: If a status of stalled_non_redundant is displayed, proceeding with the remaining set of node updates
might result in offline volumes. Contact a service
representative to complete the update.
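Checking the status repeatedly until completion can be sketched as a polling loop. lsupdate is stubbed here to return a canned sequence (two "updating" responses, then "success"); the output field name and the real command's live behavior belong to the system CLI.

```shell
#!/bin/sh
# Sketch of polling lsupdate until the update finishes. The stub reports
# "updating" twice, then "success", to exercise the loop.
lsupdate() {
  if [ "$1" -lt 3 ]; then echo "status updating"; else echo "status success"; fi
}

attempt=0
while :; do
  attempt=$((attempt + 1))
  status=$(lsupdate "$attempt" | awk '{print $2}')
  case "$status" in
    success) echo "update complete"; break ;;
    stalled_non_redundant) echo "contact a service representative"; break ;;
  esac
  # still updating; in practice, wait before polling again
  if [ "$attempt" -ge 10 ]; then break; fi   # safety bound for this sketch
done
```

A real polling loop would sleep between checks, since node-by-node updates take many minutes each.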
- To verify that the update completed
successfully, issue the lsnodecanistervpd CLI command
for each node in the system.
The code version field displays the new code level.
When a new code level is applied, it is automatically installed
on all the nodes that are in the system.
Note: An automatic system
update can take up to 30 minutes per node.
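The per-node verification can be sketched as a loop over the nodes. Both commands are stubbed: on the real system, lsnode would list the node IDs and lsnodecanistervpd would report each node's VPD. The field name code_level, the node IDs, and the level 8.1.0.0 are placeholders.

```shell
#!/bin/sh
# Sketch of verifying the code level on every node. Stubs stand in for
# the system CLI; all names and values are placeholders.
lsnode() { printf '1\n2\n'; }                      # stub: node IDs
lsnodecanistervpd() { echo "code_level 8.1.0.0"; } # stub: per-node VPD

expected="8.1.0.0"
for node in $(lsnode); do
  level=$(lsnodecanistervpd "$node" | awk '/code_level/ {print $2}')
  if [ "$level" = "$expected" ]; then
    echo "node $node: updated"
  else
    echo "node $node: still at $level"
  fi
done
```

Every node must report the new level before the update is considered committed.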
Note: When
you update your system software to 8.1.0 from a previous version on
a system where you have already installed more than 64 GB of RAM,
all nodes return from the update with an error code of 841. Version
8.1.0 allocates memory in a different way than previous versions,
so the RAM must be "accepted" again. To resolve the error, complete
the following steps:
- On a single node, run the svctask chnodehw command.
Do not run the command on more than one node at a time.
- Wait for the node to restart and return without the error.
- Wait an additional 30 minutes for multipath drives to recover
on the host.
- Repeat this process for each node individually until you clear
the error on all nodes.
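The error-841 recovery procedure above can be sketched as a one-node-at-a-time loop. chnodehw is stubbed and the waits are shown as comments, because on a real system each node restart and the host multipath recovery take real time.

```shell
#!/bin/sh
# Sketch of clearing error 841 node by node. The stub only echoes; the
# real svctask chnodehw reconfigures the node hardware and restarts it.
chnodehw() { echo "chnodehw applied to node $1"; }

for node in 1 2; do           # placeholder node IDs; one node at a time
  chnodehw "$node"
  # wait for the node to restart and the 841 error to clear
  # wait an additional 30 minutes for host multipath recovery
done
echo "all nodes processed"
```

Running the command on more than one node at a time risks taking both nodes of an I/O group offline, which is why the loop is strictly sequential.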