Object overview

Before you set up your system, you must understand the concepts and the objects in the environment.

Each single processing unit is a node canister, which is also called a node. The two nodes within the control enclosure make up an I/O group that is attached to the SAN fabric and connects to host systems and, optionally, to external storage systems.

Volumes are logical disks that are presented by the system. Both node canisters provide access to the volumes. When an application server performs I/O to a volume, it can access the volume by using either of the nodes in the I/O group.

The node canisters are connected to the drives inside the enclosures. The drives are used to create one or more Redundant Arrays of Independent Disks (RAID). These arrays are also known as managed disks (MDisks).

The system can also detect logical units that are presented by external back-end storage systems, such as Fibre Channel storage systems, and manage them as MDisks.

Each MDisk is divided into a number of extents.

MDisks are collected into groups, which are known as storage pools.
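The relationship between an MDisk's capacity and its extent count is simple integer arithmetic. The sketch below is only an illustration; the 1 GiB extent size is an assumed example, and real systems support a range of extent sizes:

```python
# Hypothetical illustration: how an MDisk's capacity divides into extents.
EXTENT_SIZE_GIB = 1  # assumed example extent size; real systems offer several sizes


def extent_count(mdisk_capacity_gib: int, extent_size_gib: int = EXTENT_SIZE_GIB) -> int:
    """Number of whole extents an MDisk of the given capacity provides."""
    return mdisk_capacity_gib // extent_size_gib


# A 512 GiB MDisk divided into 1 GiB extents yields 512 extents;
# with a 4 GiB extent size the same MDisk yields 128 extents.
print(extent_count(512))     # 512
print(extent_count(512, 4))  # 128
```

A larger extent size lets a storage pool address more total capacity at the cost of coarser allocation granularity.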

Each volume is made up of one or two volume copies. Each volume copy is an independent physical copy of the data that is stored on the volume. A volume with two copies is known as a mirrored volume. Volume copies are made out of MDisk extents. All the MDisks that contribute to a particular volume copy belong to the same storage pool.
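These relationships can be sketched as a small data model. This is illustrative Python only, with assumed names, not the product's actual objects or API:

```python
# Illustrative model only: class and field names are assumptions, not the product API.
from dataclasses import dataclass, field


@dataclass
class StoragePool:
    name: str
    free_extents: int  # extents contributed by the pool's MDisks

    def allocate(self, n: int) -> int:
        """Take up to n extents from the pool; return the number actually taken."""
        taken = min(n, self.free_extents)
        self.free_extents -= taken
        return taken


@dataclass
class VolumeCopy:
    pool: StoragePool  # all extents of one copy come from a single pool
    extents: int = 0


@dataclass
class Volume:
    name: str
    copies: list = field(default_factory=list)  # one or two VolumeCopy objects

    @property
    def mirrored(self) -> bool:
        return len(self.copies) == 2


# Two independent copies, each drawn from its own pool, make a mirrored volume.
pool_a, pool_b = StoragePool("pool_a", 100), StoragePool("pool_b", 100)
vol = Volume("vol0", [VolumeCopy(pool_a, pool_a.allocate(10)),
                      VolumeCopy(pool_b, pool_b.allocate(10))])
print(vol.mirrored)  # True
```

Placing the two copies in different pools keeps them physically independent, which is the point of mirroring.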

A volume can be thin-provisioned. This means that the virtual capacity of the volume, as seen by host systems, can be different from the amount of storage that is allocated to the volume from MDisks, called the real capacity. Thin-provisioned volumes can be configured to automatically expand their real capacity by allocating new extents.
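The autoexpand behavior can be sketched as follows. This is a simplified illustration under assumed names, not the product's allocation logic:

```python
# Illustrative sketch of thin provisioning; names are assumptions, not the product API.
class ThinVolume:
    def __init__(self, virtual_gib: int, real_gib: int, extent_gib: int = 1):
        self.virtual_gib = virtual_gib  # capacity presented to host systems
        self.real_gib = real_gib        # storage actually allocated from MDisks
        self.extent_gib = extent_gib

    def write(self, offset_gib: int, length_gib: int) -> None:
        """Autoexpand: allocate new extents when a write lands past the real capacity."""
        needed = offset_gib + length_gib
        while self.real_gib < min(needed, self.virtual_gib):
            self.real_gib += self.extent_gib  # allocate one more extent


vol = ThinVolume(virtual_gib=100, real_gib=10)
vol.write(offset_gib=24, length_gib=2)  # write lands beyond the allocated 10 GiB
print(vol.real_gib)  # 26 -- real capacity grew to cover the write
```

Note that the real capacity can grow only up to the virtual capacity; a host never sees more space than the volume presents.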

At any one time, a single node in the system manages configuration activity. This node, known as the configuration node, maintains a cache of the information that describes the system configuration and provides a focal point for configuration commands.

For a SCSI over Fibre Channel or Fibre Channel over Ethernet (FCoE) connection, the node canisters detect the FC or FCoE ports that are connected to the SAN. These ports correspond to the worldwide port names (WWPNs) of the FC or FCoE host bus adapters (HBAs) that are present in the application servers. You can create logical host objects that group WWPNs that belong to a single application server or to a set of them.

Host servers access volumes by using Fibre Channel, iSCSI, or SAS. The system uses worldwide port names (WWPNs) to identify the SAS and Fibre Channel ports on the host server, and the iSCSI qualified name (IQN) to identify iSCSI hosts.

For a SCSI over Ethernet connection, the iSCSI qualified name (IQN) identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.

Host objects are virtual representations of physical host systems that share a set of volumes. You can create host objects that group WWPNs or IQNs that belong to a single application server or to a set of servers.

The system provides block-level aggregation and volume management for disk storage within the SAN. The system manages a number of back-end storage systems and maps the physical storage within those storage systems into logical disk images that can be seen by application servers and workstations in the SAN. The SAN is configured so that the application servers cannot see the back-end physical storage, which prevents conflicts that would arise if both the system and the application servers tried to manage the back-end storage.

A host mapping is an association between a volume and a host. A particular volume can be accessed from a particular WWPN or IQN only if that WWPN or IQN belongs to a host object that has a mapping to the volume.
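The access rule can be sketched as a simple lookup. The host names, the WWPN, and the IQN below are invented examples, and the dictionary structure is an assumption for illustration, not the product's data model:

```python
# Illustrative sketch; host and mapping structures are assumptions, not the product API.
hosts = {
    "app_server_1": {"ports": {"10:00:00:05:1e:00:00:01",            # example WWPN
                               "iqn.1994-05.com.example:host1"}},    # example IQN
}

# Host mappings: which host objects may access which volumes.
mappings = {("app_server_1", "vol0")}


def can_access(port: str, volume: str) -> bool:
    """A WWPN or IQN reaches a volume only via a host object mapped to that volume."""
    return any(port in h["ports"] and (name, volume) in mappings
               for name, h in hosts.items())


print(can_access("iqn.1994-05.com.example:host1", "vol0"))  # True
print(can_access("iqn.1994-05.com.example:host1", "vol1"))  # False: no mapping
```

Grouping all of a server's ports into one host object means a single mapping grants that server access to the volume over every path.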