Shared Storage Pools:
A shared storage pool is a pool of SAN storage devices that can span multiple Virtual I/O Servers. It is based on a cluster of Virtual I/O Servers and a distributed data object repository. (The repository uses a cluster filesystem developed specifically for storage virtualization; on the VIO Server it shows up as a mount point like /var/vio/SSP/bb_cluster/D_E_F_A_U_L_T_061310.)
When using shared storage pools, the Virtual I/O Server provides storage through logical units that are assigned to client partitions. A logical unit is a file-backed storage device that resides in the cluster filesystem in the shared storage pool. It appears as a virtual SCSI disk in the client partition.
The Virtual I/O Servers that are part of the shared storage pool are joined together to form a cluster. Only Virtual I/O Server partitions can be part of a cluster. The Virtual I/O Server clustering model is based on Cluster Aware AIX (CAA) and RSCT technology.
A cluster can consist of the following maximum number of nodes, depending on the VIOS version:
VIOS version 2.2.0.11, Fix Pack 24, Service Pack 1    <--1 node
VIOS version 2.2.1.3                                  <--4 nodes
VIOS version 2.2.2.0                                  <--16 nodes
------------------------------------------------------------------------------
Thin provisioning
A thin-provisioned device represents a larger image than the actual physical disk space it is using. It is not fully backed by physical storage as long as the blocks are not in use. A thin-provisioned logical unit is defined with a user-specified size when it is created. It appears in the client partition as a virtual SCSI disk with that user-specified size. However, on a thin-provisioned logical unit, blocks on the physical disks in the shared storage pool are only allocated when they are used.
Consider a shared storage pool that has a size of 20 GB. If you create a logical unit with a size of 15 GB, the client partition will see a virtual disk with a size of 15 GB. But as long as the client partition does not write to the disk, only a small portion of that space will initially be used from the shared storage pool. If you create a second logical unit also with a size of 15 GB, the client partition will see two virtual SCSI disks, each with a size of 15 GB. So although the shared storage pool has only 20 GB of physical disk space, the client partition sees 30 GB of disk space in total.
After the client partition starts writing to the disks, physical blocks are allocated in the shared storage pool and the amount of free space in the shared storage pool decreases. Deleting files or logical volumes on a client partition does not increase the free space of the shared storage pool.
When the shared storage pool is full, client partitions will see an I/O error on the virtual SCSI disk. Therefore, even though the client partition reports free space on a disk, that information might not be accurate if the shared storage pool is full.
To prevent such a situation, the shared storage pool provides a threshold that, if reached, writes an event in the errorlog of the Virtual I/O Server.
(If you use the -thick flag with the mkbdsp command, a thick-provisioned logical unit is created instead of a thin-provisioned one, so the client gets the full disk space allocated in the pool up front.)
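For example, a thick-provisioned LUN could be created like this (a sketch; bb_disk_thick and vhost0 are example names, the general mkbdsp syntax is the same as in the command section below with -thick appended):
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk_thick -vadapter vhost0 -thick    <--creates a fully allocated (thick) 10G LUN and maps it to vhost0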
------------------------------------------------------------------------------
When a cluster is created, you must specify one physical volume for the repository and at least one physical volume for the storage pool. The storage pool physical volumes are used to provide storage to the client partitions. The repository physical volume is used to perform cluster communication and store the cluster configuration.
If you need to increase the free space in the shared storage pool, you can either add an additional physical volume or you can replace an existing volume with a bigger one. Physical disks cannot be removed from the shared storage pool.
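For example, the pool could be grown either way like this (hdiskpower3 and hdiskpower4 are example device names; chsp -replace is only available on later VIOS levels):
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3                                  <--add a new physical volume to the pool
chsp -replace -clustername bb_cluster -sp bb_pool -oldpv hdiskpower2 -newpv hdiskpower4    <--replace an existing pool disk with a bigger one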
Requirements:
-each VIO Server must be able to resolve the hostnames of all other VIO Servers in the cluster (DNS or /etc/hosts must contain entries for all VIO Servers)
-the hostname command should show the FQDN (with domain.com)
-VLAN tagging interfaces are not supported for cluster communication in earlier VIO versions
-Fibre Channel adapters should be set to dyntrk=yes and fc_err_recov=fast_fail (see the example after this list)
-the reserve policy of the disks should be set to no_reserve, and all VIO Servers must have these disks in Available state
-1 disk is needed for the repository (min. 10 GB) and 1 or more for data (min. 10 GB); these should be SAN FC LUNs
-Active Memory Sharing paging space cannot be on an SSP disk
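One possible way to set these attributes on each VIO Server (fscsi0 and hdisk2 are example device names; on some multipath drivers, e.g. hdiskpower devices, the reservation attribute may be named differently, such as reserve_lock):
chdev -dev fscsi0 -attr dyntrk=yes fc_err_recov=fast_fail -perm    <--set FC adapter attributes (-perm changes only the ODM, takes effect after reboot)
chdev -dev hdisk2 -attr reserve_policy=no_reserve                  <--disable SCSI reservation on the SSP disk
lsdev -dev hdisk2 -attr reserve_policy                             <--verify the setting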
------------------------------------------------------------------------------
Commands for create:
cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
-clustername bb_cluster <--name of the cluster
-spname bb_pool <--storage pool name
-repopvs hdiskpower1 <--disk of repository
-sppvs hdiskpower2 <--storage pool disk
-hostname bb_vio1 <--VIO Server hostname (where to create cluster)
(This command creates the cluster, starts the CAA daemons and creates the shared storage pool.)
cluster -addnode -clustername bb_cluster -hostname bb_vio2                      adding a node to the cluster (up to 16 nodes can be added)
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower2                       adding a disk to a shared storage pool
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0    creating a 10G LUN and assigning it to vhost0 (lsmap -all will show it)
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk2                     creating a 10G LUN
mkbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk2 -vadapter vhost0        assigning a LUN to a vhost adapter (works only if the backing device name is unique)
mkbdsp -clustername bb_cluster -sp bb_pool -luudid c7ef7a2 -vadapter vhost0     assigning an earlier created LUN (by LUN ID) to a vhost adapter (same as above)
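After mapping, the new LU can be checked on the VIO Server and on the client (vhost0 is an example adapter; on the AIX client the LU appears as a new hdisk):
lsmap -vadapter vhost0                   the LU is listed as a backing device of the vhost adapter
cfgmgr (as root on the AIX client)       scan for new devices
lspv (on the AIX client)                 the new virtual SCSI disk shows up as an additional hdisk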
Commands for display:
cluster -list display cluster name and ID
cluster -status -clustername bb_cluster display cluster state and pool state on each node
lssp -clustername bb_cluster list storage pool details (pool size, free space...)
lssp -clustername bb_cluster -sp bb_pool -bd list created LUNs in the storage pool (backing devices in lsmap -all)
lspv -clustername bb_cluster -sp bb_pool list physical volumes of shared storage pool (disk size, id)
lspv -clustername bb_cluster -capable list which disks can be added to the cluster
lscluster -c list cluster configuration
lscluster -d list disk details of the cluster
lscluster -m list info about nodes (interfaces) of the cluster
lscluster -s list network statistics of the local node (packets sent...)
lscluster -i -n bb_cluster list interface information of the cluster
odmget -q "name=hdiskpower2 and attribute=unique_id" CuAt checking LUN ID (as root)
Commands for remove:
rmbdsp -clustername bb_cluster -sp bb_pool -bd bb_disk1 remove created LUN (backing device will be deleted from vhost adapter)
(disks (for example hdiskpower...) cannot be removed from the cluster)
cluster -rmnode -clustername bb_cluster -hostname bb_vios1 remove node from cluster
cluster -delete -clustername bb_cluster remove cluster completely
------------------------------------------------------------------------------
Create cluster and Shared Storage Pool:
1. create a cluster and pool: cluster -create ...
2. add additional nodes to the cluster: cluster -addnode
3. check which physical volumes can be added: lspv -clustername clusterX -capable
4. add physical volumes: chsp -add
5. create and map LUNs to clients: mkbdsp -clustername... (see the worked example below)
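Putting these steps together, a minimal end-to-end example could look like this (all cluster, pool, disk and host names are examples):
cluster -create -clustername bb_cluster -spname bb_pool -repopvs hdiskpower1 -sppvs hdiskpower2 -hostname bb_vio1
cluster -addnode -clustername bb_cluster -hostname bb_vio2
lspv -clustername bb_cluster -capable
chsp -add -clustername bb_cluster -sp bb_pool hdiskpower3
mkbdsp -clustername bb_cluster -sp bb_pool 10G -bd bb_disk1 -vadapter vhost0
lssp -clustername bb_cluster -sp bb_pool -bd          <--verify that the new LUN exists in the pool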
------------------------------------------------------------------------------
cleandisk -r hdiskX clean cluster signature from hdisk
cleandisk -s hdiskX clean storage pool signature from hdisk
/var/vio/SSP cluster related directory (and files) will be created in this path
------------------------------------------------------------------------------
Managing snapshots:
Snapshots of a LUN can be created, which can later be restored (rolled back) in case of any problems:
$ snapshot -create bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1      <--create a snapshot
$ snapshot -list -clustername bb_cluster -spname bb_pool                                   <--list snapshots of a storage pool
Lu Name          Size(mb)    ProvisionType    Lu Udid
bb_disk1         10240       THIN             4aafb883c949d36a7ac148debc6d4ee7
Snapshot
bb_disk1_snap
$ snapshot -rollback bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1    <--roll back a snapshot to a LUN
$ snapshot -delete bb_disk1_snap -clustername bb_cluster -spname bb_pool -lu bb_disk1      <--delete a snapshot
------------------------------------------------------------------------------
Setting alerts for Shared Storage Pools:
Because thin provisioning is in place, the real free space in the storage cannot be seen exactly from the client side. If the storage pool gets 100% full, an I/O error will occur on the client LPAR. To avoid this, alerts can be configured:
$ alert -list -clustername bb_cluster -spname bb_pool
PoolName         PoolID                                    Threshold%
bb_pool          000000000A8C1517000000005150C18D          35           <--the threshold is shown as a percentage of free space
$ alert -set -clustername bb_cluster -spname bb_pool -type threshold -value 25    <--alert when free space drops below 25%
$ alert -list -clustername bb_cluster -spname bb_pool
PoolName         PoolID                                    Threshold%
bb_pool          000000000A8C1517000000005150C18D          25           <--the new value can be seen here
$ alert -unset -clustername bb_cluster -spname bb_pool <--unset an alert
The warning can be seen in the error log of the Virtual I/O Server.
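The alert event can then be checked with the errlog command (run as padmin; -ls gives the detailed view):
$ errlog          <--summary list of error log entries
$ errlog -ls      <--detailed listing, the threshold alert appears here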
----------------------------------------