IBM POWERVC (Power Virtualization Center)
IBM PowerVC is a tool from IBM that is installed on a Linux server. With the help of its GUI we can manage the virtualization of Power Systems (stop/start LPARs; create, delete, and migrate LPARs; add storage to them…). It is based on OpenStack, an open-source cloud management project without any hardware dependency. PowerVC uses the components of OpenStack (compute, storage, network…), so it is compatible with OpenStack standards.
When a Power server is controlled by PowerVC, it can be managed:
- By the graphical user interface (GUI)
- By scripts that call the IBM PowerVC REST APIs
- By higher-level tools that call IBM PowerVC by using the standard OpenStack APIs (a sketch follows below)
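A minimal sketch of the API route, assuming PowerVC's Keystone-compatible identity service on port 5000 and the Nova-compatible compute service on port 8774 (hostname, credentials, and project name are placeholders; verify the ports and the service catalog in your installation):
PVC=powervc.example.com
# Request a token from the Keystone-compatible identity service:
TOKEN=$(curl -sk -D - -o /dev/null https://$PVC:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
        "password": {"user": {"name": "root",
          "domain": {"name": "Default"}, "password": "secret"}}},
      "scope": {"project": {"name": "ibm-default",
        "domain": {"name": "Default"}}}}}' \
  | awk 'tolower($0) ~ /^x-subject-token/ {print $2}' | tr -d '\r')
# Call the Nova-compatible compute API with the token to list the VMs:
curl -sk -H "X-Auth-Token: $TOKEN" https://$PVC:8774/v2.1/servers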
---------------------------------------------------------------
NOVALINK
Management of the virtualization can go through either an HMC or a NovaLink LPAR. The NovaLink partition is a special LPAR that exists on each Power server (if we decide to use this method) and performs the same functions as an HMC. (A combined solution is also possible, where an HMC and NovaLink exist together.)
The NovaLink architecture enables OpenStack to work with PowerVM (and PowerVC) by providing a direct connection to the PowerVM server, rather than communicating through an HMC. By using NovaLink, IBM PowerVC can scale further: in an HMC-managed environment, IBM PowerVC can manage up to 30 hosts and up to 3000 VMs; in a PowerVM NovaLink-based environment, it can manage up to 200 hosts and 5000 VMs. It is possible to use IBM PowerVC to manage PowerVM NovaLink systems while still managing HMC-managed systems.
NovaLink is enabled via a software package that runs in a Linux LPAR/VM on a POWER8 host. NovaLink provides an interface that is consistent with other supported hypervisors (such as KVM), so OpenStack services can communicate with the LPARs uniformly through the NovaLink partition.
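For illustration, on the NovaLink partition itself the pvmctl command line can be used to query the hypervisor directly (a sketch; subcommands and output vary by NovaLink level):
pvmctl sys list     # show the managed system
pvmctl vios list    # list the Virtual I/O Servers
pvmctl lpar list    # list the LPARs/VMs on this host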
---------------------------------------------------------------
PowerVC and OpenStack
As PowerVC is built on OpenStack, the main OpenStack functions are built into PowerVC as well (see the CLI sketch after this list). These functions are:
- Image management (called "Glance" in OpenStack)
- Compute management (the "Virtual Machines" in PowerVC; called "Nova" in OpenStack)
- Network management (called "Neutron" in OpenStack)
- Storage management (called "Cinder" in OpenStack)
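A hedged sketch of this mapping with the standard OpenStack CLI, assuming it is installed and pointed at PowerVC's identity endpoint (hostname and credentials are placeholders):
export OS_AUTH_URL=https://powervc.example.com:5000/v3
export OS_USERNAME=root OS_PASSWORD=secret OS_PROJECT_NAME=ibm-default
export OS_USER_DOMAIN_NAME=Default OS_PROJECT_DOMAIN_NAME=Default
openstack image list      # Glance  - images
openstack server list     # Nova    - virtual machines
openstack network list    # Neutron - networks
openstack volume list     # Cinder  - volumes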
---------------------------------------------------------------
Deploying Virtual Machines (Host Group - Placement Policy)
In order to use PowerVC and create new LPARs, we first need to add resources: Images, Hosts, Networks, and so on. A new LPAR is created from an Image, and during creation we choose which Power server (Host), which Network, etc. to use.
Power servers are called "Hosts" in PowerVC. After adding several Hosts to PowerVC, we can organize them into groups, called "Host Groups". To each Host Group we can assign a Placement Policy, which defines where a new LPAR will be created. For example, if we choose the policy "Memory Utilization Balanced", our new LPAR (or Virtual Machine in PowerVC terminology) will be deployed on the host where memory utilization is lowest. Every host must be in a host group, and during migration VMs are kept within their host group. Out of the box, PowerVC comes with a "default" host group (a special group that cannot be deleted), which houses any host that is registered with PowerVC but not added to a specific host group.
When a new Host group is created, we can choose from these placement policies:
- Striping: Distributes VMs evenly across all hosts (CPU/RAM/storage/network).
- Packing: Places VMs on the host that contains the most VMs, until its resources are fully used.
- CPU utilization balanced: Places VMs on the host with the lowest CPU utilization in the host group.
- CPU allocation balanced: Places VMs on the host with the lowest percentage of its CPU allocated to VMs.
- Memory utilization balanced: Places VMs on the host with the lowest memory utilization in the host group.
- Memory allocation balanced: Places VMs on the host that would have the lowest percentage of its memory allocated after the deployment or relocation.
(Some tips from the Redbook: Use the striping policy rather than the packing policy. Limit the number of concurrent deployments to match the number of hosts.)
When a new host is added to a host group that is managed by PowerVC and the placement policy is set to striping mode, new VMs are deployed on the new host until its resource usage is about the same as on the previously installed hosts (until it catches up with the existing hosts). (Before updating the firmware or replacing hardware on a host, the host should be moved into maintenance mode.)
The placement policies mentioned above are predefined; it is not possible to create new ones. If, during VM deployment, we choose a specific host rather than a Host group, the placement policy is ignored for that VM.
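For reference, PowerVC host groups surface through the OpenStack compute API as host aggregates, so they can also be inspected from the CLI (a sketch; the host group name is hypothetical, and the OS_* environment from earlier is assumed):
openstack aggregate list                    # one entry per host group
openstack aggregate show production_hosts   # members and placement metadata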
---------------------------------------------------------------
Collocation rules
Collocation rules define on which hosts VMs can run relative to one another. A collocation rule has a policy, which can be either "affinity" or "anti-affinity". An affinity rule means that the VMs in the collocation rule must run on the same host (i.e., "best friends"), and an anti-affinity rule means that the VMs must run on different hosts (i.e., "worst enemies"). PowerVC follows these rules when performing live migration, remote restart, or host evacuation operations (i.e., any mobility operation). Automation becomes much simpler, because administrators do not need to keep these constraints in mind.
You can only add a VM to a collocation rule post-deployment; launching a VM into a rule from the PowerVC GUI at deployment time is currently not supported. Collocation rules can be created in the "Configuration" menu under "Collocation Rules".
It is possible that a user starts a mobility operation outside of PowerVC (e.g., directly on the HMC), so the VM could be moved to a host that violates the collocation rule. In such a case, the policy state is displayed as "violated" in PowerVC and serves as a visual indicator to the user that some remedial action needs to be taken.
You cannot migrate or remote restart any VM that is a member of an "affinity" collocation rule; this restriction exists because doing so would inherently violate the rule, as there would be a period of time in which the VMs are not on the same host. If you need to perform a mobility operation on a VM in an "affinity" collocation rule, remove it from the rule, perform the mobility operation, and then re-add it to the rule.
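Collocation rules correspond to OpenStack server groups, so a hedged CLI sketch looks like this (rule name is made up; the same OS_* environment as above is assumed):
openstack server group create --policy anti-affinity db-pair   # "worst enemies" rule
openstack server group list                                    # shows policies and members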
---------------------------------------------------------------
Templates
Rather than defining all characteristics (CPU/RAM…) for each VM or each storage unit that must be created, we can use a previously defined template.
Three types of templates are available:
- Compute templates: These templates define the processing units and memory that a partition needs. (See the flavor sketch after this list.)
- Deploy templates: These templates are used to allow users to quickly deploy an image. (more details below)
- Storage templates: These templates are used to define storage settings, such as a specific volume type, storage pool, and storage provider.
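In OpenStack terms, compute templates are flavors. A hedged sketch (the values and the PowerVM-specific extra-spec name are assumptions to verify against your release):
openstack flavor list
openstack flavor create --vcpus 4 --ram 8192 aix_medium \
  --property powervm:proc_units=0.4   # shared-processor units (PowerVM extra spec)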
Deploy templates:
A deploy template includes everything necessary to quickly create a VM. It includes:
- the deployment target (a Host group or a specific Host), Storage Connectivity Group and any other policies
- compute template (needed CPU, RAM configuration)
- which image to use during deployment
- network (VLAN) needed for the new VM
- any other scripts to be called during first boot (this part is handled by cloud-init)
A deploy template is basically just the set of information needed to create the new VM. In contrast to an image, a deploy template does not use storage space. (Images do use storage space; for example, an AIX image can sit on a 100 GB LUN, so creating new images takes up more and more space on the storage. Creating new deploy templates does not.)
Creating Deploy Templates:
1. From the Images window, select the image that you want to use to create a deploy template and click Create Template from Image.
2. Fill out the information in the window that opens, then click Create Deploy Template.
3. The deploy template is now listed on the Deploy Templates tab of the Images window.
---------------------------------------------------------------
STORAGE:
Storage provider: Any system that provides storage volumes (SVC, EMC... or SSP). PowerVC may also call these storage controllers.
Fabric: The name for a collection of SAN switches
Storage pool: A storage resource (managed by storage providers) in which volumes are created. PowerVC discovers storage pools; it cannot create them.
Shared storage pool: A PowerVM feature; the pool is created on the VIOSs before PowerVC can create volumes on the SSP. (PowerVC cannot modify it.)
Volume: This is a disk or a LUN. It is created from the storage pools and presented as virtual disks to the partitions.
VMs can access their storage by using vSCSI, NPIV, or an SSP (which presents vSCSI LUNs).
Storage templates:
Storage templates are used to speed up the creation of a disk. A storage template defines several properties of the disk (thin provisioning, I/O group, mirroring, etc.); the disk size is not part of the template. When you register a storage provider, a default storage template is created for that provider. After a disk is created using a template, you cannot modify the template settings.
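Storage templates correspond to OpenStack volume types, so creating a data volume from a template can be sketched like this (template and volume names are hypothetical):
openstack volume type list                   # one entry per storage template
openstack volume create --type "v7000_thin" --size 100 datavol01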
Storage connectivity groups
In short, a storage connectivity group refers to a set of VIOSs with access to the same storage controllers. When a VM is created, PowerVC needs to identify which host has connectivity to the requested storage. Also, when a VM is migrated, PowerVC must ensure that the target host also provides connectivity to the VM's volumes. The purpose of a storage connectivity group is to define settings that control how volumes are attached to VMs, including the connectivity type for boot and data volumes, physical FC port restrictions, fabrics, and redundancy requirements for VIOSs, ports, and fabrics. A storage connectivity group contains the set of VIOSs that are allowed to participate in volume connectivity.
Custom storage connectivity groups provide flexibility when different policies are needed for different types of VMs. For example, one storage connectivity group might use VIOS_1 and VIOS_2 for production VMs, while another uses VIOS_3 for development VMs. Many other connectivity policies are available with storage connectivity groups.
When a VM is deployed with PowerVC, a storage connectivity group must be specified. The VM is associated with that storage connectivity group during the VM's existence. A VM can be deployed only on Power Systems hosts that satisfy the storage connectivity group settings. The VM can be migrated only within its associated storage connectivity group and host group.
The default storage connectivity groups for NPIV connectivity, vSCSI connectivity, and SSP are created when PowerVC recognizes that the corresponding resources are available for management. After you add the storage providers and define the storage templates, you can create storage volumes.
Only data volumes must be created manually. Boot volumes are handled by PowerVC automatically. When you deploy a partition, IBM PowerVC automatically creates the boot volumes and data volumes that are included in the images.
Shared storage pool
SSPs are supported on hosts that are managed either by an HMC or by NovaLink. The SSP is configured manually, without PowerVC (creation of a cluster on the VIO servers, adding disks to the pool). After that, PowerVC discovers the SSP when it discovers the VIOSs. When a VM is created, PowerVC creates logical units (LUs) in the SSP, then instructs the VIOS to map these LUs to the VM (VIO client partition) as vSCSI devices.
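For reference, a sketch of the manual VIOS-side setup that precedes the PowerVC discovery (cluster, pool, and hdisk names are hypothetical; check the cluster/chsp syntax for your VIOS level):
$ cluster -create -clustername pvc_cluster -repopvs hdisk2 -spname pvc_ssp -sppvs hdisk3 hdisk4 -hostname vios1
$ chsp -add -clustername pvc_cluster -sp pvc_ssp hdisk5   # add a disk to the pool later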
---------------------------------------------------------------
NETWORK
During the network configuration of PowerVC, create all of the networks that will be needed for future VM creation. Contact your network administrator to add all of the needed VLANs on the switch ports that are used by the SEAs (PowerVM).
PowerVC requires that the SEAs are created before it starts to manage the systems. If you are using an SEA in sharing/auto mode with VLAN tagging, create it without any VLANs assigned on the Virtual Ethernet Adapters. PowerVC adds or removes the VLANs on the SEAs when necessary (at VM creation and deletion). For example:
- If you deploy a VM on a new network, IBM PowerVC adds the VLAN on the SEA.
- If you delete the last VM of a specific network (for a host), the VLAN is automatically deleted.
- If the VLAN is the last VLAN that was defined on the Virtual Ethernet Adapter, this VEA is removed from the SEA.
When a network is created in PowerVC, an SEA is automatically chosen on each registered host. If the VLAN does not exist yet on the SEA, PowerVC deploys that VLAN to the specified SEA. PowerVC supports the use of virtual switches in the system: use multiple virtual switches when you want to separate a single VLAN across multiple distinct physical networks (to split a single VLAN across multiple SEAs, put those SEAs on separate virtual switches). To manage PowerVM, PowerVC requires that at least one SEA is defined on the host.
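Under the covers, a PowerVC network with a VLAN ID is a Neutron provider network; a hedged CLI sketch (VLAN ID, physical network name, and subnet are made up):
openstack network create net_vlan123 --provider-network-type vlan \
  --provider-segment 123 --provider-physical-network default
openstack subnet create subnet_vlan123 --network net_vlan123 \
  --subnet-range 10.1.123.0/24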
In environments with dual VIOSs, the secondary SEA is not shown except as an attribute on the primary SEA. If VLANs are added to SEA adapters and to VIOS profiles after the host is brought into management of IBM PowerVC, the new VLAN is not automatically discovered by PowerVC. To discover a newly added VLAN, run the Verify Environment function in the IBM PowerVC system.
PowerVC supports Dynamic Host Configuration Protocol (DHCP) or static IP address assignment. For DHCP, an external DHCP server is required to provide the addresses on the VLANs of the objects that are managed by PowerVC. When DHCP is used, PowerVC is not aware of the IP addresses of the VMs that it manages. PowerVC also supports IP addresses by using hardcoded (/etc/hosts) or Domain Name System (DNS)-based host name resolution.
Since Version 1.2.2, PowerVC can dynamically add a network interface controller (NIC) to a VM or remove a NIC from a VM. PowerVC does not set the IP address for new network interfaces that are created after the machine deployment. Any removal of a NIC results in freeing the IP address that was set on it.
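A hedged sketch of such a NIC hot-add and removal through the standard CLI (VM, port, and network names are hypothetical); note that the guest OS still has to configure the IP address itself:
openstack port create vm1_nic2 --network net_vlan123
openstack server add port vm1 vm1_nic2      # hot-add the NIC
openstack server remove port vm1 vm1_nic2   # detach; the IP address is freed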
---------------------------------------------------------------
Environment checker:
This is a single interface to confirm that the resources (Compute, Storage, Network, etc.) registered in PowerVC meet the configuration and hardware-level requirements.
The environment checker tool verifies these (and more):
- The management server has the required resources (memory, disk space, etc.).
- Hosts and storage are the correct machine type and model.
- The allowed number of hosts is not exceeded.
- The correct level of Virtual I/O Server is installed on your hosts.
- The Virtual I/O Server is configured correctly on all of your hosts.
- Storage and SAN switches are configured correctly.
…
==============================================
Commands:
/opt/ibm/powervc/version.properties    contains version info and other properties of PowerVC
https://ip_address/powervc/version    gets the current version of PowerVC
powervc-backup --targetdir /powervc/backup    creates a backup
powervc-restore --targetdir /powervc/backup/<backup dir>/    restores a PowerVC backup
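For example, checking the version from the command line (the hostname is a placeholder):
curl -sk https://powervc.example.com/powervc/version
grep -i version /opt/ibm/powervc/version.properties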
===========================================
PowerVC backup
1. Mount a remote NFS share where the backup will be saved:
[root@powervc ~]# mount nim01:/repository/BACKUP /mnt
mount.nfs: Remote I/O error
The Remote I/O error can happen because PowerVC runs on Linux (Red Hat), which tries NFS version 4 by default; NFS 4 is not configured on the AIX side, so choose NFS version 3 during the mount:
[root@powervc ~]# mount -o vers=3 nim01:/repository/BACKUP /mnt
2. Start the backup, which takes about 5 minutes; during that time the web interface is not available:
[root@powervc ~]# powervc-backup --targetdir /mnt
Continuing with this operation will stop all PowerVC services. Do you want to continue? (y/N):y
Stopping PowerVC services...
Backing up the databases and data files...
Database and file backup completed. Backup data is in archive /mnt/20180622105847651966/powervc_backup.tar.gz
Starting PowerVC httpd services...
Starting PowerVC bumblebee services...
Starting PowerVC services...
PowerVC backup completed successfully.