Reconfiguring a Cluster

You can reconfigure a cluster either when it is halted or while it is still running. Some operations can be done only when the cluster is halted. Table 7-1 “Types of Changes to the Cluster Configuration” shows the required cluster state for many kinds of changes.

Table 7-1 Types of Changes to the Cluster Configuration

Each entry shows the change to the cluster configuration, followed by the required cluster state.

  • Add a new node: All systems configured as members of this cluster must be running.

  • Delete a node: A node can be deleted even though it is unavailable or unreachable.

  • Add a volume group: Cluster can be running.

  • Delete a volume group: Cluster can be running. Packages that use the volume group will not be able to start again until their configuration is modified.

  • Change Maximum Configured Packages: Cluster can be running.

  • Change Quorum Server Configuration: Cluster must not be running.

  • Change Cluster Lock Configuration (LVM lock disk): Cluster can be running under certain conditions; see “Updating the Cluster Lock Configuration”.

  • Change Cluster Lock Configuration (lock LUN): Cluster must not be running. See “Updating the Cluster Lock LUN Configuration Offline”.

  • Add NICs and their IP addresses, if any, to the cluster configuration: Cluster can be running. See “Changing the Cluster Networking Configuration while the Cluster Is Running”.

  • Delete NICs and their IP addresses, if any, from the cluster configuration: Cluster can be running. See “Changing the Cluster Networking Configuration while the Cluster Is Running”. If you are also removing the NIC from the system, see “Removing a LAN or VLAN Interface from a Node”.

  • Change the designation of an existing interface from HEARTBEAT_IP to STATIONARY_IP, or vice versa: Cluster can be running. See “Changing the Cluster Networking Configuration while the Cluster Is Running”.

  • Reconfigure IP addresses for a NIC used by the cluster: You must delete the interface from the cluster configuration, reconfigure it, then add it back into the cluster configuration; the cluster can be running throughout. See “What You Must Keep in Mind”.

  • Change NETWORK_FAILURE_DETECTION parameter: Cluster can be running. See “Monitoring LAN Interfaces and Detecting Failure”.

  • Change NETWORK_POLLING_INTERVAL: Cluster can be running.

  • Change HEARTBEAT_INTERVAL, NODE_TIMEOUT, or AUTO_START_TIMEOUT: Cluster can be running, except in a CVM environment; see the NOTE below this table.

  • Change Access Control Policy: Cluster and package can be running.

  • Failover Optimization (enable or disable the Faster Failover product): Cluster must not be running.

NOTE: If you are using CVM or CFS, you cannot change HEARTBEAT_INTERVAL, NODE_TIMEOUT, or AUTO_START_TIMEOUT while the cluster is running. This is because they affect the aggregate failover time, which is only reported to the CVM stack on cluster startup.

Updating the Cluster Lock Configuration

Use the procedures that follow whenever you need to change the device file names of the cluster lock physical volumes - for example, when you are migrating cluster nodes to the agile addressing scheme available as of HP-UX 11i v3 (see “About Device File Names (Device Special Files)”).

Updating the Cluster Lock Disk Configuration Online

You can change the device file names (DSFs) of the cluster lock physical volumes (that is, the values of the FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV parameters in the cluster configuration file) without bringing down the cluster, under the following conditions:

  • You are not changing the physical disks themselves

  • You are changing values that already exist in the cluster configuration file, not adding or deleting them

  • The node on which you are making the change is not running in the cluster (that is, you have halted it by means of cmhaltnode, or by selecting Halt Node in Serviceguard Manager)

  • The cluster nodes are running Serviceguard 11.17.01 or later

To update the values of the FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV parameters without bringing down the cluster, proceed as follows (a sample command sequence appears after the steps):

  1. Halt the node (cmhaltnode) on which you want to make the changes.

  2. In the cluster configuration file, modify the values of FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV for this node.

  3. Run cmcheckconf to check the configuration.

  4. Run cmapplyconf to apply the configuration.

  5. Restart the node (cmrunnode).

  6. Repeat this procedure on each node on which you want to make the changes.
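
For example, the sequence on one node might look like the following sketch. The node name, configuration file name, and device special file names are illustrative only; substitute your own values.

    # Halt the node whose lock-disk DSFs you are changing
    # (add -f if packages are running on the node and must be halted or failed over)
    cmhaltnode ftsys9

    # Edit the cluster configuration file (for example, clconfig.ascii) and, in the
    # entry for ftsys9, change the lock-disk DSFs, for example:
    #   FIRST_CLUSTER_LOCK_PV   /dev/disk/disk4
    #   SECOND_CLUSTER_LOCK_PV  /dev/disk/disk5

    # Check the configuration, apply it, and restart the node
    cmcheckconf -C clconfig.ascii
    cmapplyconf -C clconfig.ascii
    cmrunnode ftsys9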

For information about replacing the physical disk, see “Replacing a Lock Disk”.

Updating the Cluster Lock Disk Configuration Offline

If you cannot meet the conditions spelled out above for updating the configuration online, or you prefer to make the changes while the cluster is down, proceed as follows:

  1. Halt the cluster.

  2. In the cluster configuration file, modify the values of FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV for each node.

  3. Run cmcheckconf to check the configuration.

  4. Run cmapplyconf to apply the configuration.

For information about replacing the physical disk, see “Replacing a Lock Disk”.

Updating the Cluster Lock LUN Configuration Offline

The cluster must be halted before you change the lock LUN configuration. Proceed as follows (an example of the edited entries appears after the steps):

  1. Halt the cluster.

  2. In the cluster configuration file, modify the values of CLUSTER_LOCK_LUN for each node.

  3. Run cmcheckconf to check the configuration.

  4. Run cmapplyconf to apply the configuration.
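
For example, after step 2 the per-node lock LUN entries in the cluster configuration file might look like this. The device special file name is illustrative only, and other per-node parameters (NETWORK_INTERFACE entries and so on) are omitted here.

    NODE_NAME ftsys9
      CLUSTER_LOCK_LUN /dev/disk/disk12
    NODE_NAME ftsys10
      CLUSTER_LOCK_LUN /dev/disk/disk12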

For information about replacing the physical device, see “Replacing a Lock LUN”.

Reconfiguring a Halted Cluster

You can make a permanent change in the cluster configuration when the cluster is halted. This procedure must be used for changes marked “Cluster must not be running” in Table 7-1 “Types of Changes to the Cluster Configuration”, but it can be used for any other cluster configuration changes as well.

Use the following steps (a sample command sequence follows):

  1. Halt the cluster on all nodes, using Serviceguard Manager’s Halt Cluster command, or cmhaltcl on the command line.

  2. On one node, reconfigure the cluster as described in the chapter “Building an HA Cluster Configuration.” You can do this by using Serviceguard Manager, or by entering cmquerycl on the command line to generate an ASCII file, which you then edit.

  3. Make sure that all nodes listed in the cluster configuration file are powered up and accessible. To copy the binary cluster configuration file to all nodes, use Serviceguard Manager’s Apply button, or enter cmapplyconf on the command line. This file overwrites any previous version of the binary cluster configuration file.

  4. Start the cluster on all nodes or on a subset of nodes. Use Serviceguard Manager’s Run Cluster command, or cmruncl on the command line.
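
On the command line, the procedure might look like the following sketch. The node names and file name are examples only; the cmquerycl options depend on which changes you are making.

    # 1. Halt the cluster on all nodes (-f also halts any running packages)
    cmhaltcl -f

    # 2. Generate an editable configuration file for the nodes you want, then edit it
    cmquerycl -C clconfig.ascii -n ftsys9 -n ftsys10

    # 3. Check and apply the edited configuration (all listed nodes must be up)
    cmcheckconf -C clconfig.ascii
    cmapplyconf -C clconfig.ascii

    # 4. Start the cluster again on all nodes
    cmruncl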

Reconfiguring a Running Cluster

This section provides instructions for changing the cluster configuration while the cluster is up and running. Note the following restrictions:

  • You cannot change the quorum server or lock disk configuration while the cluster is running.

  • You cannot remove an active node from the cluster. You must halt the node first.

  • You cannot delete an active volume group from the cluster configuration. You must halt any package that uses the volume group and ensure that the volume group is inactive before deleting it.

  • The only configuration change allowed while a node is unreachable (for example, completely disconnected from the network) is to delete the unreachable node from the cluster configuration. If there are also packages that depend upon that node, the package configuration must also be modified to delete the node. This all must be done in one configuration request (cmapplyconf command).

Changes to the package configuration are described in a later section.

Adding Nodes to the Cluster While the Cluster is Running

You can use Serviceguard Manager to add nodes to a running cluster, or use Serviceguard commands as in the example below.

In this example, nodes ftsys8 and ftsys9 are already configured in a running cluster named cluster1, and you are adding node ftsys10.

  1. Use the following command to store a current copy of the existing cluster configuration in a temporary file:

    cmgetconf -c cluster1 temp.ascii

  2. Specify a new set of nodes to be configured and generate a template of the new configuration. Specify the node name (39 bytes or less) without its full domain name; for example, ftsys8 rather than ftsys8.cup.hp.com:

    cmquerycl -C clconfig.ascii -c cluster1 \
    -n ftsys8 -n ftsys9 -n ftsys10

  3. Open clconfig.ascii in an editor and check that the information about the new node is what you want.

  4. Verify the new configuration:

    cmcheckconf -C clconfig.ascii

  5. Apply the changes to the configuration and distribute the new binary configuration file to all cluster nodes:

    cmapplyconf -C clconfig.ascii

Use cmrunnode to start the new node, and, if you so decide, set the AUTOSTART_CMCLD parameter to 1 in the /etc/rc.config.d/cmcluster file to enable the new node to join the cluster automatically each time it reboots.
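
For example, using the node name from the procedure above:

    # Start Serviceguard on the new node so it joins the running cluster
    cmrunnode ftsys10

    # Optionally, on ftsys10, enable automatic cluster join at boot by setting
    # the following line in /etc/rc.config.d/cmcluster:
    AUTOSTART_CMCLD=1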

NOTE: Before you can add a node to a running cluster that uses Veritas CVM (on systems that support it), the node must already be connected to the disk devices for all CVM disk groups. The disk groups will be available for import when the node joins the cluster.

Removing Nodes from the Cluster while the Cluster Is Running

You can use Serviceguard Manager to delete nodes, or Serviceguard commands as shown below. The following restrictions apply:

  • If the node you want to delete is unreachable (disconnected from the LAN, for example), you can delete the node only if there are no packages which specify the unreachable node. If there are packages that depend on the unreachable node, halt the cluster or use Serviceguard commands as described in the next section.

Use the following procedure to delete a node with HP-UX commands. In this example, nodes ftsys8, ftsys9 and ftsys10 are already configured in a running cluster named cluster1, and you are deleting node ftsys10.

NOTE: If you want to remove a node from the cluster, run the cmapplyconf command from another node in the same cluster. If you try to issue the command on the node you want removed, you will get an error message.
  1. Use the following command to store a current copy of the existing cluster configuration in a temporary file:

    cmgetconf -c cluster1 temp.ascii

  2. Specify the new set of nodes to be configured (omitting ftsys10) and generate a template of the new configuration:

    cmquerycl -C clconfig.ascii -c cluster1 -n ftsys8 -n ftsys9

  3. Edit the file clconfig.ascii to check the information about the nodes that remain in the cluster.

  4. Halt the node you are going to remove (ftsys10 in this example):

    cmhaltnode -f -v ftsys10

  5. Verify the new configuration:

    cmcheckconf -C clconfig.ascii

  6. From ftsys8 or ftsys9, apply the changes to the configuration and distribute the new binary configuration file to all cluster nodes:

    cmapplyconf -C clconfig.ascii

NOTE: If you are trying to remove an unreachable node on which many packages are configured to run (especially if the packages use a large number of EMS resources) you may see the following message:
The configuration change is too large to process while the cluster is running.
Split the configuration change into multiple requests or halt the cluster.

In this situation, you must halt the cluster to remove the node.

Changing the Cluster Networking Configuration while the Cluster Is Running

What You Can Do

Online operations you can perform include:

  • Add a network interface with its HEARTBEAT_IP or STATIONARY_IP.

  • Add a standby interface.

  • Delete a network interface with its HEARTBEAT_IP or STATIONARY_IP.

  • Delete a standby interface.

  • Change the designation of an existing interface from HEARTBEAT_IP to STATIONARY_IP, or vice versa.

  • Change the NETWORK_POLLING_INTERVAL.

  • Change the NETWORK_FAILURE_DETECTION parameter.

  • A combination of any of these in one transaction (cmapplyconf), given the restrictions below.

What You Must Keep in Mind

The following restrictions apply:

  • You must not change the configuration of all heartbeats at one time, or change or delete the only configured heartbeat.

    At least one working heartbeat, preferably with a standby, must remain unchanged.

  • In a CVM configuration, you can add and delete only data LANs and IP addresses.

    You cannot change the heartbeat configuration while a cluster that uses CVM is running.

  • You cannot add interfaces or modify their characteristics unless those interfaces, and all other interfaces in the cluster configuration, are healthy.

    There must be no bad NICs or non-functional or locally switched subnets in the configuration, unless you are deleting those components in the same operation.

  • You cannot change the designation of an existing interface from HEARTBEAT_IP to STATIONARY_IP, or vice versa, without also making the same change to all peer network interfaces on the same subnet on all other nodes in the cluster.

  • You cannot change the designation of an interface from STATIONARY_IP to HEARTBEAT_IP unless the subnet is common to all nodes.

    Remember that the HEARTBEAT_IP must be an IPv4 address, and must be on the same subnet on all nodes (except in cross-subnet configurations; see “Cross-Subnet Configurations”).

  • You cannot delete a primary interface without also deleting any standby interfaces, unless the standby is being used by another primary interface that is not being deleted.

  • You cannot delete a subnet or IP address from a node while a package that uses it (as a monitored_subnet, ip_subnet, or ip_address) is configured to run on that node.

    See monitored_subnet for more information about the package networking parameters.

  • You cannot change the IP configuration of an interface (NIC) used by the cluster in a single transaction (cmapplyconf).

    You must first delete the NIC from the cluster configuration, then reconfigure the NIC (using ifconfig(1M), for example), then add the NIC back into the cluster. A sketch of this sequence appears after this list.

    Examples of when you must do this include:

    • moving a NIC from one subnet to another

    • adding an IP address to a NIC

    • removing an IP address from a NIC

    CAUTION: Do not add IP addresses to network interfaces that are configured into the Serviceguard cluster, unless those IP addresses themselves will be immediately configured into the cluster as stationary IP addresses. If you configure any address other than a stationary IP address on a Serviceguard network interface, it could collide with a relocatable package address assigned by Serviceguard.
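
For example, to move an IP address of a cluster NIC to a different subnet, the overall flow is delete, reconfigure, re-add. The following is a minimal sketch; the interface name, addresses, and file name are illustrative only, and any address you configure must not collide with a relocatable package address.

    # 1. Delete the NIC and its IP address from the cluster configuration
    cmgetconf -c cluster1 clconfig.ascii
    # (edit clconfig.ascii to remove the NETWORK_INTERFACE and IP entries for lan0)
    cmcheckconf -C clconfig.ascii
    cmapplyconf -C clconfig.ascii

    # 2. Reconfigure the NIC at the HP-UX level (example address shown);
    #    make persistent changes in /etc/rc.config.d/netconf as well
    ifconfig lan0 inet 192.168.5.18 netmask 255.255.255.0 up

    # 3. Add the NIC back into the cluster configuration with its new address
    # (edit clconfig.ascii to restore the entries), then check and apply again
    cmcheckconf -C clconfig.ascii
    cmapplyconf -C clconfig.ascii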

Some sample procedures follow.

Example: Adding a Heartbeat LAN

Suppose that subnet 15.13.170.0 is shared by nodes ftsys9 and ftsys10 in the two-node cluster cluster1, and you want to add it to the cluster configuration as a heartbeat subnet. Proceed as follows.

  1. Run cmquerycl to get a cluster configuration template file that includes networking information for interfaces that are available to be added to the cluster configuration:

    cmquerycl -c cluster1 -C clconfig.ascii

    NOTE: As of Serviceguard A.11.18, cmquerycl -c produces output that includes commented-out entries for interfaces that are not currently part of the cluster configuration, but are available.

    The networking portion of the resulting clconfig.ascii file looks something like this:

    NODE_NAME ftsys9
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.18
      # NETWORK_INTERFACE lan0
      #   STATIONARY_IP 15.13.170.18
      NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan1, lan0: lan2
    NODE_NAME ftsys10
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.19
      # NETWORK_INTERFACE lan0
      #   STATIONARY_IP 15.13.170.19
      NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan0, lan1: lan2
  2. Edit the file to uncomment the entries for the subnet that is being added (lan0 in this example), and change STATIONARY_IP to HEARTBEAT_IP:

    NODE_NAME ftsys9
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.18
      NETWORK_INTERFACE lan0
        HEARTBEAT_IP 15.13.170.18
      NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan1, lan0: lan2
    NODE_NAME ftsys10
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.19
      NETWORK_INTERFACE lan0
        HEARTBEAT_IP 15.13.170.19
      NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan0, lan1: lan2
  3. Verify the new configuration:

    cmcheckconf -C clconfig.ascii

  4. Apply the changes to the configuration and distribute the new binary configuration file to all cluster nodes:

    cmapplyconf -C clconfig.ascii

If you were configuring the subnet for data instead, and wanted to add it to a package configuration, you would now need to:

  1. Halt the package

  2. Add the new networking information to the package configuration file

  3. In the case of a legacy package, add the new networking information to the package control script if necessary

  4. Apply the new package configuration, and redistribute the control script if necessary.

For more information, see “Reconfiguring a Package on a Running Cluster ”.

Example: Deleting a Subnet Used by a Package

In this example, we are deleting subnet 15.13.170.0 (lan0). This will also mean deleting lan3, which is a standby for lan0 and not shared by any other primary LAN. Proceed as follows.

  1. Halt any package that uses this subnet and delete the corresponding networking information (monitored_subnet, ip_subnet, ip_address; see monitored_subnet).

    See “Reconfiguring a Package on a Running Cluster ” for more information.

  2. Run cmquerycl to get the cluster configuration file:

    cmquerycl -c cluster1 -C clconfig.ascii

  3. Comment out the network interfaces lan0 and lan3 and their IP addresses, if any, on all affected nodes. The networking portion of the resulting file looks something like this:

    NODE_NAME ftsys9
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.18
      # NETWORK_INTERFACE lan0
      #   STATIONARY_IP 15.13.170.18
      # NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan1, lan0: lan2
    NODE_NAME ftsys10
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.3.17.19
      # NETWORK_INTERFACE lan0
      #   STATIONARY_IP 15.13.170.19
      # NETWORK_INTERFACE lan3
      # Possible standby Network Interfaces for lan0, lan1: lan2
  4. Verify the new configuration:

    cmcheckconf -C clconfig.ascii

  5. Apply the changes to the configuration and distribute the new binary configuration file to all cluster nodes:

    cmapplyconf -C clconfig.ascii

Removing a LAN or VLAN Interface from a Node

You must remove a LAN or VLAN interface from the cluster configuration before removing the interface from the system.

On an HP-UX 11i v3 system, you can then remove the interface without shutting down the node. Follow these steps on the affected node (a sample command sequence appears after the steps):

NOTE: This can be done on a running system only on HP-UX 11i v3. You must shut down an HP-UX 11i v2 system before removing the interface.
  1. If you are not sure whether or not a physical interface (NIC) is part of the cluster configuration, run olrad -C with the affected I/O slot ID as argument. If the NIC is part of the cluster configuration, you’ll see a warning message telling you to remove it from the configuration before you proceed. See the olrad(1M) manpage for more information about olrad.

  2. Use the cmgetconf command to store a copy of the cluster’s existing cluster configuration in a temporary file. For example:

    cmgetconf clconfig.ascii 

  3. Edit clconfig.ascii and delete the line(s) specifying the NIC name and its IP address(es) (if any) from the configuration.

  4. Run cmcheckconf to verify the new configuration.

  5. Run cmapplyconf to apply the changes to the configuration and distribute the new configuration file to all the cluster nodes.

  6. Run olrad -d to remove the NIC.
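
For example, on an HP-UX 11i v3 node the sequence might look like the following sketch. The I/O slot ID and file name are illustrative only.

    # 1. Check whether the NIC in the affected slot is still part of the cluster
    #    configuration (a warning is displayed if it is)
    olrad -C 0-0-0-1-0

    # 2-5. Remove the NIC from the cluster configuration and apply the change
    cmgetconf clconfig.ascii
    # (edit clconfig.ascii to delete the NETWORK_INTERFACE and IP entries for the NIC)
    cmcheckconf -C clconfig.ascii
    cmapplyconf -C clconfig.ascii

    # 6. Delete the NIC from the system using online deletion
    olrad -d 0-0-0-1-0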

See also “Replacing LAN or Fibre Channel Cards”.

Changing the LVM Configuration while the Cluster is Running

You can do this in Serviceguard Manager, or use HP-UX commands as in the example that follows.

NOTE: You cannot change the volume group or physical volume configuration of the cluster lock disk while the cluster is running. If you are removing a volume group from the cluster configuration, make sure that you also modify any package that activates and deactivates this volume group. In addition, you should use the LVM vgexport command on the removed volume group; do this on each node that will no longer be using the volume group (see the example following the steps below).

To change the cluster’s LVM configuration, follow these steps:

  1. Use the cmgetconf command to store a copy of the cluster's existing cluster configuration in a temporary file. For example: cmgetconf clconfig.ascii 

  2. Edit the file clconfig.ascii to add or delete volume groups.

  3. Use the cmcheckconf command to verify the new configuration.

  4. Use the cmapplyconf command to apply the changes to the configuration and distribute the new binary configuration file to all cluster nodes.

NOTE: If the volume group that you are deleting from the cluster is currently activated by a package, the configuration will be changed but the deletion will not take effect until the package is halted; thereafter, the package will no longer be able to run without further modification, such as removing the volume group from the package configuration file or control script.
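
For example, following the recommendation in the NOTE above, once a volume group has been removed from the cluster configuration you might clean it up on each node that will no longer use it. The volume group name below is hypothetical.

    # Make sure the volume group is not active on this node
    vgchange -a n /dev/vgdata

    # Remove the volume group entry from this node's LVM configuration
    vgexport /dev/vgdata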

Changing the VxVM or CVM Storage Configuration

You can add VxVM disk groups to the cluster configuration while the cluster is running. Before you can add new CVM disk groups, the cluster must be running.

NOTE: Check the Serviceguard, SGeRAC, and SMS Compatibility and Feature Matrix and the latest Release Notes for your version of Serviceguard for up-to-date information about support for CVM and CFS: http://www.docs.hp.com -> High Availability -> Serviceguard.

Create CVM disk groups from the CVM Master Node:

  • For CVM 3.5, and for CVM 4.1 and later without CFS, edit the configuration file of the package that uses CVM storage. Add the CVM storage group by means of the cvm_dg parameter (or STORAGE_GROUP in a legacy package). Then run the cmapplyconf command. (A sketch of this case appears below.)

  • For CVM 4.1 and later with CFS, edit the configuration file of the package that uses CFS. Configure the three dependency_ parameters. Then run the cmapplyconf command.

Similarly, you can delete VxVM or CVM disk groups provided they are not being used by a cluster node at the time.
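
For the CVM-without-CFS case described above, a minimal sketch of the relevant line in a modular package configuration file might look like this; the disk group name is hypothetical. Run cmapplyconf on the edited package configuration file afterward.

    # Modular package configuration file: CVM disk group used by this package
    cvm_dg          dg_pkg1

    # In a legacy package ASCII configuration file, the equivalent line would be:
    # STORAGE_GROUP dg_pkg1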

CAUTION: Serviceguard manages the Veritas processes, specifically gab and LLT. This means that you should never use administration commands such as gabconfig, llthosts, and lltconfig to administer a cluster. It is safe to use the read-only variants of these commands, such as gabconfig -a. But a Veritas administrative command could potentially crash nodes or the entire cluster.
NOTE: If you are removing a disk group from the cluster configuration, make sure that you also modify or delete any package configuration file (or legacy package control script) that imports and deports this disk group. If you are removing a disk group managed by CVM without CFS, be sure to remove the corresponding entries for the disk group from the package configuration file. If you are removing a disk group managed by CVM with CFS, be sure to remove the corresponding dependency_ parameters.

Changing MAX_CONFIGURED_PACKAGES

As of Serviceguard A.11.17, you can change MAX_CONFIGURED_PACKAGES while the cluster is running. The default for MAX_CONFIGURED_PACKAGES is the maximum number allowed in the cluster. You can use Serviceguard Manager to change MAX_CONFIGURED_PACKAGES, or Serviceguard commands as shown below.

Use cmgetconf to obtain a current copy of the cluster's existing configuration; for example:

cmgetconf -c <cluster_name> clconfig.ascii 

Edit the clconfig.ascii file to include the new value for MAX_CONFIGURED_PACKAGES. Then use the cmcheckconf command to verify the new configuration. Using the -k or -K option can significantly reduce the response time.

Use cmapplyconf to apply the changes to the configuration and send the new configuration file to all cluster nodes. Using -k or -K can significantly reduce the response time.
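
For example, the sequence might look like the following sketch; the cluster name, file name, and package limit are illustrative only.

    # Get the current cluster configuration
    cmgetconf -c cluster1 clconfig.ascii

    # (edit clconfig.ascii and set, for example, MAX_CONFIGURED_PACKAGES 150)

    # Check and apply; -k limits disk checking and can reduce the response time
    cmcheckconf -k -C clconfig.ascii
    cmapplyconf -k -C clconfig.ascii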
