Managing Serviceguard Fifteenth Edition > Chapter 5 Building an HA Cluster Configuration

Preparing Your Systems


This section describes the tasks you should complete on the prospective cluster nodes before you actually configure the cluster.

Installing and Updating Serviceguard

For information about installing Serviceguard, see the Release Notes for your version at http://docs.hp.com -> High Availability -> Serviceguard -> Release Notes.

For information about installing and updating HP-UX, see the HP-UX Installation and Update Guide for the version you need: go to http://docs.hp.com and choose the HP-UX version from the list under Operating Environments, then choose Installing and Updating.

Appendix E “Software Upgrades ” of this manual provides instructions for upgrading Serviceguard without halting the cluster. Make sure you read the entire Appendix, and the corresponding section in the Release Notes, before you begin.

Learning Where Serviceguard Files Are Kept

Serviceguard uses a special file, /etc/cmcluster.conf, to define the locations for configuration and log files within the HP-UX filesystem. The following locations are defined in the file:

################## cmcluster.conf ###############

# Highly Available Cluster file locations

# This file must not be edited


NOTE: If these variables are not defined on your system, then source the file /etc/cmcluster.conf in your login profile for user root. For example, you can add this line to root’s .profile file:
. /etc/cmcluster.conf

Throughout this book, system filenames are usually given with one of these location prefixes. Thus, references to $SGCONF/filename can be resolved by supplying the definition of the prefix that is found in this file. For example, if SGCONF is defined as /etc/cmcluster/, then the complete pathname for file $SGCONF/cmclconfig is /etc/cmcluster/cmclconfig.
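The prefix resolution described above can be reproduced in a short shell sketch. Here a throwaway demo file stands in for /etc/cmcluster.conf so the snippet runs anywhere; its single SGCONF line mirrors the definition the text describes (on a real node you would source /etc/cmcluster.conf itself):

```shell
# Sketch: resolve a $SGCONF-prefixed filename by sourcing a
# cmcluster.conf-style file. /tmp/demo_cmcluster.conf is a stand-in for
# /etc/cmcluster.conf, created here only so the example is self-contained.
cat > /tmp/demo_cmcluster.conf <<'EOF'
SGCONF=/etc/cmcluster
EOF
. /tmp/demo_cmcluster.conf
echo "${SGCONF}/cmclconfig"     # prints /etc/cmcluster/cmclconfig
```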

NOTE: Do not edit the /etc/cmcluster.conf configuration file.

Configuring Root-Level Access

The subsections that follow explain how to set up HP-UX root access between the nodes in the prospective cluster. (When you proceed to configuring the cluster, you will define various levels of non-root access as well; see “Controlling Access to the Cluster”.)

NOTE: For more information and advice, see the white paper Securing Serviceguard at http://docs.hp.com -> High Availability -> Serviceguard -> White Papers.

Allowing Root Access to an Unconfigured Node

To enable a system to be included in a cluster, you must enable HP-UX root access to the system by the root user of every other potential cluster node. The Serviceguard mechanism for doing this is the file $SGCONF/cmclnodelist. This is sometimes referred to as a “bootstrap” file because Serviceguard consults it only when configuring a node into a cluster for the first time; it is ignored after that. It does not exist by default; you must create it.

You may want to add a comment such as the following at the top of the file:


# Do not edit this file!

# Serviceguard uses this file only to authorize access to an

# unconfigured node. Once the node is configured,

# Serviceguard will not consult this file.


The format for entries in cmclnodelist is as follows:

[hostname] [user] [#Comment]

For example:

gryf  root     #cluster1, node1
sly   root     #cluster1, node2
bit   root     #cluster1, node3

This example grants root access to the node on which this cmclnodelist file resides to root users on the nodes gryf, sly, and bit.

Serviceguard also accepts the use of a “+” in the cmclnodelist file; this indicates that the root user on any Serviceguard node can configure Serviceguard on this node.
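Putting the pieces above together, the example cmclnodelist could be created with a few shell commands. This is a sketch: SGCONF is set to a scratch directory so it runs anywhere, whereas on a real node SGCONF comes from /etc/cmcluster.conf (typically /etc/cmcluster), and the node names are the gryf/sly/bit examples from this section:

```shell
# Sketch: build the cmclnodelist contents shown above.
# SGCONF is a scratch directory here for illustration only; on a cluster
# node, source /etc/cmcluster.conf to get the real location.
SGCONF=$(mktemp -d)
cat > "$SGCONF/cmclnodelist" <<'EOF'
# Do not edit this file!
# Serviceguard uses this file only to authorize access to an
# unconfigured node. Once the node is configured,
# Serviceguard will not consult this file.
gryf  root     #cluster1, node1
sly   root     #cluster1, node2
bit   root     #cluster1, node3
EOF
```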

IMPORTANT: If $SGCONF/cmclnodelist does not exist, Serviceguard will look at ~/.rhosts. HP strongly recommends that you use cmclnodelist.
NOTE: When you upgrade a cluster from Version A.11.15 or earlier, entries in $SGCONF/cmclnodelist are automatically updated to Access Control Policies in the cluster configuration file. All non-root user-hostname pairs are assigned the role of Monitor.

Ensuring that the Root User on Another Node Is Recognized

The HP-UX root user on any cluster node can configure the cluster. This requires that Serviceguard on one node be able to recognize the root user on another.

Serviceguard uses the identd daemon to verify user names, and, in the case of a root user, verification succeeds only if identd returns the username root. Because identd may return the username for the first match on UID 0, you must check /etc/passwd on each node you intend to configure into the cluster, and ensure that the entry for the root user comes before any other entry with a UID of 0.
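The ordering requirement can be checked mechanically with awk. The sketch below generates a sample passwd-format file so it runs anywhere; on a real node you would run the same awk command against /etc/passwd directly:

```shell
# Sketch: report the first UID 0 entry in a passwd-format file; it must
# be "root" for identd-based root verification to succeed.
# /tmp/demo_passwd is a generated sample, not a real system file.
printf 'root:x:0:3::/:/sbin/sh\ntoor:x:0:3::/:/sbin/sh\n' > /tmp/demo_passwd
first_uid0=$(awk -F: '$3 == 0 { print $1; exit }' /tmp/demo_passwd)
echo "first UID 0 entry: $first_uid0"   # root comes before toor here, so: root
```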

About identd

HP strongly recommends that you use identd for user verification, so you should make sure that each prospective cluster node is configured to run it. identd is usually started by inetd from /etc/inetd.conf.

NOTE: If the -T option to identd is available on your system, you should set it to 120 (-T120); this ensures that a connection inadvertently left open will be closed after two minutes. The identd entry in /etc/inetd.conf should look like this:

auth stream tcp6 wait bin /usr/lbin/identd identd -T120

Check the man page for identd to determine whether the -T option is supported for your version of identd.

(It is possible to disable identd, though HP recommends against doing so. If for some reason you have to disable identd, see “Disabling identd”.)

For more information about identd, see the white paper Securing Serviceguard at http://docs.hp.com -> High Availability -> Serviceguard -> White Papers, and the identd (1M) manpage.

Configuring Name Resolution

Serviceguard uses the name resolution services built in to HP-UX.

Serviceguard nodes can communicate over any of the cluster’s shared networks, so the network resolution service you are using (such as DNS, NIS, or LDAP) must be able to resolve each of their primary addresses on each of those networks to the primary hostname of the node in question.

In addition, HP recommends that you define name resolution in each node’s /etc/hosts file, rather than rely solely on a service such as DNS. Configure the name service switch to consult the /etc/hosts file before other services. See “Safeguarding against Loss of Name Resolution Services” for instructions.

NOTE: If you are using private IP addresses for communication within the cluster, and these addresses are not known to DNS (or the name resolution service you use) these addresses must be listed in /etc/hosts.

For example, consider a two-node cluster (gryf and sly) with two private subnets and a public subnet. These nodes will be granting access to a non-cluster node (bit) which does not share the private subnets. The /etc/hosts file on both cluster nodes should contain an entry for each node's address on each subnet, for example:

<public IP>       gryf.uksr.hp.com     gryf
<private IP 1>    gryf.uksr.hp.com     gryf
<private IP 2>    gryf.uksr.hp.com     gryf
<public IP>       sly.uksr.hp.com      sly
<private IP 1>    sly.uksr.hp.com      sly
<private IP 2>    sly.uksr.hp.com      sly
<public IP>       bit.uksr.hp.com      bit
NOTE: Serviceguard recognizes only the hostname (the first element) in a fully qualified domain name (a name with four elements separated by periods, like those in the example above). This means, for example, that gryf.uksr.hp.com and gryf.cup.hp.com cannot be nodes in the same cluster, as Serviceguard would see them as the same host gryf.
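The comparison described in the note can be illustrated in the shell: only the first dot-separated element of each name matters (a sketch of the rule, not Serviceguard's actual code):

```shell
# Extract the element Serviceguard compares: everything before the first dot.
short_name() {
    echo "$1" | cut -d. -f1
}
a=$(short_name gryf.uksr.hp.com)
b=$(short_name gryf.cup.hp.com)
echo "$a $b"    # both are "gryf", so these cannot be nodes in the same cluster
```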

If applications require the use of hostname aliases, the Serviceguard hostname must be one of the aliases. For example:

<public IP>       gryf.uksr.hp.com     gryf    node1
<private IP 1>    gryf2.uksr.hp.com    gryf
<private IP 2>    gryf3.uksr.hp.com    gryf
<public IP>       sly.uksr.hp.com      sly     node2
<private IP 1>    sly2.uksr.hp.com     sly
<private IP 2>    sly3.uksr.hp.com     sly

Safeguarding against Loss of Name Resolution Services

When you employ any user-level Serviceguard command (including cmviewcl), the command uses the name service you have configured (such as DNS) to obtain the addresses of all the cluster nodes. If the name service is not available, the command could hang or return an unexpected networking error message.

NOTE: If such a hang or error occurs, Serviceguard and all protected applications will continue working even though the command you issued does not. That is, only the Serviceguard configuration commands (and corresponding Serviceguard Manager functions) are affected, not the cluster daemon or package services.

The procedure that follows shows how to create a robust name-resolution configuration that will allow cluster nodes to continue communicating with one another if a name service fails. If a standby LAN is configured, this approach also allows the cluster to continue to function fully (including commands such as cmrunnode and cmruncl) after the primary LAN has failed.

NOTE: If a NIC fails, the affected node will be able to fail over to a standby LAN so long as the node is running in the cluster. But if a NIC that is used by Serviceguard fails when the affected node is not running in the cluster, Serviceguard will not be able to restart the node. (For instructions on replacing a failed NIC, see “Replacing LAN or Fibre Channel Cards”.)
  1. Edit the /etc/hosts file on all nodes in the cluster. Add name resolution for all heartbeat IP addresses, and other IP addresses from all the cluster nodes; see “Configuring Name Resolution” for discussion and examples.

    NOTE: For each cluster node, the public-network IP address must be the first address listed. This enables other applications to talk to other nodes on public networks.
  2. If you are using DNS, make sure your name servers are configured in /etc/resolv.conf, for example:

    domain cup.hp.com

    search cup.hp.com hp.com



  3. Edit or create the /etc/nsswitch.conf file on all nodes and add the following text (on one line), if it does not already exist:

    • for DNS, enter (one line):

      hosts: files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]
    • for NIS, enter (one line):

      hosts: files [NOTFOUND=continue UNAVAIL=continue] nis [NOTFOUND=return UNAVAIL=return]

    If a line beginning with the string “hosts:” already exists, make sure that the text immediately to the right of this string is (on one line):

    files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]

    or, for NIS:

    files [NOTFOUND=continue UNAVAIL=continue] nis [NOTFOUND=return UNAVAIL=return]

    This step is critical, allowing the cluster nodes to resolve hostnames to IP addresses while DNS, NIS, or the primary LAN is down.

  4. Create a $SGCONF/cmclnodelist file on all nodes that you intend to configure into the cluster, and allow access by all cluster nodes. See “Allowing Root Access to an Unconfigured Node”.
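As a sanity check on step 3, a small filter can confirm that the hosts entry consults files before any other service. This is a sketch (it is not a Serviceguard tool); it reads an nsswitch.conf-style file on standard input:

```shell
# Print "yes" if the hosts: line lists "files" as the first source, else "no".
nsswitch_files_first() {
    awk '/^hosts:/ { print ($2 == "files" ? "yes" : "no"); found = 1 }
         END { if (!found) print "no" }'
}
# Demo input mirroring the DNS example above.
echo 'hosts: files [NOTFOUND=continue UNAVAIL=continue] dns [NOTFOUND=return UNAVAIL=return]' \
    | nsswitch_files_first     # prints yes
```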

NOTE: HP recommends that you also make the name service itself highly available, either by using multiple name servers or by configuring the name service into a Serviceguard package.

Ensuring Consistency of Kernel Configuration

Make sure that the kernel configurations of all cluster nodes are consistent with the expected behavior of the cluster during failover. In particular, if you change any kernel parameters on one cluster node, they may also need to be changed on other cluster nodes that can run the same packages.

Enabling the Network Time Protocol

HP strongly recommends that you enable network time protocol (NTP) services on each node in the cluster. The use of NTP, which runs as a daemon process on each system, ensures that the system time on all nodes is consistent, resulting in consistent timestamps in log files and consistent behavior of message services. This ensures that applications running in the cluster are correctly synchronized. The NTP services daemon, xntpd, should be running on all nodes before you begin cluster configuration. The NTP configuration file is /etc/ntp.conf.
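A minimal /etc/ntp.conf for a cluster node might look like the following. This is a sketch: the server names are hypothetical placeholders, not values from this manual, and should be replaced with your site's time sources.

```
# /etc/ntp.conf -- minimal sketch; timehost1/timehost2 are placeholders.
server timehost1.example.com
server timehost2.example.com
driftfile /etc/ntp.drift
```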

For information about configuring NTP services, refer to the HP-UX manual HP-UX Internet Services Administrator’s Guide posted at http://docs.hp.com -> Networking and Communication -> Internet Services.

Tuning Network and Kernel Parameters

Serviceguard and its extension products, such as SGeSAP, SGeRAC, and SGeFF, have been tested with default values of the supported network and kernel parameters in the ndd and kmtune utilities.

Adjust these parameters with care.

If you experience problems, return the parameters to their default values. When contacting HP support for any issues regarding Serviceguard and networking, please be sure to mention any parameters that were changed from the defaults.

Third-party applications that are running in a Serviceguard environment may require tuning of network and kernel parameters:

  • ndd is the network tuning utility. For more information, see the man page for ndd(1M)

  • kmtune is the system tuning utility. For more information, see the man page for kmtune(1M).

Serviceguard has also been tested with non-default values for these two network parameters:

  • ip6_nd_dad_solicit_count - This network parameter enables the Duplicate Address Detection feature for IPv6 address. For more information, see “IPv6 Relocatable Address and Duplicate Address Detection Feature” of this manual.

  • tcp_keepalive_interval - This network parameter controls the length of time the node will allow an unused network socket to exist before reclaiming its resources so they can be reused.

    The following requirements must be met:

    • The maximum value for tcp_keepalive_interval is 7200000 (2 hours, the HP-UX default value).

    • The minimum value for tcp_keepalive_interval is 60000 (60 seconds).

    • The tcp_keepalive_interval value must be set on a node before Serviceguard is started on that node. This can be done by configuring the new tcp_keepalive_interval in the /etc/rc.config.d/nddconf file, which will automatically set any ndd parameters at system boot time.

    • The tcp_keepalive_interval value must be the same for all nodes in the cluster.
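Following the boot-time approach described above, the /etc/rc.config.d/nddconf entry might look like this. This is a sketch: 600000 ms is an example value within the permitted 60000-7200000 range, and the index [0] assumes no other entries already exist in the file.

```
# /etc/rc.config.d/nddconf -- set tcp_keepalive_interval at boot (sketch).
TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_keepalive_interval
NDD_VALUE[0]=600000
```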

Creating Mirrors of Root Logical Volumes

HP strongly recommends that you use mirrored root volumes on all cluster nodes. The following procedure assumes that you are using separate boot and root volumes; you create a mirror of the boot volume (/dev/vg00/lvol1), primary swap (/dev/vg00/lvol2), and root volume (/dev/vg00/lvol3). In this example and in the following commands, /dev/dsk/c4t5d0 is the primary disk and /dev/dsk/c4t6d0 is the mirror; be sure to use the correct device file names for the root disks on your system.

NOTE: Under agile addressing, the physical devices in these examples would have names such as /dev/[r]disk/disk1, and /dev/[r]disk/disk2. See “About Device File Names (Device Special Files)”.
  1. Create a bootable LVM disk to be used for the mirror.

     pvcreate -B /dev/rdsk/c4t6d0 
  2. Add this disk to the current root volume group.

     vgextend /dev/vg00 /dev/dsk/c4t6d0 
  3. Make the new disk a boot disk.

     mkboot -l /dev/rdsk/c4t6d0  
  4. Mirror the boot, primary swap, and root logical volumes to the new bootable disk. Ensure that all devices in vg00, such as /usr, /swap, etc., are mirrored.

    NOTE: The boot, root, and swap logical volumes must be mirrored in exactly the following order to ensure that the boot volume occupies the first contiguous set of extents on the new disk, followed by the swap and the root.

    The following is an example of mirroring the boot logical volume:

     lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c4t6d0 

    The following is an example of mirroring the primary swap logical volume:

     lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c4t6d0 

    The following is an example of mirroring the root logical volume:

     lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c4t6d0 
  5. Update the boot information contained in the BDRA for the mirror copies of boot, root and primary swap.

     /usr/sbin/lvlnboot -b /dev/vg00/lvol1
     /usr/sbin/lvlnboot -s /dev/vg00/lvol2
     /usr/sbin/lvlnboot -r /dev/vg00/lvol3 
  6. Verify that the mirrors were properly created.

     lvlnboot -v

    The output of this command is shown in a display like the following:

    Boot Definitions for Volume Group /dev/vg00:
    Physical Volumes belonging in Root Volume Group:
             /dev/dsk/c4t5d0 (10/0.5.0) -- Boot Disk
             /dev/dsk/c4t6d0 (10/0.6.0) -- Boot Disk
    Boot:  lvol1    on:      /dev/dsk/c4t5d0
    Root:  lvol3    on:      /dev/dsk/c4t5d0
    Swap:  lvol2    on:      /dev/dsk/c4t5d0
    Dump:  lvol2    on:      /dev/dsk/c4t6d0, 0

Choosing Cluster Lock Disks

The following guidelines apply if you are using a lock disk. See “Cluster Lock” and “Cluster Lock Planning” for discussion of cluster lock options.

The cluster lock disk is configured on an LVM volume group that is physically connected to all cluster nodes. This volume group may also contain data that is used by packages.

When you are using dual cluster lock disks, you must use the default I/O timeout values for the cluster lock physical volumes. Changing these values can prevent the nodes in the cluster from detecting a failed lock disk within the allotted time period, which in turn can prevent cluster re-formation from succeeding. To view the existing I/O timeout value, run the following command:

pvdisplay <lock device file name>

The I/O timeout value should be displayed as “default.” To set the I/O timeout back to the default value, run the command:

pvchange -t 0 <lock device file name>

The use of a dual cluster lock is only allowed with certain specific configurations of hardware. Refer to the discussion in Chapter 3 on “Dual Cluster Lock.” For instructions on setting up a lock disk, see “Specifying a Lock Disk”.

Backing Up Cluster Lock Disk Information

After you configure the cluster and create the cluster lock volume group and physical volume, you should create a backup of the volume group configuration data on each lock volume group. Use the vgcfgbackup command for each lock volume group you have configured, and save the backup file in case the lock configuration must be restored to a new disk with the vgcfgrestore command following a disk failure.

NOTE: You must use the vgcfgbackup and vgcfgrestore commands to back up and restore the lock volume group configuration data regardless of how you create the lock volume group.

Setting Up a Lock LUN

LUN stands for Logical Unit Number. The term can refer to a single physical disk, but these days is more often used in a SAN (Storage Area Network) or NAS (Network-Attached Storage) context to denote a virtual entity derived from one or more physical disks.

Keep the following points in mind when choosing a device for a lock LUN:

  • All the cluster nodes must be physically connected to the lock LUN.

  • All existing data on the LUN will be destroyed when you configure it as a lock LUN.

    This means that if you use an existing lock disk, the existing lock information will be lost, and if you use a LUN that was previously used as a lock LUN for a Linux cluster, that lock information will also be lost.

  • A lock LUN cannot also be used in an LVM physical volume or VxVM or CVM disk group.

  • A lock LUN cannot be shared by more than one cluster.

  • A lock LUN cannot be used in a dual-lock configuration.

  • You do not need to back up the lock LUN data, and in fact there is no way to do so.

A lock LUN needs only a small amount of storage, about 100 KB.

  • If you are using a disk array, create the smallest LUN the array will allow, or, on an HP Integrity server, you can partition a LUN; see “Creating a Disk Partition on an HP Integrity System”.

  • If you are using individual disks, use either a small disk, or a portion of a disk. On an HP Integrity server, you can partition a disk; see “Creating a Disk Partition on an HP Integrity System”.

    IMPORTANT: On HP 9000 systems, there is no means of partitioning a disk or LUN, so you will need to dedicate an entire small disk or LUN for the lock LUN. This means that in a mixed cluster containing both Integrity and HP-PA systems, you must also use an entire disk or LUN; if you partition the device as described below, the HP-PA nodes will not be able to see the partitions.

Creating a Disk Partition on an HP Integrity System

You can use the idisk utility to create a partition for a lock LUN in a cluster that will contain only HP Integrity servers. Use the procedure that follows; see the idisk (1m) manpage for more information. Do this on one of the nodes in the cluster that will use this lock LUN.

CAUTION: Before you start, make sure the disk or LUN that is to be partitioned has no data on it that you need. idisk will destroy any existing data.
  1. Use a text editor to create a file that contains the partition information. You need to create at least three partitions; for example:

    3
    EFI 100MB
    HPUX 1MB
    HPUX 100%

    This defines:

    • A 100 MB EFI (Extensible Firmware Interface) partition (this is required)

    • A 1 MB partition that can be used for the lock LUN

    • A third partition that consumes the remainder of the disk and can be used for whatever purpose you like.

  2. Save the file; for example you might call it partition.txt.

  3. Create the partition; for example (using partition.txt as input):

    /usr/sbin/idisk -w -p -f partition.txt /dev/rdsk/c1t4d0

    Or, on an HP-UX 11i v3 system using agile addressing (see “About Device File Names (Device Special Files)”):

    /usr/sbin/idisk -w -p -f partition.txt /dev/rdisk/disk12

    This will create three device files, for example:

    /dev/dsk/c1t4d0s1, /dev/dsk/c1t4d0s2, and /dev/dsk/c1t4d0s3

    or, under agile addressing:

    /dev/disk/disk12_p1, /dev/disk/disk12_p2, and /dev/disk/disk12_p3

    NOTE: The first partition, identified by the device file /dev/dsk/c1t4d0s1 or /dev/disk/disk12_p1 in this example, is reserved by EFI and cannot be used for any other purpose.
  4. Create the device files on the other cluster nodes.

    Use the command insf -e on each node. This will create device files corresponding to the three partitions, though the names themselves may differ from node to node depending on each node’s I/O configuration.

  5. Define the lock LUN; see “Defining the Lock LUN”.

Defining the Lock LUN

Use cmquerycl -L to create a cluster configuration file that defines the lock LUN.

  • If the pathname for the lock LUN is the same on all nodes, use a command such as:

    cmquerycl -C $SGCONF/config.ascii -L /dev/dsk/c0t1d1 -n <node1> -n <node2>

  • If the pathname for the lock LUN is different on some nodes, you must specify the path on each node; for example (all on one line):

    cmquerycl -C $SGCONF/config.ascii -n <node1> -L /dev/dsk/c0t1d1 -n <node2> -L /dev/dsk/c0t1d2

These commands create a configuration file which you can apply to the cluster configuration when you are ready to do so; see “Distributing the Binary Configuration File ”. See also “Specifying a Lock LUN”.

CAUTION: Once you have specified the lock LUN in the cluster configuration file, running cmapplyconf will destroy any data on the LUN.

Setting Up and Running the Quorum Server

If you will be using a quorum server rather than a lock disk or LUN, the Quorum Server software must be installed on a system other than the nodes on which your cluster will be running, and must be running during cluster configuration.

For detailed discussion, recommendations, and instructions for installing, updating, configuring, and running the Quorum Server, see HP Serviceguard Quorum Server Version A.03.00 Release Notes at http://www.docs.hp.com -> High Availability -> Quorum Server.

Creating the Storage Infrastructure and Filesystems with LVM and VxVM

In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This can be done with Logical Volume Manager (LVM) or Veritas Volume Manager (VxVM). You can also use a mixture of volume types, depending on your needs.

NOTE: If you are configuring volume groups that use mass storage on HP’s HA disk arrays, you should use redundant I/O channels from each node, connecting them to separate ports on the array. As of HP-UX 11i v3, the I/O subsystem performs load balancing and multipathing automatically.

Creating a Storage Infrastructure with LVM

This section describes storage configuration with LVM. It includes procedures for the following:

  • Creating Volume Groups for Mirrored Individual Disks

  • Distributing Volume Groups to Other Nodes

The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using High Availability Monitors (http://docs.hp.com -> High Availability -> Event Monitoring Service and HA Monitors -> Installation and User’s Guide).

Creating Volume Groups for Mirrored Individual Data Disks

The procedure described in this section uses physical volume groups for mirroring of individual disks to ensure that each logical volume is mirrored to a disk on a different I/O bus. This kind of arrangement is known as PVG-strict mirroring. It is assumed that your disk hardware is already configured in such a way that a disk to be used as a mirror copy is connected to each node on a different bus from the bus that is used for the other (primary) copy.

For more information on using LVM, refer to the Logical Volume Management volume of the HP-UX System Administrator’s Guide.

You can use the System Management Homepage to create or extend volume groups and create logical volumes. From the System Management Homepage, choose Disks and File Systems. Make sure you create mirrored logical volumes with PVG-strict allocation.

When you have created the logical volumes and created or extended the volume groups, specify the filesystem that is to be mounted on the volume group, then skip ahead to the section “Deactivating the Volume Group”.

To configure the volume groups from the command line, proceed as follows.

If your volume groups have not been set up, use the procedures that follow. If you have already done LVM configuration, skip ahead to the section “Configuring the Cluster.”

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:

 lssf /dev/d*/*  

In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.

NOTE: Under agile addressing, the physical devices in these examples would have names such as /dev/rdisk/disk1 and /dev/rdisk/disk2. See “About Device File Names (Device Special Files)”.

On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:

 pvcreate -f /dev/rdsk/c1t2d0 
 pvcreate -f /dev/rdsk/c0t2d0 

Using PVG-Strict Mirroring Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.

  1. First, set up the group directory for vgdatabase:

     mkdir /dev/vgdatabase 
  2. Next, create a control file named group in the directory /dev/vgdatabase, as follows:

     mknod /dev/vgdatabase/group c 64 0xhh0000 

    The major number is always 64, and the hexadecimal minor number has the form:

    0xhh0000

    where hh must be unique to the volume group you are creating. Use a unique minor number that is available across all the nodes for the mknod command above. (This will avoid further reconfiguration later, when NFS-mounted logical volumes are created in the VG.)

    Use the following command to display a list of existing volume groups:

     ls -l /dev/*/group 
  3. Create the volume group and add physical volumes to it with the following commands:

     vgcreate -g bus0 /dev/vgdatabase /dev/dsk/c1t2d0 
     vgextend -g bus1 /dev/vgdatabase /dev/dsk/c0t2d0  

    The first command creates the volume group and adds a physical volume to it in a physical volume group called bus0. The second command adds the second drive to the volume group, locating it in a different physical volume group named bus1. The use of physical volume groups allows the use of PVG-strict mirroring of disks.

  4. Repeat this procedure for additional volume groups.
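The unique minor number required in step 2 can be chosen by inspecting the group files that already exist. The sketch below parses `ls -l` output supplied on standard input; it assumes the HP-UX character-device listing format, where major number 64 appears in the fifth column and the hexadecimal minor number in the sixth:

```shell
# Print the minor numbers already used by LVM group files, so you can pick
# an unused 0xhh0000 value. Reads `ls -l /dev/*/group` output on stdin.
used_minors() {
    awk '$5 == 64 { print $6 }'
}
# Demo input in the format described above (sample lines, not live devices).
printf 'crw-r--r-- 1 root sys 64 0x000000 Jan 1 00:00 /dev/vg00/group\ncrw-r--r-- 1 root sys 64 0x010000 Jan 1 00:00 /dev/vgdatabase/group\n' \
    | used_minors
```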

Creating Logical Volumes Use the following command to create logical volumes (the example is for /dev/vgdatabase):

 lvcreate -L 120 -m 1 -s g /dev/vgdatabase 

This command creates a 120 MB mirrored volume named lvol1. The name is supplied by default, since no name is specified in the command. The -s g option means that mirroring is PVG-strict, that is, the mirror copies of data will be in different physical volume groups.

NOTE: If you are using disk arrays in RAID 1 or RAID 5 mode, omit the -m 1 and -s g options.

Creating File Systems If your installation uses filesystems, create them next. Use the following commands to create a filesystem for mounting on the logical volume just created:

  1. Create the filesystem on the newly created logical volume:

     newfs -F vxfs /dev/vgdatabase/rlvol1 

    Note the use of the raw device file for the logical volume.

  2. Create a directory to mount the disk:

     mkdir /mnt1 
  3. Mount the disk to verify your work:

     mount /dev/vgdatabase/lvol1 /mnt1 

    Note the mount command uses the block device file for the logical volume.

  4. Verify the configuration:

     vgdisplay -v /dev/vgdatabase 

Distributing Volume Groups to Other Nodes After creating volume groups for cluster data, you must make them available to any cluster node that will need to activate the volume group. The cluster lock volume group must be made available to all nodes.

Deactivating the Volume Group At the time you create the volume group, it is active on the configuration node (ftsys9, for example). The next step is to unmount the file system and deactivate the volume group; for example, on ftsys9:

 umount /mnt1 
 vgchange -a n /dev/vgdatabase
NOTE: Do this during this set-up process only, so that activation and mounting can be done by the package control script at run time. You do not need to deactivate and unmount a volume simply in order to create a map file (as in step 1 of the procedure that follows).

Distributing the Volume Group Use the following commands to set up the same volume group on another cluster node. In this example, the commands set up a new volume group on ftsys10 which will hold the same physical volume that was available on ftsys9. You must carry out the same procedure separately for each node on which the volume group's package can run.

To set up the volume group on ftsys10, use the following steps:

  1. On ftsys9, copy the mapping of the volume group to a specified file.

     vgexport -p -s -m /tmp/vgdatabase.map /dev/vgdatabase 
  2. Still on ftsys9, copy the map file to ftsys10:

     rcp /tmp/vgdatabase.map ftsys10:/tmp/vgdatabase.map 
  3. On ftsys10, create the volume group directory:

     mkdir /dev/vgdatabase 
  4. Still on ftsys10, create a control file named group in the directory /dev/vgdatabase, as follows:

     mknod /dev/vgdatabase/group c 64 0xhh0000 

    Use the same minor number as on ftsys9. Use the following command to display a list of existing volume groups:

     ls -l /dev/*/group 
  5. Import the volume group data using the map file from node ftsys9. On node ftsys10, enter:

     vgimport -s -m /tmp/vgdatabase.map /dev/vgdatabase  

    Note that the disk device names on ftsys10 may be different from their names on ftsys9. Make sure the physical volume names are correct throughout the cluster.

    When the volume group can be activated on this node, perform a vgcfgbackup. (This backup will be available in the unlikely event that a vgcfgrestore must be performed on this node because of a disaster on the primary node and an LVM problem with the volume group.) Do this as shown in the example below:

    vgchange -a y /dev/vgdatabase
    vgcfgbackup /dev/vgdatabase
    vgchange -a n /dev/vgdatabase

  6. If you are using mirrored individual disks in physical volume groups, check the /etc/lvmpvg file to ensure that each physical volume group contains the correct physical volume names for ftsys10.

    NOTE: When you use PVG-strict mirroring, the physical volume group configuration is recorded in the /etc/lvmpvg file on the configuration node. This file defines the physical volume groups which are the basis of mirroring and indicates which physical volumes belong to each physical volume group. Note that on each cluster node, the /etc/lvmpvg file must contain the correct physical volume names for each physical volume group's disks as they are known on that node. Physical volume names for the same disks could be different on different nodes. After distributing volume groups to other nodes, make sure each node's /etc/lvmpvg file correctly reflects the contents of all physical volume groups on that node. See the following section, “Making Physical Volume Group Files Consistent.”
  7. Make sure that you have deactivated the volume group on ftsys9. Then enable the volume group on ftsys10:

     vgchange -a y /dev/vgdatabase 
  8. Create a directory to mount the disk:

     mkdir /mnt1 
  9. Mount and verify the volume group on ftsys10:

     mount /dev/vgdatabase/lvol1 /mnt1 
  10. Unmount the file system on ftsys10:

     umount /mnt1
  11. Deactivate the volume group on ftsys10:

     vgchange -a n /dev/vgdatabase 

Making Physical Volume Group Files Consistent

Skip ahead to the next section if you do not use physical volume groups for mirrored individual disks in your disk configuration.

Different volume groups may be activated by different subsets of nodes within a Serviceguard cluster. In addition, the physical volume name for any given disk may be different on one node from what it is on another. For these reasons, you must carefully merge the /etc/lvmpvg files on all nodes so that each node has a complete and consistent view of all cluster-aware disks as well as of its own private (non-cluster-aware) disks. To make merging the files easier, be sure to keep a careful record of the physical volume group names on the volume group planning worksheet (described in Chapter 4 “Planning and Documenting an HA Cluster ”).

Use the following procedure to merge files between the configuration node (ftsys9) and a new node (ftsys10) to which you are importing volume groups:

  1. Copy /etc/lvmpvg from ftsys9 to /etc/lvmpvg.new on ftsys10.

  2. If there are volume groups in /etc/lvmpvg.new that do not exist on ftsys10, remove all entries for those volume groups from /etc/lvmpvg.new.

  3. If /etc/lvmpvg on ftsys10 contains entries for volume groups that do not appear in /etc/lvmpvg.new, copy all physical volume group entries for those volume groups to /etc/lvmpvg.new.

  4. Adjust any physical volume names in /etc/lvmpvg.new to reflect their correct names on ftsys10.

  5. On ftsys10, copy /etc/lvmpvg to /etc/lvmpvg.old to create a backup. Then copy /etc/lvmpvg.new to /etc/lvmpvg on ftsys10.
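For step 2, a small filter can strip every entry belonging to a volume group that does not exist on the target node. This is a hypothetical helper, not part of the manual's procedure; it assumes the standard lvmpvg layout in which each "VG" line starts a block that runs until the next "VG" line, and the group name /dev/vgprivate is an example. For illustration, the sketch builds a small sample working copy in /tmp; on a real node the file would be /etc/lvmpvg.new from step 1.

```shell
# Hypothetical helper for step 2: drop every block belonging to one
# volume group from the working copy of the lvmpvg file.
# Layout assumption: a "VG" line starts a block that extends to the
# next "VG" line.
file=/tmp/lvmpvg.new          # on a real node: /etc/lvmpvg.new
printf '%s\n' \
    'VG /dev/vgdatabase' 'PVG bus0' '/dev/dsk/c1t2d0' \
    'VG /dev/vgprivate'  'PVG bus1' '/dev/dsk/c2t0d0' > "$file"

vg_to_drop=/dev/vgprivate     # example name; substitute the group to remove

# Set the "drop" flag on each VG line; suppress all lines while it is set.
awk -v vg="$vg_to_drop" '$1 == "VG" { drop = ($2 == vg) } !drop' \
    "$file" > "$file.tmp" && mv "$file.tmp" "$file"

cat "$file"   # only the /dev/vgdatabase block remains
```

After the filter runs, adjust the remaining physical volume names as described in step 4.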

Creating Additional Volume Groups

The foregoing sections show in general how to create volume groups and logical volumes for use with Serviceguard. Repeat the procedure for as many volume groups as you need to create, substituting other volume group names, logical volume names, and physical volume names. Pay close attention to the disk device names, which can vary from one node to another.

Creating a Storage Infrastructure with VxVM

In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM). You can also use a mixture of volume types, depending on your needs. LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration.

For a discussion of migration from LVM to VxVM storage, refer to Appendix G.

This section shows how to configure new storage using the command set of the Veritas Volume Manager (VxVM). Once you have created the root disk group (described next), you can use VxVM commands or the Storage Administrator GUI, VEA, to carry out configuration tasks. For more information, see the Veritas Volume Manager documentation posted at http://docs.hp.com -> 11i v3 -> VxVM (or -> 11i v2 -> VxVM, depending on your HP-UX version).

Initializing the Veritas Cluster Volume Manager 3.5

NOTE: Check the Serviceguard, SGeRAC, and SMS Compatibility and Feature Matrix and the latest Release Notes for your version of Serviceguard for up-to-date information about support for CVM (and CFS - Cluster File System): http://www.docs.hp.com -> High Availability -> Serviceguard.

If you are using CVM 3.5 and you are about to create disk groups for the first time, you need to initialize the Volume Manager. This is done by creating a disk group known as rootdg that contains at least one disk. Use the following command once only, immediately after installing VxVM on each node:

vxinstall

This displays a menu-driven program that steps you through the VxVM initialization sequence. From the main menu, choose the “Custom” option, and specify the disk you wish to include in rootdg.

IMPORTANT: The rootdg for the Veritas Cluster Volume Manager 3.5 is not the same as the HP-UX root disk if an LVM volume group is used for the HP-UX root disk filesystem. Note also that rootdg cannot be used for shared storage. However, rootdg can be used for other local filesystems (e.g., /export/home), so it need not be wasted. (CVM 4.1 and later do not require you to create rootdg.)

Note that you should create a rootdg disk group only once on each node.

Converting Disks from LVM to VxVM

You can use the vxvmconvert(1m) utility to convert LVM volume groups into VxVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. Follow the conversion procedures outlined in the Veritas Volume Manager Migration Guide for your version of VxVM. Before you start, be sure to create a backup of each volume group's configuration with the vgcfgbackup command, and make a backup of the data in the volume group. See Appendix G “Migrating from LVM to VxVM Data Storage ” for more information about conversion.

Initializing Disks for VxVM

You need to initialize the physical disks that will be employed in VxVM disk groups. To initialize a disk, log on to one node in the cluster, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

/usr/lib/vxvm/bin/vxdisksetup -i c0t3d2

Initializing Disks Previously Used by LVM

If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group. In addition, if the LVM disk was previously used in a cluster, you have to re-initialize the disk with the pvcreate -f command to remove the cluster ID from the disk.

NOTE: These commands make the disk and its data unusable by LVM, and allow it to be initialized by VxVM. (The commands should only be used if you have previously used the disk with LVM and do not want to save the data on it.)

You can remove LVM header data from the disk as in the following example (note that all data on the disk will be erased):

pvremove /dev/rdsk/c0t3d2

Then, use the vxdiskadm program to initialize multiple disks for VxVM, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

/usr/lib/vxvm/bin/vxdisksetup -i c0t3d2

Creating Disk Groups

Use vxdiskadm, or use the vxdg command, to create disk groups, as in the following example:

vxdg init logdata c0t3d2

Verify the configuration with the following command:

vxdg list

NAME         STATE                  ID

rootdg        enabled             971995699.1025.node1
logdata       enabled             972078742.1084.node1

Creating Volumes

Use the vxassist command to create logical volumes. The following is an example:

vxassist -g logdata make log_files 1024m

This command creates a 1024 MB volume named log_files in a disk group named logdata. The volume can be referenced with the block device file /dev/vx/dsk/logdata/log_files or the raw (character) device file /dev/vx/rdsk/logdata/log_files. Verify the configuration with the following command:

vxprint -g logdata

The output of this command is shown in the following example:


v   logdata    fsgen        ENABLED  1024000          ACTIVE
pl  logdata-01 system       ENABLED  1024000          ACTIVE
NOTE: The specific commands for creating mirrored and multi-path storage using VxVM are described in the Veritas Volume Manager Reference Guide.

Creating File Systems

If your installation uses filesystems, create them next. Use the following commands to create a filesystem for mounting on the logical volume just created:

  1. Create the filesystem on the newly created volume:

    newfs -F vxfs /dev/vx/rdsk/logdata/log_files

  2. Create a directory to mount the volume:

    mkdir /logs

  3. Mount the volume:

    mount /dev/vx/dsk/logdata/log_files /logs

  4. Check to make sure the filesystem is present, then unmount the filesystem:

    umount /logs
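For step 4, the manual does not show how to check that the filesystem is present; one common way on HP-UX is to run bdf (the HP-UX counterpart of df) against the mount point before unmounting:

```shell
# Confirm the new filesystem is mounted at /logs, then unmount it.
bdf /logs
umount /logs
```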

Deporting Disk Groups

After creating the disk groups that are to be used by Serviceguard packages, use the following command with each disk group so that it can later be imported by the package control script on other cluster nodes:

vxdg deport <DiskGroupName>

where <DiskGroupName> is the name of the disk group that will be activated by the control script.

When all disk groups have been deported, you must issue the following command on all cluster nodes to allow them to access the disk groups:

vxdctl enable
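Using the example disk group created earlier in this section, the two commands might be run as follows (logdata is the example name from this section):

```shell
# On the node where the disk group was created: release it so a
# package control script can import it on another cluster node.
vxdg deport logdata

# On every cluster node: refresh the VxVM daemon's view of the disks.
vxdctl enable
```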

Re-Importing Disk Groups

After disk groups are deported, they are not available for use on the node until they are imported again, either by a package control script or with a vxdg import command. If you need to import a disk group manually for maintenance or other purposes, import it, start up all its logical volumes, and mount filesystems, as in the following example:

vxdg import dg_01

vxvol -g dg_01 startall

mount /dev/vx/dsk/dg_01/myvol /mountpoint

NOTE: Unlike LVM volume groups, VxVM disk groups are not entered in the cluster configuration file, nor in the package configuration file.

Clearimport at System Reboot Time

At system reboot time, the cmcluster RC script does a vxdisk clearimport on all disks formerly imported by the system, provided they have the noautoimport flag set, and provided they are not currently imported by another running node. The clearimport clears the host ID on the disk group, to allow any node that is connected to the disk group to import it when the package moves from one node to another.

Using the clearimport at reboot time allows Serviceguard to clean up following a node failure, for example, a system crash during a power failure. Disks that were imported at the time of the failure still have the node’s ID written on them, and this ID must be cleared before the rebooting node or any other node can import them with a package control script.

Note that the clearimport is done for disks previously imported with noautoimport set on any system that has Serviceguard installed, whether it is configured in a cluster or not.

© Hewlett-Packard Development Company, L.P.