Chapter 5: Building an HA Cluster Configuration

Preparing Your Systems
This section describes the tasks that should be done on the prospective cluster nodes before you actually configure the cluster.

For information about installing Serviceguard, see the Release Notes for your version at http://docs.hp.com -> High Availability -> Serviceguard -> Release Notes. For information about installing and updating HP-UX, see the HP-UX Installation and Update Guide for the version you need: go to http://docs.hp.com and choose the HP-UX version from the list under Operating Environments, then choose Installing and Updating.

Appendix E, "Software Upgrades," of this manual provides instructions for upgrading Serviceguard without halting the cluster. Make sure you read the entire appendix, and the corresponding section in the Release Notes, before you begin.

Serviceguard uses a special file, /etc/cmcluster.conf, to define the locations for configuration and log files within the HP-UX filesystem. The file begins with the following header, and the location definitions follow it:

    ################## cmcluster.conf ###############
    # Highly Available Cluster file locations
    # This file must not be edited
    #################################################
Throughout this book, system filenames are usually given with one of these location prefixes. Thus, references to $SGCONF/filename can be resolved by supplying the definition of the prefix that is found in this file. For example, if SGCONF is defined as /etc/cmcluster/, then the complete pathname for file $SGCONF/cmclconfig is /etc/cmcluster/cmclconfig.
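For instance, a script can resolve these prefixes by sourcing the file directly, since the definitions are shell-style variable assignments. The following is a minimal sketch; it assumes, as in the example above, that SGCONF is defined as /etc/cmcluster:

    . /etc/cmcluster.conf            # read the location definitions
    ls -l ${SGCONF}/cmclconfig       # with SGCONF=/etc/cmcluster, this lists /etc/cmcluster/cmclconfig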
The subsections that follow explain how to set up HP-UX root access between the nodes in the prospective cluster. (When you proceed to configuring the cluster, you will define various levels of non-root access as well; see “Controlling Access to the Cluster”.)
To enable a system to be included in a cluster, you must enable HP-UX root access to the system by the root user of every other potential cluster node. The Serviceguard mechanism for doing this is the file $SGCONF/cmclnodelist. This is sometimes referred to as a "bootstrap" file because Serviceguard consults it only when configuring a node into a cluster for the first time; it is ignored after that. It does not exist by default, but you will need to create it.

You may want to add a comment such as the following at the top of the file:

    ###########################################################
    # Do not edit this file!
    # Serviceguard uses this file only to authorize access to an
    # unconfigured node. Once the node is configured,
    # Serviceguard will not consult this file.
    ###########################################################

The format for entries in cmclnodelist is as follows:

    [hostname] [user] [#Comment]
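For example (the original sample entries are not reproduced in this extract; based on the format above and the hostnames used in this section, equivalent entries would look like this):

    gryf    root    # cluster node
    sly     root    # cluster node
    bit     root    # non-cluster node whose root user is granted access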
This example grants the root users on the nodes gryf, sly, and bit root access to the node on which this cmclnodelist file resides. Serviceguard also accepts the use of a "+" in the cmclnodelist file; this indicates that the root user on any Serviceguard node can configure Serviceguard on this node.
The HP-UX root user on any cluster node can configure the cluster. This requires that Serviceguard on one node be able to recognize the root user on another. Serviceguard uses the identd daemon to verify user names, and, in the case of a root user, verification succeeds only if identd returns the username root. Because identd may return the username for the first match on UID 0, you must check /etc/passwd on each node you intend to configure into the cluster, and ensure that the entry for the root user comes before any other entry with a UID of 0. HP strongly recommends that you use identd for user verification, so you should make sure that each prospective cluster node is configured to run it. identd is usually started by inetd from /etc/inetd.conf.
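One quick way to check the ordering is to print all UID-0 entries in the order in which they appear in /etc/passwd; a minimal sketch:

    # "root" should be the first name printed; if not, reorder /etc/passwd
    awk -F: '$3 == 0 { print $1 }' /etc/passwd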
(It is possible to disable identd, though HP recommends against doing so. If for some reason you have to disable identd, see "Disabling identd".) For more information about identd, see the white paper Securing Serviceguard at http://docs.hp.com -> High Availability -> Serviceguard -> White Papers, and the identd (1M) manpage.

Serviceguard uses the name resolution services built in to HP-UX. Serviceguard nodes can communicate over any of the cluster's shared networks, so the network resolution service you are using (such as DNS, NIS, or LDAP) must be able to resolve each of their primary addresses on each of those networks to the primary hostname of the node in question. In addition, HP recommends that you define name resolution in each node's /etc/hosts file, rather than rely solely on a service such as DNS. Configure the name service switch to consult the /etc/hosts file before other services. See "Safeguarding against Loss of Name Resolution Services" for instructions.
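For illustration, an entry along the following lines in /etc/nsswitch.conf causes hostname lookups to try /etc/hosts before DNS; follow the exact entry recommended in "Safeguarding against Loss of Name Resolution Services":

    hosts: files [NOTFOUND=continue UNAVAIL=continue] dns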
For example, consider a two node cluster (gryf and sly) with two private subnets and a public subnet. These nodes will be granting access by a non-cluster node (bit) which does not share the private subnets. The /etc/hosts file on both cluster nodes should contain:
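(The original listing is not reproduced in this extract; an equivalent layout is sketched below with placeholder IP addresses and domain name. Note that bit appears only on the public subnet it shares with the cluster nodes.)

    # public subnet
    192.10.25.12    gryf.example.com    gryf
    192.10.25.13    sly.example.com     sly
    192.10.25.14    bit.example.com     bit
    # first private subnet (cluster nodes only)
    10.10.1.12      gryf.example.com    gryf
    10.10.1.13      sly.example.com     sly
    # second private subnet (cluster nodes only)
    10.10.2.12      gryf.example.com    gryf
    10.10.2.13      sly.example.com     sly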
If applications require the use of hostname aliases, the Serviceguard hostname must be one of the aliases. For example:
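(An illustrative entry follows; the address and the alias dbhost are placeholders. The point is that gryf, the Serviceguard hostname, remains among the names on the line.)

    192.10.25.12    gryf.example.com    gryf    dbhost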
When you employ any user-level Serviceguard command (including cmviewcl), the command uses the name service you have configured (such as DNS) to obtain the addresses of all the cluster nodes. If the name service is not available, the command could hang or return an unexpected networking error message.
The procedure that follows shows how to create a robust name-resolution configuration that will allow cluster nodes to continue communicating with one another if a name service fails. If a standby LAN is configured, this approach also allows the cluster to continue to function fully (including commands such as cmrunnode and cmruncl) after the primary LAN has failed.
Make sure that the kernel configurations of all cluster nodes are consistent with the expected behavior of the cluster during failover. In particular, if you change any kernel parameters on one cluster node, they may also need to be changed on other cluster nodes that can run the same packages.

HP strongly recommends that you enable network time protocol (NTP) services on each node in the cluster. The use of NTP, which runs as a daemon process on each system, ensures that the system time on all nodes is consistent, resulting in consistent timestamps in log files and consistent behavior of message services. This ensures that applications running in the cluster are correctly synchronized. The NTP services daemon, xntpd, should be running on all nodes before you begin cluster configuration. The NTP configuration file is /etc/ntp.conf. For information about configuring NTP services, refer to the HP-UX Internet Services Administrator's Guide posted at http://docs.hp.com -> Networking and Communication -> Internet Services.

Serviceguard and its extension products, such as SGeSAP, SGeRAC, and SGeFF, have been tested with default values of the supported network and kernel parameters in the ndd and kmtune utilities. Adjust these parameters with care. If you experience problems, return the parameters to their default values. When contacting HP support for any issues regarding Serviceguard and networking, be sure to mention any parameters that were changed from the defaults. Third-party applications that are running in a Serviceguard environment may require tuning of network and kernel parameters:
Serviceguard has also been tested with non-default values for these two network parameters:
HP strongly recommends that you use mirrored root volumes on all cluster nodes. The following procedure assumes that you are using separate boot and root volumes; you create a mirror of the boot volume (/dev/vg00/lvol1), primary swap (/dev/vg00/lvol2), and root volume (/dev/vg00/lvol3). In this example and in the following commands, /dev/dsk/c4t5d0 is the primary disk and /dev/dsk/c4t6d0 is the mirror; be sure to use the correct device file names for the root disks on your system.
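The individual commands are not reproduced in this extract; a typical MirrorDisk/UX sequence for this example is sketched below (PA-RISC shown; flags and steps can differ by HP-UX release and architecture, so verify against the mirroring procedure for your system):

    pvcreate -B /dev/rdsk/c4t6d0                     # make the mirror disk a bootable physical volume
    vgextend /dev/vg00 /dev/dsk/c4t6d0               # add it to the root volume group
    mkboot -l /dev/rdsk/c4t6d0                       # install boot utilities on the mirror disk
    lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c4t6d0    # mirror the boot volume
    lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c4t6d0    # mirror primary swap
    lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c4t6d0    # mirror the root volume
    lvlnboot -R /dev/vg00                            # update the boot data reserved area
    lvlnboot -v                                      # verify the mirrored boot configuration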
The following guidelines apply if you are using a lock disk. See "Cluster Lock" and "Cluster Lock Planning" for discussion of cluster lock options.

The cluster lock disk is configured on an LVM volume group that is physically connected to all cluster nodes. This volume group may also contain data that is used by packages.

When you are using dual cluster lock disks, you must use the default I/O timeout values for the cluster lock physical volumes. Changing the I/O timeout values for the cluster lock physical volumes can prevent the nodes in the cluster from detecting a failed lock disk within the allotted time period, which can prevent cluster re-formations from succeeding. To view the existing I/O timeout value, run the following command:

    pvdisplay <lock device file name>

The I/O Timeout value should be displayed as "default." To set the I/O timeout back to the default value, run the command:

    pvchange -t 0 <lock device file name>

The use of a dual cluster lock is allowed only with certain specific configurations of hardware. Refer to the discussion in Chapter 3 on "Dual Cluster Lock." For instructions on setting up a lock disk, see "Specifying a Lock Disk."

After you configure the cluster and create the cluster lock volume group and physical volume, you should create a backup of the volume group configuration data on each lock volume group. Use the vgcfgbackup command for each lock volume group you have configured, and save the backup file in case the lock configuration must be restored to a new disk with the vgcfgrestore command following a disk failure.
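For example, assuming a lock volume group named /dev/vglock (the volume group name and device file below are illustrative):

    vgcfgbackup /dev/vglock                        # saves the configuration to /etc/lvmconf/vglock.conf
    # after replacing a failed lock disk, restore the configuration to the new disk:
    vgcfgrestore -n /dev/vglock /dev/rdsk/c4t5d0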
LUN stands for Logical Unit Number. The term can refer to a single physical disk, but these days is more often used in a SAN (Storage Area Network) or NAS (Network-Attached Storage) context to denote a virtual entity derived from one or more physical disks. Keep the following points in mind when choosing a device for a lock LUN:
A lock LUN needs only a small amount of storage, about 100 KB.
You can use the idisk utility to create a partition for a lock LUN in a cluster that will contain only HP Integrity servers. Use the procedure that follows; see the idisk (1m) manpage for more information. Do this on one of the nodes in the cluster that will use this lock LUN.
Use cmquerycl -L to create a cluster configuration file that defines the lock LUN.
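For example (a sketch only; the configuration file path, node names, and lock LUN device file are illustrative):

    cmquerycl -C $SGCONF/config.ascii -n ftsys9 -n ftsys10 -L /dev/dsk/c0t1d1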
These commands create a configuration file which you can apply to the cluster configuration when you are ready to do so; see “Distributing the Binary Configuration File ”. See also “Specifying a Lock LUN”.
If you will be using a quorum server rather than a lock disk or LUN, the Quorum Server software must be installed on a system other than the nodes on which your cluster will be running, and it must be running during cluster configuration. For detailed discussion, recommendations, and instructions for installing, updating, configuring, and running the Quorum Server, see the HP Serviceguard Quorum Server Version A.03.00 Release Notes at http://www.docs.hp.com -> High Availability -> Quorum Server.

In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM).
You can also use a mixture of volume types, depending on your needs.
This section describes storage configuration with LVM. It includes procedures for the following:
The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure them in physical volume groups. For more information, refer to the manual Using High Availability Monitors (http://docs.hp.com -> High Availability -> Event Monitoring Service and HA Monitors -> Installation and User's Guide).

The procedure described in this section uses physical volume groups for mirroring of individual disks to ensure that each logical volume is mirrored to a disk on a different I/O bus. This kind of arrangement is known as PVG-strict mirroring. It is assumed that your disk hardware is already configured in such a way that a disk to be used as a mirror copy is connected to each node on a different bus from the bus that is used for the other (primary) copy. For more information on using LVM, refer to the Logical Volume Management volume of the HP-UX System Administrator's Guide.

You can use the System Management Homepage to create or extend volume groups and create logical volumes. From the System Management Homepage, choose Disks and File Systems. Make sure you create mirrored logical volumes with PVG-strict allocation. When you have created the logical volumes and created or extended the volume groups, specify the filesystem that is to be mounted on the volume group, then skip ahead to the section "Deactivating the Volume Group."

To configure the volume groups from the command line, proceed as follows. If your volume groups have not been set up, use the procedures that follow. If you have already done LVM configuration, skip ahead to the section "Configuring the Cluster."

Obtain a list of the disks on both nodes and identify which device files are used for the same disk on both. Use the following command on each node to list available disks as they are known to each system:
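(The original command is not reproduced in this extract; either of the following serves the purpose.)

    lssf /dev/dsk/*          # show hardware path information for each disk device file
    ioscan -fnC disk         # list disk devices with hardware paths and device files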
In the following examples, we use /dev/rdsk/c1t2d0 and /dev/rdsk/c0t2d0, which happen to be the device names for the same disks on both ftsys9 and ftsys10. In the event that the device file names are different on the different nodes, make a careful note of the correspondences.
On the configuration node (ftsys9), use the pvcreate command to define disks as physical volumes. This only needs to be done on the configuration node. Use the following commands to create two physical volumes for the sample configuration:
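(A sketch using the device names from this example; the -f option overwrites any existing LVM information on the disk, so use it with care.)

    pvcreate -f /dev/rdsk/c1t2d0
    pvcreate -f /dev/rdsk/c0t2d0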
Using PV Strict Mirroring

Use the following steps to build a volume group on the configuration node (ftsys9). Later, the same volume group will be created on other nodes.
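The individual steps are not reproduced in this extract; a sketch of the usual sequence follows. The minor number (0x080000) and the physical volume group names bus0 and bus1 are illustrative; the minor number must be unique among the volume group device files on the node.

    mkdir /dev/vgdatabase
    mknod /dev/vgdatabase/group c 64 0x080000           # LVM group device file (major number 64)
    vgcreate -g bus0 /dev/vgdatabase /dev/dsk/c1t2d0    # first physical volume group
    vgextend -g bus1 /dev/vgdatabase /dev/dsk/c0t2d0    # mirror disk on a different bus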
Creating Logical Volumes

Use the following command to create logical volumes (the example is for /dev/vgdatabase):
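(The command itself is not reproduced in this extract; based on the description below of a 120 MB, singly mirrored, PVG-strict logical volume with the default name, it would be:)

    lvcreate -L 120 -m 1 -s g /dev/vgdatabase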
This command creates a 120 MB mirrored volume named lvol1. The name is supplied by default, since no name is specified in the command. The -s g option means that mirroring is PVG-strict, that is, the mirror copies of data will be in different physical volume groups.
Creating File Systems

If your installation uses filesystems, create them next. Use the following commands to create a filesystem for mounting on the logical volume just created:
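(A sketch, assuming a VxFS filesystem and an illustrative mount point of /mnt1:)

    newfs -F vxfs /dev/vgdatabase/rlvol1    # create the filesystem on the raw logical volume
    mkdir /mnt1
    mount /dev/vgdatabase/lvol1 /mnt1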
Distributing Volume Groups to Other Nodes

After creating volume groups for cluster data, you must make them available to any cluster node that will need to activate the volume group. The cluster lock volume group must be made available to all nodes.

Deactivating the Volume Group

At the time you create the volume group, it is active on the configuration node (ftsys9, for example). The next step is to unmount the file system and deactivate the volume group; for example, on ftsys9:
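(A sketch, using the illustrative mount point from the earlier filesystem sketch:)

    umount /mnt1
    vgchange -a n /dev/vgdatabase    # deactivate the volume group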
Distributing the Volume Group

Use the following commands to set up the same volume group on another cluster node. In this example, the commands set up a new volume group on ftsys10 which will hold the same physical volume that was available on ftsys9. You must carry out the same procedure separately for each node on which the volume group's package can run. To set up the volume group on ftsys10, use the following steps:
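(The individual steps are not reproduced in this extract; the usual sequence, sketched below, exports a map file from ftsys9 without removing the volume group and imports it on ftsys10. The map file path and minor number are illustrative.)

    # on ftsys9: write a map file containing the volume group ID (-s) without removing the VG (-p)
    vgexport -p -s -m /tmp/vgdatabase.map /dev/vgdatabase
    rcp /tmp/vgdatabase.map ftsys10:/tmp/vgdatabase.map
    # on ftsys10: create the group device file, then import by scanning for the VGID
    mkdir /dev/vgdatabase
    mknod /dev/vgdatabase/group c 64 0x080000
    vgimport -s -m /tmp/vgdatabase.map /dev/vgdatabase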
Making Physical Volume Group Files Consistent

Skip ahead to the next section if you do not use physical volume groups for mirrored individual disks in your disk configuration.

Different volume groups may be activated by different subsets of nodes within a Serviceguard cluster. In addition, the physical volume name for any given disk may be different on one node from what it is on another. For these reasons, you must carefully merge the /etc/lvmpvg files on all nodes so that each node has a complete and consistent view of all cluster-aware disks as well as of its own private (non-cluster-aware) disks. To make merging the files easier, be sure to keep a careful record of the physical volume group names on the volume group planning worksheet (described in Chapter 4, "Planning and Documenting an HA Cluster"). Use the following procedure to merge files between the configuration node (ftsys9) and a new node (ftsys10) to which you are importing volume groups:
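(The detailed steps are not reproduced in this extract; in outline, the merge can be done as follows. This is a sketch, run on ftsys10.)

    rcp ftsys9:/etc/lvmpvg /etc/lvmpvg.new      # start from the configuration node's file
    # edit /etc/lvmpvg.new by hand: keep entries for ftsys10's own private volume groups,
    # and adjust any physical volume names that differ on ftsys10
    cp /etc/lvmpvg.new /etc/lvmpvg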
Creating Additional Volume Groups

The foregoing sections show in general how to create volume groups and logical volumes for use with Serviceguard. Repeat the procedure for as many volume groups as you need to create, substituting other volume group names, logical volume names, and physical volume names. Pay close attention to the disk device names, which can vary from one node to another.

In addition to configuring the cluster, you create the appropriate logical volume infrastructure to provide access to data from different nodes. This is done with Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM). You can also use a mixture of volume types, depending on your needs. LVM and VxVM configuration are done before cluster configuration, and CVM configuration is done after cluster configuration. For a discussion of migration from LVM to VxVM storage, refer to Appendix G.

This section shows how to configure new storage using the command set of the Veritas Volume Manager (VxVM). Once you have created the root disk group (described next), you can use VxVM commands or the Storage Administrator GUI, VEA, to carry out configuration tasks. For more information, see the Veritas Volume Manager documentation posted at http://docs.hp.com -> 11i v3 -> VxVM (or -> 11i v2 -> VxVM, depending on your HP-UX version).
If you are using CVM 3.5 and you are about to create disk groups for the first time, you need to initialize the Volume Manager. This is done by creating a disk group known as rootdg that contains at least one disk. Use the following command once only, immediately after installing VxVM on each node:

    vxinstall

This displays a menu-driven program that steps you through the VxVM initialization sequence. From the main menu, choose the "Custom" option, and specify the disk you wish to include in rootdg.
You can use the vxvmconvert(1m) utility to convert LVM volume groups into VxVM disk groups. Before you can do this, the volume group must be deactivated, which means that any package that uses the volume group must be halted. Follow the conversion procedures outlined in the Veritas Volume Manager Migration Guide for your version of VxVM. Before you start, be sure to create a backup of each volume group's configuration with the vgcfgbackup command, and make a backup of the data in the volume group. See Appendix G, "Migrating from LVM to VxVM Data Storage," for more information about conversion.

You need to initialize the physical disks that will be employed in VxVM disk groups. To initialize a disk, log on to one node in the cluster, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

    /usr/lib/vxvm/bin/vxdisksetup -i c0t3d2

If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group. In addition, if the LVM disk was previously used in a cluster, you have to re-initialize the disk with the pvcreate -f command to remove the cluster ID from the disk.
You can remove LVM header data from the disk as in the following example (note that all data on the disk will be erased):

    pvremove /dev/rdsk/c0t3d2

Then, use the vxdiskadm program to initialize multiple disks for VxVM, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

    /usr/lib/vxvm/bin/vxdisksetup -i c0t3d2

Use vxdiskadm, or use the vxdg command, to create disk groups, as in the following example:

    vxdg init logdata c0t3d2

Verify the configuration with the following command:

    vxdg list
Use the vxassist command to create logical volumes. The following is an example:

    vxassist -g logdata make log_files 1024m

This command creates a 1024 MB volume named log_files in a disk group named logdata. The volume can be referenced with the block device file /dev/vx/dsk/logdata/log_files or the raw (character) device file /dev/vx/rdsk/logdata/log_files.

Verify the configuration with the following command:

    vxprint -g logdata

The output of this command lists the disk group, the volume, and its plexes and subdisks.
If your installation uses filesystems, create them next. Use the following commands to create a filesystem for mounting on the logical volume just created:
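(A sketch, assuming a VxFS filesystem on the volume created above and an illustrative mount point of /logs:)

    newfs -F vxfs /dev/vx/rdsk/logdata/log_files    # create the filesystem on the raw volume
    mkdir /logs
    mount /dev/vx/dsk/logdata/log_files /logs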
After creating the disk groups that are to be used by Serviceguard packages, use the following command with each disk group to allow the disk group to be deported by the package control script on other cluster nodes:

    vxdg deport <DiskGroupName>

where <DiskGroupName> is the name of the disk group that will be activated by the control script.

When all disk groups have been deported, you must issue the following command on all cluster nodes to allow them to access the disk groups:

    vxdctl enable

Once deported, disk groups are not available for use on the node until they are imported again, either by a package control script or with a vxdg import command. If you need to import a disk group manually for maintenance or other purposes, you import it, start up all its logical volumes, and mount filesystems, as in the following example:

    vxdg import dg_01
    vxvol -g dg_01 startall
    mount /dev/vx/dsk/dg_01/myvol /mountpoint
At system reboot time, the cmcluster RC script does a vxdisk clearimport on all disks formerly imported by the system, provided they have the noautoimport flag set, and provided they are not currently imported by another running node. The clearimport clears the host ID on the disk group, to allow any node that is connected to the disk group to import it when the package moves from one node to another. Using the clearimport at reboot time allows Serviceguard to clean up following a node failure, for example, a system crash during a power failure. Disks that were imported at the time of the failure still have the node's ID written on them, and this ID must be cleared before the rebooting node or any other node can import them with a package control script. Note that the clearimport is done for disks previously imported with noautoimport set on any system that has Serviceguard installed, whether it is configured in a cluster or not.