Arbitration for Data Integrity in Serviceguard Clusters: Chapter 1

Use of a Lock Disk as the Cluster Lock



The cluster lock disk (used only in HP-UX clusters) is a disk that can be written to by all members of the cluster. When a node obtains the cluster lock, this disk is marked so that other nodes will recognize the lock as “taken.” Unlike a SCSI disk reservation, this mark survives an off-on power cycle of the disk device. A lock disk may be used for clusters of up to four nodes.

The lock is created in a special area on a particular LVM physical volume. The cluster lock volume group and physical volume names are identified in the cluster configuration file.
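As a sketch, the lock volume group and physical volume might be declared in the cluster configuration ASCII file as follows. The cluster, node, and device names are illustrative; the `FIRST_CLUSTER_LOCK_VG` and `FIRST_CLUSTER_LOCK_PV` parameters follow the Serviceguard cluster configuration file format.

```
CLUSTER_NAME              cluster1
FIRST_CLUSTER_LOCK_VG     /dev/vglock

NODE_NAME                 node1
  NETWORK_INTERFACE       lan0
  FIRST_CLUSTER_LOCK_PV   /dev/dsk/c1t2d0

NODE_NAME                 node2
  NETWORK_INTERFACE       lan0
  FIRST_CLUSTER_LOCK_PV   /dev/dsk/c1t2d0
```

Note that each node names its own device file for the same physical lock disk, since the device path may differ from node to node.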

The lock disk is not dedicated for use as the cluster lock; thus, it can be employed as part of a normal volume group with user data on it. The usable space on the disk is not impacted; the lock disk takes no space away from the disk’s volume group. Further, the activation of the volume group on one node does not affect the ability of another node to acquire the cluster lock.

The lock area on the disk is not mirrored, even though the physical volume may be a part of a volume group that contains mirrored logical volumes.

The operation of the lock disk is shown in Figure 1-5. The node that acquires the lock (in this case node 2) continues running in the cluster. The other node halts.

Figure 1-5 Lock Disk Operation


Serviceguard periodically checks the health of the lock disk and writes messages to the syslog file when a lock disk fails the health check. This file should be monitored for early detection of lock disk problems.
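A simple way to watch for such messages is to scan the syslog file periodically. This is a minimal sketch; the search string is an assumption for illustration, not verbatim Serviceguard output, and the syslog path is the HP-UX default.

```shell
# Sketch: scan syslog for lock-disk health-check messages.
# "cluster lock" is an assumed search string, not verbatim Serviceguard output.
syslog=${SYSLOG:-/var/adm/syslog/syslog.log}
if grep -qi "cluster lock" "$syslog" 2>/dev/null; then
    echo "lock disk messages found: review $syslog"
else
    echo "no lock disk messages"
fi
```

A check like this could be run from cron or folded into an existing monitoring script.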

You can choose between two lock disk options—a single or dual lock disk—based on the kind of high availability configuration you are building. A single lock disk is recommended where possible. With both single and dual locks, however, it is important that the cluster lock be available even if the power circuit to one node fails; thus, the choice of a lock configuration depends partly on the number of power circuits available. Regardless of your choice, all nodes in the cluster must have access to the cluster lock to maintain high availability.

Single Cluster Lock

It is recommended that you use a single lock disk, configured on a power circuit separate from that of any node in the cluster. For example, in a two-node cluster, it is highly recommended to use three power circuits: one for each node and one for a single, separately powered lock disk. In a two-node cluster, the lock disk must not share a power circuit with either node, and it must be an external disk. For three- or four-node clusters, the disk must not share a power circuit with 50% or more of the nodes.
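The power-circuit rule above can be expressed as a small arithmetic check. This is an illustrative sketch only; the node counts are made-up inputs you would supply for your own configuration.

```shell
# Sketch of the lock-disk power-circuit rule (counts are illustrative).
nodes_in_cluster=4            # total nodes in the cluster
nodes_on_lock_circuit=1       # nodes sharing the lock disk's power circuit
# The lock disk must not share a circuit with 50% or more of the nodes:
if [ $((nodes_on_lock_circuit * 2)) -ge "$nodes_in_cluster" ]; then
    echo "lock disk placement violates the rule"
else
    echo "lock disk placement ok"
fi
```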

Dual Cluster Lock

In an extended distance cluster, where the cluster contains nodes running in two separate data centers, a single lock disk would be a single point of failure should the data center it resides in suffer a catastrophic failure. In this case only, a dual cluster lock, with two separately powered disks, should be used to eliminate the lock disk as a single point of failure. The use of the dual cluster lock is further shown in “Use of Dual Lock Disks in Extended Distance Clusters” on page 26.

© Hewlett-Packard Development Company, L.P.