HP-UX System Administrator's Guide: Logical Volume Management: HP-UX 11i Version 3 > Chapter 2 Configuring LVM
The amount of memory used by LVM is based on the values used at volume group creation time and on the number of open logical volumes. The largest portion of LVM memory is used for extent maps. The memory used is proportional to the maximum number of physical volumes multiplied by the maximum number of physical extents per physical volume for each volume group.
The other factors to consider when setting memory-related parameters are expected system growth and the number of logical volumes required. You can set the volume group maximum parameters to exactly what the system requires today; however, if you later want to extend the volume group by another disk (or replace one disk with a larger one), you must then use the vgmodify command to raise those maximums.
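To make the scaling concrete, the following sketch uses hypothetical maximums; on HP-UX, these correspond to the -p (maximum physical volumes) and -e (maximum physical extents per physical volume) options of vgcreate, and vgmodify can raise them later:

```shell
# Hypothetical volume group maximums; on HP-UX they would be set with
# vgcreate -p (max physical volumes) and -e (max extents per PV).
MAX_PV=16
MAX_PE=1016

# The extent map must be able to describe every possible extent in the
# volume group, so its size grows with the product of the two maximums.
EXTENT_MAP_ENTRIES=$((MAX_PV * MAX_PE))
echo "extent map entries reserved: $EXTENT_MAP_ENTRIES"
```

Keeping -p and -e close to actual needs keeps this map (and the LVM memory it consumes) small, at the cost of needing vgmodify if the volume group must grow later.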
The scheduling policy is significant only with mirroring. Under the sequential scheduling policy, write time grows in proportion to the number of mirror copies: a logical volume with three copies of the data takes three times as long to complete a write as it does under the parallel policy. Read requests are always directed to only one device. Under the parallel scheduling policy, LVM directs each read request to the least busy device; under the sequential scheduling policy, LVM directs all read requests to the device shown on the left-hand side of the lvdisplay -v output.
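Assuming the standard lvcreate syntax, where the -d option selects the scheduling policy (p for parallel, s for sequential), a mirrored logical volume might be created and inspected as follows; the volume group and logical volume names and the size are illustrative:

```shell
# Create a one-mirror (two-copy) logical volume with the parallel
# scheduling policy (names and the 512 MB size are illustrative).
lvcreate -m 1 -d p -L 512 -n lvparallel /dev/vg01

# lvdisplay -v reports the policy in its "Schedule" field and lists
# the mirror devices in order.
lvdisplay -v /dev/vg01/lvparallel
```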
The purpose of the Mirror Write Consistency cache (MWC) is to provide a list of mirrored areas that might be out of sync. When a volume group is activated, LVM copies all areas with an entry in the MWC from one of the good copies to all the other copies. This process ensures that the mirrors are consistent but does not guarantee the quality of the data.
On each write request to a mirrored logical volume that uses MWC, LVM potentially introduces one extra serial disk write to maintain the MWC. Whether this extra write occurs depends on how random the accesses are: the more random the accesses, the higher the probability of missing the MWC. Obtaining an MWC entry can also involve waiting for one to become available; if all the entries are in use by I/O already in progress, a request might have to wait in a queue until an entry is freed.
Another performance consideration for mirrored logical volumes is the method of reconciling inconsistencies between mirror copies after a system crash. Besides the MWC, two other resynchronization methods are available: Mirror Consistency Recovery (MCR) and none. Whether you use the MWC depends on which aspect of system performance matters more in your environment: run time or recovery time.
For example, a customer using mirroring on a database system might choose "none" for the database logical volume because the database logging mechanism already provides consistency recovery. The logical volume used for the log can use the MWC if quick recovery time is an issue, or MCR if higher run-time performance is required. A database log is typically used by one process and is accessed sequentially, so it suffers little performance degradation with MWC because the cache is hit most of the time.
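Assuming the standard HP-UX lvcreate options -M (Mirror Write Cache) and -c (Mirror Consistency Recovery), the three consistency policies described above might be selected as follows; the volume group, logical volume names, and sizes are illustrative:

```shell
# MWC: fastest crash recovery, at the cost of a possible extra
# serial write per request.
lvcreate -m 1 -M y -L 1024 -n lvlog /dev/vg02

# MCR: no MWC overhead at run time, but a full resynchronization
# of the mirrors after a crash.
lvcreate -m 1 -M n -c y -L 1024 -n lvlog_mcr /dev/vg02

# none: LVM performs no consistency recovery; appropriate when the
# application (for example, the database itself) recovers consistency.
lvcreate -m 1 -M n -c n -L 4096 -n lvdb /dev/vg02
```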
The number of volume groups is directly related to these MWC issues. Because there is only one MWC per volume group, when the MWC is in use, keep disk space that receives many small random write requests in distinct volume groups if possible, so that the requests do not contend for the same cache. This is the only performance consideration that affects the decision regarding the number of volume groups.
Physical volume groups can be used to enforce the separation of mirror copies across I/O channels; you must define the physical volume groups yourself. This separation increases availability by reducing single points of failure, and it provides faster I/O throughput because there is less contention at the hardware level.
For example, in a system with several disk devices on each card and several cards on each bus converter, create physical volume groups so that all disks attached to one bus converter are in one group and all disks attached to the other are in a second group. This configuration ensures that all mirrors are created with devices accessed through different I/O paths.
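Assuming the -g option of vgcreate and vgextend for defining physical volume groups, the bus-converter example might be configured as follows; the device special files and group names are illustrative:

```shell
# All disks reached through the first bus converter go into PVG0 ...
vgcreate -g PVG0 /dev/vg03 /dev/disk/disk10 /dev/disk/disk11

# ... and all disks reached through the second go into PVG1.
vgextend -g PVG1 /dev/vg03 /dev/disk/disk20 /dev/disk/disk21
```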
Disk striping distributes logically contiguous data blocks (for example, chunks of the same file) across multiple disks, which speeds I/O throughput for large files when they are read and written sequentially (but not necessarily when access is random).
When you use disk striping, you create a logical volume that spans multiple disks, allowing successive blocks of data to go to logical extents on different disks. For example, a three-way striped logical volume has data allocated on three disks, with each disk storing every third block of data. The size of each of these blocks is called the stripe size of the logical volume. The stripe size (in K) must be a power of two in the range 4 to 32768 for a Version 1.0 volume group, and a power of two in the range 4 to 262144 for a Version 2.x volume group.
Disk striping can increase the performance of applications that read and write large, sequentially accessed files. Data access is performed over the multiple disks simultaneously, so the operation completes in less time than it would on a single disk. If each of the striped disks has its own controller, all of them can process data simultaneously.
The logical volume stripe size identifies the size of each of the blocks of data that make up the stripe. You can set the stripe size to a power of two in the range 4 to 32768 for a Version 1.0 volume group, or a power of two in the range 4 to 262144 for a Version 2.x volume group. The default is 8192.
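The power-of-two restriction is easy to check with shell arithmetic, since a value x is a power of two exactly when x & (x - 1) is zero. The following is a sketch of such a check for the Version 1.0 limits, not an LVM interface:

```shell
# Succeed if $1 is a power of two within the Version 1.0 volume
# group limits (4 to 32768 KB).
valid_v1_stripe_size() {
  sz=$1
  [ "$sz" -ge 4 ] && [ "$sz" -le 32768 ] && [ $((sz & (sz - 1))) -eq 0 ]
}

valid_v1_stripe_size 64 && echo "64 KB: valid stripe size"
valid_v1_stripe_size 12288 || echo "12288 KB: not a power of two"
```

On HP-UX, the stripe size itself would be supplied through the -I option of lvcreate, together with -i for the number of striped disks.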
Mirroring a striped logical volume improves read I/O performance in the same way that it does for a nonstriped logical volume: simultaneous read requests targeting a single logical extent are served by two or three different physical volumes instead of one. A striped and mirrored logical volume follows a strict allocation policy; that is, the data is always mirrored on different physical volumes.
I/O channel separation is an approach to LVM configuration that requires mirrored copies of data to reside on LVM disks accessed through separate host bus adapters (HBAs) and cables. It achieves higher availability and better performance by reducing the number of potential single points of hardware failure. If you mirror data on two separate disks but through one card, your system can still fail if that card fails.
You can separate I/O channels on a system with multiple HBAs and a single bus, by mirroring disks across different HBAs. You can further ensure channel separation by establishing a policy called PVG-strict allocation, which requires logical extents to be mirrored in separate physical volume groups. Physical volume groups are subgroups of physical volumes within a volume group.
An ASCII file, /etc/lvmpvg, contains all the mapping information for the physical volume group, but the mapping is not recorded on disk. Physical volume groups have no fixed naming convention; you can name them PVG0, PVG1, and so on. The /etc/lvmpvg file is created and updated using the vgcreate, vgextend, and vgreduce commands, but you can edit the file with a text editor.
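As an illustration of the file's layout, a volume group with two physical volume groups might appear in /etc/lvmpvg as follows; the volume group name, PVG names, and device files are hypothetical:

```
VG      /dev/vg03
PVG     PVG0
/dev/disk/disk10
/dev/disk/disk11
PVG     PVG1
/dev/disk/disk20
/dev/disk/disk21
```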
I/O channel separation is useful for databases because it heightens availability; LVM also has more flexibility in reading data from the most accessible logical extent, which results in better performance. If you define your physical volume groups to span separate I/O channels, you guard against losing access to data even if one HBA fails.
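Assuming lvcreate's -s option, where the value g selects PVG-strict allocation, a mirrored logical volume whose copies must come from different physical volume groups might be created as follows; the names and size are illustrative:

```shell
# PVG-strict allocation (-s g): each mirror copy is allocated from a
# different physical volume group, enforcing I/O channel separation.
lvcreate -m 1 -s g -L 2048 -n lvdb /dev/vg03
```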