There are several reasons why a system using LVM might fail to boot. In addition to the problems associated with booting from
non-LVM disks, the following problems can prevent an LVM-based system
from booting.
Insufficient Quorum
In this scenario, not enough disks are present
in the root volume group to meet the quorum requirements. At boot time, a message indicating that not enough
physical volumes are available appears:
   panic: LVM: Configuration failure
To activate the root volume group and successfully
boot the system, the number of available LVM disks must be more than
half the number of LVM disks that were attached when the volume group
was last active. Thus, if during the last activation there were two
disks attached in the root volume group, the “more than half”
requirement means that both must be available. For information on
how to deal with quorum failures, see “Volume Group Activation Failures”.
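If the missing disk cannot be restored immediately, one possible workaround is to override the quorum check from the ISL prompt and then repair or replace the failed disk once the system is up. This is a sketch only; the kernel path shown is the usual default and might differ on your system:
   ISL> hpux -lq /stand/vmunix
Overriding quorum bypasses a consistency safeguard, so use it only to recover the system, not as a routine boot option.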
Corrupted LVM Data Structures on Disk
The LVM bootable disks contain vital boot information
in the BDRA. This information can become corrupted, out of date, or
lost entirely. Because it is important to keep the information in
the BDRA current, use the lvlnboot or lvrmboot command whenever you make a change that affects
the location of the root, boot, primary swap, or dump logical volumes.
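For example (a sketch only; /dev/vg00/lvol2 is a common default for primary swap and might differ on your system), after moving primary swap to a different logical volume you would record the change in the BDRA and then verify it:
   # lvlnboot -s /dev/vg00/lvol2
   # lvlnboot -v /dev/vg00
The -v output lists the root, boot, swap, and dump logical volumes currently recorded in the BDRA, so it is a quick way to confirm that the on-disk boot information matches the actual configuration.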
To correct this problem, boot the system in maintenance
mode as described in “Maintenance Mode Boot”, then repair the damaged
LVM data structures by running vgcfgrestore on the boot
disk.
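A minimal sketch of that recovery, assuming the root volume group is vg00 and the boot disk is at /dev/rdsk/c0t6d0 (substitute the device file for your own boot disk):
   ISL> hpux -lm /stand/vmunix
   # vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t6d0
   # vgchange -a y /dev/vg00
   # lvlnboot -R /dev/vg00
   # reboot
Here vgcfgrestore rewrites the LVM configuration previously saved by vgcfgbackup (by default in /etc/lvmconf/vg00.conf) to the disk, and lvlnboot -R refreshes the BDRA links once the volume group is active again.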
Corrupted LVM Configuration File
Another problem that can prevent activation of a volume group
is a missing or corrupted /etc/lvmtab or /etc/lvmtab_p file. After booting in maintenance mode,
you can use the vgscan command to re-create the /etc/lvmtab and /etc/lvmtab_p files.
For more information, see vgscan(1M).
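As a sketch (assuming the default file locations), one cautious approach is to move any damaged copy aside and let vgscan rebuild the file from the physical volumes it finds:
   # mv /etc/lvmtab /etc/lvmtab.damaged
   # vgscan -v
Review the rebuilt file and the vgscan output before rebooting; a scan that fails to find expected disks usually points to a hardware or device-file problem that still needs attention.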