Advanced Installation

Software RAID

RAID is a method of configuring multiple hard drives to act as one, reducing the probability of catastrophic data loss in case of drive failure. RAID is implemented in either software (where the operating system knows about all the member drives and actively maintains them) or hardware (where a special controller makes the OS think there is only one drive and maintains the drives 'invisibly').

The RAID software included with current versions of Linux (and Ubuntu) is based on the 'md' kernel driver, managed with the mdadm utility, and works very well, better even than many so-called 'hardware' RAID controllers.
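
On an installed system you can confirm that both pieces are available, for example by checking the mdadm version and the RAID "personalities" the md driver has loaded:

mdadm --version
cat /proc/mdstat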

Partitioning

Follow the installation steps until you get to the Partition disks step, then:

  1. Select Manual as the partition method.

  2. Select the first hard drive, and agree to "Create a new empty partition table on this device?".

    Repeat this step for each drive you wish to be part of the RAID array.

  3. Select the "FREE SPACE" on the first drive then select "Create a new partition".

  4. Next, select the Size of the partition, then choose Primary, then Beginning.

  5. Select the "Use as:" line at the top. By default this is "Ext3 journaling file system"; change it to "physical volume for RAID".

  6. Repeat steps three through five for the other disks and partitions; a command-line sketch of an equivalent layout follows this list.
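
Outside the installer, a similar layout can be sketched with parted. This is a minimal sketch only, assuming a blank drive /dev/sda that will carry a single RAID member partition spanning the whole disk; adjust device names, partition table type, and sizes to your hardware, and repeat for each member drive:

sudo parted /dev/sda mklabel msdos
sudo parted /dev/sda mkpart primary 1MiB 100%
sudo parted /dev/sda set 1 raid on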

RAID Configuration

With the partitions set up, the array is ready to be configured (an equivalent mdadm command line is sketched after these steps):

  1. Back in the main "Partition Disks" page, select "Configure Software RAID" at the top.

  2. Select "yes" to write the changes to disk.

  3. Choose "Create new MD drive".

  4. Select "RAID1", or the type of RAID you want (RAID0, RAID1, or RAID5).

    [Note]

    In order to use RAID5 you need at least three drives; RAID0 and RAID1 require only two.

  5. Enter the number of active devices for the array: "2", or however many hard drives you have. Then select "Continue".

  6. Next, enter the number of spare devices ("0" by default), then choose "Continue".

  7. Choose which partitions to use. Generally they will be sda1, sdb1, sdc1, etc. The numbers will usually match and the different letters correspond to different hard drives.

    Select "Continue" to go to the next step.

  8. Repeat steps three through seven with each pair of partitions you have created (you may only have one pair).

  9. Once done select "Finish".
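
For reference, the same array can also be created from a shell with mdadm itself. A minimal sketch, assuming a two-disk RAID1 named /dev/md0 built from /dev/sda1 and /dev/sdb1; adjust the RAID level, device count, and partitions to match your setup:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1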

Formatting

There should now be a list of hard drives and RAID devices. The next step is to format and set the mount point for the RAID devices. Treat the RAID device as a local hard drive: format and mount it accordingly (a command-line sketch follows the steps below).

  1. Select the first RAID device partition.

  2. Choose "Use as:". Then select "Ext3 journaling file system", or whichever filesystem you prefer.

  3. If you selected Ext3, then select your mount point. You can also create multiple partitions on one RAID device, or use multiple RAID devices for different partitions. For example, if you have only one partition, choose "/" as the mount point.

  4. Repeat for any additional RAID devices.

  5. Finally, select "Finish partitioning and write changes to disk".
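
Outside the installer, formatting and mounting an MD device works just like any other block device. A minimal sketch, assuming the array is /dev/md0, an ext3 filesystem, and a temporary mount point of /mnt:

sudo mkfs.ext3 /dev/md0
sudo mount /dev/md0 /mnt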

If you choose to place the root partition on a RAID array, the installer will then ask if you would like to boot in a degraded state. See the section called “Degraded RAID” for further details.

The installation process will then continue normally.

Degraded RAID

At some point in the life of the computer a disk failure event may occur. When this happens with software RAID, the operating system will place the array into what is known as a degraded state.

If the array has become degraded, then because of the chance of data corruption Ubuntu Server Edition will, by default, boot to initramfs after thirty seconds. Once the initramfs has booted there is a fifteen second prompt giving you the option to go ahead and boot the system, or to attempt a manual recovery. Booting to the initramfs prompt may or may not be the desired behavior, especially if the machine is in a remote location. Booting to a degraded array can be configured in several ways:

  • The dpkg-reconfigure utility can be used to configure the default behavior, and during the process you will be queried about additional settings related to the array, such as monitoring, email alerts, etc. To reconfigure mdadm enter the following:

    sudo dpkg-reconfigure mdadm
  • The dpkg-reconfigure mdadm process will change the /etc/initramfs-tools/conf.d/mdadm configuration file. The file has the advantage that the system's behavior can be pre-configured, and it can also be edited manually (if you do edit it by hand, see the note on regenerating the initramfs after this list):

    BOOT_DEGRADED=true
    [Note]

    The configuration file can be overridden by using a Kernel argument.

  • Using a Kernel argument will allow the system to boot to a degraded array as well:

    • When the server is booting press ESC to open the Grub menu.

    • Press "e" to edit your Kernel command options.

    • Press the DOWN arrow to highlight the kernel line.

    • Press the "e" key again to edit the kernel line.

    • Add "bootdegraded=true" (without the quotes) to the end of the line.

    • Press "ENTER".

    • Finally, press "b" to boot the system.
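
If you edit /etc/initramfs-tools/conf.d/mdadm by hand rather than through dpkg-reconfigure, the initramfs usually needs to be regenerated before the change takes effect, for example:

sudo update-initramfs -u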

Once the system has booted you can either repair the array (see the section called “RAID Maintenance” for details), or copy important data to another machine in the case of major hardware failure.

RAID Maintenance

The mdadm utility can be used to view the status of an array, add disks to an array, remove disks, etc:

  • To view the status of an array, from a terminal prompt enter:

    sudo mdadm -D /dev/md0

    The -D tells mdadm to display detailed information about the /dev/md0 device. Replace /dev/md0 with the appropriate RAID device.

  • To view the status of a disk in an array:

    sudo mdadm -E /dev/sda1

    The output is very similar to that of the mdadm -D command; adjust /dev/sda1 for each disk.

  • If a disk fails and needs to be removed from an array enter:

    sudo mdadm --remove /dev/md0 /dev/sda1

    Change /dev/md0 and /dev/sda1 to the appropriate RAID device and disk.

  • Similarly, to add a new disk:

    sudo mdadm --add /dev/md0 /dev/sda1

Sometimes a disk can change to a faulty state even though there is nothing physically wrong with the drive. It is usually worthwhile to remove the drive from the array then re-add it. This will cause the drive to re-sync with the array. If the drive will not sync with the array, it is a good indication of hardware failure.
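
As a sketch of that remove-and-re-add cycle, assuming the array is /dev/md0 and the suspect disk is /dev/sdb1 (mdadm requires the disk to be marked faulty before it can be removed):

sudo mdadm --fail /dev/md0 /dev/sdb1
sudo mdadm --remove /dev/md0 /dev/sdb1
sudo mdadm --add /dev/md0 /dev/sdb1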

The /proc/mdstat file also contains useful information about the system's RAID devices:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0] sdb1[1]
      10016384 blocks [2/2] [UU]

unused devices: <none>

The following command is great for watching the status of a syncing drive:

watch -n1 cat /proc/mdstat

Press Ctrl+c to stop the watch command.

If you do need to replace a faulty drive, after the drive has been replaced and synced, grub will need to be installed. To install grub on the new drive, enter the following:

sudo grub-install /dev/md0

Replace /dev/md0 with the appropriate array device name.

Resources

The topic of RAID arrays is a complex one due to the plethora of ways RAID can be configured. Please see the following links for more information: