HotplugRaid
This is a generally useful setup, a How-To, and a basic test case.
For now we distinguish between having the root filesystem (/) on a hotpluggable raid and having the home directories (/home) on a hotpluggable raid. Having the root filesystem on a hotpluggable raid requires modules like usb-storage to be loaded in the initramfs.
Home directories on HotplugRaid
Layman's Description:
Consider you are doing critical work on a laptop computer. At home, or when on AC power, you plug in one (or possibly several) of your USB hard disks containing mirror partitions. After insertion the lights of those drives will stay lit during idle times for a while: the mirror partition is being synced in the background automatically. As long as the drives stay attached, any work you save is written to the laptop's internal disk as well as to the attached mirror partitions.
If the internal disk fails, crashes down the stairway together with your laptop while you are on the road, or drives away in that fake taxi driver's trunk, then hopefully you have at least one up-to-date mirror partition on the drive in your backpack or on the one you keep at home.
Technical Description:
Failsafe installations with filesystems residing on multi-disk raid arrays that contain hotpluggable devices such as removable USB and Firewire drives and (external or bay-mounted) [S/P]-ATA drives.
As long as appropriate hotplug udev rules do not yet exist, you can manually assemble and mount a degraded md device with your backup mirror partition to access your data on another machine.
The necessary steps would be something like this:
mdadm --assemble --run /dev/md5 /dev/sdx5       # assemble and run the member partition (here sdx5) as a degraded raid (md5 is an unused device name)
cryptsetup luksOpen /dev/md5 my-rescued-mirror  # open the encrypted device using its LUKS header
mount /dev/mapper/my-rescued-mirror /mnt        # finally mount your data
Installation Instructions
- Attach the external drives that you want to hold raid partitions.
- Begin the installation process by booting from the alternate CD.
- Choose manual partitioning.
(The basic process is described more elaborately at https://help.ubuntu.com/community/Installation/SoftwareRAID)
- Since this setup and test case assume that the installation is done on a laptop, the system will also be set up using encrypted devices. (The encryption steps can be left out if encryption is not desired.)

First create a rather small (primary) partition that will hold the /boot files (500M is more than needed). Configure it as an ext2 filesystem to be mounted as /boot.

Then create a (logical) partition on the internal drive that will hold the root filesystem and swap. Configure it to be used for dm-crypt with a LUKS header and enter the encryption setup to set its passphrase. Then configure the resulting _crypt device to be used as a physical volume for LVM. Enter the LVM setup to create a volume group (in this case vg_tsthost-local) and the logical volumes for the root filesystem and swap (lv_root, lv_swap). Back in the partitioning tool, configure lv_root to be mounted as / and lv_swap to be used as swap.

Now set up equally sized (logical) partitions on the internal drive and on each external drive as physical raid partitions that will hold the home directories. Choose to configure raid (multi-disk) from the menu and create a raid 1 (mirroring) device from the partitions you just created (md0 will be assumed here). The multi-disk raid partitions should start syncing in the background. Go on to create a partition on the md0 device and configure it to be used for dm-crypt with a LUKS header; enter the encryption setup to set its passphrase. Finally create an ext3 partition on the md0_crypt device and configure it to be mounted as /home.

(Note that unlike with the first _crypt device there is no LVM used on top of the second _crypt device. If you want it to hold multiple logical volumes it is of course possible to create an LVM volume group (vg_tsthost-raid) containing logical volumes (lv_home, lv_svn, ...) in between.)

The partitioning of this example can therefore be summarized as follows (a rough command-line equivalent is sketched after the summary):
internal drive: ext2 (/boot), first_crypt -> lvm [ext3 (/) and swap]
internal + external drives: [multi-disk member partitions] -> md0 -> second_crypt -> ext3 (/home)
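For reference, the partitioning above corresponds roughly to the following commands when done by hand. This is only a sketch: the installer does all of it through its menus, and the device names (sda5, sda6, sdb1) and sizes here are assumptions, not part of the installer dialogue.

cryptsetup luksFormat /dev/sda5                        # encrypted container on the internal drive
cryptsetup luksOpen /dev/sda5 sda5_crypt
pvcreate /dev/mapper/sda5_crypt                        # use it as an LVM physical volume
vgcreate vg_tsthost-local /dev/mapper/sda5_crypt
lvcreate -n lv_root -L 20G vg_tsthost-local            # root filesystem
lvcreate -n lv_swap -L 2G vg_tsthost-local             # swap
mkfs.ext3 /dev/vg_tsthost-local/lv_root
mkswap /dev/vg_tsthost-local/lv_swap
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb1   # mirror for /home
cryptsetup luksFormat /dev/md0                         # second encrypted container, on the raid
cryptsetup luksOpen /dev/md0 md0_crypt
mkfs.ext3 /dev/mapper/md0_crypt                        # mounted as /home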
(During subsequent testing, or when the disks already contain some LVM, raid or LUKS partitions, you may encounter problems with the installer not being able to delete or use existing devices (with data), or inadvertently scrapping previous setups. Please test this and file/report the appropriate bugs.)
- Crypttab Bug
The installer will not put the second md0_crypt device into /etc/crypttab, so it won't get opened during the boot process. As a workaround, do it manually while the installer is copying away: change to another console and have a look with "cat /target/etc/crypttab". If md0_crypt is missing, check "ls -l /dev/disk/by-uuid" to determine the uuid that points to the md0 device. Then enter "echo md0_crypt /dev/disk/by-uuid/<use_tab_completion_to_get_the_uuid_right> none luks >> /target/etc/crypttab" to add the necessary line to the crypttab.
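Put together, the workaround on the installer console looks roughly like this (a sketch; UUID-OF-MD0 must be replaced with the real uuid found via tab completion):

cat /target/etc/crypttab                  # check whether md0_crypt is already listed
ls -l /dev/disk/by-uuid                   # find the uuid symlink that points to /dev/md0
echo "md0_crypt /dev/disk/by-uuid/UUID-OF-MD0 none luks" >> /target/etc/crypttab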
- Many bugs with mdadm boot scripts racing udev events
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bugs The boot scripts try to assemble raid arrays before the partitions become available. Symptom: file system checks or opening dm-crypt devices fail because the raid device is not activated and cannot be recognized, and you are dropped to a console. As long as the problem is not worked out (see also the next issue) you may hack around the bug by introducing a "sleep 10" before do_open in the start section of /etc/cryptsetup-early.
- When and what to do with degraded arrays during boot bug
- If you detach a removable device with a raid member partition on it while the machine is powered down, the array won't come up during boot because it is not complete. Again you are dropped to a console. A solution would be to rely on udev event rules during boot as well: if all the md devices specified in /etc/mdadm/mdadm.conf or in persistent superblocks have come up, resume booting immediately; if after 30 seconds some raid partitions are still missing, run those arrays in degraded mode and issue notifications. In the meantime you have to manually start the array (mdadm --assemble --scan) and subsequently stop it again (mdadm --stop /dev/md0) on the console in order to have its status properly updated to degraded (mdadm fails to stop RAID on shutdown, Bug #111398). Exit from the console and reboot.
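As described above, the manual recovery from the console currently boils down to:

mdadm --assemble --scan    # start all arrays from mdadm.conf, degraded ones included
mdadm --stop /dev/md0      # stop the array again so its status is recorded as degraded
# exit the console and reboot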
- Let the sync start when a raid member partition is attached.
- "mdadm --add /dev/md0 $dev" will re-add the partition to the raid and start the syncing. But in order to get this automatically, a udev rule is necessary:
............................
- "mdadm --add /dev/md0 $dev" will re-add the partion to the raid and start the syncing. But in order to get this automatically an udev rule is nessesary:
Further Improvements
Having the root filesystem and swap on one encrypted LVM volume group and /home on a second encrypted raid device means you have to enter a passphrase twice on each boot and resume.
With fast devices like external SATA drives it may be bearable to include the root filesystem and swap partition in the raid mirror. You may even keep your /boot partition on a raid and have it mirrored to external drives, perhaps including a read-only medium as a preferred mirror, which would show discrepancies if the internal drive has been tampered with. This might even allow you to boot from that external drive, possibly resyncing the internal /boot partition and master boot record (MBR).
All this will require rootfs on HotplugRaid to work.
Root filesystem on HotplugRaid
- That early in the boot process we only need to wait for the raid device with the root filesystem to come up, and try to resume with a degraded array if this times out.
- If the root filesystem resides on USB/Firewire/eSATA/PCMCIA/PC-Card/ExpressCard devices, those drivers need to be present in the initramfs and loaded prior to starting the timeout (see the sketch after this list).
- Present boot script racing bugs
- To work around bugs here, one may need to edit /usr/share/initramfs-tools/init and put a "sleep 10" after the line log_begin_msg "Mounting root file system...". After this, update the initramfs with "sudo update-initramfs -k all -u".
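A minimal sketch of both steps (assumptions: usb-storage is just an example module name, the exact set depends on your hardware; the sleep is a crude hack, not a fix):

# make the storage drivers available in the initramfs (one module name per line)
echo usb-storage | sudo tee -a /etc/initramfs-tools/modules
# in /usr/share/initramfs-tools/init, directly after the line
#     log_begin_msg "Mounting root file system..."
# insert:
#     sleep 10     # wait for the hotplugged raid members to show up
# then rebuild the initramfs for all installed kernels
sudo update-initramfs -k all -u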