HotplugRaid
HotplugRaid
This is a generally useful setup, a how-to, and a basic test case.
Home directories on HotplugRaid
Layman's Description:
Consider you are doing critical work on a laptop computer. At home, or when on AC power, you plug in one (or possibly several) of your USB hard disks containing mirror partitions. Upon insertion the lights of those drives will stay lit during idle times for a little while: this is the mirror partition being synced in the background automatically. As long as those drives stay attached, any work you save will be written simultaneously to the laptop's internal disk as well as to the attached mirror partitions.
If the internal disk fails, crashes down the stairway together with your laptop while you are on the road, or drives away in that fake taxi driver's trunk, hopefully you have at least one up-to-date mirror partition on the drive in your backpack or on the one you keep at home.
Technical Description:
It's an installation with filesystems residing on multi-disk raid arrays that contain hotpluggable devices such as removable USB and FireWire drives and (external or bay-mounted) [S/P]-ATA drives.
For now we distinguish between having the root filesystem (/) on a hotpluggable raid and having the home directories (/home) on a hotpluggable raid. Having the root filesystem on a hotpluggable raid requires modules like usb-storage to be loaded in the initramfs.
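For the root-on-raid case, here is a minimal sketch of making the usb-storage driver available in the initramfs. It assumes the Debian/Ubuntu initramfs-tools layout; the real file is /etc/initramfs-tools/modules, but a scratch path is used here so the sketch is safe to try.

```shell
# Sketch: list the USB storage driver in the initramfs module list so a root
# filesystem on a USB mirror can be found at boot. MODULES points at a scratch
# file here; set it to /etc/initramfs-tools/modules (as root) to apply for real.
MODULES=${MODULES:-/tmp/initramfs-modules-demo}
touch "$MODULES"
grep -qx 'usb-storage' "$MODULES" || echo 'usb-storage' >> "$MODULES"
# After editing the real file, regenerate the initramfs:
# update-initramfs -u
```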
As long as appropriate hotplug udev rules do not exist yet, you can mount a degraded md device with your backup mirror partition manually. To access your data on another machine, the necessary steps go something like this:
mdadm --incremental --run /dev/sde5   # assemble and run the device (in this case sde5) as a degraded raid
cryptsetup luksOpen /dev/mdX my-rescued-mirror   # open the encrypted md device using its luks header
mount /dev/mapper/my-rescued-mirror /mnt   # finally mount your data
Installation Instructions
- Attach the external drives that you want to hold raid partitions.
- Begin the installation by booting from the alternate CD.
- Install using manual partitioning.
(The basic process is described more elaborately at https://help.ubuntu.com/community/Installation/SoftwareRAID)
- Since this setup and test case assume that the installation is done on a laptop, the system will also be set up using encrypted devices. (The encryption steps can be left out if encryption is not desired.)
First create a rather small (primary) partition that will hold the /boot files (500 MB is more than needed). Configure it as an ext2 filesystem to be mounted as /boot.
Then create a (logical) partition on the internal drive that will hold the root filesystem and swap. Configure it to be used for dm-crypt with a LUKS header and enter the encryption setup to set its passphrase. Configure the resulting _crypt device as a physical volume for LVM, and in the LVM setup create a volume group (in this case vg_internal_hostname) and the logical volumes for the root filesystem and swap (lv_root, lv_swap). Back in the partitioning tool, configure lv_root to be mounted as / and lv_swap to be used as swap.
Now set up equally sized (logical) partitions on the internal drive and on each external drive as physical raid partitions that will hold the home directories. Choose to configure raid (multi-disk) from the menu and create a raid 1 (mirroring) device from the partitions you just created (md0 is assumed here). The multi-disk raid partitions should start syncing in the background.
Go on to create a partition on the md0 device and configure it for dm-crypt with a LUKS header, entering the encryption setup to set its passphrase. Finally create an ext3 partition on the md0_crypt device and configure it to be mounted as /home. (Note that unlike with the first _crypt device, there is no LVM on top of the second _crypt device. If you want it to hold multiple logical volumes, it is of course possible to create an LVM volume group (vg_mirrored_hostname) containing logical volumes (lv_home, lv_svn, ...) in between.)
The partitioning of this example can therefore be summarized as:
ext2 (/boot)
first_crypt -> lvm [ext3 (/) and swap]
[multi-disk member partitions] -> md0 -> second_crypt -> ext3 (/home)
(During subsequent testing, or when the disks already contain some LVM, raid or LUKS partitions, you may encounter problems with the installer not being able to delete or use existing devices (with data), or inadvertently scrapping previous setups. Please test this and file/report the appropriate bugs.) When the HotplugRaid holds the home directory data (i.e. not too much I/O), resyncing can be accelerated with a write-intent bitmap. (Create the array with an internal bitmap, "-b internal", or grow one later: "mdadm --grow /dev/mdX -b internal".)
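A sketch of the bitmap commands (the device names /dev/md0, /dev/sda6 and /dev/sde5 are examples; in this sketch the commands are only echoed, so it is safe to run as-is; drop DRY_RUN to run them for real as root):

```shell
# Sketch only: DRY_RUN=echo prints the mdadm invocations instead of running them.
DRY_RUN=echo

# Add a write-intent bitmap to an existing mirror; after a short disconnect the
# resync then only copies blocks the bitmap marks as dirty.
$DRY_RUN mdadm --grow /dev/md0 --bitmap=internal

# Or create the array with an internal bitmap from the start:
$DRY_RUN mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/sda6 /dev/sde5
```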
Crypttab Bug #227285
The installer will not put the second md0_crypt device into /etc/crypttab, so it won't get opened during the boot process. As a workaround, add it manually while the installer is copying away. Change to another console and have a look: "cat /target/etc/crypttab". If md0_crypt is missing, check "ls -l /dev/disk/by-uuid" to determine the UUID that points to the md0 device. Then enter "echo md0_crypt /dev/disk/by-uuid/<use_tab_completion_to_get_the_uuid_right> none luks >> /target/etc/crypttab" to add the necessary line to the crypttab.
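The resulting crypttab entry should look like the line below (the UUID placeholder must be replaced with the UUID of your md0 device; the fields are target name, source device, key file, and options):

```
# /target/etc/crypttab
md0_crypt /dev/disk/by-uuid/<uuid_of_md0> none luks
```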
- The boot scripts are racing udev events
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bugs That is, the boot scripts may try to assemble, fsck, mount, etc. raid arrays before the partitions become available. Symptom: file system checks or opening dm-crypt devices fail because the raid device is not activated and cannot be recognized. You are dropped to a console.
As long as the boot scripts don't check whether it's time for them to be started (see DegradedRaid), you may hack around the bug by introducing a "sleep 10" before do_start in the start section of /etc/cryptsetup-early.
- When and what to do with degraded arrays during boot
- When you detach a removable device with a raid member partition on it while the machine is powered down, the array won't come up during boot because it is not complete. Again you are dropped to a console.
The udev rules need to be changed to use "mdadm --incremental". (See BootDegradedRaid)
In the meantime you have to manually start the array (mdadm --assemble --scan) and subsequently stop it again (mdadm --stop /dev/md0) on the console in order to have its status updated to degraded. (mdadm fails to stop raid on shutdown, Bug #111398.) Then you can exit from the console and the next reboot will work. See also https://wiki.ubuntu.com/BootDegradedRaid
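The console workaround as a sequence (a sketch; the commands are only echoed here so it is safe to run, and /dev/md0 is an example name; drop DRY_RUN to run them for real as root):

```shell
DRY_RUN=echo   # remove this to execute the commands for real

# 1. Assemble whatever member partitions are present; the array comes up incomplete.
$DRY_RUN mdadm --assemble --scan
# 2. Stop it again so the superblocks record the degraded state; the next
#    boot can then start the array degraded without manual intervention.
$DRY_RUN mdadm --stop /dev/md0
```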
- Let the sync start (add a raid member to its array) automatically if it is attached later on.
This works when the udev rules are changed to use "mdadm --incremental" (see BootDegradedRaid). Still, members that got marked as faulty may need to be re-added manually (or with a custom udev rule for that raid member device, if you regularly disconnect your laptop from the mirror drive at home). "mdadm --add /dev/md0 $dev" will manually re-add the partition to the raid and start the syncing. A custom udev rule may be similar to this one:
ACTION=="add", BUS=="usb", SUBSYSTEM=="block", DEVPATH=="*[0-9]", SYSFS{serial}=="5B6B1B916C31", RUN+="mdadm --add /dev/md0 $DEVNAME"
Note that the above rule identifies the USB device by its serial number, not the raid partition by its superblock/UUID (which you may move to another device).
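An alternative sketch that keys on the raid superblock type instead of one USB serial number, so the rule keeps working when the member partition moves to another enclosure. It assumes udev's blkid import sets the ID_FS_TYPE property for block devices (as it does on current Debian/Ubuntu systems); the file name is only a suggestion.

```
# /etc/udev/rules.d/65-local-md-incremental.rules (suggested name)
# Hand any newly appearing raid member partition to mdadm for incremental assembly.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```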
Avoid suspending the computer if you are using USB drives, until they can survive the sleep (Bug #230671).
Further Improvements
Having the root filesystem and swap on one encrypted LVM volume group and /home on a second encrypted raid device means you have to enter passphrases twice on each boot and resume.
With fast devices like external SATA drives it may be bearable to include the root filesystem and swap partition in the raid mirror. You may even keep your /boot partition on a raid and have it mirrored to external drives; maybe including a read-only medium as a preferred mirror, showing discrepancies if the internal drive has been tampered with; maybe even allowing you to boot from that external drive, possibly resyncing the internal /boot partition and master boot record (MBR).
All this will require rootfs on HotplugRaid to work.
Root filesystem on HotplugRaid
- That early in the boot process we need to wait only for the raid device with the root filesystem to come up, and try to resume with a degraded array if this times out.
- If the root filesystem resides on USB/FireWire/eSATA/PCMCIA/PC-Card/ExpressCard devices, those drivers need to be present in the initramfs and loaded prior to starting the timeout.
- Remaining boot script racing bugs
- To work around current racing bugs here, one may need to edit /usr/share/initramfs-tools/init and put "sleep 10" after the line log_begin_msg "Mounting root file system...". After this, update the initramfs with "sudo update-initramfs -k all -u".
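The edit can be scripted; the sketch below performs it on a scratch file mimicking the relevant line, so it is safe to try (point INIT at /usr/share/initramfs-tools/init and run as root to apply for real, then regenerate the initramfs):

```shell
# Demo on a scratch file standing in for the real init script.
INIT=${INIT:-/tmp/init-demo}
printf '%s\n' 'log_begin_msg "Mounting root file system..."' 'mountroot' > "$INIT"

# Insert a 10-second delay right after the "Mounting root file system" message
# (GNU sed). After the real edit, run: update-initramfs -k all -u
sed -i '/Mounting root file system/a sleep 10' "$INIT"
cat "$INIT"
```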
References
https://help.ubuntu.com/community/Installation/SoftwareRAID
https://features.launchpad.net/distros/ubuntu/+spec/udev-mdadm
https://blueprints.launchpad.net/ubuntu/+spec/udev-mdadm
https://blueprints.edge.launchpad.net/ubuntu/+spec/boot-degraded-raid
https://blueprints.launchpad.net/ubuntu/+spec/udev-lvm-mdadm-evms-gutsy
HotplugRaid (last edited 2013-01-05 21:44:43 by 77-22-90-94-dynip)