- Block devices connected over the network (drbd or nbd devices) are not covered here.
- Alternative approaches include syncing regular (possibly network-mounted) file systems (unison, ChironFS) or using replicating file systems (GlusterFS, OpenAFS, Coda, InterMezzo, ...).
HotplugRaid
A replicating raid (redundant array of independent disks) that holds the user data is a generally useful setup. A hot-pluggable raid with one or more external drives is especially useful for home or office laptop users. This page provides a How-To and a basic test case.
Home directories on HotplugRaid
Layman's Description:
Consider you are doing critical work on a laptop computer. At home, or when on AC power, you plug in one (or possibly several) of your USB hard disks containing mirror partitions. Upon insertion the lights of those drives will stay lit during idle times for a little while: this is the mirror partition being synced in the background automatically. As long as those drives stay attached, any work you save will be written simultaneously to the laptop's internal disk as well as to the attached mirror partitions.
If the internal disk fails, crashes down the stairway together with your laptop while you are on the road, or drives away in that fake taxi driver's trunk, then hopefully you have at least one up-to-date mirror partition on the drive in your backpack, or on the one you kept at home.
Technical Description:
It is an installation with a file system residing on a multi-disk (md) raid array that contains hot-pluggable devices such as removable USB and FireWire drives or (possibly external or bay-mounted) (S)ATA hard drives.
It is possible to have the entire root file system (/) on a hot-pluggable raid or, as in the example installation below, just the home directories (/home).
Beginning with Ubuntu 9.10 the arrays are properly assembled using udev rules and the --incremental option. Recovery of temporarily disconnected (external) drives should now work out of the box.
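Under the hood, the udev rules hand each raid member partition to mdadm in incremental mode as it appears. A rough hand-run equivalent is sketched below; /dev/sdb1 is a placeholder member partition, and the run wrapper only prints the command unless RUN=1 is set, since the real command needs root.

```shell
#!/bin/sh
# Sketch of what the 9.10 udev rules do for each hotplugged member partition.
# Prints the command by default; set RUN=1 to execute it for real (needs root).
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Hand the new member to md; the array is started once enough members have arrived:
run mdadm --incremental /dev/sdb1
```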
Installation Instructions
- Attach all external drives that you want to hold raid partitions.
- Begin the install process by booting from the alternate-cd.
- Install using manual partitioning.
(The basic process is described more elaborately at https://help.ubuntu.com/community/Installation/SoftwareRAID)
- Since this setup and test case assume that the installation is done on a laptop, we'll also set up encryption. (This can be left out if encryption is not desired.)

  First create a rather small (primary) partition that will hold the /boot files. (500M is more than needed.) Configure it as an ext2 file system to be mounted as /boot.

  Then create a (logical) partition on the internal drive that will hold the rootfs and swap. Configure it to be used for dm-crypt with a luks header and enter the encryption setup to set its passphrase. Then configure the resulting _crypt device to be used as a physical volume for lvm. Enter the lvm setup to create a volume group (in this case vg_internal_hostname) and the logical volumes for the rootfs and swap (lv_root, lv_swap). Then configure lv_root to be mounted as / and lv_swap to be used as swap.

  Now set up equally sized (logical) partitions on the internal drive and each external drive to be used as physical raid partitions for the home directories. Choose to configure raid (multi-disk) from the menu and create a raid 1 (mirroring) device from the partitions you just created. (md0 will be assumed here.) The multi-disk raid partitions should start syncing in the background. Go on to create a partition on the md0 device, configure it to be used for dm-crypt with a luks header, and set its passphrase in the encryption setup.

  Finally create an ext4 partition on the md0_crypt device and configure it to be mounted as /home.

  (Note that unlike with the first *_crypt device there is no lvm used on top of the second *_crypt device. If you want it to hold multiple logical volumes, it is of course possible to create an lvm volume group (vg_mirrored_hostname) containing logical volumes (lv_home, lv_svn, ...).)

  The partitioning of this example can be summarized as:
  [boot partition]       plain device -> ext2 (/boot)
  [system partition]     first_crypt device -> lvm {ext4 (/) and swap}
  [home partitions]      multi-disk member partitions -> md0 device -> second_crypt device -> ext4 (/home)

(During subsequent install attempts, or when the disks already contain some lvm, raid or luks partitions, you may encounter problems with the installer not being able to delete or use preexisting devices (with data), or inadvertently scraping previous setups. If you encounter this please file/report the appropriate bugs.)
If the HotplugRaid is used for the home directories, as in this example (not much i/o, so the small slowdown does not matter), the re-syncing can be accelerated using an "internal bitmap". If the array has not been set up with the "-b internal" option, you can "grow" one afterwards with "mdadm --grow /dev/mdX -b internal".
- Note: If you have multiple external members and want to use them connected one at a time (rolling backup style), you may like to have a bitmap for each member. For this you need to stack md arrays of two members each: the first array (e.g. md0) consists of an internal member and one external member; the second array (e.g. md1) consists of md0 and the second external member.
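The bitmap and stacking steps above can be sketched as shell commands. The device names (/dev/sda6, /dev/sdb1, /dev/sdc1) are placeholders for your internal and external member partitions, and the run wrapper only prints the commands unless RUN=1 is set, since these operations rewrite raid metadata.

```shell
#!/bin/sh
# Sketch with hypothetical device names.  Prints the commands by default;
# set RUN=1 to actually execute them (requires root, rewrites md metadata).
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Add an internal write-intent bitmap to an existing array:
run mdadm --grow /dev/md0 --bitmap=internal

# Rolling-backup style: stack two 2-member raid 1 arrays so each external
# member gets its own bitmap.
run mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
    /dev/sda6 /dev/sdb1   # internal member + first external member
run mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal \
    /dev/md0 /dev/sdc1    # the first array + second external member
```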
Booting with a degraded array (see BootDegradedRaid).
- If you detach a removable device with a raid member partition while the machine is powered down, the degraded array may not come up automatically during boot.
With 9.10: Run "dpkg-reconfigure mdadm" after the install and set boot degraded to yes again. (Bug:462258) The boot scripts need to wait for devices and time out after a while before starting degraded arrays. To start the array manually if you are dropped to a console:
  mdadm --incremental --run /dev/sde5              # Assemble and run the member device (in this case sde5) as a degraded raid.
  cryptsetup luksOpen /dev/mdX my-rescued-mirror   # Open the encrypted md device using its luks header.
  mount /dev/mapper/my-rescued-mirror /mountpoint  # Finally mount your data.
- Re-syncing reconnected drives (re-adding raid member partitions to their array) when they are re-attached.
- Since 9.10 uses "mdadm --incremental" in the udev rules, this now works automatically. If a reconnected partition that was marked faulty still needs to be re-added manually, "mdadm --add /dev/md0 $dev" will re-add the partition to the raid and start the syncing. A custom udev rule may look similar to this one:
  ACTION=="add", BUS=="usb", SUBSYSTEM=="block", DEVPATH=="*[0-9]", SYSFS{serial}=="5B6B1B916C31", RUN+="mdadm --add /dev/md0 $DEVNAME"

- But the above rule identifies the USB device by its serial number rather than the raid partition by superblock/uuid (which you may have moved to another device).
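A rule could instead match the member partition itself via the raid superblock properties that udev imports from blkid. The following is only a sketch: "your-array-uuid" is a placeholder you would replace with the UUID reported by "mdadm --detail /dev/md0".

```
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", ENV{ID_FS_UUID}=="your-array-uuid", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```

This way the rule keeps working even if you move the member partition to a different enclosure or port.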
Check whether the system survives sleep when you suspend the computer. (Bug:230671)
To check the status of the arrays: cat /proc/mdstat
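A small helper can condense /proc/mdstat into a per-array sync summary. This is a sketch: the summarize function reads the real /proc/mdstat when called with no argument, and the demo feeds it a canned sample so the parsing can be shown on any machine.

```shell
#!/bin/sh
# Sketch: report which arrays are currently re-syncing, per mdstat line format.
summarize() {
  awk '
    /^md/                  { dev = $1; state[dev] = "idle" }
    /recovery =|resync =/  { state[dev] = "syncing" }
    END { for (d in state) print d ": " state[d] }
  ' "${1:-/proc/mdstat}"
}

# Demo on a canned sample (what /proc/mdstat looks like mid-resync):
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda6[0] sdb1[1]
      488254464 blocks [2/2] [UU]
      [==>..........]  resync = 12.6% (61836032/488254464) finish=64.2min
unused devices: <none>
EOF
summarize /tmp/mdstat.sample    # prints: md0: syncing
```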
Further Improvements
- Maybe enable the mdadm options --write-mostly and --write-behind for external drives.
- Having the root file system and swap on one encrypted lvm volume group and /home on a second encrypted raid device means you have to enter passphrases twice on each boot and resume. With fast devices like external SATA drives you can include the full root file system and swap partition in the raid mirror.
- You may keep your /boot partition on a raid and have it mirrored to external drives, maybe including a read-only medium as a preferred mirror, showing discrepancies if the internal drive has been tampered with. This might even allow you to boot from that external drive, possibly re-syncing the internal /boot partition and master boot records (MBR).
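The --write-mostly idea from the list above can be sketched as follows. The device name /dev/sdb1 is a placeholder for an external member, and the run wrapper only prints the commands unless RUN=1 is set; whether --write-behind helps depends on your bitmap setup and workload.

```shell
#!/bin/sh
# Sketch: mark a slow external member write-mostly so reads are served by the
# internal disk, and buffer its writes with write-behind (requires a bitmap).
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

# Re-add the external member flagged write-mostly (hypothetical /dev/sdb1):
run mdadm /dev/md0 --add --write-mostly /dev/sdb1
# Allow up to 256 outstanding write-behind requests when adding the bitmap:
run mdadm --grow /dev/md0 --bitmap=internal --write-behind=256
```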
References
https://help.ubuntu.com/community/Installation/SoftwareRAID
https://features.launchpad.net/distros/ubuntu/+spec/udev-mdadm
https://blueprints.launchpad.net/ubuntu/+spec/udev-mdadm
https://blueprints.edge.launchpad.net/ubuntu/+spec/boot-degraded-raid
https://blueprints.launchpad.net/ubuntu/+spec/udev-lvm-mdadm-evms-gutsy
HotplugRaid (last edited 2013-01-05 21:44:43 by 77-22-90-94-dynip)