HotplugRaid

Differences between revisions 24 and 39 (spanning 15 versions)
Revision 24 as of 2009-09-17 10:28:37
Size: 10768
Editor: 77-21-62-108-dynip
Comment:
Revision 39 as of 2013-01-05 21:44:43
Size: 6121
Editor: 77-22-90-94-dynip
Comment: data loss bug
  • Block devices connected over the network (drbd or nbd devices) are not covered here.
  • Alternative approaches include syncing regular (possibly network-mounted) filesystems (unison, ChironFS) or replicating file systems (GlusterFS, OpenAFS, Coda, InterMezzo, ...).


A replicating raid (redundant array of independent disks) that holds the user data is a generally useful setup. A hot-pluggable raid with one or more external drives is especially useful for home or office laptop users. This page provides a How-To and a basic test case.

Warning: Hotplugging different hardware with device mapper on top of the raid can cause data loss! https://bugs.launchpad.net/ubuntu/+source/linux/+bug/320638

Home directories on HotplugRaid

Layman's Description:

Consider you are doing critical work on a laptop computer. At home, or when on AC power, you plug in one (or possibly several) of your USB hard disks containing mirror partitions. Upon insertion the lights of those drives will stay lit during idle times for a little while: this is the mirror partition being synced in the background automatically. As long as those drives stay attached, any work you save is written simultaneously to the laptop's internal disk as well as to the attached mirror partitions.

If the internal disk fails, crashes down the stairway together with your laptop while you are on the road, or drives away in that fake taxi driver's trunk, you will hopefully have at least one up-to-date mirror partition on the drive in your backpack, or on the one you kept at home.

Technical Description:

It is an installation with a filesystem residing on a multi-disk (md) raid array that contains hot-pluggable devices such as removable USB and Firewire drives or (possibly external / bay-mounted) (S)ATA hard drives.
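To see which members such an array currently has, and whether it is running degraded, the usual md status interfaces apply. A minimal sketch (the array name md0 is a placeholder; these commands need an actual md array to report on):

```shell
cat /proc/mdstat          # overview of all md arrays and their sync state
mdadm --detail /dev/md0   # member devices, array state (clean/degraded), bitmap info
```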

It is possible to have the entire root file system (/) on a hot-pluggable raid, or, as in the example installation below, just the home directories (/home).

Beginning with Ubuntu 9.10 the arrays are properly assembled using udev rules and the --incremental option. Recovery of temporarily disconnected (external) drives should now work out of the box.

Installation Instructions

See http://testcases.qa.ubuntu.com/Install/AlternateRaidMirror and adapt the partitioning to your needs.

  • Having the root file system and swap in one encrypted lvm volume group and /home on a second encrypted raid device means you have to enter passphrases twice on each boot and resume. With fast devices like external SATA drives you can include root, swap and home in one volume group on a single luks-on-raid mirror, and use a separate raid mirror only for /boot.
  • multi-sync-bitmaps: If you want to have multiple external members and use them connected one at a time (rolling-backup style), you may like to have a separate write-intent bitmap for each member! For this you need to stack md arrays of two members each. The first array (e.g. md0) then consists of an internal member and one external member. The second array (e.g. md1) consists of md0 and the second external member.
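The stacked layout described above can be sketched roughly as follows. This is an illustrative fragment, not a tested recipe: it needs root and real devices, and the device names /dev/sda5 (internal), /dev/sdc1 and /dev/sdd1 (external) are placeholders for your own partitioning.

```shell
# Inner mirror: internal member + first external member, with its own bitmap.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/sda5 /dev/sdc1

# Outer mirror: the inner array itself + the second external member,
# giving the second external drive a separate bitmap of its own.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/md0 /dev/sdd1

# The filesystem (or the luks/lvm stack) then goes on /dev/md1.
```

Each array keeps its own write-intent bitmap, so whichever external member is reconnected only resyncs the blocks its own bitmap marks as dirty.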

Further Improvements

  • Maybe enabling the mdadm options --write-mostly and --write-behind for external drives.
  • Maybe including a read-only medium as a preferred mirror, showing discrepancies if the internal drive has been tampered with. Maybe even allowing you to boot from that external drive, possibly re-syncing the internal /boot partition and master boot record (MBR).
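The --write-mostly / --write-behind idea from the first item could look roughly like this. A hedged sketch only: it requires root and real devices, the device names are placeholders, and note that --write-behind only works together with a write-intent bitmap on a raid1 array.

```shell
# Create a mirror whose external member is marked write-mostly:
# reads are served from the internal disk, and up to 256 writes to the
# external member may lag behind (write-behind needs the bitmap).
# /dev/sda5 (internal) and /dev/sdc1 (external) are placeholder names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=256 \
      /dev/sda5 --write-mostly /dev/sdc1
```

--write-mostly is a per-device flag: it applies to the devices listed after it, here only the external member.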

Troubleshooting

  1. Preexisting superblocks: During subsequent install attempts, or when the disks already contain some lvm, raid or luks partitions, the installer may not be able to delete or use preexisting devices (with data), or may inadvertently scrap previous setups. If you encounter this, please file/report on appropriate bugs. (A workaround may be to delete the existing superblocks: mdadm --zero-superblock)
  2. Booting with a degraded array (see ReliableRaid).

    • When you detach a removable device with a raid member partition from the machine while it is powered down, the degraded array may not come up automatically during boot.

      Slight chance with 9.10: do a "dpkg-reconfigure mdadm" after the install and set boot degraded to yes again. (Bug:462258) To start the array manually if you are dropped to a console:

      {{{
      mdadm --incremental --run /dev/sde5 # Assemble and run the device (in this case sde5) as a degraded raid.
      cryptsetup luksOpen /dev/mdX my-degraded-mirror # Open the encrypted md device using its luks header.
      mount /dev/mapper/my-degraded-mirror /mountpoint # Finally mount your data.
      }}}
  3. Resyncing reconnected drives (re-adding raid member partitions to their arrays) when they are re-attached (see ReliableRaid).

    • If a reconnected partition that got marked faulty still needs to be re-added manually: "mdadm --add /dev/md0 $dev" will re-add the partition to the raid and start the syncing. A custom udev rule may look similar to this one:
      ACTION=="add", BUS=="usb", SUBSYSTEM=="block", DEVPATH=="*[0-9]", SYSFS{serial}=="5B6B1B916C31", RUN+="mdadm --add /dev/md0 $DEVNAME" 
    • Note that the above rule identifies the USB device by its serial number, not the raid partition by its superblock/uuid (which you may have moved to another device).
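The superblock workaround from item 1 can be sketched as follows. This is destructive and requires root, so it is only an illustration; the member partition /dev/sde5 and array /dev/md0 are placeholders, and you should verify the metadata with --examine before wiping anything.

```shell
mdadm --stop /dev/md0               # stop the array if it is currently assembled
mdadm --examine /dev/sde5           # confirm this really is the stale member partition
mdadm --zero-superblock /dev/sde5   # erase the md superblock so the installer sees a blank partition
```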

References

https://help.ubuntu.com/community/Installation/SoftwareRAID
https://features.launchpad.net/distros/ubuntu/+spec/udev-mdadm
https://blueprints.launchpad.net/ubuntu/+spec/udev-mdadm
https://blueprints.edge.launchpad.net/ubuntu/+spec/boot-degraded-raid
https://blueprints.launchpad.net/ubuntu/+spec/udev-lvm-mdadm-evms-gutsy


CategoryBootAndPartition CategoryUsb CategoryRecovery

HotplugRaid (last edited 2013-01-05 21:44:43 by 77-22-90-94-dynip)