How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch) - Page 3

7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 (but not /dev/md1, which is swap and therefore never mounted) in the output of

df -h

server1:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md2              4.4G  730M  3.4G  18% /
tmpfs                 126M     0  126M   0% /lib/init/rw
udev                   10M   68K   10M   1% /dev
tmpfs                 126M     0  126M   0% /dev/shm
/dev/md0              137M   17M  114M  13% /boot
server1:~#

The output of

cat /proc/mdstat

should be as follows:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
      4594496 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]
      497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]
      144448 blocks [2/1] [_U]

unused devices: <none>
server1:~#

Here [2/1] [_U] means that only one of an array's two members is active (the /dev/sdb partition); the underscore marks the missing /dev/sda partition that we will add shortly. But first we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

server1:~# fdisk /dev/sda

Command (m for help): <-- t
Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t
Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
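
Note: the mdadm --add commands that follow write only to the partition device nodes, and since nothing but the partition type bytes changed, the stale in-kernel table should be harmless here. Still, if you'd rather have the kernel re-read the partition table right away instead of at the next reboot, one option, assuming the parted package is installed, is:

partprobe /dev/sda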

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID arrays:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
      4594496 blocks [2/1] [_U]
      [=====>...............]  recovery = 29.7% (1367040/4594496) finish=0.6min speed=85440K/sec

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)
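
mdadm can also give a more detailed per-array view than /proc/mdstat, listing each member disk and its state; for example, for the first array:

mdadm --detail /dev/md0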

Wait until the synchronization has finished. The output should then look like this:

server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
      4594496 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      144448 blocks [2/2] [UU]

unused devices: <none>
server1:~#

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
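
If you'd like to review the ARRAY lines before appending them, you can run the scan on its own first; it only prints to standard output and changes nothing:

mdadm --examine --scan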

/etc/mdadm/mdadm.conf should now look something like this (the UUIDs will differ on your system):

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:2b3d68b9:a903a704
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:2b3d68b9:a903a704
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:2b3d68b9:a903a704

 

8 Preparing GRUB (Part 2)

We are almost done. Now we must modify /boot/grub/menu.lst again. Right now it is configured to boot from /dev/sdb (hd1,0), but of course we still want the system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel stanza (the one containing hd1), paste it below, and replace hd1 with hd0. Furthermore, we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd0)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
#initrd         /initrd.img-2.6.18-4-486
#savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.18-4-486
#savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't remove the # at the beginning of the line! Although it looks commented out, update-grub reads it as a template when it regenerates the kernel stanzas, so future kernel updates will pick up the correct root device):

[...]
# kopt=root=/dev/md2 ro
[...]

Afterwards, update your ramdisk:

update-initramfs -u
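
If you want to verify that the rebuilt ramdisk actually contains the mdadm pieces before you reboot, you can list its contents (a quick sanity check; the kernel version is the one used throughout this tutorial, so adjust it to yours):

zcat /boot/initrd.img-2.6.18-4-486 | cpio -t | grep md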

... and reboot the system:

reboot

It should boot without problems.
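
Should you end up at a bare GRUB prompt instead, as Lars describes in the comments below, boot a rescue system and repeat the GRUB installation onto both disks from page 2 of this tutorial, i.e. in the GRUB shell:

grub

root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit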

That's it - you've successfully set up software RAID1 on your running Debian Etch system!


Comments

From: Rik Bignell

Thx for this. Successfully used your guide to set up Jaunty 9.04 with RAID5.

Points to note: RAID5 will NOT work when the boot partition is RAID5. For example, if you have:

md0 = swap

md1 = root (boot within root)

Then you will not be able to write GRUB properly to each drive, because RAID5 does not keep a complete copy of the files on each disc. GRUB boots at the disk level and not at the software RAID level, it seems.

My work around was to have boot separate. I chose:

md0=swap (3x drives within raid5, sda1, sdb1, sdc1)

md1=boot (2x drives within raid1, sda2, sdb2). A 3rd drive is not needed unless 2 drives fail at once, and because the drives are mirrored completely, you are able to write GRUB.

md2=root (3x drives within raid5, sda3, sdb3, sdc3)

I'll be writing my own guide for RAID1 and RAID5 so you can see the difference in commands, but I will reference this guide a lot, as it helped me the most out of all the Ubuntu RAID guides I found on Google.

 

Watch http://www.richardbignell.co.uk/ for new guides.

From: Anonymous

Hello,

This instruction looks very useful; however, I would like to ask: could someone please adapt this to suit the default and recommended HDD setup of Debian (a single partition)?

From: Anonymous

The article shows writing out a modified partition table, getting the message:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

and then, without rebooting, trying to write to the new partitions (running "mdadm --add ...").

Doing that is extremely dangerous to any data on that disk. And even if there is no data, it means mdadm might be initializing something other than what you meant: the kernel's old view of partition N rather than your new partition N.

 

From: Lars

 QUOTE:

... and reboot the system:
reboot
It should boot without problems.

 

Not quite... you need to do the GRUB part from page 2 again to make this work. I just got stuck at a 'GRUB' prompt after the reboot; it can be fixed with a rescue system and a new GRUB setup on the drives.

Otherwise the howto works just fine - thank you!

 -Lars

From: Anonymous

I followed this guide to set up RAID1 (a mirror of an existing disc to another) on a separate disc containing VMware virtual disc files.

I hope I won't lose any data, but that's a risk I had to take. Right now it's synchronising the VMware disc with the hard disc containing no data... at this point, I can't access the hard disc containing the VMware files, so I have my fingers crossed :-)

I'll post an update as soon as the synchronisation is complete; so far it's only 18% complete.

I would recommend that everyone using this guide to synchronise data between two discs unmount EVERY disc they're making changes to BEFORE making any changes at all. If you somehow fail to do so, it can lead to serious data loss. That is a point I think this guide failed to mention.

Besides that, thank you very much for sharing your knowledge!

 - Simon Sessingø
Denmark

From: Anonymous

THANK YOU for this wonderful howto. I managed to get RAID set up on Debian Lenny with no changes to your instructions.

From: Singapore website design

Hi, thanks for writing this guide. I managed to set up my server's software RAID successfully using it. I had been using hardware RAID all along. Thanks.

From: Ben

Great tutorial, worked perfectly for me on Debian Lenny, substituting sda and sdb with hda and hdd, plus a few extra partitions... thanks for posting. :)

From: Juan De Stefano

Thank you for this excellent guide. I followed it on Ubuntu 9.10. The only difference is setting up GRUB 2: I'm not supposed to edit grub.cfg (the former menu.lst), but I did, to change the root device. Then I mounted /dev/md2 on /mnt/md2 and /dev/md0 on /mnt/md2/boot. I also mounted sys, proc and dev to make the chroot. Later I ran dpkg-reconfigure grub-pc and selected both disks to install GRUB on the MBR. Everything worked the first time I tried.

Thanks again

/ Juan

From: Anonymous

I just did this for Ubuntu 9.10 as well. This procedure really needs to be updated for GRUB 2, which in and of itself is an exercise in tedium. However, GRUB 2 is slightly smarter and seemed to auto-configure a few of the drive details here and there. Still, there were some major departures from this procedure.

You don't need to (and should not) modify grub.cfg directly. Instead, I created a custom GRUB config file, /etc/grub.d/06_custom, which contains my RAID entries and puts them above the other GRUB boot options during the "degraded" sections of the installation. There are a few tricks to formatting a custom file correctly: there is some "EOF" craziness, and you should be using UUIDs, so you have to make sure you get the right UUIDs instead of using /dev/sd[XX] notation. In the end, my 06_custom looked like:

#! /bin/sh -e
echo "Adding RAID boot options" >&2
# Quote the heredoc delimiter so ${have_grubenv} reaches grub.cfg literally
# for GRUB to evaluate at boot time, instead of being expanded (to nothing)
# by the shell when grub-mkconfig runs this script.
cat << 'EOF'
menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd1)" {
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
        set quiet=1
        insmod ext2
        set root=(hd1,0)
        search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
        linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
        initrd /boot/initrd.img-2.6.31-20-generic
}

menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd0)" {
        recordfail=1
        if [ -n ${have_grubenv} ]; then save_env recordfail; fi
        set quiet=1
        insmod ext2
        set root=(hd0,0)
        search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
        linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
        initrd /boot/initrd.img-2.6.31-20-generic
}

EOF

Also, you have to figure out which pieces of 10_linux to comment out to get rid of the non-RAID boot options; for that:
  #linux_entry "${OS}, Linux ${version}" \
  #    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_EXTRA} ${GRUB_CMDLINE_LINUX_DEFAULT}" \
  #    quiet
  #if [ "x${GRUB_DISABLE_LINUX_RECOVERY}" != "xtrue" ]; then
  #  linux_entry "${OS}, Linux ${version} (recovery mode)" \
  # "single ${GRUB_CMDLINE_LINUX}"
  #fi

Overall, this was the best non-RAID -> RAID migration how-to I could find.  Thanks very much for putting this out there.

From: Vlad P

I had already set up my RAID 1 before hitting your tutorial, but this reading made me understand everything better - much better! Thank you very much!

From: Cristian

This guide is awesome; it is just all you need to transform a usual single-SATA-disk setup into RAID1 if you follow all the instructions.

Thanks again... thanks, thanks. You saved me from spending days configuring a server again.

From: Rory

Thank you for this perfect tutorial.

It works perfectly even for Ubuntu. I had to mess with GRUB 2 instead, but aside from that, it's brilliant. Used it on three machines without a glitch.