Welcome to http://www.marssoft.de/
 

How to set up Linux (Debian) on a Raid

This guide is intended to aid in setting up Linux fully remotely, for example on your server, on a software RAID (at least two hard disks required). While I found lots of guides for setting up a software RAID, none of them covered a new installation from start to end - so it took me several blind reboots before this guide was written.

Some additional notes: this guide assumes a new installation, probably from some bootable CD or similar. But you can use it on an existing Linux installation as well. Just make sure you don't delete partitions that hold data you need, and use fdisk rather than sfdisk. If you want existing partitions to be added to a RAID 1 as well (as I assume you do), you can use mdadm's --assemble option to add a second partition to the first one.

Again: this guide assumes a new installation, and therefore empty disks. If you follow it closely, you might lose data if your disks aren't empty!

Requirements:

  • (At least) two harddisks
  • Access to a remote console on your server
  • mdadm and raidtools2
  • Coffee and some fingerfood

The first steps: Partitioning the Harddisks

You need to load all IDE modules for your mainboard, so that fdisk shows you the disks (they might be loaded or built into your kernel already). Mine are:

modprobe ide_generic
modprobe sis5513
modprobe sata_sis
modprobe scsi_mod

Then you need the RAID modules (they might be loaded or built into your kernel already). These are:

modprobe sd_mod
modprobe md_mod
modprobe raid1
modprobe raid0

Now we can start partitioning the disks. Partition disk 1 (/dev/sda in my case) first; we will later copy its partition table to disk 2. BTW: remember that both disks should never be on the same IDE channel, so good examples are:

disk 1: /dev/hda
disk 2: /dev/hdc
or
disk 1: /dev/sda
disk 2: /dev/sdb

but never use:

disk 1: /dev/hda (NO!)
disk 2: /dev/hdb (BAD!)

I chose to have some RAID 1, and some RAID 0 disks. Why? RAID 1 for the system partitions, where you need reliability. RAID 0 for /tmp and /scratch, because it is fast and it provides more space. It is only for data that I certainly have backups of, like when sharing a large file with friends.

Afterwards, I want my partitions to look like:

mount:    /boot     /         Windows   /usr      /var      /home     /scratch  /tmp      swap
Harddisk: sda1      sda2      sda3      sda5      sda6      sda7      sda8      sda9      sda10
Harddisk: sdb1      sdb2      sdb3      sdb5      sdb6      sdb7      sdb8      sdb9      sdb10
Raid:     /dev/md0  /dev/md1  -         /dev/md2  /dev/md3  /dev/md4  /dev/md5  /dev/md6  -
# > sfdisk -l /dev/sda

Disk /dev/sda: 9964 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

  Device  Boot Start  End  #cyls   #blocks   Id  System
/dev/sda1  *     0+     8     9-     72261   fd  Linux raid autodetect (raid 1: /boot)
/dev/sda2        9    132   124     996030   fd  Linux raid autodetect (raid 1: /)
/dev/sda3      133    880   748    6008310    c  W95 FAT32 (LBA)       (no raid: Windows)
/dev/sda4      881   9963  9083   72959197+   5  Extended
/dev/sda5      881+  1628   748-   6008278+  fd  Linux raid autodetect (raid 1: /usr)
/dev/sda6     1629+  2376   748-   6008278+  fd  Linux raid autodetect (raid 1: /var)
/dev/sda7     2377+  8330  5954-  47825473+  fd  Linux raid autodetect (raid 1: /home)
/dev/sda8     8331+  9630  1300-  10442218+  fd  Linux raid autodetect (raid 0: /scratch)
/dev/sda9     9631+  9695    65-    522081   fd  Linux raid autodetect (raid 0: /tmp)
/dev/sda10    9696+  9963   268-   2152678+  82  Linux swap / Solaris  (no raid: swap)

To set them up like this, use your favorite partition manager. Mine is fdisk and/or sfdisk. Using sfdisk, you can specify the partitions in a script-like manner. You can leave out unneeded fields, for example <start>, and it will start the partition at the first free spot. <size> should be in units; sfdisk -l (see above) tells you how big one unit is and how many your disk has. <id> is the hexadecimal id of the partition type, where all RAID partitions use 'fd' ('0c' is FAT32, '05' is an extended partition, '82' is swap). BTW: don't use RAID for your swap; the kernel will stripe across all swap devices that have the same priority in fstab anyway, like a fast RAID 0.

echo -e "<start>,<size>,<id>,<bootable>\n" | sfdisk /dev/<disk>

So for my partitioning scheme I used this call:

echo -e "\
,9,0xfd\n\
,124,0xfd\n\
,748,0x0c,*\n\
,,0x05\n\
,748,0xfd\n\
,748,0xfd\n\
,5954,0xfd\n\
,1300,0xfd\n\
,65,0xfd\n\
,,0x82\n\
" | sfdisk -D -L /dev/sda
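
If you want to tweak the sizes, the same input can also be generated from a small table instead of one long echo. This is just a sketch of my own (the helper name gen_sfdisk_input is made up); it only prints the sfdisk input lines, you still pipe them into sfdisk yourself:

```shell
# gen_sfdisk_input: print sfdisk input lines for my scheme above.
# Table format: "<size> <id> [<bootable>]"; "-" as size means "use the rest".
gen_sfdisk_input() {
	while read SIZE ID BOOT; do
		if [ "$SIZE" = "-" ]; then SIZE=""; fi
		# sfdisk expects "<start>,<size>,<id>[,<bootable>]";
		# the empty <start> field means "first free spot".
		echo ",$SIZE,$ID${BOOT:+,$BOOT}"
	done <<'EOF'
9 0xfd
124 0xfd
748 0x0c *
- 0x05
748 0xfd
748 0xfd
5954 0xfd
1300 0xfd
65 0xfd
- 0x82
EOF
}

# usage:  gen_sfdisk_input | sfdisk -D -L /dev/sda
```
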

To copy this partition table to the second disk as well, use this call:

sfdisk -d /dev/sda | sfdisk -D -L /dev/sdb

Old: Using raidtools to set up the Raid

If you like, you can use the outdated raidtools setup and then continue below the next section.

New: Using mdadm: setting up the Raid

Well, mdadm is newer and seems to be more powerful than the old raidtools, so you might prefer this section. For a little more comfort and automation, I made the following small script to create and set up the whole RAID. Set FS to the filesystem of your choice (FS=ext3, FS=reiserfs, FS=xfs), and watch out for any options it needs:

FS="reiserfs -q"	# expands to "mkfs.reiserfs -q" below
function raidformat {
	# wait until any running resync has finished (be nice to the harddisk)
	T=$(grep resync /proc/mdstat)
	while [ "$T" != "" ]
	do
		echo "$T"
		sleep 10
		T=$(grep resync /proc/mdstat)
	done
	mkfs.$FS $1
}

for I in 0 1 2 3 4 5 6
do
	if [ ! -e /dev/md$I ]; then mknod /dev/md$I b 9 $I; fi
done

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1 && raidformat /dev/md0
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ab]2 && raidformat /dev/md1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sd[ab]5 && raidformat /dev/md2
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sd[ab]6 && raidformat /dev/md3
mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sd[ab]7 && raidformat /dev/md4
mdadm --create /dev/md5 --level=0 --raid-devices=2 /dev/sd[ab]8 && raidformat /dev/md5
mdadm --create /dev/md6 --level=0 --raid-devices=2 /dev/sd[ab]9 && raidformat /dev/md6

Also create the mdadm configuration:

mv -vi /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
echo 'DEVICE /dev/sda* /dev/sdb*' > /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

My /etc/mdadm/mdadm.conf then looks like this:

DEVICE /dev/sda* /dev/sdb*
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7961b071:72a48244:06f9cf50:e997e77f
   devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=3d438ec5:e0036c78:2f225878:67c239cb
   devices=/dev/sda2,/dev/sdb2
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=61e67b7b:622d28fe:570015bf:270125a0
   devices=/dev/sda5,/dev/sdb5
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=6ca2d89a:5827ce55:3d040fd9:401733eb
   devices=/dev/sda6,/dev/sdb6
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=05a4a412:9185cbbc:07c74786:b40709fb
   devices=/dev/sda7,/dev/sdb7
ARRAY /dev/md5 level=raid0 num-devices=2 UUID=6aaced2c:46e71c4d:9a7f3cb8:3595a9f7
   devices=/dev/sda8,/dev/sdb8
ARRAY /dev/md6 level=raid0 num-devices=2 UUID=2f0af3ba:4ce10736:eb793a0d:7638a513
   devices=/dev/sda9,/dev/sdb9

If raid-disks fail

Sometimes after a power outage or forced reboot the RAID disks may be out of sync. mdadm then rejects one of the disks as outdated; the result will look something like md1, md2 or md3 below:

root(strassen) ~> cat /proc/mdstat
Personalities : [raid0] [raid1]
md3 : active raid1 sda8[0]
      60556864 blocks [2/1] [U_]

md2 : active raid1 sda7[0]
      15631104 blocks [2/1] [U_]

md1 : active raid1 sda6[0]
      29302464 blocks [2/1] [U_]

md0 : active raid1 sda1[0] sdc1[1]
      505920 blocks [2/2] [UU]

The solution is to re-sync the disks. This is done by first removing the faulty disk and then re-adding it. The error message below can be ignored; it just states that mdadm has already removed partition sdc6:

root(strassen) ~> mdadm /dev/md1 --fail /dev/sdc6 --remove /dev/sdc6
mdadm: set device faulty failed for /dev/sdc6:  No such device
root(strassen) ~> mdadm /dev/md1 --add /dev/sdc6
mdadm: re-added /dev/sdc6

Having fun: some Raid-Tests

So you want to see if everything is working as expected? You can make mdadm think the RAID is faulty, and see how it reacts. Obviously it makes sense to set your email address in mdadm.conf first:

root(strassen) ~> cat /etc/mdadm/mdadm.conf|grep MAILADDR
MAILADDR user@domain.org

Now you can set one of the partitions or disks as faulty:

root(strassen) ~> mdadm /dev/md1 --manage --set-faulty /dev/sdc6
# now you should get an email saying "A Fail event had been detected on md device /dev/md1."
root(strassen) ~> cat /proc/mdstat|grep -A1 md1
md1 : active raid1 sdc6[2](F) sda6[0]
      29302464 blocks [2/1] [U_]

Finally remove the faulty disk and re-add it again (as in the section about raid-recovery above):

root(strassen) ~> mdadm /dev/md1 --manage --remove /dev/sdc6
mdadm: hot removed /dev/sdc6
root(strassen) ~> mdadm /dev/md1 --add /dev/sdc6
mdadm: re-added /dev/sdc6

To re-assemble a stopped array (maybe after re-partitioning), use mdadm --assemble:

root(strassen) ~> mdadm --assemble /dev/md9 /dev/sd[ab]14
mdadm: /dev/md9 has been started with 2 drives.

Going further: installing a new Linux

I also needed to set up swap and the FAT32 partitions; do the same for any additional partitions you have:

mkswap /dev/sda10
mkswap /dev/sdb10

mkdosfs -F 32 /dev/sda3
mkdosfs -F 32 /dev/sdb3

To mount the newly created partitions, use this:

export	NEWLIN="/mnt/linux"
mkdir -p $NEWLIN && mount /dev/md1 $NEWLIN
mkdir -p $NEWLIN/boot && mount /dev/md0 $NEWLIN/boot
mkdir -p $NEWLIN/usr && mount /dev/md2 $NEWLIN/usr
mkdir -p $NEWLIN/var && mount /dev/md3 $NEWLIN/var
mkdir -p $NEWLIN/home && mount /dev/md4 $NEWLIN/home
mkdir -p $NEWLIN/scratch && mount /dev/md5 $NEWLIN/scratch
mkdir -p $NEWLIN/tmp && mount /dev/md6 $NEWLIN/tmp
mkdir -p $NEWLIN/mnt/win32 && mount /dev/sda3 $NEWLIN/mnt/win32
mkdir -p $NEWLIN/mnt/data && mount /dev/sdb3 $NEWLIN/mnt/data

Install debootstrap on your recovery system (if it is not already there; check with debootstrap --help). You will also need wget; install it the same way if it's not already there.

export WORKDIR="/tmp/debootstrap"
mkdir -p $WORKDIR
cd $WORKDIR
wget http://ftp.de.debian.org/debian/pool/main/d/debootstrap/debootstrap_0.2.45-0.2_i386.deb
ar -xf $WORKDIR/debootstrap_0.2.45-0.2_i386.deb
tar -xvzf $WORKDIR/data.tar.gz
export DEBOOTSTRAP_DIR=$WORKDIR/usr/lib/debootstrap

Now you have a working debootstrap in /tmp/debootstrap/usr/sbin/; use it to install Debian:

$WORKDIR/usr/sbin/debootstrap --arch i386  sarge $NEWLIN "http://ftp.de.debian.org/debian/"

Depending on which RAID tools you used earlier in this guide, also copy the RAID configuration file to the new system.

For raidtools do: cp /etc/raidtab $NEWLIN/etc/raidtab
For mdadm do:     mkdir $NEWLIN/etc/mdadm && \
                  cp /etc/mdadm/mdadm.conf $NEWLIN/etc/mdadm/mdadm.conf

You should then be able to chroot into your new installation and set up the needed environment.

mount -o bind /proc $NEWLIN/proc
mount -o bind /dev $NEWLIN/dev
mount -o bind /sys $NEWLIN/sys
chroot $NEWLIN

base-config

Answer the questions and set up a first user for your system. Then you'll need to edit some more configs and install some base packages:

Set up /etc/apt/sources.list (I use nano as a text editor):

deb http://ftp.de.debian.org/debian stable main contrib non-free
deb http://ftp.de.debian.org/debian testing main contrib non-free
deb http://ftp.de.debian.org/debian unstable main contrib non-free
deb http://ftp.de.debian.org/debian experimental main contrib non-free

deb-src http://ftp.de.debian.org/debian stable main contrib non-free
deb-src http://ftp.de.debian.org/debian testing main contrib non-free
deb-src http://ftp.de.debian.org/debian unstable main contrib non-free
deb-src http://ftp.de.debian.org/debian experimental main contrib non-free

deb http://security.debian.org/debian-security testing/updates main contrib non-free
deb http://security.debian.org/debian-security stable/updates main contrib non-free

as well as the /etc/apt/preferences

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=stable
Pin-Priority: 400

Package: *
Pin: release a=unstable
Pin-Priority: 300

Package: *
Pin: release a=experimental
Pin-Priority: 100

Set up /etc/fstab:

# /etc/fstab: static file system information.
#
# -- mount for this linux needed partitions
# <file system> <mount point>   <type>		<options>		<dump>  <pass>
/dev/md1	/		reiserfs	defaults		0	1

/dev/sda10	swap		swap		defaults,pri=1		0	0
/dev/sdb10	swap		swap		defaults,pri=1		0	0

/dev/md0	/boot		reiserfs	defaults		0	2
/dev/md2	/usr		reiserfs	defaults		0	2
/dev/md3	/var		reiserfs	defaults		0	2
/dev/md4	/home		reiserfs	defaults		0	2
/dev/md5	/scratch	reiserfs	defaults		0	2
/dev/md6	/tmp		reiserfs	defaults		0	2

proc		/proc		proc		defaults		0	0

/dev/sda3	/mnt/win32	auto		noauto,rw,users,user,fmask=007,dmask=007,gid=users 0 0
/dev/sdb3	/mnt/data	auto		noauto,rw,users,user,fmask=007,dmask=007,gid=users 0 0
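
A typo in here can leave the system unbootable, so it can't hurt to cross-check that every /dev/mdN mounted in fstab also has an ARRAY line in mdadm.conf. A small sketch of my own (check_md_config is a made-up name; point it at the copies under $NEWLIN):

```shell
# check_md_config <fstab> <mdadm.conf>
# Prints every md device that fstab mounts but mdadm.conf does not declare.
check_md_config() {
	for DEV in $(awk '$1 ~ /^\/dev\/md/ { print $1 }' "$1"); do
		grep -q "^ARRAY $DEV " "$2" || echo "missing: $DEV"
	done
}

# usage:  check_md_config $NEWLIN/etc/fstab $NEWLIN/etc/mdadm/mdadm.conf
```
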

Make the grub directory in case it doesn't exist:

mkdir -p /boot/grub

Set up grub's menu.lst in order to boot with grub later:

default         saved
fallback        0
timeout         1

title           2.6.8-12-amd64
root            (hd0,0)
kernel          (hd0,0)/vmlinuz-2.6.8-12-amd64-k8 root=/dev/md1 ro
initrd          (hd0,0)/initrd.img-2.6.8-12-amd64-k8
savedefault     0
boot

title           Windows
root            (hd0,2)
makeactive
chainloader     +1
savedefault     0

# Older, unused debian kernels
title           2.6.15-1
root            (hd0,0)
kernel          (hd0,0)/vmlinuz-2.6.15-1-k7 root=/dev/md1 ro
initrd          (hd0,0)/initrd.img-2.6.15-1-k7
savedefault     0
boot

Also set up /etc/network/interfaces:

# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

Very important: since we are using a standard Debian kernel, we'll need to include all RAID and filesystem modules in the initrd, or else the kernel won't find the RAID or the disks. So change '/etc/mkinitramfs/modules' (or '/etc/mkinitrd/modules', depending on which one you are using - if in doubt, update both!) to include these modules:

# List of modules that you want to include in your initramfs.
#
# Syntax:  module_name [args ...]
raid0
raid1
sd_mod
reiserfs
vfat
fat
ide_generic
sata_sis
libata
scsi_mod
sis5513
generic
ide_core
rtc
dm_mod

You might also need to change the probing of the root directory for mkinitrd, because otherwise it will use the root directory of the rescue media, which might lead to problems later. So edit '/etc/mkinitrd/mkinitrd.conf' to contain this small change:

# If this is set to probe mkinitrd will try to figure out what's needed to
# mount the root file system.  This is equivalent to the old PROBE=on setting.
#ROOT=probe
ROOT=/dev/md1
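
If you script the whole installation, this edit can be made non-interactively as well. A sketch of my own, assuming GNU sed (the helper name fix_mkinitrd_root is made up; adjust /dev/md1 to your root array):

```shell
# fix_mkinitrd_root <mkinitrd.conf> <root device>
# Comments out the ROOT=probe line and pins the given root device instead.
fix_mkinitrd_root() {
	# \n in the replacement is a GNU sed extension
	sed -i -e "s|^ROOT=probe|#ROOT=probe\nROOT=$2|" "$1"
}

# usage:  fix_mkinitrd_root /etc/mkinitrd/mkinitrd.conf /dev/md1
```
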

If you have already installed a kernel into your new system, you need to re-create its initramfs in order to use it on reboot. Everybody else can skip this step.

dpkg-reconfigure linux-image-2.6.15-1-k7

Install packages in your new system:

export	PACKAGES="udev grub grubconf ssh locales linux-image-k7 libc6-i686 \
		bzip2 zip unzip rar unrar apt-show-versions gnupg \
		libsasl2-modules debian-keyring raidtools2 mdadm less \
		hdparm attr reiserfsprogs xfsprogs iproute discover findutils"
apt-get update
apt-get dist-upgrade
apt-get install $PACKAGES

Last but not least, exit your chroot, install grub, and reboot into your new system:

exit
grub-install --no-floppy --recheck /dev/sda --root-directory=$NEWLIN
shutdown -r now

That's it, you're all set now. Nice! Grab some more chips, lean back, and think of Vegas.

Mount the raid from the recovery console

To quickly mount the raid again later, I use the following code.

Put this in '/etc/mdadm/mdadm.conf':

DEVICE /dev/sda* /dev/sdb*

Then issue these commands:

modprobe ide_generic
modprobe sis5513
modprobe sata_sis
modprobe scsi_mod
modprobe sd_mod
modprobe md_mod
modprobe raid1
modprobe raid0

for I in 0 1 2 3 4 5 6
do
	if [ ! -e /dev/md$I ]; then mknod /dev/md$I b 9 $I; fi
done

mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --assemble --scan

export	NEWLIN="/mnt/linux"
mkdir -p $NEWLIN && mount /dev/md1 $NEWLIN
mkdir -p $NEWLIN/boot && mount /dev/md0 $NEWLIN/boot
mkdir -p $NEWLIN/usr && mount /dev/md2 $NEWLIN/usr
mkdir -p $NEWLIN/var && mount /dev/md3 $NEWLIN/var
mkdir -p $NEWLIN/home && mount /dev/md4 $NEWLIN/home
mkdir -p $NEWLIN/scratch && mount /dev/md5 $NEWLIN/scratch
mkdir -p $NEWLIN/tmp && mount /dev/md6 $NEWLIN/tmp
mkdir -p $NEWLIN/mnt/win32 && mount /dev/sda3 $NEWLIN/mnt/win32
mkdir -p $NEWLIN/mnt/data && mount /dev/sdb3 $NEWLIN/mnt/data

mount -o bind /proc $NEWLIN/proc
mount -o bind /dev $NEWLIN/dev
mount -o bind /sys $NEWLIN/sys
guides/debianraid.txt · Last modified: 2014/04/02 22:39 (external edit)