New server for Colo (RAID-1, Debian with vservers)

Yesterday and today I’ve set up a new server for colocation.
Properties and Features:

  • Celeron 2 GHz
  • 2.5 GB RAM
  • 2×500 GB ordinary SATA disks in RAID-1
  • Debian etch installed on it
  • serial console (both GRUB and Linux itself)
  • everything it will serve and compute nicely sorted and divided into vservers

This was the first time I'd installed a system on RAID and with a serial console.

The first thing to do was to buy the hardware and to wire up and install everything properly.

The next thing to do was to check that everything works in the BIOS. It was nice to see 2.6 GB of RAM and the 2×500 GB disks in the BIOS startup screen right away, without any trouble.

Then it was time to install the operating system. I attached an old DVD drive to the mainboard to boot from CD-ROM.

[Photo: the server during installation]

The chosen disk layout:

  • on both disks, a 200 MB boot partition and a nearly-500 GB system partition, both of type Linux RAID
  • combine the matching partitions on the two disks into md devices
  • put ext3 on the first, 200 MB md device for /boot, and put LVM on the big second md device for all the other partitions
  • logical volumes in LVM:
    • root 512 MB
    • /usr 5 GB
    • /var 3 GB
    • swap 3 GB
    • /tmp 1 GB
    • /home 20 GB
    • /home/mail 10 GB, optimized for Maildir: 1 inode per 4 kB (the "news" options) and mount options noatime, nosuid, noexec (see the sketch after this list)
    • /home/backup 25 GB
    • /var/local/nntpcache 2 GB news cache spool, 1 inode per 4 kB and mount option noatime

    This should be enough for a while. I left almost 400 GB of unallocated space in LVM, which can be used later to extend volumes or to add new ones.
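
As a sketch of what the Maildir and news-spool tuning amounts to, assuming the volume group ends up being called raid1 (as set up below) and the logical volumes are named mail and nntpcache (those names are placeholders):

    # roughly one inode per 4 kB of space ("news"-style density)
    mkfs.ext3 -i 4096 /dev/raid1/mail
    mkfs.ext3 -i 4096 /dev/raid1/nntpcache

    # matching /etc/fstab lines with the mount options mentioned above
    /dev/raid1/mail       /home/mail            ext3  noatime,nosuid,noexec  0  2
    /dev/raid1/nntpcache  /var/local/nntpcache  ext3  noatime                0  2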

To set this up in the Debian installer (a rough command-line equivalent follows the list):

  • First create a DOS disk label on both disks.
  • On each disk, create a 200 MB partition for booting and a second partition of almost 500 GB for LVM, both with partition type fd (Linux RAID autodetect). Give the 200 MB partition the boot flag.
  • Set up RAID: combine the pairs of 200 MB and nearly-500 GB partitions into two RAID-1 devices, md0 and md1.
  • Set up the 200 MB md0 device to be formatted as ext3 with /boot as its mount point.
  • Set up the big md1 device to be used for LVM.
  • Set up LVM: create a volume group on the LVM device (I named it raid1).
  • Create the logical volumes: root, swap, usr, var, and so on.
  • Close the logical volume manager setup.
  • Format all logical volumes properly (I chose ext3) and give them their mount points.
  • Finish partitioning!
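
For reference, building the same layout by hand from a shell would look roughly like this. This is a sketch, not what the installer actually runs, and I'm assuming the disks show up as /dev/sda and /dev/sdb:

    # mirror the partition pairs into md0 (for /boot) and md1 (for LVM)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

    # LVM on the big mirror
    pvcreate /dev/md1
    vgcreate raid1 /dev/md1
    lvcreate -L 512M -n root raid1
    lvcreate -L 5G   -n usr  raid1
    lvcreate -L 3G   -n var  raid1
    lvcreate -L 3G   -n swap raid1
    # ...and so on for tmp, home, mail, backup and nntpcache

    # file systems
    mkfs.ext3 /dev/md0
    mkfs.ext3 /dev/raid1/root
    mkswap    /dev/raid1/swap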

After partitioning and installing the base packages, I chose to install only the standard system set of packages. All other functionality will live in the vservers, which already exist and will be copied over from the old servers.
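
Copying a vserver over will probably look something like this (a sketch using util-vserver's default Debian paths; the vserver name web1 and the host name oldserver are made up, and the network settings in the copied configuration will still need adjusting):

    # pull one vserver's filesystem and configuration from the old host
    rsync -aH --numeric-ids oldserver:/var/lib/vservers/web1/ /var/lib/vservers/web1/
    rsync -aH --numeric-ids oldserver:/etc/vservers/web1/     /etc/vservers/web1/

    # then start it on the new machine
    vserver web1 start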

Unfortunately the Debian installer gave me no choice between boot loaders: I had to use LILO. Strange, since I had left a /boot on md0 especially for use with GRUB. I'd rather have GRUB than LILO, because if I ever forget to run lilo after a kernel upgrade, I'll be stuck with an unbootable server and will need to travel to the colocation facility to fix things up. GRUB will always work over the serial console, even if it can't find a kernel, because kernels can be loaded dynamically from the GRUB CLI.
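
That last point is the whole attraction: even with a broken menu, a kernel can be booted by hand over the serial line. With GRUB legacy it looks roughly like this, where the kernel version and the volume names are placeholders for whatever is actually installed:

    grub> root (hd0,0)
    grub> kernel /vmlinuz-2.6.18-5-686 root=/dev/mapper/raid1-root ro console=ttyS0,9600n8
    grub> initrd /initrd.img-2.6.18-5-686
    grub> boot

Here (hd0,0) is the first partition of the first disk, i.e. one half of the md0 mirror that holds /boot.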

After the reboot I noticed two things:

  • The installer didn't use my separate /boot partition at all, although I'm sure I set up an ext3-formatted /boot file system for it in the installer. The md0 device was left ext3-formatted by the installer, but unused. A bug somewhere?
  • The RAID arrays still had to be synchronized, according to /proc/mdstat, and it took hours… I just left the server running without a reboot until this finished, since I don't know whether rebooting earlier would slow down or restart the process… (a quick way to check, and to move /boot onto md0 by hand, is sketched below)
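
For the record, checking the resync and putting md0 into service as /boot by hand could look roughly like this (device names as above; this is my assumption about the fix, not what the installer should have done):

    # watch the mirror rebuild
    cat /proc/mdstat
    mdadm --detail /dev/md1

    # move the existing /boot contents onto md0 and mount it for real
    mount /dev/md0 /mnt
    cp -a /boot/* /mnt/
    umount /mnt
    echo '/dev/md0  /boot  ext3  defaults  0  2' >> /etc/fstab
    mount /boot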

The next thing to do is to install GRUB and get it working with the serial console. More on this later…
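
For GRUB legacy plus a login on the serial port, the configuration will probably boil down to something like this (port ttyS0 and 9600 baud are assumptions for the example):

    # /boot/grub/menu.lst: let GRUB talk to the first serial port
    serial --unit=0 --speed=9600
    terminal --timeout=5 serial console

    # kernel command line: send console output to the serial port as well
    # e.g. root=/dev/mapper/raid1-root ro console=tty0 console=ttyS0,9600n8

    # /etc/inittab: a login getty on the serial port
    T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100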

