Archive for the ‘raid’ Category

Lucky me, I need to upgrade yet another machine, and it’s my email server. LVM (which I am not a big fan of) takes the whole drive by default, so now I want to shrink the volume to be much smaller, then install the new operating system on the same drive (in some of the freed space).

Quick Answer:
1. boot rescue cd, don’t mount drives
2. run:

lvm vgchange -a y
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs -f /dev/VolGroup00/LogVol00 100G
lvm lvreduce -L100G /dev/VolGroup00/LogVol00
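One point worth stressing about the order above: the filesystem must be shrunk *before* the logical volume, and it must end up no larger than the new LV. A minimal sanity-check sketch; the block count and block size here are illustrative values, as if read from `dumpe2fs -h /dev/VolGroup00/LogVol00`:

```shell
# Illustrative numbers; on the real box read them from
#   dumpe2fs -h /dev/VolGroup00/LogVol00
fs_blocks=26214400            # hypothetical ext3 block count after resize2fs
block_size=4096               # typical ext3 block size
new_lv_bytes=$((100 * 1024 * 1024 * 1024))   # the 100G target for lvreduce

fs_bytes=$((fs_blocks * block_size))
if [ "$fs_bytes" -le "$new_lv_bytes" ]; then
    echo "safe to lvreduce"
else
    echo "DO NOT lvreduce: filesystem is larger than the target LV"
fi
```

If that check fails, lvreduce would chop the end off a live filesystem, so it’s worth the thirty seconds.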

More Detail:
First I looked to see how much space I’m using.

[root@au1 c]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 297673144  49140752 233167480  18% /
/dev/sda1               101086     61896     33971  65% /boot
tmpfs                  3109444         0   3109444   0% /dev/shm

so around 50G, kind of a shame because I normally allocate 50G for the installed OS. No worries, I decide to go with 100G. (It’s a 250G drive I think) – hmm… TODO: check it’s not a 300G or 500G drive. Damn, I think it’s 500G…
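The “around 50G” can be read straight off the df output; a quick sketch using the root line from the listing above (and `fdisk -l /dev/sda` would settle the drive-size question):

```shell
# Column 3 of `df -k` is KB used; convert to GB. Sample line from the
# df output above.
df_line='/dev/mapper/VolGroup00-LogVol00 297673144  49140752 233167480  18% /'
used_kb=$(echo "$df_line" | awk '{print $3}')
used_gb=$(echo "$df_line" | awk '{printf "%.1f", $3 / 1024 / 1024}')
echo "used: ${used_gb}G"
```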

Insert a rescue CD into the drive (e.g. the normal install disk; CentOS 5.4 64bit in my case).
At the prompt type: linux rescue
and then choose “Skip” when asked about mounting the drives.

lvm vgchange -a y
e2fsck -f /dev/VolGroup00/LogVol00
resize2fs -p -f /dev/VolGroup00/LogVol00 100G
lvm lvreduce -L100G /dev/VolGroup00/LogVol00

e2fsck and resize2fs will probably take a long time. For me, e2fsck took around 5 minutes; resize2fs is certainly longer than that, and I won’t know how long since I forgot to add the -p flag and I’m about to head out for lunch (Happy Birthday Paul!).

If the logical volume can be unmounted, then you can do all of this without the rescue CD.

after reboot:
[root@au1 c]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 101573920  49124416  48255200  51% /
/dev/sda1               101086     61896     33971  65% /boot
tmpfs                  3109444         0   3109444   0% /dev/shm
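The df output confirms the filesystem is now 100G; the reclaimed space should show up as free extents in the volume group, ready for the new OS install. A sketch parsing a sample `vgdisplay` line (the numbers are invented for illustration; run `vgdisplay VolGroup00` on the real box):

```shell
# Sample "Free PE / Size" line as vgdisplay might print it; values invented.
line='  Free  PE / Size       47872 / 187.00 GB'
free_gb=$(echo "$line" | awk -F/ '{print $3}' | awk '{print $1}')
echo "free in VG: ${free_gb} GB"
```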

I recently upgraded a machine to 64bit CentOS, and now the drives are running crazy slow. hdparm -t /dev/hda showed results like this:

Timing buffered disk reads:   14 MB in  3.18 seconds =   4.40 MB/sec
Timing buffered disk reads:   12 MB in  3.23 seconds =   3.72 MB/sec

And that was on a striped drive! (very slow — it should be ~100MB/sec for 1 drive, ~180MB/sec for a 2-drive stripe).
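For reference, those “should be” figures come from raid0 reads scaling roughly linearly with drive count, minus some overhead. A back-of-envelope sketch, with the per-drive figure assumed:

```shell
single=100    # MB/sec, rough figure for one drive of this vintage
drives=2
# raid0 reads scale roughly linearly, so N x single is the ceiling;
# real-world results land a bit below it.
echo "stripe ceiling: ~$((single * drives)) MB/sec"
```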

I had a similar problem with an SSD, and thought it odd that the drives appeared as /dev/hd?? instead of /dev/sd??. The solution is to stop the kernel probing the IDE interfaces, which you do by adding ide0=noprobe ide1=noprobe to the kernel parameters. So now my entry in /etc/grub.conf looks like:

title CentOS (2.6.18-164.11.1.el5) No Probe
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ ide0=noprobe ide1=noprobe
initrd /boot/initrd-2.6.18-164.11.1.el5.img

When making such a change, check your /etc/fstab to make sure it isn’t going to mount anything by its /dev/hd?? name, since the drives will now appear as /dev/sd??. They may also get renumbered (probably thanks to the CD drive); mine went from /dev/hda -> /dev/sda and /dev/hdc -> /dev/sdb.
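A quick way to spot entries that will break is to grep fstab for /dev/hd. A sketch with sample lines standing in for the real /etc/fstab (on the real box it's just `grep '^/dev/hd' /etc/fstab`):

```shell
# Sample fstab content; LABEL= entries survive the rename, raw hd paths don't.
fstab='LABEL=/        /      ext3 defaults 1 1
/dev/hdc1      /data  ext3 defaults 1 2'
stale=$(echo "$fstab" | grep '^/dev/hd' || true)
if [ -n "$stale" ]; then
    echo "fix these entries: $stale"
else
    echo "no hd entries"
fi
```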

An earlier post shows that I set up a RAID using /dev/hda3 and /dev/hdc1, so I recreated the stripe with:

mdadm --stop /dev/md0
mdadm -A /dev/md0 /dev/sda3 /dev/sdb1
#add entry back into /etc/fstab
mount /dev/md0
echo 'DEVICE /dev/hda3 /dev/hdc1 /dev/sda3 /dev/sdb1'  > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

reboot to test (consider commenting out /dev/md0 in /etc/fstab first).
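Before (and after) rebooting, it’s worth confirming the array actually picked up the renamed members; on the real machine that’s `cat /proc/mdstat` or `mdadm --detail /dev/md0`. A sketch checking a sample mdstat line:

```shell
# Sample /proc/mdstat line; device names as they should appear post-rename.
mdstat='md0 : active raid0 sdb1[1] sda3[0]'
if echo "$mdstat" | grep -q 'sda3' && echo "$mdstat" | grep -q 'sdb1'; then
    result="md0 members look right"
else
    result="unexpected members: $mdstat"
fi
echo "$result"
```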

So after everything is fixed up, hdparm shows much better results.

hdparm -t /dev/sda

Timing buffered disk reads:  320 MB in  3.01 seconds = 106.16 MB/sec
Timing buffered disk reads:  320 MB in  3.00 seconds = 106.50 MB/sec

hdparm -t /dev/md0

Timing buffered disk reads:  572 MB in  3.00 seconds = 190.41 MB/sec
Timing buffered disk reads:  592 MB in  3.01 seconds = 196.91 MB/sec

Certainly that is acceptable!