Archive for the ‘centos’ Category

CentOS 6.2

If you want to upgrade from CentOS 6.0 or 6.1, all you need to do is run “yum update” as root, and then reboot.  The reboot is required because the kernel is updated.
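In other words, something like this (a minimal sketch of exactly what’s described above):

# as root
yum update
# the kernel gets updated, so a reboot is required
reboot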

If you have CentOS 5.x or 4.x, then there is no path for you to update.  For the adventurous, you can certainly get it to work, but you’ll be in for some pain.  Check the release notes for some ideas on where to start: http://wiki.centos.org/Manuals/ReleaseNotes/CentOS6.2

The new installer in the 6.x release is a little easier to use, and the boot times are certainly much better.

If you use netinstall, then the URL containing the centos installation image for 64bit is:

http://mirror.centos.org/centos/6.2/os/x86_64/

And if for some crazy reason you are still using 32bit:

http://mirror.centos.org/centos/6.2/os/i386/

I’m a little at a loss as to why these URLs aren’t in the installer by default.

have fun,

Cameron


Finally! Here is the post from Karanbir:

http://www.karan.org/blog/index.php/2011/07/10/release-for-centos-6-0-i386-and-x86-64

Congrats to all the guys who made that happen. It certainly seemed to be a bit trickier than last time.
It might not be too long before 6.1 also comes out, so if you are thinking of a major upgrade, you might consider waiting (although a 6.0 install will upgrade automatically anyway).

Cameron

The documentation for /etc/cron.d files says that the format is the same as /etc/crontab. And although that’s true, most people are not used to adding the user as an extra field before the command.

So when you use crontab -e, you write:

10 * * * * /opt/myprog

But when you put it in /etc/cron.d/myapp.crontab, you need:

10 * * * * root /opt/myprog

It’s a little annoying, and a note to that effect in the documentation would go a long way.
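For reference, a complete /etc/cron.d/myapp.crontab could look roughly like this (a minimal sketch; the comment and MAILTO lines are optional extras I’ve added, not part of the original example):

# /etc/cron.d/myapp.crontab
# minute hour day-of-month month day-of-week  user  command
MAILTO=root
10 * * * * root /opt/myprog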

Cameron

You can set the size of an individual message queue from code (msgctl() with IPC_SET), but the easiest way is to just set the limit on the kernel directly.

The default size is 64k, so here I set it to around 1MB.

As root run:

echo 1000000 > /proc/sys/kernel/msgmnb

(I added this line to /etc/rc.d/rc.local on CentOS 5.5 so it persists across reboots)

and you can check it with:

cat /proc/sys/kernel/msgmnb
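If you’d rather not lean on rc.local, the same setting can go through sysctl so it survives a reboot (same 1000000 value as above, just a different mechanism):

# apply it now
sysctl -w kernel.msgmnb=1000000
# and keep it after a reboot
echo 'kernel.msgmnb = 1000000' >> /etc/sysctl.conf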

To complete the story, I ran into this problem when I had hundreds of clients simultaneously trying to write to a full queue, and a server trying to read from it. For reasons I didn’t investigate, the server would see the queue as empty, even though it had 11 messages in the queue (use command: ipcs).

So for me, this is a quick work-around as I move to some other queuing system.
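If you want to see the queue state while this is happening, ipcs can show both the per-queue usage and the kernel limits (nothing specific to my setup, just the standard flags):

ipcs -q   # message queues, with used-bytes and message counts
ipcs -l   # the limits, including the max size of queue in bytes (msgmnb)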

# download and unpack fail2ban 0.8.4 (-O keeps the ?use_mirror suffix out of the saved filename)
wget -O fail2ban-0.8.4.tar.bz2 'http://downloads.sourceforge.net/project/fail2ban/fail2ban-stable/fail2ban-0.8.4/fail2ban-0.8.4.tar.bz2?use_mirror=transact'
tar xf fail2ban-0.8.4.tar.bz2
cd fail2ban-0.8.4
# become root for the install
su
python setup.py install
cp files/redhat-initd /etc/init.d/fail2ban
# login shell, so that chkconfig (in /sbin) is on the PATH
su -
chkconfig --add fail2ban
chkconfig fail2ban on
vi /etc/fail2ban/jail.conf

Go through the various sections (e.g. [ssh-iptables]), set enabled = true for the ones you want, and change lines like these (in /etc/fail2ban/jail.conf):

sendmail-whois[name=SSH, dest=you@mail.com, sender=fail2ban@mail.com]
logpath  = /var/log/sshd.log

to your own email address and a sender that works for you; so if you run example.com you might change them to:

sendmail-whois[name=SSH, dest=cameron@example.com, sender=fail2ban@example.com]
logpath  = /var/log/secure

(on CentOS the sshd log is /var/log/secure, so the logpath has to change too)
If you use [sasl-iptables], then change the logpath to /var/log/maillog
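For reference, a finished [ssh-iptables] section ends up looking roughly like this (a sketch based on the stock 0.8.4 jail.conf; the maxretry value and the iptables action line are the defaults as I remember them, so double-check against your own file):

[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=cameron@example.com, sender=fail2ban@example.com]
logpath  = /var/log/secure
maxretry = 5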
And then of course start it (or reboot)

service fail2ban start

you can test the rules with

fail2ban-regex /var/log/secure /etc/fail2ban/filter.d/sshd.conf

The defaults worked fine for me, but you might want to look here for some alternate CentOS sshd rules.
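Once it’s running, fail2ban-client is the easiest way to see whether a jail is actually catching anything (standard fail2ban commands, shown here as a sketch):

fail2ban-client status                # lists the jails that are enabled
fail2ban-client status ssh-iptables   # failed and banned IPs for that jail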

What doesn’t work for me is a rule to ban attacks on my mail server. More on that when I find a good solution.

Links
http://www.sonoracomm.com/support/18-support/228-fail2ban

CentOS 5.5 is out …

So, can you use the 64bit media to upgrade an existing 32bit install in place? Well, the short answer is no. The long answer is yes, but it’s a pain.

I had a fairly basic setup with 32bit CentOS 5.4, inserted the 64bit media, and went through the “upgrade” procedure.  The whole thing took about 5 minutes, and it rebooted just fine (I had to select the 2.6.18-164.el5 option from grub, not the old PAE ones I was using). But the first command:

yum install

failed with the following error:

There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

/usr/lib/python2.4/site-packages/rpm/_rpmmodule.so: wrong ELF class: ELFCLASS32

Please install a package which provides this module, or
verify that the module is installed correctly.

It’s possible that the above module doesn’t match the
current version of Python, which is:
2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

If you cannot solve this problem yourself, please go to
the yum faq at:

http://wiki.linux.duke.edu/YumFaq


Then I noted that hpssd had failed on startup for a similar reason.
“uname -a” presents a promising answer:

-bash-3.2$ uname -a
Linux libero 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
-bash-3.2$

So I think the problem is just with python. After a bit more bashing, there is certainly something wrong, because when I try to force an install I get wrong-architecture warnings:

[root@libero CentOS]# rpm -ivh ./rpm-4.4.2.3-18.el5.x86_64.rpm ./rpm-libs-4.4.2.3-18.el5.x86_64.rpm ./rpm-python-4.4.2.3-18.el5.x86_64.rpm   ./perl-5.8.8-27.el5.x86_64.rpm ./sqlite-3.3.6-5.x86_64.rpm ./elfutils-libelf-0.137-3.el5.x86_64.rpm --force
Preparing...                ########################################### [100%]
package elfutils-libelf-0.137-3.el5.x86_64 is intended for a x86_64 architecture
package sqlite-3.3.6-5.x86_64 is intended for a x86_64 architecture
package perl-5.8.8-27.el5.x86_64 is intended for a x86_64 architecture
package rpm-libs-4.4.2.3-18.el5.x86_64 is intended for a x86_64 architecture
package rpm-4.4.2.3-18.el5.x86_64 is intended for a x86_64 architecture
package rpm-python-4.4.2.3-18.el5.x86_64 is intended for a x86_64 architecture

I’d be interested to hear about your experience; I know it’s possible. Part of the problem is that rpm itself still reports the architecture as i386:

[root@libero CentOS]# rpm --eval '%{_arch}'
i386
[root@libero CentOS]#

Okay, let’s see if we can fix that:

[root@libero CentOS]# rpm -e rpm.i386 rpm-python
error: Failed dependencies:
rpm = 4.4.2.3-18.el5 is needed by (installed) rpm-libs-4.4.2.3-18.el5.i386
rpm is needed by (installed) man-1.6d-1.1.i386
rpm >= 0:4.4.2 is needed by (installed) yum-3.2.22-20.el5.centos.noarch
rpm is needed by (installed) qt4-4.2.1-1.i386
/bin/rpm is needed by (installed) policycoreutils-1.33.12-14.6.el5.i386
rpm-python is needed by (installed) system-config-users-1.2.51-4.el5.noarch
rpm-python is needed by (installed) yum-3.2.22-20.el5.centos.noarch
rpm-python is needed by (installed) system-config-network-tui-1.3.99.18-1.el5.noarch
[root@libero CentOS]# rpm --nodeps -e rpm.i386 rpm-python
[root@libero CentOS]# rpm -ivh ./rpm-4.4.2.3-18.el5.x86_64.rpm
bash: /bin/rpm: No such file or directory
[root@libero CentOS]#

Ha ha ha .. no surprise there. I tried just upgrading again, and that didn’t work. So I booted into rescue mode (put in the CD and type “linux rescue” at the boot: prompt).
I mounted the drive, copied the installer runtime to the hard drive, and then copied a number of items back into their appropriate places:

# from the rescue environment
mkdir -p /mnt/sysimage/media/cdrom
mount /dev/scd0 /mnt/sysimage/media/cdrom
rsync -a /mnt/runtime /mnt/sysimage/runtime
chroot /mnt/sysimage
# now inside the chroot: restore a 64bit rpm binary and the libraries it needs
cp /runtime/usr/bin/rpm /usr/bin
cp /runtime/usr/lib64/librpm* /usr/lib64
cp /runtime/usr/lib64/libsqlite* /usr/lib64
cp /runtime/usr/lib64/libelf* /usr/lib64

At this point I rebooted.

# as root
mount /dev/scd0 /media/cdrom
cp -r /runtime/usr/lib/rpm /usr/lib/
/runtime/usr/bin/rpm --rebuilddb
# edit /etc/rpm/platform to be x86_64-redhat-linux:
echo 'x86_64-redhat-linux' > /etc/rpm/platform
[root@libero cdrom]# cd /media/cdrom/CentOS/
[root@libero CentOS]# /runtime/usr/bin/rpm -ivh ./rpm-4.4.2.3-18.el5.x86_64.rpm  rpm-libs-4.4.2.3-18.el5.x86_64.rpm elfutils-libelf-0.137-3.el5.x86_64.rpm  ./sqlite-3.3.6-5.x86_64.rpm
Preparing...                ########################################### [100%]
1:sqlite                 ########################################### [ 25%]
2:elfutils-libelf        ########################################### [ 50%]
3:rpm-libs               ########################################### [ 75%]
4:rpm                    ########################################### [100%]
[root@libero CentOS]#

I thought it prudent to rebuild the rpm database again

[root@libero CentOS]# rpm --rebuilddb
[root@libero CentOS]#

I changed the rpm query output format to show the architecture (in /etc/rpm/macros):

[root@libero CentOS]# cat /etc/rpm/macros
%_query_all_fmt      %%{name}-%%{version}-%%{release}.%%{arch}
%_query_fmt      %%{name}-%%{version}-%%{release}.%%{arch}

Okay .. let’s see how we go:

[root@libero CentOS]# yum update
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

No module named rpm

Please install a package which provides this module, or
verify that the module is installed correctly.

It’s possible that the above module doesn’t match the
current version of Python, which is:
2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

If you cannot solve this problem yourself, please go to
the yum faq at:

http://wiki.linux.duke.edu/YumFaq

[root@libero CentOS]# rpm -qa | grep python
gnome-python2-canvas-2.16.0-1.fc6.i386
python-numeric-23.7-2.2.2.i386
python-2.4.3-27.el5.i386
gnome-python2-gnomeprint-2.16.0-3.el5.i386
gnome-python2-libegg-2.14.2-6.el5.i386
gnome-python2-bonobo-2.16.0-1.fc6.i386
gnome-python2-gconf-2.16.0-1.fc6.i386
gnome-python2-gnomevfs-2.16.0-1.fc6.i386
python-elementtree-1.2.6-5.i386
gnome-python2-desktop-2.16.0-3.el5.i386
gnome-python2-extras-2.14.2-6.el5.i386
libxml2-python-2.6.26-2.1.2.8.i386
python-sqlite-1.1.7-1.2.1.i386
python-2.4.3-27.el5.x86_64
notify-python-0.1.0-3.fc6.i386
python-ldap-2.2.0-2.1.i386
libselinux-python-1.33.4-5.5.el5.i386
python-urlgrabber-3.1.0-5.el5.noarch
gnome-python2-applet-2.16.0-3.el5.i386
gnome-python2-gtksourceview-2.16.0-3.el5.i386
gnome-python2-2.16.0-1.fc6.i386
audit-libs-python-1.7.13-2.el5.i386
python-iniparse-0.2.3-4.el5.noarch
gamin-python-0.1.7-8.el5.i386
dbus-python-0.70-9.el5_4.i386

Certainly at this point, I’m running x86_64 (at least rpm and the kernel are).

rpm --erase rpm-libs-4.4.2.3-18.el5.i386
[root@libero CentOS]# yum install gcc
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

No module named rpm

Please install a package which provides this module, or
verify that the module is installed correctly.

It’s possible that the above module doesn’t match the
current version of Python, which is:
2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

If you cannot solve this problem yourself, please go to
the yum faq at:

http://wiki.linux.duke.edu/YumFaq

And so I attempt to fix it…

[root@libero CentOS]# rpm -ivh ./rpm-python-4.4.2.3-18.el5.x86_64.rpm
Preparing... ########################################### [100%]
1:rpm-python ########################################### [100%]
[root@libero CentOS]#
[root@libero CentOS]# rpm --erase expat --nodeps
[root@libero CentOS]# rpm -ivh ./python-elementtree-1.2.6-5.x86_64.rpm ./python-elementtree-1.2.6-5.x86_64.rpm python-ldap-2.2.0-2.1.x86_64.rpm python-numeric-23.7-2.2.2.x86_64.rpm python-sqlite-1.1.7-1.2.1.x86_64.rpm ./expat-1.95.8-8.2.1.x86_64.rpm
Preparing... ########################################### [100%]
1:expat ########################################### [ 17%]
2:python-elementtree ########################################### [ 33%]
3:python-elementtree ########################################### [ 50%]
4:python-ldap ########################################### [ 67%]
5:python-numeric ########################################### [ 83%]
6:python-sqlite ########################################### [100%]
[root@libero CentOS]# rpm --erase python-2.4.3-27.el5.i386 python-elementtree-1.2.6-5.i386 python-ldap-2.2.0-2.1.i386 python-numeric-23.7-2.2.2.i386 python-sqlite-1.1.7-1.2.1.i386
error: Failed dependencies:
libpython2.4.so.1.0 is needed by (installed) gnome-python2-gnomevfs-2.16.0-1.fc6.i386
libpython2.4.so.1.0 is needed by (installed) rhythmbox-0.11.6-4.el5.i386
libpython2.4.so.1.0 is needed by (installed) libsemanage-1.9.1-4.4.el5.i386
[root@libero CentOS]# rpm --erase python-2.4.3-27.el5.i386 python-elementtree-1.2.6-5.i386 python-ldap-2.2.0-2.1.i386 python-numeric-23.7-2.2.2.i386 python-sqlite-1.1.7-1.2.1.i386 --nodeps
[root@libero CentOS]# rpm -ivh ./sqlite-devel-3.3.6-5.x86_64.rpm
Preparing... ########################################### [100%]
1:sqlite-devel ########################################### [100%]
[root@libero CentOS]# rpm -ivh ./yum-metadata-parser-1.1.2-3.el5.centos.x86_64.rpm ./libxml2-2.6.26-2.1.2.8.x86_64.rpm
Preparing... ########################################### [100%]
1:libxml2 ########################################### [ 50%]
2:yum-metadata-parser ########################################### [100%]
[root@libero CentOS]# yum update


yay .. finally :)

I was getting a lot of errors like this:

-bash-3.2$ yum update
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

/usr/lib/python2.4/site-packages/_sqlitecache.so: wrong ELF class: ELFCLASS32

Please install a package which provides this module, or
verify that the module is installed correctly.

It’s possible that the above module doesn’t match the
current version of Python, which is:
2.4.3 (#1, Sep 3 2009, 15:37:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

If you cannot solve this problem yourself, please go to
the yum faq at:

http://wiki.linux.duke.edu/YumFaq

-bash-3.2$

For me, that problem was related to some of the yum and python files not being upgraded to 64bit. So I spent a fair bit of time dorking around, upgrading x86_64 packages from the cdrom and then deleting (sometimes with --nodeps) the i386/i686 ones.
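With the query format from earlier, it’s easy to list what 32bit packages are still hanging around (a sketch; the grep just matches the i386/i686 suffixes):

rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep '\.i[36]86$' | sort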

So now the question is, would I recommend this? No! But if you are in a bind, and spending a day or two sorting out the problems is less painful than a fresh install, then it looks like it can be made to work.

But, after all that, I’m off to reinstall (but it was a fun exercise).

mdadm is a pain: it doesn’t record the RAID settings automatically, so after you set everything up you have to remember to save the mdadm.conf file yourself. Anyway, I forgot to do that, and on reboot the machine hung, saying /dev/md0 was hosed.

So here is how I recovered it.

Originally, my stripe was created with:

mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/hda3 /dev/hdc1

and so I was able to recreate the stripe using:

mdadm -A /dev/md0 /dev/hda3 /dev/hdc1

and then mount it with:

mount /dev/md0 /www -t ext3

so then I saved the /etc/mdadm.conf file with:

echo 'DEVICE /dev/hda3 /dev/hdc1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

So before adding it back into my /etc/fstab, I reboot and check that I can mount it:

mount /dev/md0 /www -t ext3

and if that works (it did), I add it back into /etc/fstab:

/dev/md0  /www   ext3   defaults   1 2

and reboot again.
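If you want to double-check the array before trusting it again, a few read-only commands help (standard mdadm usage rather than anything from my original notes):

cat /proc/mdstat           # kernel view: md0 should be active raid0 with hda3 and hdc1
mdadm --detail /dev/md0    # level, size and member devices of the assembled array
mdadm --examine /dev/hdc1  # reads the raid superblock straight off a member partition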

I recently installed CentOS remotely on a machine in the UK.  Now I want to configure the drives to run RAID 0 for /www.

1. create the partitions (and possibly reboot)
2. mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/hda3 /dev/hdc1
3. mkfs.ext3 /dev/md0
4. create /etc/mdadm.conf:
echo 'DEVICE /dev/hda3 /dev/hdc1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
5. create the mount point: mkdir /www
6. add a line to /etc/fstab: /dev/md0                /www                       ext3    defaults        1 1
7. mount /www

I have two drives /dev/hda and /dev/hdc.  /dev/hda has a swap and ext3 partition (the os) on it already.

[root@uk1 conf.d]# ls -la /dev/hd*
brw-r----- 1 root disk  3, 0 Mar  7 11:21 /dev/hda
brw-r----- 1 root disk  3, 1 Mar  7 11:21 /dev/hda1
brw-r----- 1 root disk  3, 2 Mar  7 11:21 /dev/hda2
brw-r----- 1 root disk 22, 0 Mar  7 11:21 /dev/hdc

Create some partitions with fdisk:

[root@uk1 /]# fdisk /dev/hda

Command (m for help): p

Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1        6374    51199123+  83  Linux
/dev/hda2            6375        7011     5116702+  82  Linux swap / Solaris

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7012-30401, default 7012): (hit Enter)
Using default value 7012
Last cylinder or +size or +sizeM or +sizeK (7012-30401, default 30401): +200G
Value out of range.
Last cylinder or +size or +sizeM or +sizeK (7012-30401, default 30401): +180G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@uk1 /]# fdisk /dev/hdc

The number of cylinders for this disk is set to 30401.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-30401, default 1): (hit Enter)
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-30401, default 30401): +180G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@uk1 /]# ls /dev/hd*
/dev/hda  /dev/hda1  /dev/hda2  /dev/hdc  /dev/hdc1

We notice that /dev/hda3 is not showing, so we can’t create the RAID array until it’s available, so we reboot.

[root@uk1 cameron]# ls /dev/hd*
/dev/hda  /dev/hda1  /dev/hda2  /dev/hda3  /dev/hdc  /dev/hdc1

And so we continue…

mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/hda3 /dev/hdc1

Which outputs:

mdadm: /dev/hdc1 appears to contain an ext2fs file system
    size=80192K  mtime=Sat Mar  6 13:45:53 2010
Continue creating array? y
mdadm: array /dev/md0 started.
[root@uk1 ~]#

Now make the filesystem (mkfs.ext3 /dev/md0, step 3 from the list above) and add the following line to /etc/fstab:

 /dev/md0                /www                       ext3    defaults        1 1

then:

mkdir /www
mount /www

done.
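And to convince yourself it stuck (just the obvious checks, assuming the mount above worked):

mount | grep md0   # should show /dev/md0 on /www type ext3
df -h /www         # the stripe should be roughly the two 180G partitions combined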