Wednesday, 19 August 2009

locale configuration issue on Ubuntu 9.04


I've rented a new server at dedibox.fr, and the Ubuntu Server image is tuned by the dedibox team...



On the Ubuntu Server 9.04 64-bit/English version, there's a locale configuration issue.
Whenever you run aptitude update or the 'locale' command, you get warnings like this:

thomas@sd1:~$ locale
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
LANG=en_US.ISO-8859-15
LANGUAGE=en_US:en:en_GB:en
LC_CTYPE="en_US.ISO-8859-15"
LC_NUMERIC="en_US.ISO-8859-15"
LC_TIME="en_US.ISO-8859-15"
LC_COLLATE="en_US.ISO-8859-15"
LC_MONETARY="en_US.ISO-8859-15"
LC_MESSAGES="en_US.ISO-8859-15"
LC_PAPER="en_US.ISO-8859-15"
LC_NAME="en_US.ISO-8859-15"
LC_ADDRESS="en_US.ISO-8859-15"
LC_TELEPHONE="en_US.ISO-8859-15"
LC_MEASUREMENT="en_US.ISO-8859-15"
LC_IDENTIFICATION="en_US.ISO-8859-15"
LC_ALL=


or

thomas@sd1:~$ sudo  dpkg-reconfigure locales
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en:en_GB:en",
LC_ALL = (unset),
LANG = "en_US.ISO-8859-15"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Generating locales...
en_AU.UTF-8... up-to-date
en_BW.UTF-8... up-to-date
en_CA.UTF-8... up-to-date
en_DK.UTF-8... up-to-date
en_GB.UTF-8... up-to-date
en_HK.UTF-8... up-to-date
en_IE.UTF-8... up-to-date
en_IN.UTF-8... up-to-date
en_NG.UTF-8... up-to-date
en_NZ.UTF-8... up-to-date
en_PH.UTF-8... up-to-date
en_SG.UTF-8... up-to-date
en_US.ISO-8859-1... up-to-date
en_US.UTF-8... up-to-date
en_ZA.UTF-8... up-to-date
en_ZW.UTF-8... up-to-date
Generation complete.



I asked support for help and got a refusal, the argument being that software is not covered by the support... But since you can only install one of the few operating systems provided by dedibox, which have been tuned/preconfigured for their servers, I feel they are responsible if it doesn't work out of the box. (Of course, once you install new software or start editing the configuration, it becomes your responsibility.)

Anyway, with dedibox the support is so lame: it takes ages to answer your request (10 days to get a KVM over IP on a production system that was stuck at boot time), only to not solve your issue in the end.

One can say that there is NO support by dedibox at all... One would be absolutely right!

So, build a redundant architecture if you decide to play with dedibox servers.

Anyway....


What is wrong here is that the locale currently configured on the system (en_US.ISO-8859-15) does not exist on the system (look at the generated locales: you have en_US.ISO-8859-1 but not en_US.ISO-8859-15).
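
You can double-check this by listing the locales actually compiled on the system (locale -a usually prints them in a normalized form, e.g. en_US.utf8):

locale -a | grep -i en_us

If no ISO-8859-15 variant shows up while the environment still points to en_US.ISO-8859-15, that mismatch is exactly what produces the errors above.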

So to change the current locale of the system do this :

vi /etc/default/locale

and change it to the following (en_US.UTF-8 is in the list of generated locales above, so it is safe to use):

LANGUAGE="en_US:en:en_GB:en"
LANG="en_US.UTF-8"

and then

vi /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
LANGUAGE="en_US:en:en_GB:en"
LANG="en_US.UTF-8"

then run this command :

thomas@sd1:~$ sudo  dpkg-reconfigure locales
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = "en_US:en:en_GB:en",
LC_ALL = (unset),
LANG = "en_US.ISO-8859-15"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Generating locales...
en_AU.UTF-8... up-to-date
en_BW.UTF-8... up-to-date
en_CA.UTF-8... up-to-date
en_DK.UTF-8... up-to-date
en_GB.UTF-8... up-to-date
en_HK.UTF-8... up-to-date
en_IE.UTF-8... up-to-date
en_IN.UTF-8... up-to-date
en_NG.UTF-8... up-to-date
en_NZ.UTF-8... up-to-date
en_PH.UTF-8... up-to-date
en_SG.UTF-8... up-to-date
en_US.ISO-8859-1... up-to-date
en_US.UTF-8... up-to-date
en_ZA.UTF-8... up-to-date
en_ZW.UTF-8... up-to-date
Generation complete.

You still get the warnings, which is normal: when the command runs, the shell environment still carries the old, broken locale settings.
To complete the reconfiguration, log out and back in, or simply reboot.
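
If you just want to get rid of the warnings in the current shell session right away, you can also export the corrected values by hand (this only affects the running shell and is not persistent):

export LANGUAGE="en_US:en:en_GB:en"
export LANG=en_US.UTF-8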

after reboot, type

thomas@sd1:~$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US:en:en_GB:en
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=

The current locale is now set to en_US.UTF-8 and there are no more warnings.

Note: to generate a particular locale you can use the following command:

locale-gen en_US.UTF-8 en_GB.UTF-8


You might also find localepurge useful to clean up unused locales:
aptitude install localepurge

Monday, 10 August 2009

Add 2 disks for a RAID 1 setup


On the dev server I mentioned previously, we added two 500GB hard drives in order to get 500GB of RAID 1 network storage.

Here is the setup procedure:

Check the disks

sudo fdisk -l

thomas@dev:~$ sudo fdisk -l

Disk /dev/sda: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00059bfb

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          36      289138+  fd  Linux raid autodetect
/dev/sda2              37         218     1461915   fd  Linux raid autodetect
/dev/sda3             219         826     4883760   fd  Linux raid autodetect
/dev/sda4             827       10011    73778512+  fd  Linux raid autodetect

Disk /dev/sdb: 82.3 GB, 82348277760 bytes
255 heads, 63 sectors/track, 10011 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d96f2

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          36      289138+  fd  Linux raid autodetect
/dev/sdb2              37         218     1461915   fd  Linux raid autodetect
/dev/sdb3             219         826     4883760   fd  Linux raid autodetect
/dev/sdb4             827       10011    73778512+  fd  Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xffffffff

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       60801   488384001   83  Linux

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xffffffff

Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       60801   488384001   83  Linux

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x44fdfe06


will list all drives and partitions.

Here the drives are /dev/sdc and /dev/sdd
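
Before touching them with fdisk, it is worth double-checking that these really are the new, empty drives, since repartitioning destroys whatever is on them. One quick sanity check (the exact by-id names depend on your hardware):

ls -l /dev/disk/by-id/ | grep -E 'sdc|sdd'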

Remove the existing partitions

thomas@dev:~$ sudo fdisk /dev/sdd

The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xffffffff

Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       60801   488384001   83  Linux

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


and do the same for the other disk.

Create RAID Partition

Create a partition for the RAID array and change its type to Linux raid autodetect (fd).

thomas@dev:~$ sudo fdisk /dev/sdc

The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-60801, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-60801, default 60801):
Using default value 60801

Command (m for help): p

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xffffffff

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       60801   488384001   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
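
Do the same on /dev/sdd so that /dev/sdd1 also gets type fd (Linux raid autodetect). Alternatively, once /dev/sdc is partitioned, you can copy its partition table to the second disk in one go with sfdisk (a shortcut; double-check the device names before running it):

sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdd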


Choose a number for the md device

thomas@dev:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[1] sda4[0]
73778432 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
4883648 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
1461824 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
289024 blocks [2/2] [UU]

unused devices: <none>


here the number 4 is free... so it will be /dev/md4




Create the array
(ignore the warning below: I had already created a filesystem on the partitions before creating the md device, which was a mistake)
thomas@dev:~$ sudo mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=488384000K  mtime=Thu Jan  1 01:00:00 1970
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=488384000K  mtime=Thu Jan  1 01:00:00 1970
Continue creating array? y
mdadm: array /dev/md4 started.
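
To avoid the "appears to contain an ext2fs file system" warning in the first place, you can wipe the old filesystem signature from each partition before creating the array (this destroys whatever was on those partitions, so only do it on the new, empty disks):

sudo dd if=/dev/zero of=/dev/sdc1 bs=1M count=1
sudo dd if=/dev/zero of=/dev/sdd1 bs=1M count=1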


thomas@dev:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : active raid1 sdd1[1] sdc1[0]
488383936 blocks [2/2] [UU]
[>....................]  resync =  0.1% (821888/488383936) finish=98.8min speed=82188K/sec

md3 : active raid1 sda4[0] sdb4[1]
73778432 blocks [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
4883648 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
1461824 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
289024 blocks [2/2] [UU]

unused devices: <none>


With cat /proc/mdstat you can follow the progress of the initial resync.

Create the filesystem

sudo mkfs.ext4 /dev/md4

mke2fs 1.41.4 (27-Jan-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
61054976 inodes, 244189984 blocks
12209499 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
 102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
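
As the last lines suggest, the periodic checks can be tuned with tune2fs; for example, to disable both the mount-count and the interval based checks on this data volume (optional, and a matter of taste):

sudo tune2fs -c 0 -i 0 /dev/md4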



Mount & Check

Now we can mount it and check that we can use the filesystem (even if the sync is not finished yet... I won't copy several gigs onto it though, just do a simple touch):

sudo mkdir /mnt/md4-sdc-sdd-500GB-RAID1
sudo mount /dev/md4 /mnt/md4-sdc-sdd-500GB-RAID1
cd /mnt/md4-sdc-sdd-500GB-RAID1
sudo touch test
sudo rm test


It seems all right...

Updating /etc/mdadm/mdadm.conf


In order to auto-reassemble the new RAID device on reboot, you need to persist its configuration in /etc/mdadm/mdadm.conf.

The following commands make a backup copy and append the configuration of the new array to mdadm.conf:
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.origin
sudo mdadm --misc --detail --brief /dev/md4 | sudo tee -a /etc/mdadm/mdadm.conf


here is the result :
thomas@dev:/mnt/md4-sdc-sdd-500GB-RAID1$ tail /etc/mdadm/mdadm.conf

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=8f382586:5dd86ec3:6e6f1681:980b2774
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=6c35c2c6:96c41f8a:596e7711:e9498056
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=e35a7ea9:d90afbda:526e888e:d5c311af
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=6c7f5b4f:27e96317:a44ec9ef:2b057faa

# This file was auto-generated on Thu, 04 Sep 2008 21:25:35 +0000
# by mkconf $Id$
ARRAY /dev/md4 level=raid1 num-devices=2 metadata=00.90 UUID=25b3acb8:99375aa9:2eeb9ed0:acf0d8fd
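
On Ubuntu, a copy of mdadm.conf is also embedded in the initramfs, so after changing the file it is probably worth refreshing it as well (the kernel autodetection of the type-fd partitions should assemble the array at boot anyway):

sudo update-initramfs -u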




Updating the fstab

Now, in order to automatically mount the filesystem on /dev/md4 at boot, you need to update the fstab.

The UUID you find in mdadm.conf is not the one to put in /etc/fstab: it identifies the RAID array itself (it comes from the md superblock), whereas fstab expects the UUID of the filesystem that mkfs wrote onto /dev/md4.
The UUID you need for /etc/fstab can be found here:

thomas@dev:/mnt/md4-sdc-sdd-500GB-RAID1$ ls -l /dev/disk/by-uuid/ | grep md
lrwxrwxrwx 1 root root  9 2009-08-10 23:42 242d3f42-2cdb-4626-989b-70b4e25ed5d5 -> ../../md2
lrwxrwxrwx 1 root root  9 2009-08-10 23:42 468fca8d-c048-41ec-a1d8-172afd8d081c -> ../../md0
lrwxrwxrwx 1 root root  9 2009-08-10 23:42 a670d86f-6c57-48b7-ba92-13ecf14cc8c2 -> ../../md1
lrwxrwxrwx 1 root root  9 2009-08-10 23:42 bf44c30e-7c04-41ca-b7c8-610d279a4135 -> ../../md3
lrwxrwxrwx 1 root root  9 2009-08-10 23:55 cc55dd26-a913-43c1-a67a-479c4d6eb29c -> ../../md4
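
Another way to read the same filesystem UUID is blkid:

sudo blkid /dev/md4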


sudo vi /etc/fstab


append the following line:

# /dev/md4
UUID=cc55dd26-a913-43c1-a67a-479c4d6eb29c /mnt/md4-sdc-sdd-500GB-RAID1 ext3     defaults,relatime        1       1


Be sure to replace the UUID with your own (ls -l /dev/disk/by-uuid/ | grep md).

Now we will check that it's correct :
unmount /dev/md4 first :

cd /mnt
sudo umount /dev/md4

then try to remount all partitions defined in /etc/fstab:

thomas@dev:/mnt$ sudo mount -a


check if it's mounted :

thomas@dev:/mnt$ sudo mount -l
/dev/md3 on / type ext3 (rw,relatime,errors=remount-ro) [/]
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
varrun on /var/run type tmpfs (rw,nosuid,mode=0755)
varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
lrm on /lib/modules/2.6.27-14-server/volatile type tmpfs (rw,mode=755)
/dev/md0 on /boot type ext2 (rw,relatime) [/boot]
/dev/md2 on /var type ext3 (rw,relatime) [/var]
securityfs on /sys/kernel/security type securityfs (rw)
/dev/md4 on /mnt/md4-sdc-sdd-500GB-RAID1 type ext3 (rw,relatime) []

thomas@dev:/mnt$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/md3               70G   61G  6.3G  91% /
tmpfs                 251M     0  251M   0% /lib/init/rw
varrun                251M  292K  251M   1% /var/run
varlock               251M     0  251M   0% /var/lock
udev                  251M  2.7M  249M   2% /dev
tmpfs                 251M     0  251M   0% /dev/shm
lrm                   251M  2.2M  249M   1% /lib/modules/2.6.27-14-server/volatile
/dev/md0              265M   54M  197M  22% /boot
/dev/md2              4.7G  1.1G  3.4G  23% /var
/dev/md4              459G  199M  435G   1% /mnt/md4-sdc-sdd-500GB-RAID1


Now the RAID is configured ;)


References used for this blog post:
http://www.linuxpedia.fr/doku.php/expert/mdadm
http://www.linuxpedia.fr/doku.php/expert/systeme_conversion_raid1

RAID on Linux - re-add a drive



Today, one of the developers at 123Monsite.com messed up one of the servers, and here is what happened:


They configured the fake hardware RAID on the motherboard while the machine actually uses software RAID... Noticing it didn't work, they removed a drive to boot on a degraded array... and started looking for what was wrong... That's when I interrupted the massacre.

This led to the following: the partitions of the removed drive were marked as removed (even after the drive had been replugged and the fake hardware RAID disabled):

thomas@dev:~$ sudo mdadm --query --detail /dev/md3
/dev/md3:
Version : 00.90
Creation Time : Thu Sep  4 23:15:23 2008
Raid Level : raid1
Array Size : 73778432 (70.36 GiB 75.55 GB)
Used Dev Size : 73778432 (70.36 GiB 75.55 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Mon Aug 10 22:45:33 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : 6c7f5b4f:27e96317:a44ec9ef:2b057faa
Events : 0.296

Number   Major   Minor   RaidDevice State
0       8        4        0      active sync   /dev/sda4
1       0        0        1      removed
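
Before re-adding anything, it can be reassuring to check that the replugged partition still carries an md superblock matching this array; the UUID reported by --examine should be the same as the one shown by --detail above:

sudo mdadm --examine /dev/sdb4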


In order to re-activate the missing drive on each mdX, I ran the following command:

sudo mdadm --manage -a /dev/md3 /dev/sdb4
mdadm: re-added /dev/sdb4


which re-enables the partition on the sdb disk and, where needed, triggers a rebuild of the out-of-date partition:

thomas@dev:~$ sudo mdadm --query --detail /dev/md3
/dev/md3:
Version : 00.90
Creation Time : Thu Sep  4 23:15:23 2008
Raid Level : raid1
Array Size : 73778432 (70.36 GiB 75.55 GB)
Used Dev Size : 73778432 (70.36 GiB 75.55 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Mon Aug 10 22:56:35 2009
State : clean, degraded
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

UUID : 6c7f5b4f:27e96317:a44ec9ef:2b057faa
Events : 0.300

Number   Major   Minor   RaidDevice State
0       8        4        0      active sync   /dev/sda4
2       8       20        1      spare rebuilding   /dev/sdb4

thomas@dev:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[2] sda4[0]
73778432 blocks [2/1] [U_]
[===>.................]  recovery = 16.8% (12412928/73778432) finish=18.6min speed=54688K/sec

md2 : active raid1 sdb3[1] sda3[0]
4883648 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
1461824 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
289024 blocks [2/2] [UU]

unused devices: <none>
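
The rebuild runs in the background; to keep an eye on it until md3 shows [UU] again, something like this does the job (refreshes every 10 seconds):

watch -n 10 cat /proc/mdstat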