RAID on Linux - re-adding a drive
Today, one of the developers at 123Monsite.com messed up a server, and here is how it went:
They configured fake hardware RAID in the motherboard BIOS even though the machine runs software RAID... noticing it didn't work, they removed a drive to boot on a degraded array... and started digging around for what was wrong... that's when I interrupted the massacre.
The end result: the partitions of the removed drive were marked as removed, even after the drive had been plugged back in and the fake RAID disabled:
thomas@dev:~$ sudo mdadm --query --detail /dev/md3
/dev/md3:
        Version : 00.90
  Creation Time : Thu Sep  4 23:15:23 2008
     Raid Level : raid1
     Array Size : 73778432 (70.36 GiB 75.55 GB)
  Used Dev Size : 73778432 (70.36 GiB 75.55 GB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon Aug 10 22:45:33 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : 6c7f5b4f:27e96317:a44ec9ef:2b057faa
         Events : 0.296

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       0        0        1      removed
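Rather than querying each mdX one by one, you can spot every degraded array at a glance: in /proc/mdstat, a missing raid1 member shows up as an underscore in the [UU] status field. A quick sketch (standard /proc/mdstat output, nothing specific to this box):

# Print each degraded array and its status line;
# a healthy two-disk raid1 shows [UU], a degraded one shows [U_] or [_U]
grep -B1 '\[.*_.*\]' /proc/mdstat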
To re-activate the missing drive on each mdX array, I ran the following command (shown here for md3):
sudo mdadm --manage -a /dev/md3 /dev/sdb4
mdadm: re-added /dev/sdb4
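The same command has to be repeated for md0, md1 and md2. Since on this box each mdN pairs sdaN+1 with sdbN+1 (see the mdstat output below), the four re-adds fit in a small shell loop; a sketch, assuming that mapping holds:

# Re-add each sdb partition to its matching array
# (md0..md3 pair with sdb1..sdb4 here - check /proc/mdstat before trusting this)
for i in 0 1 2 3; do
    sudo mdadm --manage -a /dev/md$i /dev/sdb$((i+1))
done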
This re-enabled the sdb partitions and, where needed, triggered a rebuild of the out-of-sync one:
thomas@dev:~$ sudo mdadm --query --detail /dev/md3
/dev/md3:
        Version : 00.90
  Creation Time : Thu Sep  4 23:15:23 2008
     Raid Level : raid1
     Array Size : 73778432 (70.36 GiB 75.55 GB)
  Used Dev Size : 73778432 (70.36 GiB 75.55 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Mon Aug 10 22:56:35 2009
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           UUID : 6c7f5b4f:27e96317:a44ec9ef:2b057faa
         Events : 0.300

    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync        /dev/sda4
       2       8       20        1      spare rebuilding   /dev/sdb4
thomas@dev:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md3 : active raid1 sdb4[2] sda4[0]
      73778432 blocks [2/1] [U_]
      [===>.................]  recovery = 16.8% (12412928/73778432) finish=18.6min speed=54688K/sec

md2 : active raid1 sdb3[1] sda3[0]
      4883648 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      1461824 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      289024 blocks [2/2] [UU]

unused devices: <none>
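Rather than re-running cat /proc/mdstat by hand to check on the recovery, you can follow it live with watch; a minimal example (standard procps watch, nothing specific to this setup):

# Refresh the RAID status every two seconds while the rebuild runs
watch -n 2 cat /proc/mdstat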