Reassembling Non-Fresh mdadm RAID Arrays

I recently ran into this error when a drive dropped out of my RAID array:

kicking non-fresh sdg1 from array!

It reported this error for 8 out of the 10 drives in my array, completely disabling the array.

I know how to recover from situations like this, but it is terribly tedious. Because most of the devices are marked non-fresh, the whole array has to be assembled by hand and forced to run. In other words, you can't rely on mdadm to assemble the array for you automatically.

To speed things up a bit, I cracked out a bit of bash to create a list of the devices in my RAID array based on its UUID. You can find the UUID in /etc/mdadm.conf (if you've set things up correctly), or by running mdadm --examine [device] against one of the member devices.
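
For reference, the ARRAY line in /etc/mdadm.conf looks something like this (the UUID below is made up):

ARRAY /dev/md2 metadata=1.2 UUID=a1b2c3d4:e5f6a7b8:c9d0e1f2:a3b4c5d6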

Here’s the command. Replace xxxxxxx with the UUID of your array:


UUID="xxxxxxx"; echo; ls /dev/sd* | while read i; do if [ "$(mdadm --examine $i | grep ${UUID})" ]; then echo -n "$i "; fi ; done; echo

The command will output something like this:

/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdk1 /dev/sdl1

These are all the devices you need to add to your array. So I ran the following command to reassemble my array:

mdadm -A --force /dev/md2 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdk1 /dev/sdl1
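
If mdadm complains that some of those devices are busy, there may already be a half-assembled /dev/md2 left over from the failed auto-assembly; stopping it first should clear the way:

mdadm --stop /dev/md2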

After running the command, one drive was still missing. This is the drive that started the problems in the first place (it may well be failing):


[root@host ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdk1[0] sdd1[9] sdc1[8] sdb1[7] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdl1[1]
11721076736 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/9] [UUUUUU_UUU]
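
To confirm which device is missing, mdadm --detail will list one slot as removed (I've left the output out here):

mdadm --detail /dev/md2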

I tried the --re-add command, but it wouldn't work. Because I'm running RAID 6, I simply decided to discard the drive's contents and add it back as if it were a new drive (be careful, this might not be what you want in your scenario):


mdadm --add /dev/md2 /dev/sda1
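
In my case that was enough. If the plain --add gets rejected because of the stale superblock on the old member, wiping the superblock first and repeating the --add usually does the trick (be absolutely sure you have the right device, as this destroys its RAID metadata):

mdadm --zero-superblock /dev/sda1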

Now the array is rebuilding:


[root@host backups]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sda1[10] sdk1[0] sdd1[9] sdc1[8] sdb1[7] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdl1[1]
11721076736 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/9] [UUUUUU_UUU]
[>....................] recovery = 1.1% (16251264/1465134592) finish=1290.0min speed=18717K/sec
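
Once the rebuild finishes, it's worth checking that the ARRAY line in /etc/mdadm.conf still matches reality (some distros use /etc/mdadm/mdadm.conf instead); mdadm can print the current line for you:

mdadm --detail --scan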