16 March 2013

How to: Replace a faulty drive in software RAID

Example scenario

The following configuration is assumed:

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1]
      1822442815 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      1073740664 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      524276 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      33553336 blocks super 1.2 [2/2] [UU]

unused devices: <none>

There are four RAID arrays in total:
  • /dev/md0 as swap
  • /dev/md1 as /boot
  • /dev/md2 as /
  • /dev/md3 as /home
In this case /dev/sdb is the defective drive, which is indicated by [U_]. If the defective drive were /dev/sda, this would be shown by [_U]. If the RAID array is intact, it shows [UU].

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1](F)
      1822442815 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[0] sdb3[1](F)
      1073740664 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1](F)
      524276 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0] sdb1[1](F)
      33553336 blocks super 1.2 [2/1] [U_]

unused devices: <none>

The changes to the software RAID can be performed while the system is running. If cat /proc/mdstat shows that the drive is failing, as in the example above, an appointment can be made with the support technicians to replace the drive.

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0]
      1822442815 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[0]
      1073740664 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda2[0]
      524276 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
      33553336 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Removal of the defective drive

Before a new drive can be added, the old defective drive needs to be removed from the RAID array. This needs to be done for each individual partition.

# mdadm /dev/md0 -r /dev/sdb1
# mdadm /dev/md1 -r /dev/sdb2
# mdadm /dev/md2 -r /dev/sdb3
# mdadm /dev/md3 -r /dev/sdb4

The following command shows the drives that are part of an array:

# mdadm --detail /dev/md0
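
Since this example system has four arrays, a short shell loop can be used to check them all in one go (a minimal sketch; adjust the device names to your setup):

# for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do mdadm --detail "$md"; done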

In some cases a drive may be only partly defective, so that, for example, only /dev/md0 is in the [U_] state, whereas all other devices are in the [UU] state. In this case the command

# mdadm /dev/md1 -r /dev/sdb2

fails, because the /dev/md1 array is still intact.
In this event, the command

# mdadm --manage /dev/md1 --fail /dev/sdb2

needs to be executed first to mark the partition as failed and move the array into the [U_] state.
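
Marking the partition as failed and removing it can also be combined into a single mdadm call, for example:

# mdadm --manage /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2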

Arranging an appointment with tech support to change the defective drive

To exchange the defective drive, it is necessary to arrange an appointment with support in advance. The server will need to be taken offline for a short time.
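
It can also help to note the model and serial number of the defective drive beforehand, so that the technician swaps the correct disk. Assuming the smartmontools package is installed, this information can be read with:

# smartctl -i /dev/sdb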

Preparing the new drive

Both drives in the array need to have the exact same partitioning. Depending on the partition table type (MBR or GPT), different utilities have to be used to copy the partition table. The GPT partition table is usually used for disks larger than 2 TB (e.g. 3 TB HDDs).
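
If you are unsure which partition table type is in use, parted (if installed) reports it for each drive:

# parted /dev/sda print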

Drives with GPT

There are several redundant copies of the GUID partition table (GPT) stored on the drive, so tools that support GPT, for example parted or GPT fdisk, need to be used to edit the table. The sgdisk tool from GPT fdisk (pre-installed when using the Rescue System) can be used to easily copy the partition table to a new drive.

Here's an example of copying the partition table from sda to sdb:

# sgdisk -R /dev/sdb /dev/sda

The drive then needs to be assigned new random GUIDs, since copying the partition table also copies the GUIDs of the source drive:

# sgdisk -G /dev/sdb
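
As an optional check, the partition layout of the two drives can be printed and compared before continuing:

# sgdisk -p /dev/sda
# sgdisk -p /dev/sdb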

After this the drive can be added to the array. As a final step the bootloader needs to be installed.

Drives with MBR

The partition table can be simply copied to a new drive using sfdisk:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

where /dev/sda is the source drive and /dev/sdb is the target drive.
Optional: If the partitions are not detected by the system, the kernel has to reread the partition table:

# sfdisk -R /dev/sdb
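
If the installed version of sfdisk does not provide the -R option, partprobe (from the parted package) can be used instead to make the kernel reread the table:

# partprobe /dev/sdb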

Naturally, the partitions may also be created manually using fdisk, cfdisk or other tools. The partitions should be of type Linux raid autodetect (ID fd).

Integration of the new drive

Once the defective drive has been removed and the new one installed, it needs to be integrated into the RAID array. This needs to be done for each partition.

# mdadm /dev/md0 -a /dev/sdb1
# mdadm /dev/md1 -a /dev/sdb2
# mdadm /dev/md2 -a /dev/sdb3
# mdadm /dev/md3 -a /dev/sdb4

The new drive is now part of the array and will be synchronized. Depending on the size of the partitions, this procedure can take some time. The status of the synchronization can be observed using cat /proc/mdstat.

# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[1] sda4[0]
      1028096 blocks [2/2] [UU]
      [==========>..........]  resync =  50.0% (514048/1028096) finish=97.3min speed=65787K/sec

md2 : active raid1 sdb3[1] sda3[0]
      208768 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      2104448 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      208768 blocks [2/2] [UU]

unused devices: <none>
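
To follow the resync progress continuously, the output can be refreshed automatically, for example with watch:

# watch -n 5 cat /proc/mdstat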

Bootloader installation

If you are doing this repair on the booted, running system, then for GRUB2 it is enough to run grub-install on the new drive. For example:

# grub-install /dev/sdb
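
If the repair is done from a rescue environment instead, grub-install typically has to be run from within a chroot of the mounted root filesystem. A rough sketch, assuming /dev/md2 is the root filesystem and /dev/md1 is /boot as in the example above:

# mount /dev/md2 /mnt
# mount /dev/md1 /mnt/boot
# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt grub-install /dev/sdb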