Rebuilding a degraded RAID array using the 3ware CLI (tw_cli)

Scenario

I was running Openfiler on a reconditioned server with 6x 2TB disks (4x in RAID 5, 2x in RAID 1). This was my Veeam Backup repository (presented via iSCSI to a pRDM). The device started playing up and became completely unresponsive; I couldn't even log into the console of the Openfiler server. After a hard reset it rebooted, but it could not find the RAID controller. Re-seating the RAID controller fixed that issue. However, the controller's management software (3dm2) was now showing a degraded disk.


First I SSH'd into the Openfiler server. The log files showed that there was a drive error.
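The 3ware driver (3w-9xxx for the 9000-series cards, 3w-xxxx for older ones) reports drive errors to the kernel log, so if you need to dig them out yourself, something along these lines will surface them:

dmesg | grep -i 3w

grep -i 3w /var/log/messages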


To log into the 3ware RAID controller's CLI, type:

tw_cli
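A quick note on usage: tw_cli gives you an interactive shell if you run it with no arguments, or you can pass a single command straight through from the shell, for example:

tw_cli /c0 show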

To view information on the controller (c0 in this case):

/c0 show 
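The output looks roughly like this (the column layout varies slightly between firmware versions; the figures here are illustrative, based on my 4x 2TB RAID 5 and 2x 2TB RAID 1 setup, and only the first two ports are shown):

Unit  UnitType  Status       %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
----------------------------------------------------------------------------
u0    RAID-5    REBUILDING   53%     -       64K     5587.88   ON     OFF
u1    RAID-1    OK           -       -       -       1862.63   ON     OFF

Port  Status     Unit  Size     Blocks      Serial
----------------------------------------------------
p0    DEGRADED   u0    1.82 TB  3907029168  ...
p1    OK         u0    1.82 TB  3907029168  ...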

In my case the output showed the rebuild stuck at 53% and port p0 degraded.

So the disk in port 0 was degraded. The next challenge was to work out which physical disk was plugged into port 0! First of all, I took the disk offline:

/c0/p0 remove
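If you want to double-check the state of that one port before and after taking it offline, you can show the individual port:

/c0/p0 show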


Once this was offline, I attempted to work out which disk needed to be removed by copying a file from the repository to our C:\ drive and watching the activity lights on the disks: the drives still in the array flickered, while the offlined disk stayed dark.
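If watching the lights during a file copy is too hit-and-miss, another option is to drive sustained reads at the array straight from the Openfiler host. Assuming the RAID 5 unit shows up as /dev/sdb (a hypothetical device name; check yours with cat /proc/partitions), something like:

dd if=/dev/sdb of=/dev/null bs=1M count=10240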

Once the disk was located, I removed and re-seated it. After a few minutes the rebuild started taking place again.

Running /c0 show again confirmed that port p0 was rebuilding. However, the rebuild stalled again, this time at 52%. Next up, it was time to replace the disk.
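To keep an eye on the rebuild percentage without re-typing the command, you can poll the controller; a minimal sketch, assuming tw_cli is on the PATH:

watch -n 60 tw_cli /c0 show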

To remove the disk (this time using tw_cli's older maint-style syntax, which as far as I can tell does the same job as /c0/p0 remove above):

maint remove c0 p0

I inserted a new 2TB drive (well, when I say new, it was re-purposed) and checked the controller again:

/c0 show

The new disk showed up as "OK" this time, which was a relief. However, the status also showed "u?", meaning the controller didn't know which unit the disk belonged to.

I did a rescan of the controller:

maint rescan c0
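As far as I know, the object-style equivalent of the legacy rescan is:

/c0 rescan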

But that put the disk in u2!!

How odd. After much investigation, it turned out I'd used a disk that had previously been used with the exact same RAID controller card in another server. The disk had hung on to the previous card's settings, so the controller brought it up as its own unit (u2). I therefore had to delete the unit:

maint deleteunit c0 u2
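In the object-style syntax the equivalent should be the following; be careful with either form, as it throws away that unit's configuration:

/c0/u2 del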


It was then just a matter of removing the disk and popping it back in, and the array started to rebuild again. (I'm sure there is a command to do this without re-seating the disk; see below.)
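For reference, tw_cli does have manual rebuild commands. I believe either of the following would have kicked off the rebuild without physically re-seating the drive (legacy and object-style syntax respectively), but check the built-in help (type help at the tw_cli prompt) on your firmware version before relying on them:

maint rebuild c0 u0 p0

/c0/u0 start rebuild disk=p0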