Replacing failed disk on Server


How to check if a disk has failed

Check the light on the disk:

Solid Yellow => Failed

Blinking Yellow => Predictive Failure (going to fail soon)

Green => Normal
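
If the bay lights are hard to read, many enclosures can also blink a drive's locate LED from the OS. A minimal sketch, assuming the ledmon package is installed and /dev/sda is the suspect drive (both are assumptions, not something this procedure requires):

 $ ledctl locate=/dev/sda
 $ ledctl locate_off=/dev/sda

The second command turns the LED back off once you have found the bay.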

Disk replacement instructions

  • Determine which machine the disk belongs to.
  • Press the red button on the disk to turn it off.
  • Gently pull the disk out a little (NOT all the way) and wait about 10 seconds for it to stop spinning before pulling it all the way out (see the serial-number check sketched below).
  • Find a replacement disk with the same specs.
  • Carefully unscrew the disk from its holder (if the replacement already comes in the same holder, you don't have to).
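
Before pulling anything, it can help to compare the serial number printed on the disk's label with what the OS reports for the failing device. A quick sketch using smartmontools (the device name is an example):

 $ smartctl -i /dev/sda | grep -i serial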

How to check if a disk has failed or is installed correctly

On Cluster 0's machines

1. Log into gimel as root

$ ssh root@sgehead1.bkslab.org

2. Log in as root to the machine you identified earlier

$ ssh root@<machine_name>
Example: RAID 3, 6, and 7 belong to nfshead2

3. Run this command:

$ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config
Example output (note the drive in array B reporting Predictive Failure, and logical drive 3 in array C shown as Ready for Rebuild):
Smart Array P800 in Slot 1                (sn: PAFGF0N9SXQ0MX)
  array A (SATA, Unused Space: 0 MB)
     logicaldrive 1 (5.5 TB, RAID 1+0, OK)
     physicaldrive 1E:1:1 (port 1E:box 1:bay 1, SATA, 1 TB, OK)
     physicaldrive 1E:1:2 (port 1E:box 1:bay 2, SATA, 1 TB, OK)
     physicaldrive 1E:1:3 (port 1E:box 1:bay 3, SATA, 1 TB, OK)
     physicaldrive 1E:1:4 (port 1E:box 1:bay 4, SATA, 1 TB, OK)
     physicaldrive 1E:1:5 (port 1E:box 1:bay 5, SATA, 1 TB, OK)
     physicaldrive 1E:1:6 (port 1E:box 1:bay 6, SATA, 1 TB, OK)
     physicaldrive 1E:1:7 (port 1E:box 1:bay 7, SATA, 1 TB, OK)
     physicaldrive 1E:1:8 (port 1E:box 1:bay 8, SATA, 1 TB, OK)
     physicaldrive 1E:1:9 (port 1E:box 1:bay 9, SATA, 1 TB, OK)
     physicaldrive 1E:1:10 (port 1E:box 1:bay 10, SATA, 1 TB, OK)
     physicaldrive 1E:1:11 (port 1E:box 1:bay 11, SATA, 1 TB, OK)
     physicaldrive 1E:1:12 (port 1E:box 1:bay 12, SATA, 1 TB, OK)
  array B (SATA, Unused Space: 0 MB)
     logicaldrive 2 (5.5 TB, RAID 1+0, OK)
     physicaldrive 2E:1:1 (port 2E:box 1:bay 1, SATA, 1 TB, OK)
     physicaldrive 2E:1:2 (port 2E:box 1:bay 2, SATA, 1 TB, Predictive Failure)
     physicaldrive 2E:1:3 (port 2E:box 1:bay 3, SATA, 1 TB, OK)
     physicaldrive 2E:1:4 (port 2E:box 1:bay 4, SATA, 1 TB, OK)
     physicaldrive 2E:1:5 (port 2E:box 1:bay 5, SATA, 1 TB, OK)
     physicaldrive 2E:1:6 (port 2E:box 1:bay 6, SATA, 1 TB, OK)
     physicaldrive 2E:1:7 (port 2E:box 1:bay 7, SATA, 1 TB, OK)
     physicaldrive 2E:1:8 (port 2E:box 1:bay 8, SATA, 1 TB, OK)
     physicaldrive 2E:1:9 (port 2E:box 1:bay 9, SATA, 1 TB, OK)
     physicaldrive 2E:1:10 (port 2E:box 1:bay 10, SATA, 1 TB, OK)
     physicaldrive 2E:1:11 (port 2E:box 1:bay 11, SATA, 1 TB, OK)
     physicaldrive 2E:1:12 (port 2E:box 1:bay 12, SATA, 1 TB, OK)
  array C (SATA, Unused Space: 0 MB)
     logicaldrive 3 (5.5 TB, RAID 1+0, Ready for Rebuild)
     physicaldrive 2E:2:1 (port 2E:box 2:bay 1, SATA, 1 TB, OK)
     physicaldrive 2E:2:2 (port 2E:box 2:bay 2, SATA, 1 TB, OK)
     physicaldrive 2E:2:3 (port 2E:box 2:bay 3, SATA, 1 TB, OK)
     physicaldrive 2E:2:4 (port 2E:box 2:bay 4, SATA, 1 TB, OK)
     physicaldrive 2E:2:5 (port 2E:box 2:bay 5, SATA, 1 TB, OK)
     physicaldrive 2E:2:6 (port 2E:box 2:bay 6, SATA, 1 TB, OK)
     physicaldrive 2E:2:7 (port 2E:box 2:bay 7, SATA, 1 TB, OK)
     physicaldrive 2E:2:8 (port 2E:box 2:bay 8, SATA, 1 TB, OK)
     physicaldrive 2E:2:9 (port 2E:box 2:bay 9, SATA, 1 TB, OK)
     physicaldrive 2E:2:10 (port 2E:box 2:bay 10, SATA, 1 TB, OK)
     physicaldrive 2E:2:11 (port 2E:box 2:bay 11, SATA, 1 TB, OK)
     physicaldrive 2E:2:12 (port 2E:box 2:bay 12, SATA, 1 TB, OK)
  Expander 243 (WWID: 50014380031A4B00, Port: 1E, Box: 1)
  Expander 245 (WWID: 5001438005396E00, Port: 2E, Box: 2)
  Expander 246 (WWID: 500143800460A600, Port: 2E, Box: 1)
  Expander 248 (WWID: 50014380055E913F)
  Enclosure SEP (Vendor ID HP, Model MSA60) 241 (WWID: 50014380031A4B25, Port: 1E, Box: 1)
  Enclosure SEP (Vendor ID HP, Model MSA60) 242 (WWID: 5001438005396E25, Port: 2E, Box: 2)
  Enclosure SEP (Vendor ID HP, Model MSA60) 244 (WWID: 500143800460A625, Port: 2E, Box: 1)
  SEP (Vendor ID HP, Model P800) 247 (WWID: 50014380055E913E)
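
With this many drives it can be quicker to hide the healthy entries and list only what needs attention. A minimal sketch (the grep patterns are assumptions; tune them to your controller's output):

 $ /opt/compaq/hpacucli/bld/hpacucli ctrl all show config | grep drive | grep -v 'OK)'

Anything this prints (e.g. Predictive Failure, Failed, Ready for Rebuild) deserves a look.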

On shin

As root, run:

/opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show all
Drive /c0/e8/s18 :
 ================

 -----------------------------------------------------------------------------
EID:Slt DID State  DG     Size Intf Med SED PI SeSz Model            Sp Type 
-----------------------------------------------------------------------------
8:18     24 Failed  0 3.637 TB SAS  HDD N   N  512B ST4000NM0023     U  -    
-----------------------------------------------------------------------------

EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup
DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare
UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface
Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info
SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign
UGUnsp-Unsupported|UGShld-UnConfigured shielded|HSPShld-Hotspare shielded
CFShld-Configured shielded|Cpybck-CopyBack|CBShld-Copyback Shielded


Drive /c0/e8/s18 - Detailed Information :
=======================================

Drive /c0/e8/s18 State :
======================
Shield Counter = 0
Media Error Count = 0
Other Error Count = 16
BBM Error Count = 0
Drive Temperature =  32C (89.60 F)
Predictive Failure Count = 0
S.M.A.R.T alert flagged by drive = No


Drive /c0/e8/s18 Device attributes :
==================================
SN = Z1Z2S2TL0000C4216E9V
Manufacturer Id = SEAGATE 
Model Number = ST4000NM0023    
NAND Vendor = NA
WWN = 5000C50057DB2A28
Firmware Revision = 0003
Firmware Release Number = 03290003
Raw size = 3.638 TB [0x1d1c0beb0 Sectors]
Coerced size = 3.637 TB [0x1d1b00000 Sectors]
Non Coerced size = 3.637 TB [0x1d1b0beb0 Sectors]
Device Speed = 6.0Gb/s
Link Speed = 6.0Gb/s
Write cache = N/A
Logical Sector Size = 512B
Physical Sector Size = 512B
Connector Name = Port 0 - 3 & Port 4 - 7 
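
Here too, a quick filter on the summary table saves scrolling; a sketch using state names from the legend above (the pattern is an assumption, extend it as needed):

 /opt/MegaRAID/storcli/storcli64 /c0 /eall /sall show | grep -Ei 'failed|ubad|offln'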

On ZFS machines

$ zpool status
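
As a quick triage step, zpool status -x reports only pools that have problems; on a healthy machine it prints a single line:

 $ zpool status -x
 all pools are healthy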

For instructions on how to identify and replace a failed disk on a ZFS system, read here: http://wiki.docking.org/index.php/Zfs#Example:_Fixing_degraded_pool.2C_replacing_faulted_disk

On any RAID1 configuration

Steps to fix a failed hard drive in a RAID1 configuration:

The following demonstrates what a failed disk looks like:

 [root@myServer ~]# cat /proc/mdstat 
Personalities : [raid1]
md0 : active raid1 sdb1[0] sda1[2](F)
128384 blocks [2/1] [U_]
md1 : active raid1 sdb2[0] sda2[2](F)
16779776 blocks [2/1] [U_]
md2 : active raid1 sdb3[0] sda3[2](F)
139379840 blocks [2/1] [U_]
unused devices: <none>
 [root@myServer ~]# smartctl -a /dev/sda   
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
Short INQUIRY response, skip product id
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.


 [root@myServer ~]# smartctl -a /dev/sdb  
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.18-371.1.2.el5] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda 7200.10
Device Model: ST3160815AS
Serial Number: 9RA6DZP8
Firmware Version: 4.AAB
User Capacity: 160,041,885,696 bytes [160 GB]
Sector Size: 512 bytes logical/physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 7
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Mon Sep 8 15:50:48 2014 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

There is a lot more that gets printed, but I cut it out.

So /dev/sda has clearly failed.
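
If the mdstat listing is ambiguous, mdadm can name the faulty member explicitly; the device table at the end of this command's output marks the bad disk as faulty:

 [root@myServer ~]# mdadm --detail /dev/md0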

Take note of the GOOD disk's serial number so you leave that one in place when you swap the failed disk:

 Serial Number:    9RA6DZP8   

Mark the failed disk's partitions as faulty and remove them from their RAID arrays:

 [root@myServer ~]# mdadm --manage /dev/md0 --fail /dev/sda1   
mdadm: set /dev/sda1 faulty in /dev/md0
[root@myServer ~]# mdadm --manage /dev/md1 --fail /dev/sda2
mdadm: set /dev/sda2 faulty in /dev/md1
[root@myServer ~]# mdadm --manage /dev/md2 --fail /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md2
[root@myServer ~]# mdadm --manage /dev/md0 --remove /dev/sda1
mdadm: hot removed /dev/sda1
[root@myServer ~]# mdadm --manage /dev/md1 --remove /dev/sda2
mdadm: hot removed /dev/sda2
[root@myServer ~]# mdadm --manage /dev/md2 --remove /dev/sda3
mdadm: hot removed /dev/sda3
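
The same thing can be done in one pass; a sketch, assuming the md0/sda1, md1/sda2, md2/sda3 pairing shown above:

 for i in 0 1 2; do
     mdadm --manage /dev/md$i --fail   /dev/sda$((i+1))
     mdadm --manage /dev/md$i --remove /dev/sda$((i+1))
 done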

Make sure grub is installed on the good disk and that grub.conf is updated:

 [root@myServer ~]# grub-install /dev/sdb   
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not.
If any of the lines is incorrect, fix it and re-run the script `grub-install'.
This device map was generated by anaconda
(hd0) /dev/sda
(hd1) /dev/sdb

Take note of which hd device corresponds to the good disk, i.e. hd1 in this case.

 [root@myServer ~]# vim /boot/grub/menu.lst  
Add fallback=1 right after default=0.
Go to the bottom section, where you should find some kernel stanzas.
Copy the first stanza and paste it above the existing first one; in the copy, replace root (hd0,0) with root (hd1,0).
It should look like this:
[...]
title CentOS (2.6.18-128.el5)
root (hd1,0)
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00
initrd /initrd-2.6.18-128.el5.img
title CentOS (2.6.18-128.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/
initrd /initrd-2.6.18-128.el5.img

Save and quit

 [root@myServer ~]# mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak   
[root@myServer ~]# mkinitrd /boot/initramfs-$(uname -r).img $(uname -r)
[root@myServer ~]# init 0
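
Before powering off, it is worth confirming the rebuilt image actually landed next to the backup:

 [root@myServer ~]# ls -lh /boot/initramfs-$(uname -r).img*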

Swap the bad drive with the new drive and boot the machine.

Once it's booted:

Check the device names with cat /proc/mdstat and/or fdisk -l.
The newly installed drive on myServer was named /dev/sda.

 [root@myServer ~]# modprobe raid1
[root@myServer ~]# modprobe linear

Copy the partitions from one disk to the other:

 [root@myServer ~]# sfdisk -d /dev/sdb | sfdisk --force /dev/sda   
[root@myServer ~]# sfdisk -l    (sanity check)
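
A stricter sanity check is to dump both partition tables and diff them; the sed rewrite just makes the device names comparable, and no output means the tables match:

 [root@myServer ~]# diff <(sfdisk -d /dev/sdb | sed 's/sdb/sda/g') <(sfdisk -d /dev/sda)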

Add the new disk to the raid array:

 [root@myServer ~]# mdadm --manage /dev/md0 --add /dev/sda1   
mdadm: added /dev/sda1
[root@myServer ~]# mdadm --manage /dev/md1 --add /dev/sda2
mdadm: added /dev/sda2
[root@myServer ~]# mdadm --manage /dev/md2 --add /dev/sda3
mdadm: added /dev/sda3

Sanity check:

 [root@myServer ~]# cat /proc/mdstat  
Personalities : [raid1] [linear]
md0 : active raid1 sda1[1] sdb1[0]
128384 blocks [2/2] [UU]
md1 : active raid1 sda2[2] sdb2[0]
16779776 blocks [2/1] [U_]
[>....................] recovery = 3.2% (548864/16779776) finish=8.8min speed=30492K/sec
md2 : active raid1 sda3[2] sdb3[0]
139379840 blocks [2/1] [U_]
resync=DELAYED
unused devices: <none>
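
The rebuild runs in the background; you can watch it finish with:

 [root@myServer ~]# watch -n 5 cat /proc/mdstat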

That's it! :)