linux

Thursday, 22 December 2011

mdadm

just a note

//=================================================================================
// add new hard disk
//=================================================================================
1. plug the hard disk into an empty bay, let's say bay 4 (seen as host3; the 1st bay is host0)
2. run this command:

# echo 0 0 0 > /sys/class/scsi_host/host3/scan

for bay number 3:
# echo 0 0 0 > /sys/class/scsi_host/host2/scan

for bay number 2:
# echo 0 0 0 > /sys/class/scsi_host/host1/scan

for bay number 1 (1st):
# echo 0 0 0 > /sys/class/scsi_host/host0/scan

>>> the hard drive will now be accessible
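
To confirm the kernel actually picked the disk up (a minimal check; the 3:0:0:0 address matches the host3/bay-4 example above, and the sdX name it reports depends on your system):

# dmesg | tail
# cat /proc/partitions
# ls /sys/bus/scsi/devices/3:0:0:0/block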

//=================================================================================
// remove hard disk
//=================================================================================

beware: UNMOUNT the file system FIRST!

to remove a hard disk from the second bay, i.e. bay number 2 (seen as host1), do:

# echo 1 > /sys/class/scsi_device/1:0:0:0/device/delete

to remove a hard disk from the third bay, i.e. bay number 3 (seen as host2), do:

# echo 1 > /sys/class/scsi_device/2:0:0:0/device/delete

to remove a hard disk from the fourth bay, i.e. bay number 4 (seen as host3), do:

# echo 1 > /sys/class/scsi_device/3:0:0:0/device/delete
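
A complete removal sequence for the bay-2 example, assuming the disk came up as /dev/sdb and is mounted on /mnt/data (both names are only illustrative):

# umount /mnt/data                                          <- unmount first!
# echo 1 > /sys/class/scsi_device/1:0:0:0/device/delete
# cat /proc/partitions                                      <- sdb should no longer be listed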



Hotswap a SCSI, SAS, or SATA drive in Linux
============================================
There doesn't seem to be a lot of information on Google about this, hence this post.
At my work, the majority of our servers have hot-swappable drive bays; however, Linux doesn't usually notice on its own that a drive is gone.
 Worse, sometimes it doesn't even notice new drives being hooked up.
Now, SCSI and SAS both support hot-plugging at the protocol level, and SATA II does as well. If your chassis is equipped with hotswap drive bays, that's all you really need.
 I haven't noticed any problems with SATA I hotswaps, but they appear to be less successful.
A common task I need to do is either:
upgrade a drive in a server
replace a dying drive with a new one

If the drive is in a RAID, or if it's a different-sized drive for a disk upgrade, it's a good idea to trigger the kernel driver to rescan information about the new disk.
In /sys/bus/scsi/devices, you'll find a number of entries that correspond to your disk drives:
server devices # ls -1
 0:0:0:0@
 1:0:0:0@
 2:0:0:0@

You can determine more information about the drive by cat'ing its model file:
 server devices # cat "0:0:0:0/model"
 ST3250410AS

Now, to cause the kernel to rescan the drive attached to the port, do this:
 echo > "0:0:0:0/rescan"

Check dmesg now:
 server devices # dmesg
 ---snip---
 SCSI device sda: 488397168 512-byte hdwr sectors (250059 MB)
 sda: Write Protect is off
 sda: Mode Sense: 00 3a 00 00
 SCSI device sda: drive cache: write back

Now, that's pretty exciting, as this can be used to skip a reboot. If you switch the drive and trigger the rescan,
it'll update the drive information, including the partition layout. Make sure you don't swap out your main system drive,
 otherwise your computer will freeze, and there will be data loss and possibly corruption.
Sometimes this strategy fails to work, and I don't know why (and I resort to rebooting).
 I'm currently researching how to do this a bit better; I have a buggy script that I use to improve this method,
but it's not ready for public release yet. Does anyone have a better method? Please comment!
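
In that spirit, here is a minimal sketch of a rescan helper (an assumption-laden illustration, not the author's script; it simply loops over the sysfs paths shown above):

#!/bin/sh
# Ask every SCSI host to look for newly attached devices ("- - -" is the
# wildcard for channel/target/lun), then rescan every existing device so
# size and partition-table changes are picked up.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
for dev in /sys/bus/scsi/devices/*:*:*:*/rescan; do
    echo 1 > "$dev"
done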



to see which sdX name a given SCSI device got, look in:
/sys/bus/scsi/devices/0:0:0:0/block
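
For example (the sda result is only illustrative; on older kernels this shows up as a block:sda symlink instead of a block/ directory):

# ls /sys/bus/scsi/devices/0:0:0:0/block
sda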


#
# -----------------------------------
# Commands used for working with Raid
# -----------------------------------
#
#
# Latest Version:
# ==============
#    http://www.1U-Raid5/HowTo/Commands.uhow2.txt
#
#
#    http://www.cse.unsw.edu.au/~neilb/source/mdadm/
#    http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/
#
#
# 04-21-99 amo Date-of-Birth
# 08-28-04 amo Added more commands
# 05-26-05 amo Added more mdadm commands
#
#
#
# To Define your Raid Array
# -------------------------
#    ln -s /etc/raidxx.conf /etc/raidtab
#
#
# To Stop and Start the Raid Array
# --------------------------------
#     /etc/init.d/raid2 stop
#     /etc/init.d/raid2 start
#
#    -- or --
#
#    raidstop  /dev/md0
#    raidstart /dev/md0
#
#
#    mdadm --stop /dev/md4
#
#
#
# Config Files
# ------------
#     /etc/{raid}/raidtab
#        raidstart -a
#
#    /etc/mdtab
#        mdadd -ar    ( mdcreate )
#
#
# To Create the Raid Devices
# --------------------------
#
#    raidadd -a
#        md0 : inactive hdc1 hde1 16402302 blocks
#
#    raidrun /dev/md0
#        md0 : active raid0 hdc1 hde1 16401920 blocks 256k chunks
#        -- turn it on
#
#    raidstart /dev/md0
#
#
#
# To Initialize Raid
# ------------------
#    mkraid -V    - version info
#       mkraid /dev/md0
#
#    mkraid -force /dev/md0
#
#    mkraid -c /etc/raidtab /dev/md0
#        mkraid is only relevant for RAID 1, 4, and 5 devices
#        mkraid: aborted
#
#       - or -
#
#    http://www.cse.unsw.edu.au/~neilb/source/mdctl/mdctl-0.5.tgz
#    mdctl --assemble [ --force ] /dev/md0 /dev/hdc1 /dev/hde1 /dev/hdg1
#
#    http://www.cse.unsw.edu.au/~neilb/source/mdadm/
#    http://acd.ucar.edu/~fredrick/linux/fedoraraid/
#
#
#    mdadm --stop /dev/md4
#    mdadm --zero-superblock /dev/hda1
#    mdadm --zero-superblock /dev/hdb1
#
#
#    mdadm --version
#
#    mdadm -QE --scan
#
#    mdadm --detail /dev/md0
#
#    mdadm --detail --scan
#
#
#    mdadm --examine /dev/sda1
#    mdadm --examine /dev/sdb1
#    mdadm --examine /dev/sdc1
#
#    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
#
#    mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
#
#    # force a resync on assembly
#    mdadm --assemble --run --force --update=resync /dev/md2 /dev/sdb3 /dev/sdb4 /dev/sdb5
#
#    ???
#    mdadm -R -A /dev/md0 /dev/hdx /dev/hdy
#
#    # hotadd a disk
#    mdadm /dev/md0 -a /dev/hdg1
#    mdadm /dev/md0 --force -a /dev/hdg1
#
#
#    #
#    # stop and rebuild it
#    #
#    mdadm --stop --scan
#    mdadm --assemble /dev/md0 --auto --scan --update=summaries --verbose
#
#    #
#    # Monitoring the sw raid
#    #
#    nohup mdadm --monitor --mail=RaidSupport@your-domain.com --delay=300 /dev/md0
#
#
#
# To Check if its recognized
# --------------------------
#    cat /proc/mdstat
#        Personalities : [1 linear] [2 raid0] [3 raid1] [4 raid5]
#        read_ahead 128 sectors
#        md0 : active raid0 hdc1 hde1 16401920 blocks 256k chunks
#        md1 : inactive
#
#    mdadm -A /dev/hde1
#
#    lsraid -a /dev/md0
#
#
# To Format a NEW raid device
# ---------------------------
#    mkfs.ext2 /dev/md0
#        mke2fs -b 4096 -R stride=16 /dev/md0
#        mke2fs -m 1 /dev/md0
#
#        mke2fs -b 4096 -i 16384 -R stride=32 -s sparse-super-flag /dev/md0
#
#        tune2fs -c 1 -i 1d -m 10 /dev/md0
#
#
#    - for Ext3 Journaling FS  ( download new e2fsprogs-1.25 )
#    - define ext3 in /etc/fstab
#    mke2fs -j /dev/md0
#
#    -- or --
#
#    mkreiserfs /dev/md0
#
#
#
# To Check the formatting before mounting
# --------------------------------------
#    e2fsck /dev/md0
#
#    e2fsck -yf /dev/md0    - just fix it  ( very dangerous though )
#
#
# To Mount and use the Raid devices
# ---------------------------------
#    mkdir /Raid5 ; mount /dev/md0 /Raid5
#
#
# To Un-mount the Raid devices
# ----------------------------
#    umount /Raid5
#
# To Stop Raid
# ------------
#    raidstop /dev/md0
#    cat /proc/mdstat    -- to verify devices not listed
#
# To Restart Raid
# ---------------
#    raidstart /dev/md0
#    raidstart -a
#    raidrun -a
#    cat /proc/mdstat    -- to verify devices not listed
#
#
# HotAdd and Remove /dev/sdb
# -----------------
#    mdadm /dev/md0 --fail /dev/sda1
#    mdadm /dev/md0 --remove /dev/sda1
#
#
#     -- or --
#
#    raidhotremove -- works without running "raidstop /dev/md0" first
#  
#    raidsetfaulty /dev/md0 /dev/sdb2
#    raidhotremove /dev/md0 /dev/sdb2
#
#    -- after adding the new disk ... watch it resync
#    raidhotadd /dev/md0 /dev/sdb2
#
#    #
#    # to hotadd the new/missing device
#    #
#    mdadm /dev/md0 -a /dev/hdc1
#

# To remove any faulty devices:

You can remove any faulty or failed drives with:

sudo mdadm --manage /dev/md0 --remove faulty
-- or --
sudo mdadm --manage /dev/md0 --remove failed

This tells mdadm to free the slot held by the failed device. When you hot-add a new spare drive, it should take over the /dev/sd<failed> node. After the hot-add you can:

sudo mdadm --manage /dev/md0 --re-add /dev/sd<failed>

-- a real world case --

sudo mdadm --manage /dev/md0 --re-add /dev/sdc1
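
Putting the pieces together, a full replacement of a failed member could look like this (the /dev/sdb1 failed member, /dev/sdd1 replacement, and bay-2 SCSI address are assumptions for illustration only):

# mdadm --manage /dev/md0 --fail /dev/sdb1                  <- mark it failed (if the kernel has not already)
# mdadm --manage /dev/md0 --remove /dev/sdb1                <- drop it from the array
# echo 1 > /sys/class/scsi_device/1:0:0:0/device/delete     <- detach it from the SCSI layer
  ... swap the physical disk and rescan the host as shown above ...
# mdadm --manage /dev/md0 --add /dev/sdd1                   <- add the replacement
# watch cat /proc/mdstat                                    <- watch the rebuild progress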

#
# To View the raid devices
# -------------------------
#    mdadm --detail /dev/md0
#
#
# Test your RAID Systems
# ----------------------
#    http://www.1U-Raid5.net/Testing
#
#
# Backup data on your RAID systems
# --------------------------------
#    http://www.Linux-Backup.net
#
#
#
# end of file
****************************************************************************
root@HO-168-22:~# mdadm --detail /dev/md0
/dev/md0:                              
        Version : 0.90                 
  Creation Time : Mon Aug 22 18:19:18 2011
     Raid Level : raid1                 
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jan 30 22:17:04 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0

           UUID : bdebd41b:bd11b324:e932e65d:e0c75763
         Events : 0.39068

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1

       2       8       17        -      faulty spare
root@HO-168-22:~# mdadm --manage /dev/md0 --remove faulty
mdadm: hot removed 8:17
root@HO-168-22:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Aug 22 18:19:18 2011
     Raid Level : raid1
     Array Size : 1953511936 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jan 30 22:19:11 2012
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : bdebd41b:bd11b324:e932e65d:e0c75763
         Events : 0.39070

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
root@HO-168-22:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid1 sdc1[1] sda1[0]
      1953511936 blocks [2/2] [UU]

unused devices: <none>
note: if mdadm shows a faulty device, you can remove it with
mdadm --manage /dev/md0 --remove faulty
