RAID stands for Redundant Array of Inexpensive (or Independent) Disks. RAID is a term for data storage schemes that divide and/or replicate data among multiple hard drives. RAID can be designed to provide increased data reliability or increased I/O performance, though one goal may compromise the other.
There are 10 RAID levels in total:
- RAID level 0
- RAID level 1
- RAID level 2
- RAID level 3
- RAID level 4
- RAID level 5
- RAID level 6
- RAID level 10
- RAID level 50
- RAID level 0+1
Following are the commonly used RAID levels, with the minimum number of hard disks, advantages, and disadvantages of each:
RAID 0 - Striped set without parity (minimum 2 hard disks)

Advantages:
- RAID 0 implements a striped disk array: the data is broken down into blocks and each block is written to a separate disk drive
- I/O performance is greatly improved by spreading the I/O load across many channels and drives
- Best performance is achieved when data is striped across multiple controllers with only one drive per controller
- No parity calculation overhead is involved
- Very simple design and easy to implement

Disadvantages:
- Not a "true" RAID because it is not fault-tolerant
- The failure of just one drive results in all data in the array being lost
- Should never be used in mission-critical environments
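The block-to-disk layout described above can be sketched in a few lines. This is a toy illustration of round-robin striping, not how mdadm actually lays out data on disk:

```python
# Toy sketch of RAID 0 striping: logical blocks are assigned
# round-robin across the member disks, so sequential I/O is
# spread over every drive in the array.

def stripe(blocks, num_disks):
    """Distribute logical blocks round-robin across num_disks drives."""
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks

data = ["B0", "B1", "B2", "B3", "B4", "B5"]
print(stripe(data, 2))  # [['B0', 'B2', 'B4'], ['B1', 'B3', 'B5']]
```

Note that losing either "disk" loses half of the blocks, which is exactly why RAID 0 is not fault-tolerant.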
RAID 1 - Mirrored set without parity (minimum 2 hard disks)

Advantages:
- One write or two reads possible per mirrored pair
- Twice the read transaction rate of a single disk; the same write transaction rate as a single disk
- 100% redundancy of data means no rebuild is necessary after a disk failure, just a copy to the replacement disk
- Transfer rate per block is equal to that of a single disk
- Under certain circumstances, RAID 1 can sustain multiple simultaneous drive failures
- Simplest RAID storage subsystem design

Disadvantages:
- Highest disk overhead of all RAID types (100%), and therefore inefficient
- The RAID function is typically done by system software, loading the CPU/server and possibly degrading throughput at high activity levels; a hardware implementation is strongly recommended
- May not support hot swap of a failed disk when implemented in software
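The mirroring behaviour can be modelled in a few lines. This is a hypothetical toy class, purely to show why one logical write costs two physical writes and why any single disk can fail without data loss:

```python
# Toy model of RAID 1 mirroring: every write lands on all members
# (100% disk overhead), and a read can be served by any surviving
# mirror, which is why a single-disk failure loses nothing.

class Mirror:
    def __init__(self, num_disks=2):
        self.disks = [{} for _ in range(num_disks)]

    def write(self, block_no, data):
        for disk in self.disks:          # one logical write = N physical writes
            disk[block_no] = data

    def read(self, block_no, failed=()):
        for i, disk in enumerate(self.disks):
            if i not in failed:          # any surviving mirror serves the read
                return disk[block_no]
        raise IOError("all mirrors failed")

m = Mirror()
m.write(0, b"payload")
assert m.read(0, failed={0}) == b"payload"  # survives losing disk 0
```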
RAID 5 - Striped set with distributed parity (minimum 3 hard disks)

Advantages:
- Highest read data transaction rate
- Medium write data transaction rate
- Low ratio of ECC (parity) disks to data disks means high efficiency
- Good aggregate transfer rate

Disadvantages:
- Every write requires a parity calculation, so write performance is lower than read performance
- A disk failure has a medium impact on throughput, and rebuilding the array is slow
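The parity that gives RAID 5 its fault tolerance is a simple XOR across the data blocks of each stripe. A minimal sketch (byte strings stand in for disk blocks; real arrays rotate the parity block across all member disks):

```python
from functools import reduce

# Toy XOR-parity demo: for each stripe, store the XOR of the data
# blocks; any single lost block can be rebuilt by XOR-ing together
# everything that survives.

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB"]       # data blocks of one stripe (2 data disks)
parity = xor_blocks(data)       # parity block, held on the 3rd disk

lost = 1                        # pretend the disk holding data[1] died
survivors = [blk for i, blk in enumerate(data) if i != lost] + [parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[lost]    # the missing block is recovered
```

This XOR on every write is the "parity calculation overhead" mentioned above, and the rebuild loop (read every surviving disk, XOR, write the replacement) is why rebuilds are slow.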
RAID 10 (nested RAID 1+0) - Striped set of mirrored sets (minimum 4 hard disks)

Advantages:
- RAID 10 is implemented as a striped array whose segments are RAID 1 arrays
- RAID 10 has the same fault tolerance as RAID level 1
- RAID 10 has the same overhead for fault tolerance as mirroring alone
- High I/O rates are achieved by striping the RAID 1 segments
- Under certain circumstances, a RAID 10 array can sustain multiple simultaneous drive failures
- Excellent solution for sites that would otherwise have gone with RAID 1 but need an additional performance boost

Disadvantages:
- All drives must move in parallel to the proper track, lowering sustained performance
- Very limited scalability at a very high inherent cost
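The capacity trade-offs of the four levels above can be summed up in one back-of-the-envelope calculation. The 5 GB disk size here is hypothetical, chosen to match the partition size used in the configuration steps below:

```python
# Usable capacity for n identical disks of `size` GB at each RAID level.

def usable_capacity(level, n, size):
    if level == 0:       # striping only: every byte holds data
        return n * size
    if level == 1:       # full mirror: one disk's worth of data
        return size
    if level == 5:       # one disk's worth of parity spread over the set
        return (n - 1) * size
    if level == 10:      # striped mirrors: half the raw capacity
        return n * size // 2
    raise ValueError(level)

for level, n in [(0, 2), (1, 2), (5, 3), (10, 4)]:
    print(f"RAID {level}, {n} x 5 GB disks -> {usable_capacity(level, n, 5)} GB usable")
```

With three 5 GB members, a RAID 5 array therefore exposes 10 GB of usable space, which is what the mdadm example below will produce.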
> Create 3 partitions (5 GB each)
> Change the partition type ID to FD, the ID for Linux RAID autodetect
> Configure the RAID device
> Assuming the 3 partitions are sda4, sda5, and sda6, and we are configuring RAID level 5:
# mdadm -C /dev/md0 -l 5 -n 3 /dev/sda{4,5,6} <-- mdadm manages Linux software RAID ("md" stands for multiple devices); -C creates the array, -l sets the RAID level, -n sets the number of member devices
> Format the RAID device 'md0' with an ext3 filesystem
# mke2fs -j /dev/md0