Akemi

Creating and Managing Software RAID with mdadm

2023/10/27

If images fail to load, check your connectivity to GitHub.

Environment

VMware Workstation 17 Pro
CentOS Linux release 7.9.2009 (Core)
——8 GB memory, 16 vCPUs
——100 GB system disk (automatic partitioning)
——four additional 20 GB disks, all SCSI

#If there is no operating system yet, the disks can be assembled into a software RAID during installation and that RAID used as the system disk
#During a rebuild, software RAID puts a heavy load on the CPU, so it is not recommended in real production environments
#Different partitions of the same disk can also be combined into a software RAID
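As a concrete illustration of the last point, a mirror can be built from two partitions of the same disk. This is only a hypothetical sketch (the partitions /dev/sdb1 and /dev/sdb2 are not part of the lab setup below), and it offers no real redundancy, since both members sit on one physical disk:

#hypothetical: two partitions on one disk, no protection against that disk failing
mdadm --create /dev/md2 -a yes -l 1 -n 2 /dev/sdb1 /dev/sdb2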

#Check the current disk layout
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
└─centos-home 253:2 0 45.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
sdc 8:32 0 20G 0 disk
sdd 8:48 0 20G 0 disk
sde 8:64 0 20G 0 disk
sr0 11:0 1 4.5G 0 rom /run/media/root/CentOS 7 x86_64

1. Install mdadm

yum -y install mdadm
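A quick, optional check that the tool is actually available after installation:

rpm -q mdadm       #confirm the package is installed
mdadm --version    #print the mdadm version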

#Check the kernel RAID modules built into the system; the output shows RAID support is available
lsmod | grep raid
raid456 151196 1
async_raid6_recov 17288 1 raid456
async_memcpy 12768 2 raid456,async_raid6_recov
async_pq 13332 2 raid456,async_raid6_recov
raid6_pq 102527 3 async_pq,raid456,async_raid6_recov
async_xor 13127 3 async_pq,raid456,async_raid6_recov
async_tx 13509 5 async_pq,raid456,async_xor,async_memcpy,async_raid6_recov
raid1 44113 0
raid0 18164 0
libcrc32c 12644 4 xfs,raid456,nf_nat,nf_conntrack

2. Create the RAID arrays

#Create a RAID array named /dev/md0 (choosing another name may cause an error); mdadm -C works as well
#  -a yes : create the RAID device node automatically
#  -l 0   : set the RAID level to RAID 0
#  -n 2   : use 2 member disks, sdb and sdc
mdadm --create /dev/md0 \
    -a yes \
    -l 0 \
    -n 2 /dev/sdb /dev/sdc
#mdadm: Defaulting to version 1.2 metadata
#mdadm: array /dev/md0 started.
mdadm --create /dev/md1 -a yes -l 1 -n 2 /dev/sdd /dev/sde
#Enter yes to dismiss the prompt
#mdadm: Defaulting to version 1.2 metadata
#mdadm: array /dev/md1 started.
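The new arrays are ordinary block devices and can be formatted and mounted like any other disk. A minimal sketch, assuming an XFS filesystem and a hypothetical mount point /mnt/md0 (unmount again before the stop/zero-superblock steps later on):

mkfs.xfs /dev/md0           #create an XFS filesystem on the RAID 0 array
mkdir -p /mnt/md0           #hypothetical mount point
mount /dev/md0 /mnt/md0
df -h /mnt/md0              #usable size should be roughly 40G for the 2x20G RAID 0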

Check the software RAID information and status

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 99G 0 part
├─centos-root 253:0 0 50G 0 lvm /
├─centos-swap 253:1 0 3.9G 0 lvm [SWAP]
└─centos-home 253:2 0 45.1G 0 lvm /home
sdb 8:16 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdc 8:32 0 20G 0 disk
└─md0 9:0 0 40G 0 raid0
sdd 8:48 0 20G 0 disk
└─md1 9:1 0 20G 0 raid1
sde 8:64 0 20G 0 disk
└─md1 9:1 0 20G 0 raid1
sr0 11:0 1 4.5G 0 rom /run/media/root/CentOS 7 x86_64

mdadm --detail /dev/md0
#mdadm -D /dev/md0 works as well
/dev/md0:
Version : 1.2
Creation Time : Tue Dec 12 05:41:07 2023
Raid Level : raid0
Array Size : 41908224 (39.97 GiB 42.91 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Dec 12 05:41:07 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Consistency Policy : none

Name : 192.168.8.151:0 (local to host 192.168.8.151)
UUID : cb7e5ace:f809e250:75079d40:21413521
Events : 0

Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc

#Check the RAID status
cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sde[1] sdd[0]
20954112 blocks super 1.2 [2/2] [UU]

md0 : active raid0 sdc[1] sdb[0]
41908224 blocks super 1.2 512k chunks
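After a reboot the kernel may assemble the arrays under different names (e.g. /dev/md127) unless the configuration is recorded. A common way to persist the current layout, assuming /etc/mdadm.conf is not already managed by something else:

mdadm --detail --scan >> /etc/mdadm.conf   #append the ARRAY lines for md0 and md1
cat /etc/mdadm.conf                        #verify the recorded names and UUIDs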

Simulating failures

Stop the arrays
mdadm --stop /dev/md0
mdadm --stop /dev/md1

Reassemble (restart) an array
mdadm -A /dev/md1

Clear the RAID superblock from disks that are no longer in use
mdadm --misc --zero-superblock /dev/sdb /dev/sdc
Once the metadata has been completely wiped, the disks can be reused to create a new array
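Whether the metadata is really gone can be verified with mdadm --examine; after zeroing, it should report that no superblock is found (expected output shown as an assumption):

mdadm --examine /dev/sdb
#mdadm: No md superblock detected on /dev/sdb.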

Simulate a disk failure
mdadm /dev/md1 -f /dev/sdd

Check the status again: one device is now active and one has failed

cat /proc/mdstat 
Personalities : [raid0] [raid1]
md1 : active raid1 sde[1] sdd[0](F)
20954112 blocks super 1.2 [2/1] [_U]
mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Dec 12 06:43:12 2023
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Dec 12 06:47:50 2023
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0

Remove the failed disk
mdadm --manage /dev/md1 --remove /dev/sdd

Checking again now shows only one working disk left in the array

mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Dec 12 06:43:12 2023
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Tue Dec 12 06:50:27 2023
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Add a healthy disk back into the array
mdadm --manage /dev/md1 --add /dev/sdc

Checking /proc/mdstat again at this point shows the array rebuilding

cat /proc/mdstat 
Personalities : [raid0] [raid1]
md1 : active raid1 sdc[2] sde[1]
20954112 blocks super 1.2 [2/1] [_U]
[=>...................] recovery = 8.5% (1800192/20954112) finish=1.4min speed=225024K/sec

#Check again after the rebuild finishes
mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Dec 12 06:43:12 2023
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Dec 12 06:56:35 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync
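While a rebuild is still running, /proc/mdstat can also be watched continuously instead of re-running cat by hand (a small convenience using the standard watch utility):

watch -n 1 cat /proc/mdstat   #refresh the rebuild progress every second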

#Clean up the environment
mdadm --stop /dev/md1
mdadm --misc --zero-superblock /dev/sdc /dev/sde

fio disk performance testing tool

Sequential read:

fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=30 -group_reporting -name=mytest

Random write:

fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=30 -group_reporting -name=mytest

Sequential write:

fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=30 -group_reporting -name=mytest

Mixed random read/write (70% read):

fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=70 -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=30 -group_reporting -name=mytest -ioscheduler=noop
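The list above covers sequential read/write, random write, and mixed random I/O but omits a pure random-read case; following the same pattern, an assumed variant would be:

fio -filename=/dev/sdb -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=30 -group_reporting -name=mytest

Note that any write test against a raw device such as /dev/sdb destroys the data on it, so run these only against disks that hold nothing important.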