[Beginner's Notes: LVM Logical Volume Management and Software RAID Arrays]

Exercise 1: Creating a Volume Group

Prepare three free 10 GB partitions and change their type ID to 8e (Linux LVM).

[root@localhost ~]# fdisk /dev/sdb

In interactive mode, create each partition with n → p → partition number → start position → end position (partition size).

Then use t to change the type: partition number → type code 8e.

Device Boot      Start       End      Blocks   Id  System
/dev/sdb1            1      1217     9775521   8e  Linux LVM

Press w to save and exit.
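A rough sketch of the interactive dialogue (the exact prompts vary by fdisk version; the +10G size is this exercise's assumption):

Command (m for help): n                    <- new partition: p, partition number, start, +10G
Command (m for help): t                    <- change the partition type
Partition number (1-4): 1
Hex code (type L to list codes): 8e
Command (m for help): w                    <- save and exit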

Use two of these partitions to build a volume group named myvg, then view the volume group's information.

First, check which physical volumes exist:

[root@localhost ~]# pvscan
No matching physical volumes found

Convert two of the free partitions into physical volumes:

Example: [root@localhost ~]# pvcreate /dev/sdb1

Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
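The second partition is initialized the same way; a sketch:

[root@localhost ~]# pvcreate /dev/sdb2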

Check the physical volumes again, then view the details of one of them:

[root@localhost ~]# pvscan
PV /dev/sdb1         lvm2 [9.32 GB]
PV /dev/sdb2         lvm2 [9.32 GB]
Total: 2 [18.65 GB] / in use: 0 [0   ] / in no VG: 2 [18.65 GB]

[root@localhost ~]# pvdisplay /dev/sdb1
"/dev/sdb1" is a new physical volume of "9.32 GB"

--- NEW Physical volume ---

PV Name               /dev/sdb1

VG Name

PV Size               9.32 GB

Allocatable           NO

PE Size (KByte)       0

Total PE              0

Free PE               0

Allocated PE          0

PV UUID               9QuHkE-pXKI-tlWM-vJdv-2qmt-Sd3A-p8Sbwq

 

First, check which volume groups exist:

[root@localhost ~]# vgdisplay
No volume groups found

Combine the two physical volumes into the volume group myvg:

[root@localhost ~]# vgcreate myvg /dev/sdb1 /dev/sdb2
Volume group "myvg" successfully created

Check the volume groups again, and view the details of the volume group myvg:

[root@localhost ~]# vgdisplay

--- Volume group ---
VG Name               myvg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                2
Act PV                2
VG Size               18.64 GB
PE Size               4.00 MB
Total PE              4772
Alloc PE / Size       0 / 0
Free  PE / Size       4772 / 18.64 GB
VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0
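Note the relationship VG Size = Total PE × PE Size: 4772 × 4.00 MB ≈ 18.64 GB, so all space in this volume group is allocated in 4 MB extents.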

 

Exercise 2: Creating, Using, and Extending Logical Volumes

Carve out a 16 GB logical volume named lvmox and view its information.

[root@localhost ~]# lvcreate -L 16G -n lvmox myvg
Logical volume "lvmox" created

 

[root@localhost ~]# lvdisplay

--- Logical volume ---
LV Name                /dev/myvg/lvmox
VG Name                myvg
LV UUID                r22EGe-Cvg5-D1Qf-Q6lt-s3SJ-XuL1-gIALQD
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                16.00 GB
Current LE             4096
Segments               2
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0
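The same arithmetic applies here: LV Size = Current LE × PE Size, i.e. 4096 × 4.00 MB = 16.00 GB.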

Format this logical volume with the ext3 filesystem and mount it at the /mbox directory.

Format the logical volume:

[root@localhost ~]# mkfs.ext3 /dev/myvg/lvmox

Mount it:

[root@localhost ~]# mkdir /mbox
[root@localhost ~]# mount /dev/myvg/lvmox /mbox/

Check with the mount command:

/dev/mapper/myvg-lvmox on /mbox type ext3 (rw)

Enter the /mbox directory and test read/write operations.

Write: [root@localhost mbox]# ifconfig > 121.txt

[root@localhost mbox]# ls
121.txt  lost+found

Read: [root@localhost mbox]# cat 121.txt
eth0      Link encap:Ethernet  HWaddr 00:0C:29:19:BB:76

Extend the logical volume from 16 GB to 24 GB, making sure df reports the new size accurately.

First extend the volume group (adding one 10 GB physical volume), then extend the logical volume. As the output shows, vgextend initializes /dev/sdb3 as a physical volume automatically:

[root@localhost mbox]# vgextend myvg /dev/sdb3
No physical volume label read from /dev/sdb3
Writing physical volume data to disk "/dev/sdb3"
Physical volume "/dev/sdb3" successfully created
Volume group "myvg" successfully extended

Extend the logical volume:

[root@localhost mbox]# lvextend -L +8G /dev/myvg/lvmox
Extending logical volume lvmox to 24.00 GB
Logical volume lvmox successfully resized

 

Have resize2fs grow the filesystem so the new size is recognized:

[root@localhost mbox]# resize2fs /dev/myvg/lvmox
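Afterwards df should report roughly 24 GB for this filesystem; a quick check (a sketch):

[root@localhost mbox]# df -h /mbox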

Create a 250 MB logical volume named lvtest. Since 250 MB is not a multiple of the default 4 MB extent size, first change the PE size of myvg to 1 MB:

[root@localhost mbox]# vgchange -s 1M myvg

Volume group "myvg" successfully changed
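With 1 MB extents in place, the 250 MB volume itself can then be created; a minimal sketch, since the lvcreate step is not captured above:

[root@localhost mbox]# lvcreate -L 250M -n lvtest myvg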

Check:

[root@localhost mbox]# vgdisplay

--- Volume group ---
VG Name               myvg
System ID
Format                lvm2
Metadata Areas        3
Metadata Sequence No  5
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                3
Act PV                3
VG Size               27.96 GB
PE Size               1.00 MB
Total PE              28632
Alloc PE / Size       24576 / 24.00 GB
Free  PE / Size       4056 / 3.96 GB
VG UUID               oSPZlv-Gt6D-gTQA-Gmw6-OsRd-TRqD-gcfbr0

 

Exercise 3: Comprehensive Logical Volume Application

Delete the volume group myvg created in the previous exercise.

Make sure nothing is in use or mounted before removing it.

[root@localhost ~]# vgremove myvg
Do you really want to remove volume group "myvg" containing 1 logical volumes? [y/n]: y
Do you really want to remove active logical volume lvmox? [y/n]: y
Logical volume "lvmox" successfully removed
Volume group "myvg" successfully removed

Use two of the physical volumes to form the volume group vgnsd, and the remaining one to form the volume group vgdata.

[root@localhost ~]# vgcreate vgnsd /dev/sdb1 /dev/sdb2
Volume group "vgnsd" successfully created
[root@localhost ~]# vgcreate vgdata /dev/sdb3
Volume group "vgdata" successfully created

From the volume group vgnsd, create a 16 GB logical volume named lvhome.

[root@localhost ~]# lvcreate -L 16G -n lvhome vgnsd
Logical volume "lvhome" created

From the volume group vgdata, create a 4 GB logical volume named lvswap.

[root@localhost ~]# lvcreate -L 4G -n lvswap vgdata
Logical volume "lvswap" created

Migrate the /home directory onto the logical volume lvhome.

[root@localhost ~]# mkfs.ext3 /dev/vgnsd/lvhome

 

[root@localhost ~]# mkdir /1
[root@localhost ~]# mv /home/* /1

 

[root@localhost ~]# mount /dev/vgnsd/lvhome /home

 

/dev/mapper/vgnsd-lvhome on /home type ext3 (rw)
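With lvhome mounted on /home, the staged files would then be moved back to complete the migration; a sketch of this implied step:

[root@localhost ~]# mv /1/* /home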

Add the logical volume lvswap as extra swap space.

Format the logical volume lvswap as swap:

[root@localhost ~]# mkswap /dev/vgdata/lvswap
Setting up swapspace version 1, size = 4294963 kB

 

[root@localhost ~]# swapon /dev/vgdata/lvswap
[root@localhost ~]# swapon -s
Filename                        Type        Size      Used   Priority
/dev/sda3                       partition   200804    0      -1
/dev/mapper/vgdata-lvswap       partition   4194296   0      -2

Configure automatic mounting at boot for the previous two steps (the /home mount and the swap volume), then verify after a reboot.

Set this up in /etc/fstab:

[root@localhost ~]# vim /etc/fstab
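A sketch of the two entries, using the device paths created above:

/dev/vgnsd/lvhome    /home    ext3    defaults    0 0
/dev/vgdata/lvswap   swap     swap    defaults    0 0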

Exercise 4: Creating Software RAID Arrays

1) Add four empty disks of 20 GB each.

Create a single primary partition on each of the first and second disks.

Change the type ID of those partitions to fd (Linux raid autodetect).

Device Boot      Start       End      Blocks    Id  System
/dev/sdb1            1      2610   20964793+    fd  Linux raid autodetect

2) Array creation practice

a) Create the RAID0 device /dev/md0 and the RAID1 device /dev/md1

[root@localhost ~]# mdadm -C /dev/md0 -l0 -n2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
[root@localhost ~]# mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde
mdadm: array /dev/md1 started.
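Here -C creates a new array, -l selects the RAID level, and -n sets the number of member devices.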

b) Check the capacity and the number of member disks of both arrays (-Q, -D)

[root@localhost ~]# mdadm -D /dev/md0

/dev/md0:

Version : 0.90

Creation Time : Wed Jun  4 19:04:41 2014

Raid Level : raid0

Array Size : 41929344 (39.99 GiB 42.94 GB)

Raid Devices : 2

Total Devices : 2

Preferred Minor : 0

Persistence : Superblock is persistent

 

Update Time : Wed Jun  4 19:04:41 2014

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0

 

Chunk Size : 64K

 

UUID : 923d3722:10437de4:f871f97a:b358ef7b

Events : 0.1

 

Number   Major  Minor   RaidDevice State

0       8      17        0      active sync   /dev/sdb1

1       8      33        1      active sync   /dev/sdc1

 

[root@localhost ~]# mdadm -D /dev/md1

/dev/md1:

Version : 0.90

Creation Time : Wed Jun  4 19:05:15 2014

Raid Level : raid1

Array Size : 20971456 (20.00 GiB 21.47 GB)
Used Dev Size : 20971456 (20.00 GiB 21.47 GB)

Raid Devices : 2

Total Devices : 2

Preferred Minor : 1

Persistence : Superblock is persistent

 

Update Time : Wed Jun  4 19:06:59 2014

State : clean

Active Devices : 2

Working Devices : 2

Failed Devices : 0

Spare Devices : 0

 

UUID : 1a6e3772:e4b55604:dbe09f01:b78a3faa

Events : 0.4

 

Number   Major  Minor   RaidDevice State

0       8      48        0     active sync   /dev/sdd

1       8      64        1      active sync   /dev/sde
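For a one-line summary instead of the full report, the -Q query option can also be used; a sketch:

[root@localhost ~]# mdadm -Q /dev/md0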

Stop and delete the array devices /dev/md0 and /dev/md1 (-S)

[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /dev/md0
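The same pair of commands would dissolve /dev/md1; a sketch, since that step is not captured above:

[root@localhost ~]# mdadm -S /dev/md1
[root@localhost ~]# rm -rf /dev/md1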

Create a RAID5 software array device /dev/md0:

use a partition for the first member disk,

and whole disks for the other three members.

Use fdisk to view the partition tables of the first and second disks.

[root@localhost ~]# mdadm -C /dev/md0 -l5 -n4 /dev/sdb1 /dev/sd[c-e]
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid0 devices=2 ctime=Wed Jun  4 19:04:41 2014
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid1 devices=2 ctime=Wed Jun  4 19:05:15 2014
mdadm: /dev/sde appears to be part of a raid array:
    level=raid1 devices=2 ctime=Wed Jun  4 19:05:15 2014
Continue creating array? y
mdadm: array /dev/md0 started.

 

 

Exercise 5: Formatting and Using the Array

Format the RAID5 array /dev/md0 with the ext3 filesystem:

[root@localhost ~]# mkfs.ext3 /dev/md0

Mount the array device /dev/md0 at the /mymd directory:

[root@localhost ~]# mkdir /mymd
[root@localhost ~]# mount /dev/md0 /mymd/

Check with mount:

/dev/md0 on /mymd type ext3 (rw)

Enter the /mymd directory and test reading and writing.

Write: [root@localhost mymd]# ls > 12.txt

[root@localhost mymd]# ls
12.txt  lost+found

Read: [root@localhost mymd]# cat 12.txt
12.txt
lost+found

 

 

Exercise 6: RAID5 Array Failure Testing

1) Through the VMware settings, pull out the last member disk of the array /dev/md0.

[root@localhost mymd]# mdadm -D /dev/md0

/dev/md0:

Version : 0.90
Creation Time : Wed Jun  4 19:10:30 2014
Raid Level : raid5
Array Size : 62894016 (59.98 GiB 64.40 GB)
Used Dev Size : 20964672 (19.99 GiB 21.47 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Jun  4 19:16:02 2014
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 64K

UUID : 8a0dd0eb:2fdf8913:00f9e8e9:972e8b80
Events : 0.14

 

Number   Major  Minor   RaidDevice State

0       8      17        0     active sync   /dev/sdb1

1       8      32        1      active sync   /dev/sdc

2       8      48        2      active sync   /dev/sdd

3       0       0        3      removed

 

4       8      64        -      faulty spare   /dev/sde

2) Access /mymd again and test reading and writing.

Reads and writes still work normally.

3) Replacing the failed disk in the RAID5 array

Mark the failed member disk as faulty:

[root@localhost mymd]# mdadm /dev/md0 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md0

Remove the failed member disk:

[root@localhost mymd]# mdadm /dev/md0 -r /dev/sde
mdadm: hot removed /dev/sde

Add a new, healthy member disk (the same size as the other members):

[root@localhost mymd]# mdadm /dev/md0 -a /dev/sde
mdadm: added /dev/sde

Watch the array's status information to observe the rebuild:

[root@localhost mymd]# watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat                 Wed Jun  4 19:22:10 2014

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sde[4] sdd[2] sdc[1] sdb1[0]
      62894016 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [================>....]  recovery = 82.3% (17257344/20964672) finish=0.3min speed=196343K/sec

unused devices: <none>
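In the mdstat output, [4/3] [UUU_] means the array has 4 member slots with only 3 currently up; once recovery completes it returns to [4/4] [UUUU].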

 

 

Exercise 7: Saving and Reassembling the Array

Query the configuration of the currently running array:

[root@localhost mymd]# mdadm -vDs
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=8a0dd0eb:2fdf8913:00f9e8e9:972e8b80
   devices=/dev/sdb1,/dev/sdc,/dev/sdd,/dev/sde

Save the running array's configuration to /etc/mdadm.conf:

[root@localhost mymd]# mdadm -vDs > /etc/mdadm.conf
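With the ARRAY line (and its devices= list) saved in /etc/mdadm.conf, mdadm -A /dev/md0 can later reassemble the array without the member devices having to be listed on the command line.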

Stop and delete the array /dev/md0:

[root@localhost ~]# umount /dev/md0
[root@localhost ~]# mdadm -S /dev/md0
mdadm: stopped /dev/md0
[root@localhost ~]# rm -rf /dev/md0

 

Reassemble the array /dev/md0, then mount it to verify:

[root@localhost ~]# mdadm -A /dev/md0
mdadm: /dev/md0 has been started with 4 drives.
[root@localhost ~]# mount /dev/md0 /mymd/
/dev/md0 on /mymd type ext3 (rw)

 

 

 

