
Multiple LVM Expansion Scenarios on RAID 5

CentOS · 林志斌 · 983 views

Below is the full process of building LVM on top of a RAID 5 array.

1. There are five disks. On each disk, the entire capacity is allocated to a single partition, with the partition type set to fd (Linux raid autodetect). This is done with fdisk.
[root@kashu ~]# fdisk -l | egrep '/dev/sd[b-f]1'
/dev/sdb1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdc1               1         130     1044193+  fd  Linux raid autodetect
/dev/sde1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdd1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdf1               1         130     1044193+  fd  Linux raid autodetect
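The interactive fdisk session is not shown above. As a rough sketch (an assumption, not the author's exact keystrokes), the same full-disk fd-type partition can be created non-interactively by feeding fdisk the keys you would otherwise type, here against /dev/sdb:

```shell
# Hypothetical sketch: create one full-disk partition of type fd on /dev/sdb.
# The here-document supplies the interactive answers:
# n(ew), p(rimary), partition 1, default start (blank), default end (blank),
# t(ype) fd, then w(rite). Only run this on a disk with no data you need.
fdisk /dev/sdb <<'EOF'
n
p
1


t
fd
w
EOF
```

Repeat for sdc through sdf, or script the loop over the device names.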

2. Create a RAID 5 array with 3 active member disks and 1 spare disk.
[root@kashu ~]# mdadm -C /dev/md0 -l5 -n3 /dev/sd{b..d}1 -x1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

The array is still being built:
[root@kashu ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      2087936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  5.7% (60036/1043968) finish=0.5min speed=30018K/sec

unused devices: <none>
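Rather than re-running cat by hand to follow the rebuild, one option (assuming the watch utility from procps is installed) is:

```shell
# Refresh /proc/mdstat every 2 seconds; interrupt with Ctrl-C when
# the recovery line disappears.
watch -n 2 cat /proc/mdstat
```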

The array has finished building; its usable capacity is 2GB.
[root@kashu ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      2087936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
Check the array details:
[root@kashu ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue May 21 06:56:26 2013
     Raid Level : raid5
     Array Size : 2087936 (2039.34 MiB 2138.05 MB)
  Used Dev Size : 1043968 (1019.67 MiB 1069.02 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue May 21 06:56:51 2013
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : kashu.localdomain:0  (local to host kashu.localdomain)
           UUID : 607fd958:4913ce16:7e1dcbf9:74939e2a
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1

3. Write the array information into the /etc/mdadm.conf configuration file.
[root@kashu ~]# mdadm -Ds > /etc/mdadm.conf
[root@kashu ~]# sed -n '$p' /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=kashu.localdomain:0 
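Note that `mdadm -Ds > /etc/mdadm.conf` overwrites the file each time. A common convention (an extra here, not something this setup strictly requires) is to keep a DEVICE line ahead of the ARRAY records so mdadm knows where to scan:

```shell
# Rebuild /etc/mdadm.conf: a DEVICE line first, then the array records.
echo 'DEVICE partitions' > /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf
```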

4. Create three partitions on the newly built array and set the partition type to 8e (Linux LVM), using fdisk.
[root@kashu ~]# fdisk -l | grep '/dev/md0p[1-3]'
/dev/md0p1             257      128256      512000   8e  Linux LVM
/dev/md0p2          128257      256256      512000   8e  Linux LVM
/dev/md0p3          256257      384256      512000   8e  Linux LVM

5. For now, use only two of the partitions to create two PVs (the remaining partition will be used later in one of the expansion scenarios).
[root@kashu ~]# pvcreate /dev/md0p{1,2}
  Writing physical volume data to disk "/dev/md0p1"
  Physical volume "/dev/md0p1" successfully created
  Writing physical volume data to disk "/dev/md0p2"
  Physical volume "/dev/md0p2" successfully created

Check the result: /dev/md0p1 and /dev/md0p2 are the two newly created PVs.
[root@kashu ~]# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/md0p1          lvm2 a--  500.00m 500.00m
  /dev/md0p2          lvm2 a--  500.00m 500.00m
  /dev/sda2  VolGroup lvm2 a--   19.51g      0

6. Combine these two PVs into a VG named "kashuVG".
[root@kashu ~]# vgcreate kashuVG /dev/md0p{1,2}
  Volume group "kashuVG" successfully created

[root@kashu ~]# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  VolGroup   1   2   0 wz--n-  19.51g      0
  kashuVG    2   0   0 wz--n- 992.00m 992.00m

7. Then carve 666MB out of this VG to create an LV named "kashuLV".
[root@kashu ~]# lvcreate -L 666M -n kashuLV kashuVG
  Rounding up size to full physical extent 668.00 MiB
  Logical volume "kashuLV" created

[root@kashu ~]# lvs
  LV      VG       Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root VolGroup -wi-ao--  18.51g
  lv_swap VolGroup -wi-ao--   1.00g
  kashuLV kashuVG  -wi-a--- 668.00m

8. Build an ext4 filesystem on /dev/kashuVG/kashuLV.
[root@kashu ~]# mkfs.ext4 /dev/kashuVG/kashuLV
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=0 blocks
42816 inodes, 171008 blocks
8550 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=176160768
6 block groups
32768 blocks per group, 32768 fragments per group
7136 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

9. Mount and use it.
[root@kashu ~]# mount /dev/kashuVG/kashuLV /mnt/kashuLV/
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    658M   17M  608M   3% /mnt/kashuLV

LVM expansion scenario 1: take capacity directly from the VG.

1. The VG still has 81 free PEs, and all of them can be allocated to the LV to grow it.
[root@kashu ~]# vgdisplay kashuVG
  --- Volume group ---
  VG Name               kashuVG
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               992.00 MiB
  PE Size               4.00 MiB
  Total PE              248
  Alloc PE / Size       167 / 668.00 MiB
  Free  PE / Size       81 / 324.00 MiB
  VG UUID               A66ZZB-l7l4-RkZi-yttN-6l23-jktz-3pJJGR
2. Add 81 PEs to /dev/kashuVG/kashuLV. This grows kashuLV from the original 668MiB to 992MiB, just under 1GB.
[root@kashu ~]# lvresize -l +81 /dev/kashuVG/kashuLV
  Extending logical volume kashuLV to 992.00 MiB
  Logical volume kashuLV successfully resized

[root@kashu ~]# lvs | grep kashu
  kashuLV kashuVG  -wi-ao-- 992.00m

3. However, the filesystem's size is not updated along with the LV. You must run resize2fs to grow the filesystem as well before it can use the new capacity.
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    658M   17M  608M   3% /mnt/kashuLV

[root@kashu ~]# resize2fs /dev/kashuVG/kashuLV
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/kashuVG/kashuLV is mounted on /mnt/kashuLV; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/kashuVG/kashuLV to 253952 (4k) blocks.
The filesystem on /dev/kashuVG/kashuLV is now 253952 blocks long.
Now the filesystem has grown to roughly 1GB as well; only now is the expansion actually complete.
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    978M   17M  917M   2% /mnt/kashuLV
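On reasonably recent LVM versions, the two steps above can be collapsed into one: lvresize (and lvextend) accept `-r`/`--resizefs`, which calls fsadm to grow the filesystem right after the LV. A sketch, assuming the same VG/LV names as above:

```shell
# Grow the LV by 81 extents and resize the ext4 filesystem in one step.
# -r / --resizefs makes lvresize invoke fsadm (which runs resize2fs)
# once the LV itself has been extended.
lvresize -r -l +81 /dev/kashuVG/kashuLV
```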

LVM expansion scenario 2: the VG has no allocatable PEs, but an unused LVM disk or partition remains, so start by creating a new PV.

1) The VG holding the LV has no free PEs left to allocate.
[root@kashu ~]# vgdisplay kashuVG | grep Free
  Free  PE / Size       0 / 0

2) And there are currently no other free PVs.
[root@kashu ~]# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/md0p1 kashuVG  lvm2 a--  496.00m    0
  /dev/md0p2 kashuVG  lvm2 a--  496.00m    0
  /dev/sda2  VolGroup lvm2 a--   19.51g    0

3) However, the LVM partition /dev/md0p3 is still unused.
[root@kashu ~]# fdisk -l | grep '/dev/md0p.'
/dev/md0p1             257      128256      512000   8e  Linux LVM
/dev/md0p2          128257      256256      512000   8e  Linux LVM
/dev/md0p3          256257      384256      512000   8e  Linux LVM

1. Create a new PV from the free partition.
[root@kashu ~]# pvcreate /dev/md0p3
  Writing physical volume data to disk "/dev/md0p3"
  Physical volume "/dev/md0p3" successfully created

[root@kashu ~]# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/md0p1 kashuVG  lvm2 a--  496.00m      0
  /dev/md0p2 kashuVG  lvm2 a--  496.00m      0
  /dev/md0p3          lvm2 a--  500.00m 500.00m
  /dev/sda2  VolGroup lvm2 a--   19.51g      0

2. Then extend the VG (only then can an LV inside this VG be grown).
[root@kashu ~]# vgextend kashuVG /dev/md0p3
  Volume group "kashuVG" successfully extended

[root@kashu ~]# vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g      0
  kashuVG    3   1   0 wz--n-  1.45g 496.00m

3. After extending the VG, it now has 124 free PEs available for allocation.
[root@kashu ~]# pvdisplay /dev/md0p3 | grep Free
  Free PE               124

Add the 124 allocatable PEs to kashuLV, growing the LV from roughly 1GB to about 1.5GB.
[root@kashu ~]# lvresize -l +124 /dev/kashuVG/kashuLV
  Extending logical volume kashuLV to 1.45 GiB
  Logical volume kashuLV successfully resized

[root@kashu ~]# lvs
  LV      VG       Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  lv_root VolGroup -wi-ao-- 18.51g
  lv_swap VolGroup -wi-ao--  1.00g
  kashuLV kashuVG  -wi-ao--  1.45g

4. Once again: the filesystem's size is not updated along with the LV, so you must run resize2fs to grow the filesystem before it can use the new capacity.
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    978M   17M  917M   2% /mnt/kashuLV

[root@kashu ~]# resize2fs /dev/kashuVG/kashuLV
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/kashuVG/kashuLV is mounted on /mnt/kashuLV; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/kashuVG/kashuLV to 380928 (4k) blocks.
The filesystem on /dev/kashuVG/kashuLV is now 380928 blocks long.

Only now has the filesystem actually been grown to about 1.5GB.
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    1.5G   17M  1.4G   2% /mnt/kashuLV

LVM expansion scenario 3: the VG has no allocatable PEs and no spare LVM disks or partitions remain, so start by growing the RAID 5 array itself.

1. Prepare to grow the RAID 5 array with a new member, sdf1. Use fdisk to partition the sdf disk and set the partition type to fd.
[root@kashu ~]# fdisk -l | grep '/dev/sd[b-f]1'
/dev/sdb1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdc1               1         130     1044193+  fd  Linux raid autodetect
/dev/sde1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdd1               1         130     1044193+  fd  Linux raid autodetect
/dev/sdf1               1         130     1044193+  fd  Linux raid autodetect
Take another look at the existing RAID 5 array: 2GB in total.
[root@kashu ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      2087936 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

2. Grow the RAID 5 array. -G selects grow mode, -a adds /dev/sdf1 to /dev/md0 as a new member disk, and -n4 raises the number of active members from 3 to 4, with sdf1 filling the new slot.
[root@kashu ~]# mdadm -G -a /dev/md0 -n4 /dev/sdf1
mdadm: added /dev/sdf1
mdadm: Need to backup 3072K of critical section..
Once the reshape finishes, the array has grown from 2GB to 3GB.
[root@kashu ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf1[5] sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      3131904 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

3. Use fdisk to create a new partition of type 8e on the grown array.
[root@kashu ~]# fdisk /dev/md0
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/md0: 3207 MB, 3207069696 bytes
2 heads, 4 sectors/track, 782976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x69a741ef

    Device Boot      Start         End      Blocks   Id  System
/dev/md0p1             257      128256      512000   8e  Linux LVM
/dev/md0p2          128257      256256      512000   8e  Linux LVM
/dev/md0p3          256257      384256      512000   8e  Linux LVM

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Selected partition 4
First cylinder (1-782976, default 384257):
Using default value 384257
Last cylinder, +cylinders or +size{K,M,G} (384257-782976, default 782976):
Using default value 782976

Command (m for help): n
First cylinder (384257-782976, default 384513):
Using default value 384513
Last cylinder, +cylinders or +size{K,M,G} (384513-782976, default 782976): +500M

Command (m for help): t
Partition number (1-5): 5
Hex code (type L to list codes): 8e
Changed system type of partition 5 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/md0: 3207 MB, 3207069696 bytes
2 heads, 4 sectors/track, 782976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x69a741ef

    Device Boot      Start         End      Blocks   Id  System
/dev/md0p1             257      128256      512000   8e  Linux LVM
/dev/md0p2          128257      256256      512000   8e  Linux LVM
/dev/md0p3          256257      384256      512000   8e  Linux LVM
/dev/md0p4          384257      782976     1594880    5  Extended
/dev/md0p5          384513      512512      512000   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

If the system does not detect the new partitions after the steps above, and partprobe cannot pick them up either, a reboot will sort it out.
[root@kashu ~]# reboot
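Before resorting to a reboot, it may be worth asking the kernel to pick up just the new entries. A sketch, assuming util-linux's partx is available:

```shell
# Tell the kernel to add any partition-table entries it has not seen yet.
# Unlike a full table re-read, this can succeed even while existing
# partitions on the device are busy.
partx -a /dev/md0
```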

4. Create the PV.
[root@kashu ~]# pvcreate /dev/md0p5
  Writing physical volume data to disk "/dev/md0p5"
  Physical volume "/dev/md0p5" successfully created

[root@kashu ~]# pvs
  PV         VG       Fmt  Attr PSize   PFree
  /dev/md0p1 kashuVG  lvm2 a--  496.00m      0
  /dev/md0p2 kashuVG  lvm2 a--  496.00m      0
  /dev/md0p3 kashuVG  lvm2 a--  496.00m      0
  /dev/md0p5          lvm2 a--  500.00m 500.00m
  /dev/sda2  VolGroup lvm2 a--   19.51g      0

5. Extend the VG.
[root@kashu ~]# vgextend kashuVG /dev/md0p5
  Volume group "kashuVG" successfully extended

[root@kashu ~]# vgdisplay kashuVG | grep Free
  Free  PE / Size       124 / 496.00 MiB

6. Grow the LV.
[root@kashu ~]# lvresize -l +124 /dev/kashuVG/kashuLV
  Extending logical volume kashuLV to 1.94 GiB
  Logical volume kashuLV successfully resized

7. Grow the filesystem. This time resize2fs asks that e2fsck -f /dev/kashuVG/kashuLV be run first, so just do as it says.
[root@kashu ~]# resize2fs /dev/kashuVG/kashuLV
resize2fs 1.41.12 (17-May-2010)
Please run 'e2fsck -f /dev/kashuVG/kashuLV' first.

[root@kashu ~]# e2fsck -f /dev/kashuVG/kashuLV
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/kashuVG/kashuLV: 11/85632 files (0.0% non-contiguous), 9736/380928 blocks

Then run resize2fs again to grow the filesystem.
[root@kashu ~]# resize2fs /dev/kashuVG/kashuLV
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/kashuVG/kashuLV to 507904 (4k) blocks.
The filesystem on /dev/kashuVG/kashuLV is now 507904 blocks long.

8. Mount it.
[root@kashu ~]# mount /dev/kashuVG/kashuLV /mnt/kashuLV/

You can see the LV has grown from the previous 1.5GB to 2GB.
[root@kashu ~]# df -hT | grep kashu
/dev/mapper/kashuVG-kashuLV    ext4    2.0G   17M  1.9G   1% /mnt/kashuLV

9. Write the mount into /etc/fstab so that it is mounted automatically after a reboot.

[root@kashu ~]# echo 'UUID="22d02edc-204a-45ad-98ae-a71c5bfd8e15" /mnt/kashuLV ext4 defaults 0 0' >> /etc/fstab
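The UUID used above is the filesystem's UUID. If you need to look it up on your own system, one way (assuming util-linux's blkid) is:

```shell
# Print just the filesystem UUID of the LV, in a form suitable for fstab.
blkid -s UUID -o value /dev/kashuVG/kashuLV
```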

Please credit when reposting: 林志斌 » Multiple LVM Expansion Scenarios on RAID 5
