iSCSI

Using iSCSI for multi-host shared storage: three or more node servers share the storage exported by a single storage server.

I. Environment Preparation

1. Operating system

CentOS 6

The storage machine has two or more hard disks.

2. Software versions

scsi-target-utils-1.0.24-3.el6_4.x86_64

iscsi-initiator-utils-6.2.0.873-2.el6.x86_64

cman-3.0.12.1-49.el6_4.1.x86_64

rgmanager-3.0.12.1-17.el6.x86_64

gfs2-utils-3.0.12.1-49.el6_4.1.x86_64

lvm2-cluster-2.02.98-9.el6.x86_64

3. Cluster environment

(1). Configure each node's hostname

node1:

hostname node1.test.com

[root@node1 ~]# uname -n

node1.test.com

[root@node1 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.18.201 node1.test.com node1

192.168.18.202 node2.test.com node2

192.168.18.203 node3.test.com node3

192.168.18.208 target.test.com target

scp /etc/hosts node2:/etc # copy the hosts file to the other servers as well, and set each server's name with the hostname command
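Since the same hosts file is needed on every machine, it can also be pushed out in one short loop; a minimal sketch (using the IP addresses from the table above so it works before name resolution is in place):

for h in 192.168.18.202 192.168.18.203 192.168.18.208; do scp /etc/hosts $h:/etc/hosts; done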

node2:

[root@node2 ~]# uname -n

node2.test.com

[root@node2 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.18.201 node1.test.com node1

192.168.18.202 node2.test.com node2

192.168.18.203 node3.test.com node3

192.168.18.208 target.test.com target

node3:

[root@node3 ~]# uname -n

node3.test.com

[root@node3 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.18.201 node1.test.com node1

192.168.18.202 node2.test.com node2

192.168.18.203 node3.test.com node3

192.168.18.208 target.test.com target

shared storage:

[root@target ~]# uname -n

target.test.com

[root@target ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.18.201 node1.test.com node1

192.168.18.202 node2.test.com node2

192.168.18.203 node3.test.com node3

192.168.18.208 target.test.com target

(2). Configure passwordless SSH trust between each node and the jump host

Generate an SSH key pair and copy it to the other servers (so that every node and the target can reach each other without a password):

ssh-keygen -t rsa

ssh-copy-id node1

node1:

[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

node2:

[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

node3:

[root@node3 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@node3 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@target.test.com

shared storage:

[root@target ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1.test.com

[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2.test.com

[root@target ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node3.test.com

(3). Configure time synchronization on each node

vi /etc/ntp.conf

On the NTP server, comment out the public pool servers, enable the local clock as the synchronization source, and add an access restriction for the local subnet:

# Hosts on local network are less restricted.

restrict 192.168.144.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.

# Please consider joining the pool (http://www.pool.ntp.org/join.html).

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst

server 127.127.1.0

fudge 127.127.1.0 stratum 10

service ntpd start

netstat -anpu | grep ntp

ntpq -p

ntpstat

node1:

For a one-off sync, the other machines can simply run ntpdate -u 192.168.144.211 against the NTP server; that single command is enough.

Configure the local NTP client file:

vi /etc/ntp.conf

server 192.168.144.211

restrict 192.168.144.211 mask 255.255.255.0 nomodify notrap noquery

server 127.127.1.0

fudge 127.127.1.0 stratum 10

Then run an initial sync, e.g. ntpdate -u 192.168.144.211, or against a public server: [root@node1 ~]# ntpdate 202.120.2.101

chkconfig ntpd on

Copy the configuration file to the other two nodes, for example:

scp /etc/ntp.conf node3:/etc

Enable ntpd at boot on the other two nodes as well.
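The copy-and-enable step for the remaining nodes can also be done in one small loop; a sketch reusing the same scp and chkconfig commands as above:

for h in node2 node3; do scp /etc/ntp.conf $h:/etc; ssh $h 'chkconfig ntpd on'; done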

node2:

[root@node2 ~]# ntpdate 202.120.2.101

node3:

[root@node3 ~]# ntpdate 202.120.2.101

shared storage:

[root@target ~]# ntpdate 202.120.2.101

Notice that the time-synchronization step, and many of the steps below, are identical on every node. Is there a way to run them just once? There are several, but the most common is SSH. Since we already set up SSH trust above, we can drive all the nodes from the jump host:

[root@target ~]# alias ha='for I in {1..3}; do' # define an alias, since we will need this loop over and over

[root@target ~]# ha ssh node$I 'ntpdate 202.120.2.101'; done # every node synchronizes its time

20 Aug 14:32:40 ntpdate[14752]: adjust time server 202.120.2.101 offset -0.019162 sec

20 Aug 14:32:41 ntpdate[11994]: adjust time server 202.120.2.101 offset 0.058863 sec

20 Aug 14:32:43 ntpdate[1578]: adjust time server 202.120.2.101 offset 0.062831 sec
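As an aside, a small shell function does the same job as the alias and can also be used inside scripts; a minimal sketch (the name ha and the node1..node3 naming simply mirror the alias above):

ha() { for I in 1 2 3; do ssh node$I "$@"; done; }
ha 'ntpdate 202.120.2.101'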

II. iSCSI Installation and Configuration

1. Install the target

[root@target ~]# yum install -y scsi-target-utils

2. Configure the target

[root@target ~]# vim /etc/tgt/targets.conf

#<target iqn.2008-09.com.example:server.target2>

# direct-store /dev/sdd

# incominguser someuser secretpass12

#</target>

<target iqn.2013-08.com.test:teststore.sdb> # target name

<backing-store /dev/sdb> # shared disk to export

vendor_id test # vendor ID (arbitrary)

lun 6 # LUN number

</backing-store>

incominguser iscsiuser iscsiuser # CHAP username and password

initiator-address 192.168.18.0/24 # network allowed to connect

</target>

3. Start the target service and enable it at boot

[root@target ~]# service tgtd start

[root@target ~]# chkconfig tgtd on

[root@target ~]# chkconfig tgtd --list

tgtd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

4. Inspect the configured target

[root@target ~]# tgtadm --lld iscsi --mode target --op show

Target 1: iqn.2013-08.com.test:teststore.sdb

System information:

Driver: iscsi

State: ready

I_T nexus information:

LUN information:

LUN: 0

Type: controller

SCSI ID: IET 00010000

SCSI SN: beaf10

Size: 0 MB, Block size: 1

Online: Yes

Removable media: No

Prevent removal: No

Readonly: No

Backing store type: null

Backing store path: None

Backing store flags:

Account information:

iscsiuser

ACL information:

192.168.18.0/24

5. Install the initiator on each node

[root@target ~]# ha ssh node$I 'yum -y install iscsi-initiator-utils'; done

6. Configure the initiator

node1:

[root@node1 ~]# vim /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2013-08.com.test:node1

[root@node1 ~]# vim /etc/iscsi/iscsid.conf

# modify the following three settings

node.session.auth.authmethod = CHAP # enable CHAP authentication

node.session.auth.username = iscsiuser # authentication username

node.session.auth.password = iscsiuser # authentication password

node2:

[root@node2 ~]# vim /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2013-08.com.test:node2

[root@node2 ~]# vim /etc/iscsi/iscsid.conf

# modify the following three settings

node.session.auth.authmethod = CHAP # enable CHAP authentication

node.session.auth.username = iscsiuser # authentication username

node.session.auth.password = iscsiuser # authentication password

node3:

[root@node3 ~]# vim /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2013-08.com.test:node3

[root@node3 ~]# vim /etc/iscsi/iscsid.conf

# modify the following three settings

node.session.auth.authmethod = CHAP # enable CHAP authentication

node.session.auth.username = iscsiuser # authentication username

node.session.auth.password = iscsiuser # authentication password
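Before starting the initiators, it is easy to confirm that every node received a unique initiator name; a quick check using the ha helper defined earlier:

[root@target ~]# ha ssh node$I 'cat /etc/iscsi/initiatorname.iscsi'; done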

7. Start the initiator on each node and enable it at boot

[root@target ~]# ha ssh node$I 'service iscsi start'; done

[root@target ~]# ha ssh node$I 'chkconfig iscsi on'; done

[root@target ~]# ha ssh node$I 'chkconfig iscsi --list'; done

iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off

iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off

iscsi 0:off 1:off 2:on 3:on 4:on 5:on 6:off

8. Discover the target from each node

[root@target ~]# ha ssh node$I 'iscsiadm -m discovery -t st -p 192.168.18.208:3260'; done

192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb

192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb

192.168.18.208:3260,1 iqn.2013-08.com.test:teststore.sdb

9. Log in to the target from each node and check the disks

[root@target ~]# ha ssh node$I 'iscsiadm -m node -T iqn.2013-08.com.test:teststore.sdb -p 192.168.18.208 -l'; done

Every login must report successful.

[root@target ~]# ha ssh node$I 'fdisk -l'; done

The listing shows that every node now sees the newly attached disk.
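The iSCSI sessions themselves can also be listed from the jump host; a small verification sketch using the same ha helper (each node should report one session to 192.168.18.208:3260):

[root@target ~]# ha ssh node$I 'iscsiadm -m session'; done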

III. cman and rgmanager Cluster Installation and Configuration

1. Install cman and rgmanager on each node

[root@target ~]# ha ssh node$I 'yum install -y cman rgmanager'; done

2. Configure the cluster

(1). Configure the cluster name

[root@node1 ~]# ccs_tool create testcluster # run on node1

(2). Configure a fencing device

[root@node1 ~]# ccs_tool addfence meatware fence_manual # manual fencing, to guard against split-brain

[root@node1 ~]# ccs_tool lsfence # list the fence devices

Name Agent

meatware fence_manual

(3). Configure the cluster nodes

[root@node1 ~]# ccs_tool addnode -n 1 -f meatware node1.test.com

[root@node1 ~]# ccs_tool addnode -n 2 -f meatware node2.test.com

[root@node1 ~]# ccs_tool addnode -n 3 -f meatware node3.test.com

[root@node1 ~]# ccs_tool lsnode

Cluster name: testcluster, config_version: 5

Nodename Votes Nodeid Fencetype

node1.test.com 1 1 meatware

node2.test.com 1 2 meatware

node3.test.com 1 3 meatware

3. Sync the configuration file to the other nodes (run from /etc/cluster/)

[root@node1 cluster]# scp cluster.conf root@node2.test.com:/etc/cluster/
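The same copy can be expressed as a loop so that cluster.conf reaches both remaining nodes before cman is started; a sketch reusing the scp command above:

[root@node1 cluster]# for h in node2 node3; do scp cluster.conf root@$h.test.com:/etc/cluster/; done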

4. Start the cluster on each node

Before starting cman, stop and disable NetworkManager:

service NetworkManager stop

NetworkManager: unrecognized service

[root@localhost cluster]# chkconfig NetworkManager off

Error reading information for service NetworkManager: No such file or directory

(These errors just mean NetworkManager is not installed on a minimal install; a desktop install does have it, so stop and disable it there.)

To avoid cman blocking on the quorum timeout:

echo 'CMAN_QUORUM_TIMEOUT=0' >> /etc/sysconfig/cman

node1:

[root@node1 cluster]# service cman start

Starting cluster:

Checking if cluster has been disabled at boot... [ OK ]

Checking Network Manager... [ OK ]

Global setup... [ OK ]

Loading kernel modules... [ OK ]

Mounting configfs... [ OK ]

Starting cman... [ OK ]

Waiting for quorum... [ OK ]

Starting fenced... [ OK ]

Starting dlm_controld... [ OK ]

Tuning DLM kernel config... [ OK ]

Starting gfs_controld... [ OK ]

Unfencing self... [ OK ]

Joining fence domain... [ OK ]

node2:

[root@node2 cluster]# service cman start

Starting cluster:

Checking if cluster has been disabled at boot... [ OK ]

Checking Network Manager... [ OK ]

Global setup... [ OK ]

Loading kernel modules... [ OK ]

Mounting configfs... [ OK ]

Starting cman... [ OK ]

Waiting for quorum... [ OK ]

Starting fenced... [ OK ]

Starting dlm_controld... [ OK ]

Tuning DLM kernel config... [ OK ]

Starting gfs_controld... [ OK ]

Unfencing self... [ OK ]

Joining fence domain... [ OK ]

node3:

[root@node3 cluster]# service cman start

Starting cluster:

Checking if cluster has been disabled at boot... [ OK ]

Checking Network Manager... [ OK ]

Global setup... [ OK ]

Loading kernel modules... [ OK ]

Mounting configfs... [ OK ]

Starting cman... [ OK ]

Waiting for quorum... [ OK ]

Starting fenced... [ OK ]

Starting dlm_controld... [ OK ]

Tuning DLM kernel config... [ OK ]

Starting gfs_controld... [ OK ]

Unfencing self... [ OK ]

Joining fence domain... [ OK ]
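Once cman is running on all three nodes, membership and quorum can be checked from any node; a quick verification sketch (cman_tool ships with cman, clustat with the rgmanager package):

[root@node1 ~]# cman_tool status
[root@node1 ~]# clustat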

5. Check the listening ports on each node

node1:

[root@node1 cluster]# netstat -ntulp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name

tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1082/sshd

tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1158/master

tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 14610/sshd

tcp 0 0 :::22 :::* LISTEN 1082/sshd

tcp 0 0 ::1:25 :::* LISTEN 1158/master

tcp 0 0 ::1:6010 :::* LISTEN 14610/sshd

udp 0 0 192.168.18.201:5404 0.0.0.0:* 15583/corosync

udp 0 0 192.168.18.201:5405 0.0.0.0:* 15583/corosync

udp 0 0 239.192.47.48:5405 0.0.0.0:* 15583/corosync

node2:

[root@node2 cluster]# netstat -ntulp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name

tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1082/sshd

tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1158/master

tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 14610/sshd

tcp 0 0 :::22 :::* LISTEN 1082/sshd

tcp 0 0 ::1:25 :::* LISTEN 1158/master

tcp 0 0 ::1:6010 :::* LISTEN 14610/sshd

udp 0 0 192.168.18.201:5404 0.0.0.0:* 15583/corosync

udp 0 0 192.168.18.201:5405 0.0.0.0:* 15583/corosync

udp 0 0 239.192.47.48:5405 0.0.0.0:* 15583/corosync

node3:

[root@node3 cluster]# netstat -ntulp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name

tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1082/sshd

tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1158/master

tcp 0 0 127.0.0.1:6010 0.0.0.0:* LISTEN 14610/sshd

tcp 0 0 :::22 :::* LISTEN 1082/sshd

tcp 0 0 ::1:25 :::* LISTEN 1158/master

tcp 0 0 ::1:6010 :::* LISTEN 14610/sshd

udp 0 0 192.168.18.201:5404 0.0.0.0:* 15583/corosync

udp 0 0 192.168.18.201:5405 0.0.0.0:* 15583/corosync

udp 0 0 239.192.47.48:5405 0.0.0.0:* 15583/corosync

Next, configure cLVM.

IV. cLVM Installation and Configuration

1. Install cLVM

[root@target ~]# ha ssh node$I 'yum install -y lvm2-cluster'; done # runs on every node

2. Enable clustered LVM

[root@target ~]# ha ssh node$I 'lvmconf --enable-cluster'; done

3. Verify that clustered LVM is enabled

[root@target ~]# ha ssh node$I 'grep "locking_type = 3" /etc/lvm/lvm.conf'; done

locking_type = 3

locking_type = 3

locking_type = 3

Note: clustered locking is now enabled on all nodes.

4. Start the cLVM service

[root@target ~]# ha ssh node$I 'service clvmd start'; done

Starting clvmd:

Activating VG(s): No volume groups found

[ OK ]

Starting clvmd:

Activating VG(s): No volume groups found

[ OK ]

Starting clvmd:

Activating VG(s): No volume groups found

[ OK ]

If this step hangs, lvm2-cluster was probably not installed on every node.
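To confirm the daemon is actually running everywhere, a quick status check with the ha helper:

[root@target ~]# ha ssh node$I 'service clvmd status'; done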

5. Enable cman, rgmanager, and clvmd at boot on each node

[root@target ~]# ha ssh node$I 'chkconfig clvmd on'; done

[root@target ~]# ha ssh node$I 'chkconfig cman on'; done

[root@target ~]# ha ssh node$I 'chkconfig rgmanager on'; done

6. Create an LVM volume on a cluster node

node1:

(1). Check the shared storage

[root@node1 ~]# fdisk -l # list all disks, including the shared one

Disk /dev/sda: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000dfceb

Device Boot Start End Blocks Id System

/dev/sda1 * 1 26 204800 83 Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 26 1301 10240000 83 Linux

/dev/sda3 1301 1938 5120000 83 Linux

/dev/sda4 1938 2611 5405696 5 Extended

/dev/sda5 1939 2066 1024000 82 Linux swap / Solaris

Disk /dev/sdb: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x5f3b697c

Device Boot Start End Blocks Id System

Disk /dev/sdd: 21.5 GB, 21474836480 bytes

64 heads, 32 sectors/track, 20480 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0c68b5e3

Device Boot Start End Blocks Id System

(2). Create the clustered logical volume

[root@node1 ~]# pvcreate /dev/sdd # create the physical volume

Physical volume "/dev/sdd" successfully created

[root@node1 ~]# pvs

PV VG Fmt Attr PSize PFree

/dev/sdd lvm2 a-- 20.00g 20.00g

[root@node1 ~]# vgcreate clustervg /dev/sdd # create the volume group

Clustered volume group "clustervg" successfully created

[root@node1 ~]# vgs

VG #PV #LV #SN Attr VSize VFree

clustervg 1 0 0 wz--nc 20.00g 20.00g

[root@node1 ~]# lvcreate -L 10G -n clusterlv clustervg # create the logical volume

Logical volume "clusterlv" created

[root@node1 ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

clusterlv clustervg -wi-a---- 10.00g

7. Check the new logical volume on node2 and node3

node2:

[root@node2 ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

clusterlv clustervg -wi-a---- 10.00g

node3:

[root@node3 ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

clusterlv clustervg -wi-a---- 10.00g

Next, create the cluster file system (GFS2).

V. GFS2 Installation and Configuration

1. Install gfs2

[root@target ~]# ha ssh node$I 'yum install -y gfs2-utils'; done

2. Review the help output

[root@node1 ~]# mkfs.gfs2 -h

Usage:

mkfs.gfs2 [options] <device> [ block-count ]

Options:

-b <bytes> Filesystem block size

-c <MB> Size of quota change file

-D Enable debugging code

-h Print this help, then exit

-J <MB> Size of journals

-j <num> Number of journals

-K Don't try to discard unused blocks

-O Don't ask for confirmation

-p <name> Name of the locking protocol

-q Don't print anything

-r <MB> Resource Group Size

-t <name> Name of the lock table

-u <MB> Size of unlinked file

-V Print program version information, then exit

Note on the options we use here:

-j # number of journals; the file system can be mounted by at most that many nodes at the same time

-J # size of each journal; the default is 128MB

-p {lock_dlm|lock_nolock} # locking protocol

-t <name> # name of the lock table, in the form clustername:locktablename

3. Format the volume as a cluster file system (run on node1)

Syntax: -j <number of journals> -p lock_dlm (locking protocol) -t <lock table name> <device>

[root@node1 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t testcluster:sharedstorage /dev/clustervg/clusterlv

This will destroy any data on /dev/clustervg/clusterlv.

It appears to contain: symbolic link to `../dm-0'

Are you sure you want to proceed? [y/n] y

Device: /dev/clustervg/clusterlv

Blocksize: 4096

Device Size 10.00 GB (2621440 blocks)

Filesystem Size: 10.00 GB (2621438 blocks)

Journals: 2

Resource Groups: 40

Locking Protocol: "lock_dlm"

Lock Table: "testcluster:sharedstorage"

UUID: 60825032-b995-1970-2547-e95420bd1c7c

Note: testcluster is the cluster name and sharedstorage is the lock table name.

4. Create the mount point and mount the file system

[root@node1 ~]# mkdir /mydata

[root@node1 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node1 ~]# cd /mydata/

[root@node1 mydata]# ll

total 0

If the storage is meant for a service, it can equally be mounted at that service's directory, e.g.:

mount -t gfs2 /dev/clustervg/clusterlv /var/www/html

5. Mount on node2 and node3

node2:

[root@node2 ~]# mkdir /mydata

[root@node2 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node2 ~]# cd /mydata/

[root@node2 mydata]# ll

total 0

mount -t gfs2 /dev/clustervg/clusterlv /var/www/html

node3:

[root@node3 ~]# mkdir /mydata

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

Too many nodes mounting filesystem, no free journals

Note: node2 mounts successfully, but node3 fails with "Too many nodes mounting filesystem, no free journals": there is no spare journal. Because we created only 2 journals when formatting, node1 and node2 can mount the file system, while node3 cannot (we add a third journal below).

VI. Testing

1. Check that files propagate quickly between nodes

node1:

[root@node1 mydata]# touch 123.txt

[root@node1 mydata]# ll

total 4

-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt

[root@node1 mydata]# ll

total 8

-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt

-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt

node2:

[root@node2 mydata]# ll

total 4

-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt

[root@node2 mydata]# touch 456.txt

[root@node2 mydata]# ll

total 8

-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt

-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt

Note: files are visible on the other node almost immediately. Next, let's look at the tunable attributes of the mounted file system.

2. Check the mount point's tunable attributes

[root@node1 mydata]# gfs2_tool gettune /mydata

incore_log_blocks = 8192

log_flush_secs = 60

quota_warn_period = 10

quota_quantum = 60

max_readahead = 262144

complain_secs = 10

statfs_slow = 0

quota_simul_sync = 64

statfs_quantum = 30

quota_scale = 1.0000 (1, 1)

[root@node1 mydata]# gfs2_tool settune /mydata new_files_jdata 1 # newly created files will use journaled data mode

[root@node1 mydata]# gfs2_tool gettune /mydata

incore_log_blocks = 8192

log_flush_secs = 60

quota_warn_period = 10

quota_quantum = 60

max_readahead = 262144

complain_secs = 10

statfs_slow = 0

quota_simul_sync = 64

statfs_quantum = 30

quota_scale = 1.0000 (1, 1)

new_files_jdata = 1

3. Check the journals

[root@node1 mydata]# gfs2_tool journals /mydata

journal1 - 128MB

journal0 - 128MB

2 journal(s) found.

Note: there are only two journals, each 128MB by default. Next we add one more journal and then mount node3.

4. Add a journal and mount node3

[root@node1 ~]# gfs2_jadd -j 1 /dev/clustervg/clusterlv

Filesystem: /mydata

Old Journals 2

New Journals 3

[root@node1 ~]# gfs2_tool journals /mydata

journal2 - 128MB

journal1 - 128MB

journal0 - 128MB

3 journal(s) found.

[root@node3 ~]# mount -t gfs2 /dev/clustervg/clusterlv /mydata

[root@node3 ~]# cd /mydata/

[root@node3 mydata]# ll

total 8

-rw-r--r-- 1 root root 0 Aug 20 16:13 123.txt

-rw-r--r-- 1 root root 0 Aug 20 16:14 456.txt
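At this point all three nodes have the file system mounted, which can be confirmed from the jump host; a small check using the ha helper (mount -t gfs2 lists only mounts of that type):

[root@target ~]# ha ssh node$I 'mount -t gfs2'; done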

5. Extend the clustered logical volume

(1). Check the current size

[root@node3 ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

clusterlv clustervg -wi-ao--- 10.00g

Note: it is currently 10G; we will extend it to 15G.

(2). Extend the physical boundary (the logical volume itself)

[root@node3 ~]# lvextend -L 15G /dev/clustervg/clusterlv

Extending logical volume clusterlv to 15.00 GiB

Logical volume clusterlv successfully resized

[root@node3 ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

clusterlv clustervg -wi-ao--- 15.00g

(3). Extend the logical boundary (grow the file system)

[root@node3 ~]# gfs2_grow /dev/clustervg/clusterlv

FS: Mount Point: /mydata

FS: Device: /dev/dm-0

FS: Size: 2621438 (0x27fffe)

FS: RG size: 65533 (0xfffd)

DEV: Size: 3932160 (0x3c0000)

The file system grew by 5120MB.

gfs2_grow complete.

[root@node3 ~]#

[root@node3 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda2 9.7G 1.5G 7.7G 17% /

tmpfs 116M 29M 88M 25% /dev/shm

/dev/sda1 194M 26M 159M 14% /boot

/dev/sda3 4.9G 138M 4.5G 3% /data

/dev/sdc1 5.0G 138M 4.6G 3% /mnt

/dev/mapper/clustervg-clusterlv

15G 388M 15G 3% /mydata

Note: iSCSI can cause problems at shutdown if the GFS2 mount and the iSCSI session are not torn down in the right order.
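One common mitigation, not covered in the original article and offered here only as a sketch: mount the GFS2 volume through /etc/fstab with the _netdev option and enable the gfs2 init script, so the init scripts unmount GFS2 before the iscsi service tears down the session at shutdown (assumes the /mydata mount used above):

echo '/dev/clustervg/clusterlv /mydata gfs2 defaults,_netdev 0 0' >> /etc/fstab
chkconfig gfs2 on # the gfs2 init script mounts/unmounts gfs2 entries listed in fstab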

Original article: https://blog.csdn.net/wuxingpu5/article/details/48188499/
