Using the GFS2 filesystem

Source: Kuutõrvaja

Introduction

GFS2 (Global File System 2) http://sources.redhat.com/cluster/gfs/ is a shared disk cluster file system.

How it works

TODO

Installing the software on Debian

# apt-get install gfs2-tools

Installing the software on CentOS

# yum groupinstall "iSCSI Storage Client" "High Availability" "Resilient Storage"

Configuration

Define all the worker nodes in the /etc/hosts file

10.100.0.1 moodle1
10.100.0.2 moodle2
10.100.0.3 moodle3
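
It is worth making sure that every node resolves these names the same way; a quick check, using the hostnames defined above:

# getent hosts moodle1 moodle2 moodle3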

Configuring cman

For clvmd and cman to work, /etc/cluster/cluster.conf needs to be created. To do that, run the following commands on every node:

ccs -f /etc/cluster/cluster.conf --createcluster moodle
ccs -f /etc/cluster/cluster.conf --addnode moodle1
ccs -f /etc/cluster/cluster.conf --addnode moodle2
ccs -f /etc/cluster/cluster.conf --addnode moodle3
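
The resulting /etc/cluster/cluster.conf should look roughly like the sketch below (the exact attributes and config_version that ccs writes may differ):

<?xml version="1.0"?>
<cluster config_version="4" name="moodle">
  <clusternodes>
    <clusternode name="moodle1" nodeid="1"/>
    <clusternode name="moodle2" nodeid="2"/>
    <clusternode name="moodle3" nodeid="3"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm/>
</cluster>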

Validate the configuration

# ccs_config_validate
Configuration validates

To make cman start even without quorum (otherwise it starts complaining, refuses to come up, and you end up in a catch-22 loop):

# echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

Start the services; on CentOS like this

# service cman start

Or on Debian

# /etc/init.d/cman start
Starting cluster: 
  Checking Network Manager... [  OK  ]
  Global setup... [  OK  ]
  Loading kernel modules... [  OK  ]
  Mounting configfs... [  OK  ]
  Starting cman... [  OK  ]
  Waiting for quorum... [  OK  ]
  Starting fenced... [  OK  ]
  Starting dlm_controld... [  OK  ]
  Starting gfs_controld... [  OK  ]
  Unfencing self... [  OK  ]
  Joining fence domain... [  OK  ]

Check that quorum has been reached, i.e. that all the machines are communicating

# corosync-quorumtool -l
Nodeid     Name
   1   moodle1
   2   moodle2
   3   moodle3

where

  • cman two_node="1" expected_votes="1" - needed in a two-node setup (for example with DRBD) for the case where one of the two nodes disappears; see the sketch below for how to set it
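
In such a two-node case the attribute can also be set with ccs; a sketch (not needed for the three-node moodle cluster above):

ccs -f /etc/cluster/cluster.conf --setcman two_node=1 expected_votes=1

which writes <cman two_node="1" expected_votes="1"/> into cluster.conf.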

Getting clvmd and LVM running

Change the LVM configuration in /etc/lvm/lvm.conf

locking_type = 3
fallback_to_local_locking = 0

Create the LVM volumes; here the device is multi0, created earlier with iSCSI + multipath

pvcreate /dev/mapper/multi0
vgcreate clustervg /dev/mapper/multi0
vgchange -cy clustervg
lvcreate -l 100%FREE -n moodledata clustervg
lvmconf --enable-cluster
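
To check that the volume group really came out clustered, look at the VG attribute string; the 'c' in the sixth position of the attr column marks a clustered VG:

# vgs -o vg_name,vg_attr clustervg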

And likewise start the services

# service clvmd start
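
So that everything comes back up after a reboot, it is probably also worth enabling the services at boot time; a sketch for CentOS (the gfs2 init script mounts the gfs2 entries from /etc/fstab):

# chkconfig cman on
# chkconfig clvmd on
# chkconfig gfs2 on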

Creating the GFS2 filesystem

# mkfs.gfs2 -p lock_dlm -t moodle:moodledata -j 2 /dev/clustervg/moodledata
This will destroy any data on /dev/clustervg/moodledata.
It appears to contain: LVM2 (Linux Logical Volume Manager) , UUID: THHRQpAS11PRt2eImKtkh7pxZOiOa3U

Are you sure you want to proceed? [y/n] y

Device:                    /dev/clustervg/moodledata
Blocksize:                 4096
Device Size                100.00 GB (26213591 blocks)
Filesystem Size:           100.00 GB (26213591 blocks)
Journals:                  2
Resource Groups:           400
Locking Protocol:          "lock_dlm"
Lock Table:                "moodle:moodledata"
UUID:                      C920F2FB-6E5D-57B9-C1AD-964FE8FDA3E0

Note that the -t values must match the cluster name in cluster.conf and the resource (filesystem) name.
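
Mounting itself is not shown above; a minimal sketch, assuming the mount point /srv/gfs2 used below:

# mkdir -p /srv/gfs2
# mount -t gfs2 /dev/clustervg/moodledata /srv/gfs2

and, so that the gfs2 init script mounts it at boot, an /etc/fstab entry along these lines:

/dev/clustervg/moodledata  /srv/gfs2  gfs2  defaults,noatime  0 0

Once mounted, the current tuning parameters can be listed with gfs2_tool: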

# gfs2_tool gettune /srv/gfs2
incore_log_blocks = 1024
log_flush_secs = 60
quota_warn_period = 10
quota_quantum = 60
max_readahead = 262144
complain_secs = 10
statfs_slow = 0
quota_simul_sync = 64
stall_secs = 600
statfs_quantum = 30
quota_scale = 1.0000   (1, 1)
new_files_jdata = 0
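
Individual parameters can be changed on a mounted filesystem with settune; a hedged example using one of the tunables listed above (enables data journaling for newly created files):

# gfs2_tool settune /srv/gfs2 new_files_jdata 1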

If the third node reports an error on startup

Mounting GFS2 filesystem (/GFS): Too many nodes mounting filesystem, no free journals

Add an additional journal

# gfs2_jadd -j 1 /dev/mapper/clustervg-moodledata
Filesystem:            /GFS
Old Journals           2
New Journals           3
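
To confirm the new journal count, gfs2_tool can print journal information for a mounted filesystem (mount point /GFS as in the output above):

# gfs2_tool journals /GFS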

Check the cluster status

# clustat
Cluster Status for moodle @ Wed May 18 15:20:02 2016
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 moodle1                                                             1 Online
 moodle2                                                             2 Online
 moodle3                                                             3 Online, Local

If a web interface is really needed as well, install luci
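
The package install itself is not shown here; a sketch for CentOS, assuming the stock luci package:

# yum install luci
# chkconfig luci on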

# /etc/init.d/luci restart
Point your web browser to https://moodle1:8084 (or equivalent) to access luci

Useful further reading