How To Configure GlusterFS on CentOS/RHEL 6.x
Q. What is GlusterFS?
-- GlusterFS is a powerful open-source clustered file system capable of scaling to several petabytes of storage, all available to users under a single mount point. It uses existing disk filesystems such as ext3, ext4, and xfs to store data, and clients can access the storage as if it were a local filesystem. A GlusterFS cluster aggregates storage bricks over InfiniBand RDMA and/or TCP/IP interconnects into a single global namespace.
Before the Installation:
Ensure that TCP and UDP ports 24007 and 24008 are open on all Gluster Servers.
Apart from these ports, you need to open one port for each brick starting from port 24009.
For example: if you have five bricks, you need to have ports 24009 to 24013 open.
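-- If you would rather keep iptables running instead of disabling it in Step 2 below, rules along these lines should open the ports listed above (a sketch for the two-brick setup used later in this guide; widen the 24009 range to match your brick count):
# iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
# iptables -A INPUT -p udp --dport 24007:24008 -j ACCEPT
# iptables -A INPUT -p tcp --dport 24009:24010 -j ACCEPT
# service iptables save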
Step: 1. Add the Hostname of Every Server to /etc/hosts (All Nodes) :
# vi /etc/hosts
192.168.100.220 ser1.domain.com ser1 # (Server)
192.168.100.221 ser2.domain.com ser2 # (Server)
192.168.100.229 ser5.domain.com ser5 # (Client)
-- Save & Quit (:wq)
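-- Before going further, you can confirm that name resolution works from each node, for example:
# ping -c 2 ser2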
Step: 2. Stop the Firewall & Disable SELinux (All Nodes) :
# iptables -L
# service iptables stop
# chkconfig iptables off
# vi /etc/sysconfig/selinux
SELINUX=disabled
-- Save & Quit (:wq)
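-- The config file change only takes effect after a reboot; to drop SELinux to permissive mode on the running system in the meantime, you can also run:
# setenforce 0
# getenforce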
Step: 3. Reboot All the Servers :
# init 6
Step: 4. Time Synchronization with NTP (All Nodes) :
# yum -y install ntp
# chkconfig ntpd on
# ntpdate pool.ntp.org
# service ntpd start
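-- To confirm that ntpd is actually talking to upstream time servers, check its peer list:
# ntpq -p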
Step: 5. Install the EPEL Repo (All Servers) :
# yum -y install epel-release
Step: 6. Install GlusterFS (on ser1.domain.com & ser2.domain.com) :
# cd /tmp
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum -y install glusterfs glusterfs-fuse glusterfs-server
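-- A quick sanity check that the packages installed and which version you got:
# glusterfs --version
# rpm -q glusterfs-server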
Step: 7. Install the Client Packages (on ser5.domain.com) :
On the client, execute the following commands to install the GlusterFS client-side packages.
# cd /tmp
# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
# yum -y install glusterfs glusterfs-fuse fuse fuse-libs libibverbs
Step: 8. Start the GlusterFS Service (on Ser1 & Ser2) :
# service glusterd start
# chkconfig glusterd on
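-- Verify that the management daemon is up on both servers before moving on:
# service glusterd status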
Step: 9. Creating Trusted Storage Pool (on Ser1.domain.com):
-- The trusted storage pool is the set of servers that run as Gluster servers and provide bricks for volumes. Every other server must be probed from the main server (here ser1 is the main server); do not probe ser1 itself or localhost. We will now join the two servers into a trusted storage pool, with the probing done from ser1.
# gluster peer probe 192.168.100.221
OR
# gluster peer probe ser2
Probe successful.
# gluster peer status
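-- If the probe worked, the status on ser1 should look roughly like the following (the UUID is system-specific and will differ):
Number of Peers: 1
Hostname: ser2
Uuid: <your-peer-uuid>
State: Peer in Cluster (Connected)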
-- To remove a server from the trusted storage pool later (run from another pool member), e.g.:
# gluster peer detach ser2
Step: 10. Creating Replicated Glusterfs Server Volume (on Ser1.domain.com) :
-- A Gluster volume is a logical collection of bricks, where each brick is an export directory on a server in the trusted storage pool. Use replicated volumes where high availability and high reliability are critical, since replicated volumes keep identical copies of files across multiple bricks in the volume.
Important Note: /data is a newly attached volume mounted on both Ser1 & Ser2.
# gluster volume create rep-volume replica 2 ser1:/data ser2:/data force
Here, force is needed because each brick sits directly on a mount point; Gluster normally insists on a sub-directory under the mount point and refuses to create the volume otherwise.
Step: 11. Starting the replicated volume (on Ser1.domain.com) :
# gluster volume start rep-volume
# gluster volume info
# gluster pool list
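-- For the volume created above, gluster volume info should report something along these lines (exact fields vary slightly between GlusterFS versions):
Volume Name: rep-volume
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ser1:/data
Brick2: ser2:/data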
Step: 12. We cannot mount the volume onto the /data bricks themselves, so mount it on /shared on both servers (on Ser1 & Ser2) :
# mkdir /shared
# mount -t glusterfs ser1:/rep-volume /shared
# df -h
-- Now add the following line at the end of the /etc/fstab file so the volume is mounted automatically on every reboot.
# vi /etc/fstab
ser1.domain.com:/rep-volume /shared glusterfs defaults,_netdev 0 0
-- Save & Quit (:wq)
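-- To make sure the new fstab entry itself is correct, remount using only that entry:
# umount /shared
# mount /shared
# df -h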
On Client Machine :
===============
You can simply mount the volume and it is ready to use.
# mkdir /shared
# mount -t glusterfs ser1:/rep-volume /shared
# df -h
# vi /etc/fstab
ser1.domain.com:/rep-volume /shared glusterfs defaults,_netdev 0 0
-- Save & Quit (:wq)
Create a file on any of the nodes and check that the replication is working on all the nodes.
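-- For example, on the client (any file name will do):
# touch /shared/testfile
-- Then on Ser1 & Ser2 the file should show up inside each brick directory:
# ls -l /data/testfile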
Thanks for visiting my blog. For more tutorials, keep visiting!