How to install GlusterFS on CentOS 6.9 Linux
This article is a step-by-step guide to installing GlusterFS on CentOS 6.9 Linux. For this tutorial, we will assume you are running CentOS 6.9.
As described on the GlusterFS website: GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. GlusterFS is free and open source software and can utilize common off-the-shelf hardware. To learn more, please see the Gluster project home page.
1 – Requirements
At least three virtual disks are needed: one for the OS installation (sda) and two to be used to serve GlusterFS storage (sdb and sdc). This emulates a real-world deployment, where you would want to separate GlusterFS storage from the OS install.
Note: GlusterFS stores its dynamically generated configuration files in /var/lib/glusterd. If at any point GlusterFS is unable to write to these files, it will at a minimum cause erratic behavior on your system, or worse, take your system offline completely. It is advisable to create separate partitions for directories such as /var/log to ensure this does not happen.
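As an illustration only, a dedicated /var/log partition could be declared in /etc/fstab along these lines (the device name /dev/sda3 is just a placeholder; adjust it to your own disk layout):
/dev/sda3  /var/log  ext4  defaults  1 2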
2 – Format and mount the bricks
On CentOS 6, you need to install the xfsprogs package to be able to create an XFS file system.
# yum install xfsprogs
Note: These examples assume the bricks will reside on /dev/sdb1 and /dev/sdc1.
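If the data disks are not yet partitioned, one possible way to create a single partition spanning each disk is with parted (the device names match the assumptions above; adjust them to your environment):
# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
# parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 100%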
# mkfs.xfs -i size=512 /dev/sdb1
# mkfs.xfs -i size=512 /dev/sdc1
# mkdir -p /bricks-sdb/brick1
# mkdir -p /bricks-sdc/brick1
# vim /etc/fstab
Add the following:
/dev/sdb1 /bricks-sdb/brick1 xfs defaults 1 2
/dev/sdc1 /bricks-sdc/brick1 xfs defaults 1 2
Save the file and exit
# mount -a && mount
You should now see sdb1 mounted at /bricks-sdb/brick1, and sdc1 mounted at /bricks-sdc/brick1
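To double-check, you can list the mounted brick file systems:
# df -hT /bricks-sdb/brick1 /bricks-sdc/brick1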
3 – Installing GlusterFS
The CentOS Storage SIG provides centos-release-* packages in the CentOS Extras repository. This makes it easy for users to install projects from the Storage SIG; only two steps are needed:
# yum install centos-release-gluster
# yum install glusterfs-server
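As a quick sanity check that the packages installed as expected, you can query the version:
# glusterfs --version
# rpm -q glusterfs-server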
Start the GlusterFS management daemon and check its status:
# service glusterd start
# service glusterd status
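On CentOS 6 you will usually also want glusterd to start automatically at boot:
# chkconfig glusterd on
# chkconfig --list glusterd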
4 – Firewall configuration
You can run Gluster with iptables rules, but it is up to you to decide how to configure them. By default, glusterd listens on tcp/24007, but opening that port alone is not enough on the Gluster nodes: each time you add a brick, a new port is opened (you can see these ports with "gluster volume status").
Depending on your design, it is probably better to have a dedicated NIC for the gluster/storage traffic, and to "trust" that NIC/subnet/the Gluster nodes through your netfilter solution (/etc/sysconfig/iptables on CentOS 6, firewalld/firewall-cmd on CentOS 7).
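For example, if the storage traffic goes over a dedicated NIC, one possible approach on CentOS 6 is to trust that interface and subnet entirely (eth1 and 10.0.0.0/24 are placeholders; substitute your own storage interface and subnet):
# iptables -I INPUT -i eth1 -s 10.0.0.0/24 -j ACCEPT
# service iptables save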
You can also use CSF. For more information about installing and configuring CSF, please refer to How to install CSF on CentOS 7 Linux.
5 – Set up a GlusterFS volume
On the server:
# mkdir /bricks-sdb/brick1/gv0
# mkdir /bricks-sdc/brick1/gv0
We want to create a "Distributed" Gluster volume, so on the server run the following commands:
# gluster volume create gv0 Public_IP:/bricks-sdb/brick1/gv0 Public_IP:/bricks-sdc/brick1/gv0
# gluster volume start gv0
Confirm that the volume shows “Started”:
# gluster volume info
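The output should look roughly like the following (the Volume ID will differ in your environment):
Volume Name: gv0
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: Public_IP:/bricks-sdb/brick1/gv0
Brick2: Public_IP:/bricks-sdc/brick1/gv0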
Note: If the volume is not started, clues as to what went wrong will be in log files under /var/log/glusterfs on one of the servers, usually in etc-glusterfs-glusterd.vol.log.
6 – Testing the GlusterFS volume
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine, known as a “client”. Since using the method here would require additional packages to be installed on the client machine, we will use the servers as a simple place to test first.
# mount -t glusterfs localhost:/gv0 /mnt
# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
First, check the mount point:
# ls -lA /mnt | wc -l
You should see 100 files returned. Next, check the GlusterFS brick directories on each partition:
# ls -lA /bricks-sdb/brick1/gv0
# ls -lA /bricks-sdc/brick1/gv0
You should see about 50 files per brick, because the distribute method we used here spreads the files across the two bricks.
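For reference, mounting the volume from a separate CentOS client would look roughly like this, assuming the Storage SIG repository is also enabled on the client and Public_IP is the address of one of the Gluster servers:
# yum install glusterfs glusterfs-fuse
# mkdir -p /mnt/gv0
# mount -t glusterfs Public_IP:/gv0 /mnt/gv0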