After fighting a bit with EMC ScaleIO to set up a demo on VMware, I decided there has to be an easier and more cost-effective way to implement fault-tolerant software-defined storage for VMware vSphere using existing open source packages.  I have to admit I’m a fan of RHEL-based solutions, so a quick Google search for suitable packages quickly revealed GlusterFS.  So, here’s the procedure I followed to deploy it in my 3-host vSphere 5.5 home lab – your mileage may vary and your subnets will likely be different too…

My basic setup is pretty simple – I have 3 ESXi 5.5 hosts with multiple NICs connected to trunk ports on a gig switch.  Each ESXi host boots from a 128 GB SSD, which also hosts a small local datastore for ISO images and temporary storage – along with host cache when I was playing around with that.  In addition, each host has local HDD storage (one with a 900 GB RAID 5 array and the other two with 3 x 1 TB non-RAID SATA drives each) that sits mostly unused because it isn’t sufficiently fault-tolerant on its own – which is exactly why I’m implementing this solution.  So, with no further ado…

Some basic setup info…

  • If you don’t already have one, create a CentOS 6 (x86_64) or 7 template…I’m using CentOS 6, so if you’re using CentOS 7, you’ll need to make some adjustments…
  • Make sure you have a local datastore on each ESXi host to which you’ll deploy a VM from that template – preferably on a local SSD, and not on the same local disks you’ll be leveraging for the shared storage…
  • For the sake of simplicity (this is a home lab after all), I’m going to use my regular lab subnet (192.168.0.0/24) for internetworking and a standard vSwitch on each host on a dedicated subnet (172.16.1.0/29, 172.16.1.8/29 and 172.16.1.16/29) for the NFS connection to the local SVM.  I used non-overlapping subnets just in case I ever decide to add an uplink.  You could just as easily use the same subnet on each host if you want…
  • You’ll also need a Virtual Machine network on that vSwitch in order to connect the SVM to the host – I named mine SVM.
  • I’ve disabled the firewall and selinux in my templates – I have plenty of network security on my border, so no need to stifle the OS with that responsibility…
  • Obviously, start with an updated OS via yum -y update too – a quick template-prep sketch follows this list…
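
If your template isn’t already set up that way, here’s a minimal sketch of the prep on CentOS 6 (run as root) – nothing exotic, just the firewall, SELinux and update steps mentioned above:

    chkconfig iptables off                                          # disable the firewall
    chkconfig ip6tables off
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # disable selinux (takes effect after a reboot)
    yum -y update                                                   # bring the OS up to date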

First things first…

  • Create a standard vSwitch that is not attached to a physical uplink (this is a host-only network, so no need for an uplink) on each host and name the associated VMkernel Port something appropriate – I used HostNFS – and assign the appropriate IP address – I used 172.16.1.1/255.255.255.248 on esxi1, 172.16.1.9/255.255.255.248 on esxi2 and 172.16.1.17/255.255.255.248 on esxi3.  There’s an esxcli sketch for this step after the list.
  • Deploy a storage VM from your template to the local storage on each ESXi host – I’m deploying them with 1 CPU, 1 GB RAM, a thin-provisioned 8 GB (overkill) vmdk and 2 vNICs – one connected to my 192.168.0.0/24 subnet (for mgmt, Internet and GlusterFS replication/synchronization) and one connected to the SVM virtual network.  Also, I’m naming my VMs by the host and function – so esxi1-svm, esxi2-svm and esxi3-svm (for future reference).
  • Power on each SVM and assign an IP address to each NIC as appropriate – mine are 192.168.0.31, .32 and .33 for the external NIC and 172.16.1.2, 172.16.1.10 and 172.16.1.18 for the HostNFS connectivity (a sample ifcfg sketch follows the list).
  • Install the EPEL repo
    yum -y install http://mirror.umd.edu/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm
  • Enable the GlusterFS repo
    yum -y install wget
    cd /etc/yum.repos.d
    wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  • Install some packages
    yum -y install glusterfs-server ntp
  • Configure NTP – I point to 0/1/2/3.us.pool.ntp.org, but you can choose to point wherever you want for time services (see the ntp.conf sketch after the list)
  • Enable glusterd and ntp
    chkconfig ntpd on
    chkconfig glusterd on
  • REBOOT
  • To get the SVMs talking to each other via GlusterFS, on esxi1-svm
    gluster peer probe 192.168.0.32
    gluster peer probe 192.168.0.33
  • To verify they’re talking to each other, on each SVM
    gluster peer status
  • Add virtual disks to the SVMs or configure passthrough for a dedicated controller, then create a partition and a file system on that partition – I created a 512 GB vmdk on each SVM, then a primary partition of max size with an ext4 file system within (the disk prep is pulled together in a sketch after the list)…
  • Turn off auto check: tune2fs -c 0 /dev/sdb1
  • Make a directory to hold the GlusterFS brick and volume – I created /gfs, within which I created b1 for the first GlusterFS brick: mkdir -p /gfs/b1
  • Mount new file system: mount /dev/sdb1 /gfs/b1
  • Update /etc/fstab accordingly:
    /dev/sdb1 /gfs/b1 ext4 defaults 1 2
  • Create and start a new GlusterFS volume:
    gluster volume create v1 replica 3 transport tcp 192.168.0.31:/gfs/b1/v1 192.168.0.32:/gfs/b1/v1 192.168.0.33:/gfs/b1/v1
    gluster volume start v1
  • Set the quorum for the volume – cluster.quorum-count only takes effect when the quorum type is fixed:
    gluster volume set v1 cluster.quorum-type fixed
    gluster volume set v1 cluster.quorum-count 2
  • Check the volume configuration: gluster volume info (and gluster volume status for the running state)
  • Allow NFS connectivity to the GlusterFS volume from each ESXi host to each SVM:
    gluster volume set v1 nfs.rpc-auth-allow 172.16.1.1,172.16.1.9,172.16.1.17
  • Mount the new NFS export of the GlusterFS volume on each ESXi host (an esxcli example follows the list):
    On esxi1, mount 172.16.1.2:/v1 to esxi1-gfs-b1-v1
    On esxi2, mount 172.16.1.10:/v1 to esxi2-gfs-b1-v1
    On esxi3, mount 172.16.1.18:/v1 to esxi3-gfs-b1-v1
  • Have fun!
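
A few supporting sketches…

Here’s roughly what the host-only vSwitch, HostNFS VMkernel port and SVM port group look like from the ESXi shell on esxi1 – note that the vSwitch name (vSwitch1) and VMkernel interface name (vmk1) are just my assumptions, so substitute whatever fits your hosts (or do it all in the vSphere Client):

    esxcli network vswitch standard add --vswitch-name=vSwitch1                      # host-only vSwitch, no uplink attached
    esxcli network vswitch standard portgroup add --portgroup-name=HostNFS --vswitch-name=vSwitch1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=HostNFS   # VMkernel port for NFS
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.1.1 --netmask=255.255.255.248 --type=static
    esxcli network vswitch standard portgroup add --portgroup-name=SVM --vswitch-name=vSwitch1

Repeat on esxi2 and esxi3 with 172.16.1.9 and 172.16.1.17 respectively.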
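
For the SVM addressing, I just used static CentOS 6 ifcfg files – this is the esxi1-svm pair, assuming the NICs come up as eth0 (lab network) and eth1 (HostNFS); the gateway is left as a placeholder for whatever your lab uses:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 – mgmt/Internet/replication NIC
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.0.31
    NETMASK=255.255.255.0
    # GATEWAY=x.x.x.x – your lab gateway here

    # /etc/sysconfig/network-scripts/ifcfg-eth1 – HostNFS NIC, no gateway needed
    DEVICE=eth1
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=172.16.1.2
    NETMASK=255.255.255.248

Then service network restart, and repeat with .32/172.16.1.10 and .33/172.16.1.18 on the other two SVMs.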
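
NTP is just a matter of pointing /etc/ntp.conf at your preferred servers – mine has:

    server 0.us.pool.ntp.org iburst
    server 1.us.pool.ntp.org iburst
    server 2.us.pool.ntp.org iburst
    server 3.us.pool.ntp.org iburst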
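
Pulled together in one place, the brick prep on each SVM looks like this, assuming the new vmdk shows up as /dev/sdb (it did for me, hence /dev/sdb1 above):

    parted -s /dev/sdb mklabel msdos                               # new msdos partition table
    parted -s /dev/sdb mkpart primary 1MiB 100%                    # one primary partition using the whole disk
    mkfs.ext4 /dev/sdb1                                            # ext4 file system for the brick
    tune2fs -c 0 /dev/sdb1                                         # turn off the periodic fsck
    mkdir -p /gfs/b1                                               # mount point for the first brick
    mount /dev/sdb1 /gfs/b1
    echo "/dev/sdb1 /gfs/b1 ext4 defaults 1 2" >> /etc/fstab       # mount at boot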
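
And finally, the NFS datastore mounts can be done from the vSphere Client or from each host’s ESXi shell – for example, on esxi1:

    esxcli storage nfs add --host=172.16.1.2 --share=/v1 --volume-name=esxi1-gfs-b1-v1

…and the equivalent on esxi2 (172.16.1.10 / esxi2-gfs-b1-v1) and esxi3 (172.16.1.18 / esxi3-gfs-b1-v1).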
