How to: Use the Ubiquity Load Balancer

Introduction

A web application’s success depends heavily on the overall performance its users perceive, and several behind-the-scenes factors shape that perception. Two factors that play an important role in an application’s success are its scalability and reliability: scalability is the application’s ability to adapt to increased usage by taking advantage of additional resources allocated to it, and reliability is its ability to remain in a workable state for its users at all times.

Increased usage will eventually cause an application to outgrow its allocated resources, so ensuring that you use a solution that maintains scalability and reliability is imperative to its continued success. One solution for maintaining these factors is the implementation of one or more load balancer devices into your infrastructure. The implementation of a load balancer in front of your web servers allows for a load-balanced web hosting environment capable of adapting to future growth.

When a load balancer is used in conjunction with a web application and its web servers, it acts as a reverse proxy: a gateway device that retrieves resources from the servers running behind it. How those resources are retrieved is determined by the load balancer’s configuration, and it all happens in a manner that is invisible to the end user. Essentially, load balancing your web hosting environment allows you to distribute traffic across multiple servers. It also enables further scalability and reliability by making it possible to accommodate increased growth simply through the addition of more servers to the environment.

In the tutorial below, we will demonstrate how you can configure a web application (WordPress) to run in a load-balanced web hosting environment utilizing our new load balancer solution.

Step 1 Instance Preparation

Let’s begin by prepping our web hosting environment for the implementation of a load balancer device. Our load balancer setup will consist of four CentOS 7 instances and the load balancer device itself — for a total of five devices. Three instances will serve as web servers, and one will serve as the database server. The three web servers will be using Apache, and the database server will use MariaDB. Additionally, we will later be configuring the web server instances to use a Gluster filesystem volume, which will enable automatic data synchronization between each web server instance.

IMPORTANT NOTES:

This tutorial was written for the setup and configuration of a new WordPress installation and assumes that it will be the only website hosted in the load-balanced environment. An existing WordPress installation will either need to be migrated into this infrastructure or have its configuration updated to accommodate multiple web servers. Should you want to host more than one website, you will want to use an alternative load balancing methodology, which is not covered in this tutorial.

To simplify the naming scheme of our instances, our three web servers will be named apache01, apache02, and apache03, with the database server being named database01. Feel free to name your instances differently, but make sure you adjust the configuration file changes and commands accordingly, as these names will be referenced throughout this tutorial. Additionally, all instances must be created in the same location to utilize private networking, which is required to complete several sections of this tutorial.
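
If your instances weren’t created with these hostnames already set, one optional way to apply them on CentOS 7 is with hostnamectl, running the matching command on each instance (apache01 below is simply the example name used for the first web server):

[root@apache01 ~]# hostnamectl set-hostname apache01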

Web Server Configuration

For all three web server instances, we’ll want to install the Apache web server and necessary PHP libraries:

[root@apache01 ~]# yum install httpd php php-mysql -y

Once everything has finished installing, we’ll want to start Apache and enable it to start on boot on each web server:

[root@apache01 ~]# systemctl start httpd && systemctl enable httpd
Database Server Configuration

On the fourth server, we will only need the MariaDB server-side software installed:

[root@database01 ~]# yum install mariadb-server -y

Once it has finished installing, start the database server and enable it on boot:

[root@database01 ~]# systemctl start mariadb && systemctl enable mariadb

Then run the secure installation tool:

[root@database01 ~]# mysql_secure_installation

Press ENTER when prompted for the current root password, as none has been set yet. At the next prompt, press y to set a password and enter your desired root MariaDB password. Then press ENTER to accept the defaults for the remaining prompts. After you’ve finished the secure installation, let’s configure MariaDB to listen on its private address. We can determine the private IP address by running the command below:

[root@database01 ~]# ip a | grep eth1 | awk '{print $2}'

Its output should be similar to below:

eth1:
10.0.230.4

Now that we have obtained our private IP address, we can update MariaDB’s configuration file:

[root@database01 ~]# vi /etc/my.cnf

Add the following line under the [mysqld] section header (replacing 10.0.230.4 with your own unique private IP address):

bind-address=10.0.230.4

It should look similar to below when you’re done:


[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
bind-address=10.0.230.4

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

Lastly, save the file and then restart MariaDB for the changes to take effect:

[root@database01 ~]# systemctl restart mariadb

We can then confirm that it is listening on the private address by running the following command:

[root@database01 ~]# netstat -na | grep 3306

Its output should then show the private IP address you assigned it:

tcp        0      0 10.0.230.4:3306        0.0.0.0:*               LISTEN

Step 2 Configure Instance Firewalls

To make all three web servers publicly accessible, we will need to open port 80 in each of their firewalls. So, let’s add the HTTP service to the default zone (public) on each web server using the following command:

[root@apache01 ~]# firewall-cmd --permanent --add-service=http

Reload the firewall service, so the rule takes effect:

[root@apache01 ~]# firewall-cmd --reload

Then confirm that the rule is active:

[root@apache01 ~]# firewall-cmd --zone=public --list-all

Its output should be similar to below and list http on the services line:


public (default)
  interfaces:
  sources:
  services: http ssh
  ports:
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

The HTTP service rule that we added permits anyone to access the web servers directly. If you prefer the web servers to be accessible only from the load balancer itself or from specific IP addresses, you will want to update your firewall rule(s) accordingly; a sketch of one approach is shown below.
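
For example, a minimal sketch of restricting HTTP to a single source address uses a firewalld rich rule in place of the open HTTP service rule. The address 203.0.113.10 below is a hypothetical load balancer IP, so substitute your own:

[root@apache01 ~]# firewall-cmd --permanent --remove-service=http
[root@apache01 ~]# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10" service name="http" accept'
[root@apache01 ~]# firewall-cmd --reload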

For all other services in this tutorial, we’ll be utilizing the private network for their communications. To permit all connections over the private network, we’ll want to add our eth1 interface to the trusted zone on each instance.

So, let’s open up each eth1 interface file on our instances:

[root@apache01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1

Then append the following text to the end of the file:

ZONE=trusted

It should look similar to below:

DEVICE=eth1
TYPE=Ethernet
IPADDR=10.0.230.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=static
NM_CONTROLLED=no
MTU=1450
ZONE=trusted

Lastly, we’ll want to restart the firewall and network services for the changes to take effect:

[root@apache01 ~]# systemctl restart firewalld && systemctl restart network
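
To confirm that the interface landed in the trusted zone, you can ask firewalld which zone eth1 now belongs to; the command should simply print trusted:

[root@apache01 ~]# firewall-cmd --get-zone-of-interface=eth1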

Step 3 Configure Web Server Data Synchronization

To accomplish the task of data synchronization for each of our web server cloud instances, we’ll be installing and configuring GlusterFS (Gluster FileSystem). GlusterFS is a userspace filesystem, which is a virtual filesystem that runs on top of the operating system’s filesystem. The creation of this virtual filesystem simplifies the data replication process for existing filesystem types. Utilizing GlusterFS will allow our WordPress installation to always use the same files — no matter which web server is serving the content to the user.

Update Hosts Files

First, let’s define each instance by its private IP address in every instance’s respective hosts file. The private address is the IP automatically configured on the eth1 interface on our cloud platform. Configuring the hosts files this way allows us to provide a unique name for each instance without having to set up nameservers and DNS records.

HINT:

You can obtain your private IP for each instance by running the following command on each one:

[root@apache03 ~]# ip a | grep eth1 | awk '{print $2}'

Open up /etc/hosts with a text editor and then map each instance to its private IP on all instances:

[root@apache02 ~]# vi /etc/hosts

Each /etc/hosts file should then look similar to the below examples once you’ve finished:


#apache01 /etc/hosts file
127.0.1.1 apache01.localdomain apache01
127.0.0.1 localhost
10.0.230.1 apache01
10.0.230.2 apache02
10.0.230.3 apache03	
10.0.230.4 database01

#apache02 /etc/hosts file
127.0.1.1 apache02.localdomain apache02
127.0.0.1 localhost
10.0.230.1 apache01
10.0.230.2 apache02
10.0.230.3 apache03	
10.0.230.4 database01

#apache03 /etc/hosts file
127.0.1.1 apache03.localdomain apache03
127.0.0.1 localhost
10.0.230.1 apache01
10.0.230.2 apache02
10.0.230.3 apache03	
10.0.230.4 database01

#database01 /etc/hosts file
127.0.1.1 database01.localdomain database01
127.0.0.1 localhost
10.0.230.1 apache01
10.0.230.2 apache02
10.0.230.3 apache03	
10.0.230.4 database01

Once you’ve completed mapping your unique names in each server instance’s hosts file, we will need to edit another file to prevent these changes from eventually being overwritten. So, open the /etc/cloud/cloud.cfg file on all instances:

[root@apache01 ~]# vi /etc/cloud/cloud.cfg

Then remove the line that contains the following text:

- update_etc_hosts

Below is an example cloud.cfg file showing where the line appears:

#Example cloud.cfg truncated
disable_root: 0
ssh_pwauth:   1

locale_configfile: /etc/sysconfig/i18n
mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys:   0
ssh_genkeytypes:  ~
syslog_fix_perms: ~

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh
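
If you prefer to make this change from the command line, the same edit can be done with a one-line sed on each instance (this assumes update_etc_hosts appears on only that one line of the file):

[root@apache01 ~]# sed -i '/update_etc_hosts/d' /etc/cloud/cloud.cfg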
Install GlusterFS

Now that the instances can easily identify one another, we can install GlusterFS on each web server instance. First, download the appropriate repository:

[root@apache01 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

Then use yum to install GlusterFS:

[root@apache01 ~]# yum install glusterfs-server -y

Once it has finished installing, you’ll want to start it and enable it on boot:

[root@apache01 ~]# systemctl start glusterd && systemctl enable glusterd
Create and Start a Volume

After you’ve installed GlusterFS on your web server instances, you will want to create a Gluster volume. However, let’s first create a directory on each one of the web server instances so that we can later mount our volume to it:

[root@apache01 ~]# mkdir /gluster_mount

To create a volume, we need to probe our peers (instances) so that they are added to the cluster:

[root@apache01 ~]# gluster peer probe apache02
[root@apache01 ~]# gluster peer probe apache03
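
You can optionally confirm that both peers were added to the trusted storage pool before continuing; the output should list apache02 and apache03 as connected peers:

[root@apache01 ~]# gluster peer status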

Now, we can create a replicated volume across the three web servers, using the /gluster directory on each instance as the brick path (the force option is required because the bricks reside on the root partition):

[root@apache01 ~]# gluster volume create volume01 replica 3 transport tcp apache01:/gluster apache02:/gluster apache03:/gluster force

Upon success, you should receive the following message:

volume create: volume01: success: please start the volume to access data

Now, let’s start the volume as the success message advises:

[root@apache01 ~]# gluster volume start volume01

A success message should then be returned:

volume start: volume01: success
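
At this point, you can optionally review the volume’s details; the output should show a replicated volume with three bricks and a Status of Started:

[root@apache01 ~]# gluster volume info volume01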
Mount the Volume and Update /etc/fstab

Next, let’s mount the volume that we just created on each web server instance (replacing the instance name with that of the server you are working on):

[root@apache01 ~]# mount -t glusterfs apache01:/volume01 /gluster_mount

Then add the following entry to each web server’s /etc/fstab file so that the volume is mounted automatically at boot (again, replacing the instance name with that of the server you are working on):

apache01:/volume01 /gluster_mount glusterfs defaults,_netdev 0 0

It should look similar to below:

#Example /etc/fstab file on apache01
/dev/vda1       /                       ext4    defaults        1 1
apache01:/volume01       /gluster_mount        glusterfs       defaults,_netdev 0 0

Save the file and then move on to the next step.
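
Before doing so, you can optionally confirm on each web server that the volume is mounted; df -hT should list /gluster_mount with a filesystem type of fuse.glusterfs:

[root@apache01 ~]# df -hT /gluster_mount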

Step 4 Install and Configure WordPress

The preparation of our web servers and database server is now complete, so let’s start the WordPress installation and configuration process.

Download and Extract WordPress

First, we’ll want to download and extract WordPress on any one of the web server instances, into the directory where we mounted our Gluster volume (/gluster_mount):

[root@apache01 ~]# wget -P /gluster_mount https://wordpress.org/latest.tar.gz && cd /gluster_mount && tar -xzf latest.tar.gz -C /gluster_mount --strip-components 1 && rm -rf latest.tar.gz
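
Because WordPress’s web-based installer and its plugin and theme updates need to write to these files, you may also want to give the Apache user ownership of the document root. This is an optional, commonly used step, so adjust it to your own permission policy:

[root@apache01 ~]# chown -R apache:apache /gluster_mount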
Create the WordPress Database

Moving along, let’s create our WordPress database on the database server:

[root@database01 ~]# mysql -u root -p'your_root_mariadb_password' -e "create database wordpress_blog;"
Create Database Users

Once the database has been created, we will want to create a user for each web server instance:

[root@database01 ~]# mysql -u root -p'your_root_mariadb_password' -e "create user 'wordpress_user'@'apache01' identified by 'insert_your_db_password'; grant all privileges on wordpress_blog . * to 'wordpress_user'@'apache01'; flush privileges;"
[root@database01 ~]# mysql -u root -p'your_root_mariadb_password' -e "create user 'wordpress_user'@'apache02' identified by 'insert_your_db_password'; grant all privileges on wordpress_blog . * to 'wordpress_user'@'apache02'; flush privileges;"
[root@database01 ~]# mysql -u root -p'your_root_mariadb_password' -e "create user 'wordpress_user'@'apache03' identified by 'insert_your_db_password'; grant all privileges on wordpress_blog . * to 'wordpress_user'@'apache03'; flush privileges;"
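
If you’d like to verify the grants before moving on, you can test the connection from each web server. This assumes the MariaDB command-line client is installed there (yum install mariadb -y); the output should include the wordpress_blog database:

[root@apache01 ~]# mysql -h database01 -u wordpress_user -p'insert_your_db_password' -e "show databases;"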
Update wp-config.php

Everything database-wise is now set up, so we just need to update wp-config-sample.php to use our remote database server and then save it as wp-config.php so that WordPress picks it up. So, let’s open up wp-config-sample.php:

[root@apache01 ~]# vi /gluster_mount/wp-config-sample.php

Then find the lines with the following text:

define('DB_NAME', 'database_name_here');
define('DB_USER', 'username_here');
define('DB_PASSWORD', 'password_here');
define('DB_HOST', 'localhost');

Update each variable with what you input earlier:

define('DB_NAME', 'wordpress_blog');
define('DB_USER', 'wordpress_user');
define('DB_PASSWORD', 'insert_your_db_password');
define('DB_HOST', 'database01');

Save the file. It also needs to be copied into place as wp-config.php, as shown below, before you move on to the next step.
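
One straightforward way to do that, assuming you edited the sample file in place as above, is to copy it over on the Gluster volume so the change reaches every web server:

[root@apache01 ~]# cp /gluster_mount/wp-config-sample.php /gluster_mount/wp-config.php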

Update Apache DocumentRoot

On all of our web servers, we will now want to update our DocumentRoot setting in our httpd.conf files to use the GlusterFS volume we mounted earlier:

[root@apache01 ~]# vi /etc/httpd/conf/httpd.conf

Find the line that contains the following text:

DocumentRoot "/var/www/html"

Update the path inside the quotes to /gluster_mount.

Slightly further down, you will see a line with the following text:

<Directory "/var/www">

Update its path to /gluster_mount as well and go to the line further down with the following text:

<Directory "/var/www/html">

Again, update its path to /gluster_mount. Lastly, save the file and restart Apache for the changes to take effect:

[root@apache01 ~]# systemctl restart httpd
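
If you’d like to catch any typos in httpd.conf, Apache can also check its own configuration syntax; the output should end with Syntax OK:

[root@apache01 ~]# apachectl configtest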

Step 5 Create and Configure the Load Balancer

At this point, our environment should be configured how we want it, with the exception of the load balancer device itself. So, let’s create our load balancer in Ubiquity Motion with the default settings, in the same location as our instances, and then assign the three web server instances to the device. The default settings for load balancers are the following:

Protocol: HTTP
Port: 80
Method: Round Robin
Session Persistence: None
Connection Limit: -1
Delay: 10
Timeout: 10
Max Retries: 10

All load balancer options are covered below for informational purposes so that you can better understand the choices available. Feel free to move on to the next step to complete the setup instead.

Methods: Round Robin vs Least Connections vs Source IP

Round Robin:

A load balancer configured to use the Round Robin method rotates incoming requests through the hosts behind it, one after another. The same web server will never be used to respond to a user’s request twice in a row (as long as there are no issues with the other web servers).

Least Connections:

When a load balancer is configured to use Least Connections, it will always send the request to the web server that currently has the fewest active connections.

Source IP:

Load balancers configured with the Source IP option will always utilize the same web server based on the user’s IP address. So, unless the user’s IP address changes, they will utilize the same web server throughout their session.

Session Persistence: HTTP Cookie vs Source IP

HTTP Cookie:

If the HTTP cookie option is selected, a cookie identifying the web server that was used is stored on the user’s computer. This allows the sessions of users who have cookies enabled in their browser settings to persist, which is useful for stateful web applications.

Source IP:

Since some users don’t enable cookies, another way to maintain session persistence is to track the IP address that originated the request to the web server. However, should the user’s IP address change during their session, they may be routed to another web server and lose their session. In cases where you believe users’ IP addresses may change frequently, the HTTP cookie option is the better choice.

Connectivity and Monitor Settings

Connection Limit:

By default, this is set to ‘-1’, which places no limit on the number of connections made to the load balancer. Set it to any number of one or greater if you would like to define a limit.

Delay:

The delay setting applies to the monitoring of instances attached to the load balancer. It is defined in seconds and determines how frequently each instance’s status is checked.

Timeout:

This setting defines how long, in seconds, the load balancing monitor waits for a reply from an instance.

Max Retries:

This is the number of failed checks the load balancer allows before giving up on an instance. Once that limit has been exceeded, the instance is considered unresponsive and is removed from the load balancer.

Step 6 Update Domain ‘A’ Record

The final step is to update the DNS ‘A’ record for your domain so that it points to the IP address of the load balancer. The process will vary depending on whether you utilize your own nameservers or use the nameservers provided by your registrar. To obtain the IP address of your load balancer, simply visit the load balancer overview tab in Ubiquity Motion. Once you’ve updated your ‘A’ record, you will have successfully completed the setup and configuration of your WordPress site in a load-balanced web hosting environment!
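
Once the record change has propagated, you can verify it from any machine that has the dig utility available (on CentOS it is provided by the bind-utils package); yourdomain.com below is a placeholder for your actual domain, and the command should return the load balancer’s IP address:

[root@apache01 ~]# dig +short yourdomain.com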
