How to configure Nginx load balancing

Load balancing is a powerful and useful technique for distributing traffic across different servers, and Nginx load balancing is one of the best options around to achieve full application redundancy at low cost, with a quick and easy server-side setup. Nginx provides the load balancer service, and setting it up is fairly easy. Let's begin.

Install Nginx

First, you must install Nginx. If you are using CentOS/RHEL, I suggest you take a look at this link.
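If you would rather not leave this page, a minimal install on CentOS/RHEL usually looks like this (a sketch, assuming the nginx package is available through the EPEL repository):

yum install epel-release
yum install nginx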

If you use Debian/Ubuntu, just type:

apt-get install nginx

Nginx load balancing and the upstream module

In order to set up round robin load balancing, we will use the nginx upstream module.
First, let's edit your virtual host file; in my case:

pico -w /etc/nginx/conf.d/mysite.com.conf

At the top of the file, add this:

upstream balancer {
    server backend.yoursite.com;
    server backend2.yoursite.com;
    server backend3.yoursite.com;
}

On each one of these machines you should have the same nginx installation and exactly the same web/php/static files.
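As a rough sketch, each backend node could serve the site with a plain vhost like the following (the server_name and root path are placeholders for your own setup):

server {
    listen 80;
    server_name backend.yoursite.com;
    # document root and index files are placeholders; point them at your own content
    root /var/www/mysite.com;
    index index.html index.php;
}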

Where should I use the upstream module?

You must do it within your virtual host configuration, for example:

server {
    location / {
        proxy_pass http://balancer;
    }
}
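A slightly fuller variant, if you want the backends to see the original host name and client IP, could look like this (the proxy_set_header lines are optional and not part of the minimal setup above):

server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://balancer;
        # pass the original host and client address on to the backends
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}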

Start the nginx load balancer

service nginx restart
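If the restart fails, or if you just want to be safe, you can check the configuration for syntax errors first with nginx's built-in test:

nginx -t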

If you have all your virtual servers in place with the same content, you should see the traffic start to distribute among all the nodes.
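One quick way to check this from a client machine, assuming each backend adds a small identifying response header (for example add_header X-Backend backend1; in its server block, where X-Backend is just a made-up header name), is to repeat a request and watch which node answers:

# repeat a few requests and print the hypothetical X-Backend header from each response
for i in $(seq 1 6); do curl -sI http://mysite.com/ | grep -i x-backend; done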

Load Balancing Options

In the previous example, the load balancer was configured to distribute all traffic equally among the nodes; however, that may not be the best scenario for some platforms. Fortunately, nginx provides a lot of balancing options. Let's explore the most popular ones:

Weight Balancing

One way to send more traffic to certain machines is the weight option. The default weight is 1; if you give backend2.yoursite.com a weight of 2, it will receive twice as much traffic as a weight-1 server, and if you set weight=4 for backend3.yoursite.com, it will receive four times the traffic that backend.yoursite.com receives.

Here is the example:

upstream balancer {
    server backend.yoursite.com weight=1;
    server backend2.yoursite.com weight=2;
    server backend3.yoursite.com weight=4;
}

IP Hash Balancing

The ip_hash option ties clients to backend servers according to their IP address. This means that if a visitor originally received content from backend1.yoursite.com, they will keep receiving traffic from that server until it goes down or fails, at which point the visitor will start receiving traffic from another active node.

IP hash balancing example:

upstream balancer {
    ip_hash;
    server backend.yoursite.com;
    server backend2.yoursite.com;
    server backend3.yoursite.com down;    # 'down' marks this node as permanently unavailable
}

Max Fails Option

If you set up plain round robin balancing with nginx, the web server will keep sending requests to backend nodes even when they are down. The max_fails directive helps prevent that by limiting how many failed attempts are tolerated before a node is considered unavailable. There are two important concepts involved in this configuration:

max_fails is the maximum number of failed attempts to connect to the backend node before it is labeled as 'inactive' or 'down'.

fail_timeout is another directive, which specifies how long the server is considered down/inoperative. Once the fail_timeout period expires, a new connection attempt will be made.

Max fails with fail_timeout configuration example:

upstream balancer {
    server backend.yoursite.com max_fails=2 fail_timeout=5s;
    server backend2.yoursite.com weight=3;
    server backend3.yoursite.com weight=4;
}

All done! By this point you should have a fully working nginx load balancing service 😀


Esteban Borges

Linux Geek, Webperf Addict, Nginx Fan. CTO @Infranetworking
