Avoid Asymmetric Routing in Load Balancing (pfSense example)

Introduction

My previous blogs Use pfSense to Load Balance Web Servers (1) and Use pfSense to Load Balance Web Servers (2) introduced the deployment of pfSense as a load balancer to distribute web traffic to backend server nodes (i.e. Clst1-S1 and Clst1-S2; Clst2-S1 and Clst2-S2). pfSense hosts Server Cluster 1’s virtual IP 10.10.20.20 and Server Cluster 2’s virtual IP 10.10.20.30.

In the previous lab, when we accessed http://10.10.20.20 from the internal Mgmt PC (10.10.10.10/24), the traffic was successfully load balanced to either Clst1-S1 (10.10.20.21) or Clst1-S2 (10.10.20.22).
pfsense_lab_topo

Failed Scenario

However, I received a question: when accessing http://10.10.20.20 from Mgmt2 (diagram below), which is in the same subnet as the backend nodes, Mgmt2 cannot reach the web service.

Mgmt IP: 10.10.10.10 (successfully accessed http://10.10.20.20)
Mgmt2 IP: 10.10.20.10 (failed to access http://10.10.20.20)
Cluster 1 VIP: 10.10.20.20
Cluster 1 Node 1 IP: 10.10.20.21
Cluster 1 Node 2 IP: 10.10.20.22
pfsense_snat_topo_issue

I replicated the failed scenario and observed the following:
pfSense_mgmt2_failed.png

Asymmetric Routing

What is the difference between accessing the web service from Mgmt and from Mgmt2?

Mgmt PC is external to the web service subnet. When the user requests http://10.10.20.20, the traffic reaches the pfSense load balancer, which forwards it to either Clst1-S1 (10.10.20.21) or Clst1-S2 (10.10.20.22). Let’s assume Clst1-S1 responds to the request this time. Since Mgmt PC is in a different subnet (10.10.10.0/24), the return traffic reaches the server’s default gateway on pfSense (10.10.20.1) first, and is then routed to Mgmt PC.
pfSense_SNAT_topo_extmgmt.png

However, Mgmt2 PC is internal to the web service subnet. When the user requests http://10.10.20.20, the traffic reaches the pfSense load balancer and is then forwarded to Clst1-S1 (10.10.20.21). Since Mgmt2 PC is in the same subnet as the web servers (10.10.20.0/24), the return traffic goes to Mgmt2 PC directly via SW1, without transiting the default gateway on the load balancer.

Asymmetric routing occurs. Although some devices tolerate asymmetric routing, these days we still try to avoid it whenever we can. For example, the F5 load balancer allows asymmetric routing but limits the available features. Asymmetric routing also adds network complexity and security concerns.
pfSense_SNAT_topo_intmgmt.png
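The root cause is just ordinary host routing, which can be sketched in a few lines of Python (a simplified illustration using the lab’s addresses; ‘next_hop’ is a hypothetical helper, not pfSense code):

```python
import ipaddress

def next_hop(server_if: str, dst: str, gateway: str) -> str:
    """A host delivers directly within its own subnet and uses the
    default gateway for everything else."""
    if ipaddress.ip_address(dst) in ipaddress.ip_interface(server_if).network:
        return dst       # direct delivery via the local switch (SW1)
    return gateway       # routed via the default gateway (pfSense)

# Clst1-S1 replying to Mgmt (10.10.10.10, a different subnet): via pfSense
print(next_hop("10.10.20.21/24", "10.10.10.10", "10.10.20.1"))  # 10.10.20.1
# Clst1-S1 replying to Mgmt2 (10.10.20.10, the same subnet): straight to Mgmt2
print(next_hop("10.10.20.21/24", "10.10.20.10", "10.10.20.1"))  # 10.10.20.10
```

The second case is exactly why the reply bypasses the load balancer while the request went through it.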

SNAT as Solution

If the business requirement says users must be able to access the web service from the same subnet, then SNAT (source NAT) can be a solution to the asymmetric routing problem.

pfSenseLB translates the source IP of the traffic initiated from Mgmt2 from 10.10.20.10 to 10.10.10.11. In this case, when Clst1-S1 receives the traffic from Mgmt2, it responds to 10.10.10.11, which forces the return traffic through pfSenseLB. pfSenseLB then translates 10.10.10.11 back to 10.10.20.10 and sends the traffic to Mgmt2.
pfSense_SNAT_topo_SNAT.png

pfSense_SNAT_topo_SNAT2.png
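The translation pair can be modelled as a simple mapping (a sketch of the idea only; pfSense implements this with pf NAT state, and 10.10.10.11 is the translated address used in this lab):

```python
# Outbound: rewrite Mgmt2's source IP so replies must route via the LB.
# Inbound: restore the original address before delivering to Mgmt2.
SNAT_MAP = {"10.10.20.10": "10.10.10.11"}  # Mgmt2 -> translated source
REVERSE_MAP = {v: k for k, v in SNAT_MAP.items()}

def snat_outbound(src_ip: str) -> str:
    return SNAT_MAP.get(src_ip, src_ip)

def snat_inbound(dst_ip: str) -> str:
    return REVERSE_MAP.get(dst_ip, dst_ip)

translated = snat_outbound("10.10.20.10")         # request leaves LB as 10.10.10.11
assert snat_inbound(translated) == "10.10.20.10"  # reply is mapped back to Mgmt2
```

Because 10.10.10.11 is outside the 10.10.20.0/24 subnet, Clst1-S1 has no choice but to send its reply to its default gateway, i.e. back through pfSenseLB.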

The following screenshot demonstrates SNAT configuration details on pfSense.
pfSense_snat.png

With SNAT in place, we can now successfully access http://10.10.20.20 from Mgmt2.
pfSense_mgmt2_success.png

NAT hits can be checked using the shell command ‘pfctl -vvs nat’.
psSense_nat_hit.png

End

Be careful of asymmetric routing in load balancing design. For example, one-arm and multi-path (nPath) designs may involve asymmetric routing. The selection of a design model depends on business requirements. SNAT is a potential solution to the asymmetric routing problem.


Set up NGINX as Reverse Proxy with Caching

Introduction

This lab reuses the server infrastructure built in Deploy Scalable and Reliable WordPress Site on LEMP(1), but adds another Nginx server as a load balancer/reverse proxy (LB01) in front of the web servers (WEB01 and WEB02). Caching will be enabled on LB01 and tested as well.

Boxes highlighted in RED below are deployed in the lab. Although WEB02 is not deployed in the current lab, it can be deployed in the same way as WEB01, as described in Deploy Scalable and Reliable WordPress Site on LEMP(2), and proxied by LB01 as shown in the later configuration section.
nginx_reverseproxy_cache.png

Key Concepts

Forward Proxy vs. Reverse Proxy

A forward proxy can be used when servers/clients on a company’s internal network need to reach internet resources. It helps keep user IPs anonymous, filters URLs and may speed up internet browsing by caching web content.

A reverse proxy can be used when internet users try to access a company’s internal resources. The user request arrives at the reverse proxy server, which forwards the request to a backend server that can fulfill it, and returns the server’s response to the client. It hides the company’s actual server IPs from attackers, and reduces the load on the actual servers by serving cached content itself.

Load Balancing vs. Reverse Proxy

Nginx site provides a good explanation on this topic: https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/

In this lab, Nginx is set up as both load balancer and reverse proxy.

Deployment Steps

Step 1 – Install Nginx on Ubuntu 16.04

Select a $5/month Ubuntu 16.04 droplet on DigitalOcean. DigitalOcean calls its Virtual Private Servers (VPS) ‘droplets’. Refer to Deploy Scalable and Reliable WordPress Site on LEMP(1) for details about DigitalOcean and the droplets used in my labs.

Install Nginx on the newly created droplet LB01, by executing the following command:

sudo apt-get update
sudo apt-get -y install nginx

Step 2 – Configure Reverse Proxy

Edit Nginx site configuration on LB01 to pass on web requests to backend web servers.

sudo nano /etc/nginx/sites-enabled/default

Use the Nginx HTTP ‘upstream’ module to realise load balancing and reverse proxying to multiple backend servers. Refer to the official module documentation for details. Update the content of ‘/etc/nginx/sites-enabled/default’ as below:

#define an upstream server group called 'webserver'. 'ip_hash' enables session persistence, if required. '10.132.84.104' is WEB01's private IP; '10.132.84.105' is WEB02's private IP.
upstream webserver {
                ip_hash;
                server 10.132.84.104;
                server 10.132.84.105;
}

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;

	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

	location / {
		# Call the upstream server group 'webserver', which we defined earlier. We can add additional proxy parameters in '/etc/nginx/proxy_params' if required.
                proxy_pass http://webserver;
                include /etc/nginx/proxy_params;
	}
}
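To see why ‘ip_hash’ keeps a client on the same backend, here is a simplified sketch (nginx’s real algorithm hashes the first three octets of the client IPv4 address; the hash below is only illustrative):

```python
import hashlib

# WEB01's and WEB02's private IPs, as in the upstream block above
BACKENDS = ["10.132.84.104", "10.132.84.105"]

def pick_backend(client_ip: str) -> str:
    # a deterministic hash of the client IP: same client, same backend
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# repeated requests from one client always land on one server,
# so PHP sessions survive without shared session storage
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```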

Restart the Nginx service to apply our change.

sudo service nginx restart

Test access to our WordPress site (created in Deploy Scalable and Reliable WordPress Site on LEMP(2)) via LB01’s public IP. We should see the same page as when we access WEB01’s public IP directly.
nginx_LB1.png
If things don’t work, check the error log ‘/var/log/nginx/error.log’ on LB01. We can use ‘cat’ to display the file content, but we use ‘tail’ this time to list the final ‘n’ lines.

# -n 6 means display the final 6 lines of the given file.
tail -n 6 /var/log/nginx/error.log

Step 3 – Configure Cache Server

Configure cache in Nginx site configuration file ‘/etc/nginx/sites-enabled/default’. The current file content can be viewed using ‘cat’, ‘less’ or ‘more’.

cat = concatenates files and prints the result to the screen (it does not paginate)

more = views a text file one page at a time; press the spacebar to go to the next page

less = much the same as ‘more’, except it also supports page up/down and string search; ‘less’ is the enhanced version of ‘more’

For further details, refer to ‘Linux Command 7 – more, less, head, tail, cat‘.

Update ‘/etc/nginx/sites-enabled/default’ as follows. Refer to ‘Nginx Caching‘ for details, but additionally include the ‘proxy_cache_valid’ directive. In my lab, when ‘proxy_cache_valid’ is unset, the cache status always shows ‘MISS’. Please refer to Nginx Content Caching for ‘proxy_cache_valid’ directive details.

#cache files will be saved in subdirectories (1:2) under '/tmp/nginx'.
#a cache zone called 'my_zone' is created, 10MB in size, to store cache keys and other metadata.
#'inactive=60m' means an asset will be cleared from the cache if not accessed within 60 mins.
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m inactive=60m;

#proxy_cache_key defines the key (identifier) for a request. If a request has the same key as a cached response, the cached response is sent to the client.
proxy_cache_key "$scheme$request_method$host$request_uri";

#'proxy_cache_valid' sets how long cached responses are considered valid.
#'200 10m' means responses with code 200 are considered valid for 10 mins.
proxy_cache_valid 200 10m;

upstream webserver {
        ip_hash;
        server 10.132.84.104;
}

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;
        root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

	server_name _;

	location / {
                #use the cache zone 'my_zone', which we defined earlier.
                proxy_cache my_zone;
                
                #add an informative header 'X-Proxy-Cache' in response to tell us whether we hit cached content or miss.
                add_header X-Proxy-Cache $upstream_cache_status;
                proxy_pass http://webserver;
                include /etc/nginx/proxy_params;
        }
}

‘/tmp/nginx’ is the cache file path we defined earlier.
cache_path.png

Finally, let’s test the content cache. On the first visit, ‘X-Proxy-Cache’ shows ‘MISS’; on a revisit it shows ‘HIT’. ‘X-Proxy-Cache: EXPIRED’ shows when the cached copy is older than ‘proxy_cache_valid’ (10 mins here), so the response is refetched from the upstream. If an asset is not accessed within 60 mins (‘inactive=60m’), it is removed from the cache entirely and the next request is a ‘MISS’ again.
nginx_cache_hit.png
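A minimal Python model of these cache states (an illustration of the timers only, not nginx’s implementation; assuming ‘proxy_cache_valid 200 10m’):

```python
# Toy model of $upstream_cache_status for our config:
# first request -> MISS, repeat within 10 min -> HIT,
# repeat after the 10-min validity window -> EXPIRED (refetched from upstream).
VALID_SECONDS = 10 * 60  # proxy_cache_valid 200 10m

cache = {}  # cache key -> time the response was stored

def lookup(key: str, now: float) -> str:
    if key not in cache:
        cache[key] = now
        return "MISS"
    if now - cache[key] > VALID_SECONDS:
        cache[key] = now          # stale: refetch and re-store
        return "EXPIRED"
    return "HIT"

key = "httpGETexample.com/index.html"  # shaped like our proxy_cache_key
print(lookup(key, 0))     # MISS
print(lookup(key, 60))    # HIT
print(lookup(key, 1200))  # EXPIRED
```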

Deploy Scalable and Reliable WordPress Site on LEMP(3)

Introduction

In the previous post Deploy Scalable and Reliable WordPress Site on LEMP(2), we successfully set up the Linux+Nginx+PHP+MySQL (LEMP) stack to host the WordPress site. However, the Nginx and PHP services were enabled on the same server, WEB01.

In this lab, we will separate PHP onto an external server PHP01 and leave WEB01 as an Nginx web server only. This adds flexibility to the scaling strategy. For performance enhancement details, please refer to Scaling PHP apps via dedicated PHP-FPM nodes, a post I found online.

Deployment Steps

This lab involves deploying PHP01 and changing the configuration on WEB01 to forward PHP requests to PHP01. The topology is as below:
nginx_env

Step 1 – Configure PHP01 as php-fpm node

Boot another $5 Ubuntu server from DigitalOcean; details are available in Deploy Scalable and Reliable WordPress Site on LEMP(1).

#log onto PHP01
#update and install glusterfs client, php-fpm and php-mysql services on PHP01
sudo apt-get update
sudo apt-get -y install glusterfs-client php-fpm php-mysql
#make a folder called 'gluster' under root
sudo mkdir /gluster
#mount the 'file_store' volume on FS01 to '/gluster' on PHP01. '10.132.43.212' is FS01's private IP. 'glusterfs' is the filesystem type.
sudo mount -t glusterfs 10.132.43.212:/file_store /gluster

# add the volume to fstab so it mounts automatically at boot time. Note that 'sudo echo ... >> /etc/fstab'
# would fail on a root-owned file because the redirection runs as the normal user, so use 'tee -a' instead.
echo "10.132.43.212:/file_store /gluster glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab

Verify that PHP01 can access the same WordPress folder we created on FS01 earlier, by executing the following command on PHP01.

sudo ls -la /gluster/www

Step 2 – Forward PHP Requests from WEB01 to PHP01

Log onto WEB01 and edit the Nginx site default file by executing the following command:

sudo nano /etc/nginx/sites-enabled/default

Update the file as below, where ‘10.132.19.6’ is PHP01’s private IP and 9000 is the port used by FastCGI. Comment out the local FastCGI socket ‘unix:/run/php/php7.0-fpm.sock’.

	# pass the PHP scripts to FastCGI server listening on the php-fpm socket
        location ~ \.php$ {
                try_files $uri =404;
                #fastcgi_pass unix:/run/php/php7.0-fpm.sock;
                fastcgi_pass 10.132.19.6:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                
        }

Restart Nginx service by executing the following command:

service nginx restart

Let’s now log onto the WordPress site and see whether it is still working as expected in Deploy Scalable and Reliable WordPress Site on LEMP(2).

Unfortunately, we get ‘502 Bad Gateway’ message this time.
nginx_badgateway
Fortunately, Nginx provides an error log. Execute the following command on WEB01 to view it.

cat /var/log/nginx/error.log

Nginx error log reveals the following:

2016/11/04 10:25:21 [error] 18867#18867: *1 connect() failed (111: Connection refused) while connecting to upstream, client:x.x.x.x, server:y.y.y.y, request: "GET / HTTP/1.1", upstream: "fastcgi://10.132.19.6:9000", host: "y.y.y.y"

OK… it appears WEB01 passed the request on to PHP01, but PHP01 refused the connection. Step 3 will resolve the issue.
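The ‘111: Connection refused’ error simply means nothing was accepting TCP connections on 10.132.19.6:9000 — php-fpm was still listening only on its local unix socket. The failure mode is easy to reproduce with a short Python snippet (illustration only, against a local port with no listener):

```python
import errno
import socket

# find a local port with no listener by binding to port 0 and releasing it
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

# connecting to a port with no listener raises ECONNREFUSED (errno 111 on
# Linux), the same failure Nginx logged when proxying to PHP01:9000
s = socket.socket()
try:
    s.connect(("127.0.0.1", free_port))
except ConnectionRefusedError as exc:
    print(exc.errno == errno.ECONNREFUSED)  # True
finally:
    s.close()
```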

Step 3 – Allow PHP01 to Listen for WEB01

On PHP01, edit the php-fpm ‘www.conf’ file to listen on a TCP port and allow connections from WEB01.

sudo nano /etc/php/7.0/fpm/pool.d/www.conf

Perform the following changes in ‘www.conf’.

#Add the following line to allow WEB01's private IP
listen.allowed_clients = 10.132.84.104

#Comment out the following line by adding ";" at the front.
;listen = /run/php/php7.0-fpm.sock

#Add the following line to have php-fpm listen on port 9000
listen = 9000

Restart php-fpm on PHP01 and Nginx on WEB01.

#On PHP01 restart php service
service php7.0-fpm restart
#On WEB01 restart nginx service
service nginx restart

Let’s now access the WordPress site again. ‘Hello world!’ – it’s working!

nginx_resumed.png

The End

Deploy Scalable and Reliable WordPress Site on LEMP(2)

Deploy Scalable and Reliable WordPress Site on LEMP(1) introduced the LEMP design, lab setup and MySQL configuration. This post will further introduce how to deploy the Gluster distributed file system and the Nginx web server. PHP will initially be enabled on the Nginx web server to prove the WordPress site is working. The next post will introduce hosting PHP on a separate server.

Step 2 – Gluster Distributed File System

Gluster is a scale-out network-attached file system. It aggregates storage servers, known as ‘storage bricks’, into one large parallel network file system. Virtual volumes are created across the member bricks. Servers with the GlusterFS client service installed can mount the remote virtual volumes.

There are 3 types of virtual volumes. Please refer to ‘GlusterFS Current Features & Roadmap‘ for details.

  • Distributed Volume: similar to RAID 0 without replica; files are evenly spread across bricks.
  • Replicated Volume: similar to RAID 1, which copies files to multiple bricks.
  • Distributed Replicated Volume: Distributes files across replicated bricks.

In this lab, we will deploy 1 node in the GlusterFS cluster, which means ‘Distributed Volume’ mode is used. Additional bricks can be added later.

Configuration is as below. Refer Gluster installation guide for details: https://gluster.readthedocs.io/en/latest/Install-Guide/Install/.

#update package lists on FS01. '-y' in the following commands means automatically answering yes to all prompts.
sudo apt-get update

#ubuntu Personal Package Archive (PPA) support requires 'software-properties-common' to be installed first.
sudo apt-get install -y software-properties-common

#add the community GlusterFS PPA
sudo add-apt-repository -y ppa:gluster/glusterfs-3.8

#update again
sudo apt-get update

#install GlusterFS server
sudo apt-get install -y glusterfs-server

#create a volume called 'file_store'. If replicas are required, add 'replica n' after the volume name.
#'10.132.43.212' is the brick's private IP. If multiple bricks exist, all member IPs are required in the command.
gluster volume create file_store transport tcp 10.132.43.212:/gluster force

#start the 'file_store' volume. The volume is ready to use now.
gluster volume start file_store

Step 3 – Nginx Web Server

In this step, we will create an Nginx web server with PHP integrated initially; mount the virtual volume created in Step 2 to the web server; download the WordPress files to the mounted folder; and then update the WordPress config file to point to the mounted folder and connect to the database created in Step 1.

The following configuration shows which services are to be installed and how to mount the external volume as a partition.

#update and install nginx, glusterfs client, php-fpm and php-mysql services on WEB01
sudo apt-get update
sudo apt-get -y install nginx glusterfs-client php-fpm php-mysql

#Nginx used to require setting php-fpm pathinfo to false ('cgi.fix_pathinfo=0') in 'php.ini'; the default is true ('1'). It was a security issue involving Nginx and older versions (5.x) of php-fpm.
#the change is not required here as we are using PHP 7.0

#make a folder called 'gluster' under root
sudo mkdir /gluster
#mount the 'file_store' volume on FS01 to '/gluster' on WEB01. '10.132.43.212' is FS01's private IP. 'glusterfs' is the filesystem type.
sudo mount -t glusterfs 10.132.43.212:/file_store /gluster

# add the volume to fstab so it mounts automatically at boot time. Note that 'sudo echo ... >> /etc/fstab'
# would fail on a root-owned file because the redirection runs as the normal user, so use 'tee -a' instead.
echo "10.132.43.212:/file_store /gluster glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab

# create a folder 'www' under '/gluster'. '/gluster/www' will be the web root.
sudo mkdir /gluster/www

We now need to modify the default Nginx server block to point to our new web root, ‘/gluster/www’.

sudo nano /etc/nginx/sites-enabled/default

Modify the ‘/etc/nginx/sites-enabled/default’ file content as below. Changes are highlighted in BLUE.

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;

	root /gluster/www;
	index index.php index.html index.htm;

	# Make site accessible from http://localhost/
	server_name autrunk.com;

	location / {
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		#try_files $uri $uri/ =404; 
                #the following config sends everything through to index.php and keeps the appended query intact.
                try_files $uri $uri/ /index.php?q=$uri&$args;
	}
	# pass the PHP scripts to FastCGI server listening on the php-fpm socket
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;              
        }
}

Download and configure WordPress.

#download WordPress files and unzip
wget https://wordpress.org/latest.tar.gz -O /root/wp.tar.gz
tar -zxf /root/wp.tar.gz -C /root/

#copy the WordPress files to our new web root on WEB01. After the copy, the files also appear on FS01, as it is the storage backend mounted to WEB01.
cp -Rf /root/wordpress/* /gluster/www/.

#copy sample WordPress config file to 'wp-config.php', where we can define the database connection.
cp /gluster/www/wp-config-sample.php /gluster/www/wp-config.php

We now update ‘/gluster/www/wp-config.php’ with the ‘wordpress1’ database and ‘wpuser1’ information, which was created in Step 1.

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress1');

/** MySQL database username */
define('DB_USER', 'wpuser1');

/** MySQL database password */
define('DB_PASSWORD', 'password');

/** MySQL hostname */
define('DB_HOST', '10.132.88.196');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

We now make the web root ‘/gluster/www’ owned by the user ‘www-data’ and the group ‘www-data’, which the Nginx process runs as. Finally, we restart the Nginx and php-fpm services to apply our previous config changes.

chown -Rf www-data:www-data /gluster/www
service nginx restart
service php7.0-fpm restart

Step 4 – WordPress Site

We can now access the WordPress site via WEB01’s public IP, or its DNS name if one has been set up. The following should show and lead to the initial setup wizard.
wp_config.png

Self-hosted WordPress provides much more flexibility and many more features than a site hosted on WordPress.com.
wordpress_selfhost.png

To be Continued

Deploy Scalable and Reliable WordPress Site on LEMP(1)

Introduction

In this lab, we will develop and deploy a scalable and reliable WordPress site on the LEMP stack. Different from the LAMP stack, LEMP uses nginx [engine x] instead of Apache as the web server. Compared with Apache, nginx can handle many more concurrent hits. I found an article on the Internet demonstrating the performance difference: ‘Web server performance comparison‘.

Please refer to nginx official site https://nginx.org/en/ for further details.

Design Rationale

A design proposed for a production environment is as below. A firewall is not illustrated in the diagram, but is required in a production environment.
nginx_env

Design Key Points:

  • A pair of load balancers in an active/passive HA cluster, sharing a single virtual IP (VIP). The DNS name will be associated with the VIP.
  • The separation of the web, application and backend layers into different subnets allows flexibility to enforce security.
  • Web, PHP, database and file server clusters are created to achieve high availability and horizontal scalability.
  • PHP servers, file servers and database servers are external to the web servers, which helps scalability. The file server cluster stores the WordPress files and is mounted to the web nodes and PHP nodes.

Lab Description

Lab Scope

Due to the limits of my time and computing resources (= ‘money’), only the boxes highlighted in RED in the above diagram are in the current lab scope. If time allows, I will deploy the LVS load balancers as well. If you are interested in pfSense as a load balancer, please refer to my posts Use pfSense to Load Balance Web Servers (1) and Use pfSense to Load Balance Web Servers (2).

Lab Servers and Software

I used DigitalOcean for cloud servers. It is cheaper than AWS. The cheapest instance is only US$5/month.

DigitalOcean $5 deal provides: 1 CPU, 512MB RAM, 20GB SSD, and 1000GB transfer. All servers used in this lab are $5 servers.

If you decide to use DigitalOcean, please use my referral link http://www.digitalocean.com/?refcode=81650e396096. You will get US$10 in credit immediately; and I may get some referral benefits as well, so win-win.

The downside is that DigitalOcean provides fewer features. It allows private IPs… but the private IP is automatically generated based on the selected datacentre (DC). For example, all my servers are in the New York 3 DC, and their private IPs are all in the same subnet. This means I cannot manipulate IP allocation and routing as I did in AWS. In addition, it provides few ready-to-use security mechanisms, though we can use the Linux native firewall and install additional security services.

It is good and bad: good for quick DevOps testing with no hassle over networking; bad for production environments, or if you particularly like networking… myself for example 🙂

Web Service Layer             Application/Service          Platform
Load Balancer Layer           Linux Virtual Server (LVS)   Not deployed yet
Web Server Layer              Nginx                        Ubuntu 16.04.1 x64
Application Layer             php-fpm, php-mysql           Ubuntu 16.04.1 x64
Backend Layer – Database      MySQL                        Ubuntu 14.04.5 x64
Backend Layer – File System   Gluster                      Ubuntu 16.04.1 x64

Deployment Steps

Step 1 – MySQL Database

#update package lists and install the MySQL server on DB01. A dialog will appear to assign the MySQL root password
sudo apt-get update
sudo apt-get -y install mysql-server

#create the 'wordpress1' database. '-u' is followed by the username. '-p' prompts for the password on a separate line.
sudo mysqladmin -u root -p create wordpress1

#change root password if required.
sudo mysqladmin -u root -p password 

#enter the mysql shell; the password is prompted on a separate line.
sudo mysql -u root -p

#create a user (CREATE USER 'wpuser1') who can connect from any host (@'%') with a password ('password'). Remember to add ';' at the end of each command in the mysql shell to complete it.
CREATE USER 'wpuser1'@'%' IDENTIFIED BY 'password';

#grant user 'wpuser1' full privileges on the 'wordpress1' database specifically, not on all databases.
GRANT ALL PRIVILEGES ON wordpress1.* TO 'wpuser1'@'%';

#verify the existence of 'wordpress1' database and 'wpuser1'
show databases;
select User from mysql.user;

#Reload the grant tables and exit the mysql shell
FLUSH PRIVILEGES;
exit

#Edit the mysql config file to update the bind address from 127.0.0.1 (loopback) to the server's actual private address.
#Refer to the 'Note' section for details.
sudo nano /etc/mysql/my.cnf

#restart mysql service
sudo service mysql restart

Note – MySQL Bind Address: 

If the bind address in ‘my.cnf’ remains the loopback address, the database will not allow remote access. When accessing the website, the following shows:
db_error.png
We will need to edit ‘/etc/mysql/my.cnf‘ as follows, where ‘10.132.88.196’ is the DB server’s private IP.
mycnf.png
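In text form, the change in the screenshot amounts to updating one line under the ‘[mysqld]’ section (the rest of ‘my.cnf’ stays untouched):

```
[mysqld]
bind-address = 10.132.88.196
```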

Then we execute ‘service mysql restart‘ to restart the mysql service. The following screenshot shows the DB server listening on localhost port 3306 before restarting the mysql service, and listening on DB01’s private IP after the restart.

I also checked the firewall status to make sure no traffic is accidentally blocked.

db_error2.png

To be Continued

Next Deploy Scalable and Reliable WordPress Site on LEMP(2)

Use pfSense to Load Balance Web Servers (2)

Use pfSense to Load Balance Web Servers (1) introduced pfSense, the lab setup, VM specs and download links. This post will demonstrate pfSense configuration, testing and troubleshooting details.

Configuration

pfSense Configuration

An overview of the pfSense configuration steps is below, along with key information for each step and the testing and troubleshooting approach.
pfSense_config_LB.png

Step 1: Initial Configuration

Boot up the pfSense VM and wait until the installation is complete. Remove the pfSense.iso image from the VM and reboot the VM. The following screen will show and guide you through the initial setup.
pfsense_intial_setup.png

Select 2) to configure interface IPs. Please note the LAN interface is the default management interface. In our case, we can access the pfSense web GUI at https://10.10.10.1.

The WAN interface requires a default gateway address, ‘192.168.10.1’ in our case. Routing can also be modified later via the pfSense web GUI.

Step 2: Access pfSense Web GUI

Access the pfSense web GUI at https://10.10.10.1 from the management PC 10.10.10.10. The default username is ‘admin‘ and the password is ‘pfsense‘. The user password can be changed under ‘System/User Management’, as below. RADIUS and LDAP authentication are also supported.
pfsense_user.png

The default web GUI (HTTPS) port is 443. It can be changed to a user-defined port number under ‘System/Advanced/Admin Access’, as below:
pfsense_https_port.png

Step 3: Create Virtual IP

We need to create a virtual IP under ‘Firewall/Virtual IPs’, which will be used as the load balancer’s virtual server IP later in Step 5. The virtual server IP will further forward traffic to the web servers in the load balancing pool. Please refer to the load-balanced data flow diagram in Use pfSense to Load Balance Web Servers (1).

Create an ‘IP Alias’ type virtual IP if there is a single pfSense. Create a ‘CARP’ type virtual IP if there are two pfSense boxes in a cluster. CARP stands for ‘Common Address Redundancy Protocol’, functioning similarly to VRRP and HSRP.
pfsense_VIP.png
As part of testing/troubleshooting, please make sure the virtual IP is reachable from the required subnets. Ping may be temporarily allowed for test purposes. Please note ‘ping’ is ICMP, neither TCP nor UDP.

Step 4: Create Load Balancer Pool

We then create a load balancer pool, where we can define the member servers, under ‘Services/Load Balancer/Pools’. The default monitoring protocols include ICMP, TCP, HTTP, HTTPS and SMTP. If an additional protocol is required, it can be added under ‘Monitors’.
pfSense_Pools.png

Step 5: Create Load Balancer Virtual Servers

A virtual server is created to host the load balancer’s shared IP. It uses the virtual IP we created in Step 3. We also assign the load balancer pool created in Step 4 to the virtual server, as below:
pfsense_VS.png

As part of testing/troubleshooting, please make sure there are no errors under ‘Status/Load Balancer’ and ‘Status/System Logs/Load Balancer’. For HTTP and HTTPS traffic, if the load balancer members and/or the virtual server are not configured appropriately, access may fall back to the pfSense web GUI.

Step 6: Tailor Firewall Rules

Since pfSense also functions as a firewall, we will need to tailor the firewall rules to allow required traffic and block unwanted traffic. Firewall rules are configured under ‘Firewall/Rules’, as below:
pfsense_firewall_rules.png

Please note, pfSense firewall rules allow us to define the traffic direction as well as the interface the rule applies to. For example, if we have traffic initiated from LAN to SVR, then we allow traffic from LAN net (all LAN subnet IPs) to SVR net (all SVR subnet IPs) and apply the rule to the LAN interface on pfSense. pfSense is a stateful firewall by default, so we don’t have to set up rules for the return traffic.

Another easy way to figure out which firewall rules are required is to block all uncertain traffic and check what is blocked under ‘Status/System Logs/Firewall’. Then pass the required traffic directly from the blocked list by clicking ‘+’, as below:
pfsense_firewall_log.png

Test Access to Load Balanced IP

We then test access to the load balanced IP. The network topology is in Use pfSense to Load Balance Web Servers (1).
pfsense_data_flow

The user accesses the load-balanced IPs from a computer over the Internet. When s/he accesses http://10.10.20.20, the following shows:
pfsense_clst1_LAN.png

The user access is load balanced between Server 1 and Server 2 in Cluster 1, as the above screenshot shows.

Similarly, when the user accesses http://10.10.20.30 or http://192.168.10.30, the following shows:
pfsense_clst2.png

The user access is load balanced between Server 1 and Server 2 in Cluster 2, as the above screenshot shows.

10.10.20.20 and 10.10.20.30 are examples of using internal IPs as load-balanced IPs, while 192.168.10.30 is an example of using an external IP as the load-balanced IP.

You may need to clear the browser cache if the browser is not behaving as expected.

Use pfSense as Layer2 Firewall/Bridged Interface

pfSense does support a Layer 2 firewall mode (also called transparent mode) by bridging the required interfaces under ‘Interfaces/(assign)/Bridges’, as below:
pfsense_bridge

Layer 2 mode allows the load-balanced IP to use an external IP while the member servers also use the external IP subnet. A use case example is below:
pfsense_layer2_usecase.png

pfSense firewall bridge configuration reference is available here.

Site-R1 Cisco 7200 Router Configuration

Site-Site-R1#show run
Current configuration : 1258 bytes
!
version 12.4
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname Site-Site-R1
!
boot-start-marker
boot-end-marker
!
no aaa new-model
no ip icmp rate-limit unreachable
ip cef
ip tcp synwait-time 5
!
no ip domain lookup
!         
multilink bundle-name authenticated
!
interface FastEthernet0/0
 ip address 200.10.10.10 255.255.255.0
 duplex full
!
interface Ethernet1/0
 ip address 192.168.10.1 255.255.255.0
 duplex full
!
interface Ethernet1/1
 no ip address
 shutdown
 duplex half
!
interface Ethernet1/2
 no ip address
 shutdown
 duplex half
!
interface Ethernet1/3
 no ip address
 shutdown
 duplex half
!
ip route 0.0.0.0 0.0.0.0 200.10.10.1
ip route 10.10.10.0 255.255.255.0 192.168.10.10
ip route 10.10.20.0 255.255.255.0 192.168.10.10
ip route 192.168.20.0 255.255.255.0 192.168.10.10
no ip http server
no ip http secure-server
!
logging alarm informational
no cdp log mismatch duplex
!
control-plane
!
gatekeeper
 shutdown
!
line con 0
 exec-timeout 0 0
 privilege level 15
 logging synchronous
 stopbits 1
line aux 0
 exec-timeout 0 0
 privilege level 15
 logging synchronous
 stopbits 1
line vty 0 4
 login
!
end

Last But Not Least

  • Make sure routing, IP schema, etc. are well planned.
  • Make sure only the minimum required ports are open on the firewall.
  • Make sure proper zone segmentation using the firewall to enforce security.
  • Use centrally managed authentication and authorisation, with a remote user data source.