AUTRUNK Migrating to Self-Hosted Site

Hi all AUTRUNK subscribers,

I am migrating AUTRUNK to a self-hosted site. The domain name remains the same: http://www.autrunk.com.

Your subscription has been migrated, so you should continue to receive posts as before.

In other news, I am now running my own ICT engineering company (AltairX) in Canberra, Australia. It extends my aspiration to help people who want to change careers into IT, to support women with a passion for IT, and to enjoy IT with my colleagues.

Please let me know if there is any area where you think we can work together 🙂

Thanks

MengMeng


Build Secure File Transfer Solution Using AWS S3 (2)

Introduction

In the previous article Build Secure File Transfer Solution Using AWS S3 (1), I introduced the solution design, with a particular focus on security considerations and hardening, when using AWS S3 for secure file transfer. An S3 bucket policy and IAM user policies are used jointly to enforce access control.

This article demonstrates the configuration activities required to deploy the secure file transfer solution using the AWS S3 service.

Configuration Steps

Overview

I developed a process map to provide an overview of the configuration activities. Boxes bordered in red require JSON scripts, which are attached in this article.

S3_creation_process.png

1.1 Create S3 Bucket

Create two S3 buckets, one for files and one for log records. I selected ‘Sydney’ as the bucket region so that the documents stay onshore in Australia.

Please note S3 naming requirements:

  • Start with a lowercase letter or number
  • Contain only lowercase letters, numbers, periods and dashes
  • Be globally unique
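
For readers who prefer the command line, the same buckets can be created with the AWS CLI. A minimal sketch, assuming the CLI is installed and configured with sufficient IAM permissions (the bucket names match this lab):

# Create the file bucket in the Sydney region (ap-southeast-2) so documents stay onshore.
aws s3api create-bucket --bucket altairxfile --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2

# Create the log bucket in the same region.
aws s3api create-bucket --bucket altairxlog --region ap-southeast-2 --create-bucket-configuration LocationConstraint=ap-southeast-2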

1.2 Configure Bucket Properties

Select the target bucket and configure Properties as required. I enabled logging and sent logs to the log bucket. In addition, versioning is enabled to track changes and revert to previous file versions; it not only strengthens security but also allows file recovery.

S3_bucket_property.png
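
The same properties can also be set with the AWS CLI; a sketch, assuming the two buckets above already exist:

# Enable versioning on the file bucket.
aws s3api put-bucket-versioning --bucket altairxfile --versioning-configuration Status=Enabled

# Send access logs to the log bucket under an 'altairxfile/' prefix.
# Note: the log bucket must grant write access to the S3 log delivery group.
aws s3api put-bucket-logging --bucket altairxfile --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": "altairxlog", "TargetPrefix": "altairxfile/"}}'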

1.3 Create Bucket Policy

The bucket policy is created from ‘Properties > Permissions > Edit bucket policy’.

The following JSON script enforces:

  • Uploads must use server-side encryption (AES256).
  • Downloads are only allowed from the whitelisted IP ‘8.8.8.8’.
altairxfile Bucket Policy
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        },
        {
            "Sid": "IPDeny",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "8.8.8.8/32"
                }
            }
        }
    ]
}
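
The policy can be attached in the console as above, or pushed with the AWS CLI; a sketch, assuming the JSON above is saved locally as bucket-policy.json (a file name chosen for this example):

aws s3api put-bucket-policy --bucket altairxfile --policy file://bucket-policy.json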

2.1 Create User Policy

We then create the S3 user policies using JSON, from ‘IAM > Policies’. Please note that IAM has no regional setting; it is always ‘Global’.

In the following example, we create three policies, which will be applied to the three user groups ‘S3_HR’, ‘S3_LOG’ and ‘S3_USER’ respectively. Custom-built policies can be filtered through ‘Customer Managed’.

S3_IAM_policy.png

‘S3_HR’ policy enforces the following rules:

  • S3_HR can manage all files and subfolders under ‘altairxfile/user/’.
  • S3_HR cannot access any other bucket or folder.
S3_HR Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::altairxfile/user"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::altairxfile/user/*"
            ]
        }
    ]
}
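
This policy can also be created from the CLI; a sketch, assuming the JSON above is saved as s3_hr_policy.json (an example file name):

aws iam create-policy --policy-name S3_HR --policy-document file://s3_hr_policy.json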

‘S3_USER’ policy enforces the following rules:

  • S3_USER will have a home folder under ‘altairxfile/user’, named after their username.
  • S3_USER can only upload to and delete files from their home folder.
  • S3_USER cannot access any other bucket or folder.
S3_USER Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "user/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "user/${aws:username}/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile/user/${aws:username}/*"
            ]
        }
    ]
}
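
Once a user exists (see 2.4), the combined effect of this user policy and the bucket policy from 1.3 can be verified with the AWS CLI; a sketch, assuming credentials for user ‘u12fx’ are configured under a CLI profile of the same name (the profile name is illustrative):

# Succeeds: upload into the user's own home folder with AES256 server-side encryption.
aws s3 cp cv.pdf s3://altairxfile/user/u12fx/cv.pdf --sse AES256 --profile u12fx

# Denied by the bucket policy: no server-side encryption header supplied.
aws s3 cp cv.pdf s3://altairxfile/user/u12fx/cv.pdf --profile u12fx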

2.2 Create Group

Create the user groups under ‘IAM > Groups’ and attach the respective policies.

S3_IAM_group.png
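
A CLI sketch of the same step, assuming the S3_HR policy created in 2.1 (replace the example account ID 123456789012 with your own):

aws iam create-group --group-name S3_HR
aws iam attach-group-policy --group-name S3_HR --policy-arn arn:aws:iam::123456789012:policy/S3_HR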

2.3 Manage Password Policy

The password policy can be managed under ‘IAM > Account settings’, as below:

S3_IAM_pwdpolicy.png

If users are required to change their password upon first logon, you need to enable ‘Allow users to change their own password’.
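
The password policy can also be set from the CLI; a sketch with example values only (adjust length, age and character rules to your own policy):

aws iam update-account-password-policy --minimum-password-length 12 --require-uppercase-characters --require-lowercase-characters --require-numbers --require-symbols --max-password-age 90 --allow-users-to-change-password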

2.4 Create User

Users are created under ‘IAM > Users’. Users are assigned to groups (created in 2.2) and therefore inherit the respective group policies (created in 2.1).

S3_IAM_user.png
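
A CLI sketch of user creation, using the example username ‘u12fx’ from the design in part 1 and a placeholder initial password:

aws iam create-user --user-name u12fx
aws iam add-user-to-group --user-name u12fx --group-name S3_USER

# Console password; --password-reset-required forces a change at first sign-in.
aws iam create-login-profile --user-name u12fx --password 'Example-Initial-Pw1!' --password-reset-required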

2.5 Notify User

Upon completing user creation, we can directly send an email to the user.

S3_IAM_email.png

 

The AWS auto-generated email content includes logon details, as below:

S3_email.png

2.6 Configure MFA (Optional)

Multifactor authentication can be enabled from ‘IAM > Users > [select user] > Security credentials > Assigned MFA device’, as below:

S3_IAM_MFA.png

If users are to use a soft token for two-factor authentication, they can install Google Authenticator on their mobile phone and follow the AWS virtual MFA instructions to finish the configuration.
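
Virtual MFA can also be provisioned from the CLI; a sketch, assuming the example account ID 123456789012 and two consecutive codes read from the authenticator app:

# Create a virtual MFA device and write its QR code to a local PNG.
aws iam create-virtual-mfa-device --virtual-mfa-device-name u12fx --outfile qr.png --bootstrap-method QRCodePNG

# Bind the device to the user with two consecutive codes from the app.
aws iam enable-mfa-device --user-name u12fx --serial-number arn:aws:iam::123456789012:mfa/u12fx --authentication-code1 123456 --authentication-code2 654321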

To Be Continued

In the next article, I will demonstrate how a user uploads files to their home folder in the AWS S3 bucket, along with a few tests of the security policies.

 

Build Secure File Transfer Solution Using AWS S3 (1)

We All Need Secure File Transfer

It is not unusual for companies to protect their commercial and client information. It is not unusual for government agencies to protect national security and personal information.

However, during a job application or other kind of assessment, our personal information may be transferred to the recruiting agent and/or the employer in a less secure way: via public email. Many employers these days require far more than just a CV; they also ask for quite a few personal documents: passport copy, driver’s licence, birth certificate, citizenship proof, social welfare card, academic certificates, just to list a few.

If such information leaks, someone may impersonate us, gain our access and privileges, and even endanger our company or country. OK, I might watch too many movies 🙂

It can be really easy and inexpensive to secure our file transfers. A file encryption tool can be a simple and free answer, such as 7-Zip for Windows and Keka for macOS.

The following is an encryption example from Keka on my Mac. The job applicant can then email the encrypted file to the agent/employer and share the password via text message or phone. Separating the actual file from the password helps enhance security.
Keka_Encrypt.png
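
On Windows, or on any platform with the 7z binary installed, a command-line equivalent is a sketch like the following; 7-Zip applies AES-256 encryption to .7z archives when a password is set:

# Create an AES-256 encrypted archive; -mhe=on also encrypts the file names.
7z a -p'Str0ng-Passphrase' -mhe=on documents.7z cv.pdf passport.jpg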

If files are too large for email to handle, or more comprehensive security is required, then AWS S3, described below, can be an easy and inexpensive solution.

Why AWS S3 Storage?

AWS S3 has received IRAP accreditation and is an Australian federal government certified cloud service.

Some benefits of AWS S3 include, but are not limited to:

  • Regional storage is available, which meets the government requirement for onshore storage
  • Physical hardware, environment, etc. have passed IRAP assessment
  • Central authentication and authorisation, plus two-factor authentication, are available
  • User access policies, whitelisting, and file and transport encryption can be enforced
  • Logging is available
  • Versioning is available in case of accidental deletion and for auditing purposes
  • High availability and tape backup are available – please refer to the AWS S3 Product Page
  • Inexpensive, especially when using Reduced Redundancy Storage (RRS); the service is charged based on ongoing storage usage

Lab Solution Design

I built a secure file transfer solution over the weekend for personal and small-group use, though it is not fully polished yet. The example organisation is called AltairX.

The design diagram is as below:
S3_SecureTransfer_Design.jpg
Security considerations are listed below; the classification follows AWS functions:

1. Authentication – User and Credential

  • Usernames must not reflect the user’s actual name, to enhance security; e.g. u12fx is used.
  • Users must be assigned to group(s) to gain access permissions.
  • Users can only access the file storage via a browser, i.e. API access is not allowed in our case, though it can be designed in if required.
  • Users will be assigned an auto-generated initial password and must change it at the next sign-in.
  • Password complexity and expiration/renewal requirements are enforced.
  • Privileged users, HR in our case, must use two-factor authentication to log in.

2. Authorisation – Group Policies

Three groups are created: S3_USER, S3_HR and S3_LOG. Each group is associated with a group policy, and users are assigned to the required group.

2.1 S3_USER Policy

  • Users can only access their own home folder, e.g. user ‘umezh’ can only access ‘altairxfile/user/umezh’, not ‘altairxfile/user/u12fx’.
  • Users can upload and delete files in their home folder, but cannot download files from it.

2.2 S3_HR Policy

  • HR users can access all users’ home folders, i.e. ‘altairxfile/user/*’, but no other folders, whether under the ‘altairxfile’ bucket or elsewhere.
  • HR users can upload, delete and download files in any user’s home folder, e.g. download files from ‘altairxfile/user/u12fx’ and ‘altairxfile/user/umezh’.

2.3 S3_LOG Policy

  • Log users can only access log files stored in the ‘altairxlog’ bucket, not other buckets.
  • Log users can only read and download logs; they cannot delete, modify or upload logs.

3. Resource Access Control – Bucket Policies

3.1 ‘altairxfile’ Bucket Policy

  • Any document stored in this bucket must use server-side AES256 encryption. The encryption is handled by AWS with AWS-managed keys, so users don’t have to encrypt on their side.
  • File downloads are only allowed from whitelisted IPs, e.g. the organisation AltairX’s public IP in our case.
  • Private access is enforced on all files and folders. Public access without authentication is not allowed.

3.2 ‘altairxlog’ Bucket Policy

  • Same requirements as applied to ‘altairxfile’ bucket.

4. Logging and Auditing

  • User access and activity logs are stored in a separate bucket, i.e. ‘altairxlog’.
  • Versioning is enabled to track object changes (folders and files in our case).
  • Event alerts can be configured to send email and/or message notifications if required.

5. File Transmission Encryption – HTTPS(TLS)

The AWS S3 service stopped supporting SSL a few years ago and enforces TLS. I used SSL Labs to assess AWS S3 HTTPS security. We can see the overall rating is pretty good.
s3_ssllab
TLS and SSL support information is as below. It shows that SSL is no longer supported. End users can also force a TLS 1.2-only connection by restricting their browser security settings to TLS 1.2.

s3_ssllab_tls
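
The protocol support can also be checked from a terminal with openssl; a sketch against the generic S3 endpoint (substitute your bucket’s regional endpoint as needed; note that the -ssl3 option is compiled out of many modern openssl builds):

# Fails, since the service no longer accepts SSLv3.
openssl s_client -connect s3.amazonaws.com:443 -ssl3

# Succeeds, confirming TLS 1.2 support.
openssl s_client -connect s3.amazonaws.com:443 -tls1_2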

6. Other processes and policies

  • Files should be downloaded from AWS S3 within 24 hours of being received and, if required, stored in the company’s secured on-premises storage.
  • Files are deleted from AWS S3 once downloaded.

To be continued…

In the next article, I will test the secure file transfer setup, include a user manual, and share some policy scripts written in JSON.

Cryptography – How are RSA, AES and SHA different?

Services of Cryptography System

Cryptography is more than encryption. The services provided by a cryptography system may include the following:

Confidentiality: renders the information unintelligible except by authorised entities.

Integrity: data has not been altered in an unauthorised manner since it was created, transmitted, or stored.

Authentication: verifies the identity of the user or system that created the information.

Authorisation: upon proving identity, the individual is then provided with the key or password that will allow access to some resource.

Nonrepudiation: ensures that the sender cannot deny sending the message.

Encryption

RSA, AES and SHA are all cryptographic algorithms, but they serve different purposes.

RSA

RSA fits into the PKI asymmetric key structure. It provides message encryption and supports authentication and nonrepudiation services.

However, the downside is that the encryption process is much slower than with symmetric keys such as AES and DES. Therefore, RSA is often used to encrypt and distribute symmetric keys.

Online RSA public and private key generator: http://travistidwell.com/blog/2013/09/06/an-online-rsa-public-and-private-key-generator/
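
Equivalent keys can be generated locally with openssl; a sketch:

# Generate a 2048-bit RSA private key, then extract the public key.
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Encrypt a small file with the public key; only the private key holder can decrypt it.
openssl pkeyutl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.enc
openssl pkeyutl -decrypt -inkey private.pem -in secret.enc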

AES

AES fits into the symmetric key structure and provides longer keys (safer) than DES. It provides message encryption and is much faster than asymmetric algorithms such as RSA. Therefore, it is used to encrypt file content and communications.

Online AES encryption: http://aes.online-domain-tools.com/
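
An offline equivalent with openssl; a sketch (-pbkdf2 strengthens the password-to-key derivation and is available in newer openssl versions):

# Encrypt with AES-256-CBC using a password-derived key.
openssl enc -aes-256-cbc -pbkdf2 -salt -in file.txt -out file.enc

# Decrypt with the same password.
openssl enc -d -aes-256-cbc -pbkdf2 -in file.enc -out file.dec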

SHA

SHA and MD5 hashing are used to generate a message digest to verify message integrity, i.e. that the message was not altered in transit. Hashing is a one-way function and cannot be reversed. The same content always generates the same hash value. Therefore, hashing is often used to ensure message integrity, or where no decryption is required, such as the Cisco enable password.

However, if a simple password is used, the hash value may effectively be reversed and the plaintext password revealed. This can be done by hashing dictionaries to build a library of hash values, then matching the password’s hash against that library to recover the original password. Therefore, complex passwords should always be required for security reasons.

Online hashing: http://www.fileformat.info/tool/hash.htm

Cisco password reverser: http://packetlife.net/toolbox/type7/
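
Hashing is easy to reproduce locally; a sketch showing that identical content always yields the identical digest, while a one-character change does not:

# The same input always produces the same SHA-256 digest.
echo -n "hello" | sha256sum
echo -n "hello" | sha256sum

# A one-character change produces a completely different digest.
echo -n "Hello" | sha256sum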

The following table summarises the pros, cons and usage of different cryptographies.

Symmetric (e.g. DES, 3DES, AES)
  • Pros: fast; hard to break if using a large key size
  • Cons: secure key delivery is difficult; scalability suffers – too many unique keys; authenticity and nonrepudiation are not provided
  • Usage: encrypt files and communication paths

Asymmetric – PKI (e.g. RSA, DH, DSA, ECC)
  • Pros: better key distribution than symmetric; better scalability; provides authentication and nonrepudiation
  • Cons: 1000+ times slower than symmetric
  • Usage: distribute symmetric keys (except DSA); digital signatures (except DH)

Hashing (e.g. MD5, SHA)
  • Pros: one-way function, fast; provides a message digest for easy file comparison; same content always generates the same hash value
  • Cons: decryption is not supported, due to the one-way function
  • Usage: check message integrity – no alteration

Summary of Cryptography Mechanism

The following diagram summarises the cryptography mechanisms, including i) the key distribution process, enabled by RSA or DH; ii) the content and communication encryption process, enabled by AES, 3DES or DES; iii) the hashing process, enabled by SHA or MD5; and iv) the digital signature process, enabled by RSA or DSA.
CryptoMechanism.jpg

Avoid Asymmetric Routing in Load Balancing (pfSense example)

Introduction

My previous blogs Use pfSense to Load Balance Web Servers (1) and Use pfSense to Load Balance Web Servers (2) introduced the deployment of pfSense as a load balancer to distribute web traffic to backend server nodes (i.e. Clst1-S1 and Clst1-S2; Clst2-S1 and Clst2-S2). pfSense hosts Server Cluster 1’s virtual IP 10.10.20.20 and Server Cluster 2’s virtual IP 10.10.20.30.

In the previous lab, when we accessed http://10.10.20.20 from the internal Mgmt PC (10.10.10.10/24), the traffic was successfully load balanced to either Clst1-S1 (10.10.20.21) or Clst1-S2 (10.10.20.22).
pfsense_lab_topo

Failed Scenario

However, I received a question: when accessing http://10.10.20.20 from Mgmt2 (diagram below), which is in the same subnet as the backend nodes, Mgmt2 cannot reach the web service.

Mgmt IP: 10.10.10.10 (successfully accessed http://10.10.20.20)
Mgmt2 IP: 10.10.20.10 (failed to access http://10.10.20.20)
Cluster 1 VIP: 10.10.20.20
Cluster 1 Node 1 IP: 10.10.20.21
Cluster 1 Node 2 IP: 10.10.20.22
pfsense_snat_topo_issue

I replicated the failed scenario and observed the following:

pfSense_mgmt2_failed.png

Asymmetric Routing

What is the difference between accessing the web service from Mgmt and from Mgmt2?

Mgmt PC is external to the web service subnet. When the user requests http://10.10.20.20, the traffic reaches the pfSense load balancer and is then forwarded to either Clst1-S1 (10.10.20.21) or Clst1-S2 (10.10.20.22). Let’s assume Clst1-S1 responds to the request this time. Since Mgmt PC is in a different subnet (10.10.10.0/24), the return traffic reaches its default gateway on pfSense (10.10.20.1) first and is then routed to Mgmt PC.
pfSense_SNAT_topo_extmgmt.png

However, Mgmt2 PC is internal to the web service subnet. When the user requests http://10.10.20.20, the traffic reaches the pfSense load balancer and is then forwarded to Clst1-S1 (10.10.20.21). Since Mgmt2 PC is in the same subnet as the web servers (10.10.20.0/24), the return traffic goes to Mgmt2 PC directly via SW1, without transiting the default gateway on the load balancer.

Asymmetric routing occurs. Although some devices tolerate asymmetric routing, these days we still try to avoid it whenever we can. For example, the F5 load balancer allows asymmetric routing but limits the available features. Asymmetric routing also adds network complexity and security concerns.
pfSense_SNAT_topo_intmgmt.png
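
The asymmetry can be seen on the wire by capturing on the client; a sketch using tcpdump, assuming a Linux client whose interface is eth0 (the interface name is illustrative):

# On Mgmt2: requests go to the VIP 10.10.20.20, but replies arrive directly
# from the node's real IP 10.10.20.21, so the client discards them.
tcpdump -ni eth0 'port 80 and (host 10.10.20.20 or host 10.10.20.21)'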

SNAT as Solution

If the business requirement says users must be able to access the web service from the same subnet, then SNAT can be a solution that avoids the asymmetric routing problem.

pfSenseLB translates the source IP of traffic initiated from Mgmt2 from 10.10.20.10 to 10.10.10.11. In this case, when Clst1-S1 receives the traffic from Mgmt2, it responds to 10.10.10.11, which forces the return traffic through pfSenseLB. pfSenseLB then translates 10.10.10.11 back to 10.10.20.10 and sends the traffic to Mgmt2.
pfSense_SNAT_topo_SNAT.png

pfSense_SNAT_topo_SNAT2.png

The following screenshot demonstrates SNAT configuration details on pfSense.
pfSense_snat.png

After configuring SNAT, we can now successfully access http://10.10.20.20 from Mgmt2.
pfSense_mgmt2_success.png

NAT hits can be checked using the shell command ‘pfctl -vvs nat’.
psSense_nat_hit.png

End

Be careful with asymmetric routing in load balancing design. For example, one-arm and multi-path (nPath) designs may involve asymmetric routing. The selection of a design model depends on business requirements. SNAT is a potential solution to the asymmetric routing problem.

Set up NGINX as Reverse Proxy with Caching

Introduction

This lab reuses the server infrastructure built in Deploy Scalable and Reliable WordPress Site on LEMP(1), but adds another Nginx server as load balancer/reverse proxy (LB01) in front of the web servers (WEB01 and WEB02). Caching will be enabled on LB01 and tested as well.

Boxes highlighted in RED below are deployed in the lab. Although WEB02 is not deployed in the current lab, it can be deployed in the same way as WEB01, as described in Deploy Scalable and Reliable WordPress Site on LEMP(2), and proxied by LB01 as shown in the configuration section below.
nginx_reverseproxy_cache.png

Key Concepts

Forward Proxy vs. Reverse Proxy

A forward proxy can be used when servers/clients on a company’s internal network reach out to internet resources. It helps keep user IPs anonymous, filters URLs, and may speed up internet browsing by caching web content.

A reverse proxy can be used when internet users try to access a company’s internal resources. The user request arrives at the reverse proxy server, which forwards the request to a backend server that can fulfil it and returns the server’s response to the client. It hides the company’s actual server IPs from attackers and reduces the load on the actual servers by serving cached content itself.

Load Balancing vs. Reverse Proxy

The Nginx site provides a good explanation of this topic: https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/

In this lab, Nginx is set up as load balancer and reverse proxy.

Deployment Steps

Step 1 ‚Äď Install Nginx on Ubuntu 16.04

Select a $5/month Ubuntu 16.04 droplet on DigitalOcean. DigitalOcean calls its Virtual Private Server (VPS) a ‘droplet’. Refer to Deploy Scalable and Reliable WordPress Site on LEMP(1) for details about DigitalOcean and the droplets used in my labs.

Install Nginx on the newly created droplet LB01 by executing the following commands:

sudo apt-get update
sudo apt-get -y install nginx

Step 2 – Configure Reverse Proxy

Edit Nginx site configuration on LB01 to pass on web requests to backend web servers.

sudo nano /etc/nginx/sites-enabled/default

Use the Nginx HTTP ‘upstream’ module to realise load balancing and reverse proxying to multiple backend servers. Refer to the official module documentation for details. Update the content of ‘/etc/nginx/sites-enabled/default’ as below:

#define an upstream server group called 'webserver'. 'ip_hash' enables session persistence if required. '10.132.84.104' is WEB01's private IP; '10.132.84.105' is WEB02's private IP.
upstream webserver {
                ip_hash;
                server 10.132.84.104;
                server 10.132.84.105;
}

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;

	root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

	location / {
		# Call the upstream server group 'webserver', which we defined earlier. We can add additional proxy parameters in '/etc/nginx/proxy_params' if required.
                proxy_pass http://webserver;
                include /etc/nginx/proxy_params;
	}
      }  

Restart Nginx service to make our change work.

sudo service nginx restart

Test access to our WordPress site (created in Deploy Scalable and Reliable WordPress Site on LEMP(2)) via LB01’s public IP. We should see the same page as when accessing WEB01’s public IP directly.
nginx_LB1.png
If things don’t work, check the error log ‘/var/log/nginx/error.log’ on LB01. We can use ‘cat’ to display the file content, but we use ‘tail’ this time to list only the final n lines.

# -n 6 displays the final 6 lines of the given file.
tail -n 6 /var/log/nginx/error.log

Step 3 – Configure Cache Server

Configure caching in the Nginx site configuration file ‘/etc/nginx/sites-enabled/default’. The current file content can be viewed using ‘cat’, ‘less’ or ‘more’.

cat = joins multiple files together and prints the result on screen (it does not display page by page)

more = views a text file one page at a time; press the spacebar to go to the next page

less = much the same as ‘more’, except it also supports page up/down and string search; ‘less’ is the enhanced version of ‘more’

For further details, refer to ‘Linux Command 7 – more, less, head, tail, cat‘.

Update ‘/etc/nginx/sites-enabled/default’ as follows. The configuration follows ‘Nginx Caching‘, but additionally includes the ‘proxy_cache_valid’ directive: in my lab, if ‘proxy_cache_valid’ was unset, the cache status always showed ‘MISS’. Please refer to Nginx Content Caching for details of the proxy_cache_valid directive.

#cache files will be saved in subdirectories (1:2) under '/tmp/nginx'.
#cache zone 'my_zone' is created with 10MB to store cache keys and other metadata.
#'inactive=60m' means an asset is cleared from the cache if not accessed within 60 mins.
#'200 10m' (see proxy_cache_valid below) means responses with code 200 are considered valid for 10 mins.
proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m inactive=60m;

#proxy_cache_key defines the key (identifier) for a request. If a request has the same key as a cached response, the cached response is sent to the client.
proxy_cache_key "$scheme$request_method$host$request_uri";

#'proxy_cache_valid' is to set how long cached responses are considered valid.
proxy_cache_valid 200 10m;

upstream webserver {
ip_hash;
server 10.132.84.104;
}

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;
        root /var/www/html;

	# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

	server_name _;

	location / {
                #use the cache zone 'my_zone', which we defined earlier.
                proxy_cache my_zone;
                
                #add an informative header 'X-Proxy-Cache' in response to tell us whether we hit cached content or miss.
                add_header X-Proxy-Cache $upstream_cache_status;
                proxy_pass http://webserver;
                include /etc/nginx/proxy_params;
                }
	}

‘/tmp/nginx’ is the cache file path we defined earlier.
cache_path.png

Finally, let’s test the content cache. On the first visit, ‘X-Proxy-Cache’ shows ‘MISS’, but it shows ‘HIT’ on a revisit. ‘X-Proxy-Cache: EXPIRED’ appears when a cached response has passed its 10-minute validity window and must be refreshed from the backend.
nginx_cache_hit.png
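
The cache status can also be checked from a terminal; a sketch using curl, with <LB01_IP> standing in for LB01’s public IP:

# First request should show 'X-Proxy-Cache: MISS'; repeating it within
# the 10-minute validity window should show 'HIT'.
curl -sI http://<LB01_IP>/ | grep -i x-proxy-cache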