Build Secure File Transfer Solution Using AWS S3 (2)

Introduction

In the previous article Build Secure File Transfer Solution Using AWS S3 (1), I introduced the solution design and the security considerations, hardening in particular, for using AWS S3 for secure file transfer. The S3 bucket policy and IAM user policies are used jointly to enforce access control.

This article demonstrates the configuration activities required to deploy the secure file transfer solution using the AWS S3 service.

Configuration Steps

Overview

I developed a process map to provide an overview of the configuration activities. Boxes bordered in red require JSON scripts, which are attached in this article.

S3_creation_process.png

1.1 Create S3 Bucket

Create two S3 buckets, one for files and one for log records. I selected ‘Sydney’ as my bucket region so that the documents remain onshore in Australia.

Please note the S3 bucket naming requirements:

  • Start with a lowercase letter or number
  • Contain only lowercase letters, numbers, periods and hyphens
  • Be globally unique
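
With those rules in mind, the buckets can also be created from the AWS CLI rather than the console. A minimal sketch, assuming the CLI is configured with an administrator profile and reusing the lab bucket names ‘altairxfile’ and ‘altairxlog’:

#create the file bucket and the log bucket in the Sydney region (ap-southeast-2)
#bucket names must be globally unique, so replace these if they are already taken
aws s3api create-bucket --bucket altairxfile --region ap-southeast-2 \
    --create-bucket-configuration LocationConstraint=ap-southeast-2
aws s3api create-bucket --bucket altairxlog --region ap-southeast-2 \
    --create-bucket-configuration LocationConstraint=ap-southeast-2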

1.2 Configure Bucket Properties

Select the target bucket and configure Properties as required. I enabled logging and sent the logs to the log bucket. In addition, versioning is enabled to track changes and revert to previous file versions; it not only strengthens security but also allows file recovery.

S3_bucket_property.png
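
The same properties can also be set from the CLI. A sketch, assuming the bucket names above; note that the console’s ‘Enable logging’ also grants the S3 log delivery group write access to the log bucket, which you would have to arrange yourself on the CLI path:

#enable versioning on the file bucket
aws s3api put-bucket-versioning --bucket altairxfile --versioning-configuration Status=Enabled

#deliver access logs to the log bucket under an 'altairxfile/' prefix
aws s3api put-bucket-logging --bucket altairxfile \
    --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"altairxlog","TargetPrefix":"altairxfile/"}}'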

1.3 Create Bucket Policy

The bucket policy is created from ‘Properties > Permissions > Edit bucket policy’.

The following JSON script enforces:

  • Uploads must use server-side encryption with AES256.
  • Downloads are only allowed from the whitelisted IP ‘8.8.8.8’.
altairxfile Bucket Policy
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        },
        {
            "Sid": "IPDeny",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::altairxfile/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "8.8.8.8/32"
                }
            }
        }
    ]
}
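
The policy can be pasted into the console editor as described above, or pushed from the CLI. A sketch, assuming the JSON is saved locally as ‘altairxfile_bucket_policy.json’ (a hypothetical filename):

#attach the bucket policy from a local JSON file and confirm the result
aws s3api put-bucket-policy --bucket altairxfile --policy file://altairxfile_bucket_policy.json
aws s3api get-bucket-policy --bucket altairxfile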

2.1 Create User Policy

We then create the S3 user policies using JSON, from ‘IAM > Policies’. Please note that IAM has no regional setting; it is always ‘Global’.

In the following example, we create three policies, which will be applied to the three user groups ‘S3_HR’, ‘S3_LOG’ and ‘S3_USER’ respectively. Custom-built policies can be filtered through ‘Customer Managed’.

S3_IAM_policy.png

The ‘S3_HR’ policy enforces the following rules:

  • S3_HR can manage all files and subfolders under ‘altairxfile/user/’.
  • S3_HR cannot access any other buckets or folders.
S3_HR Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": [
                "arn:aws:s3:::altairxfile/user"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::altairxfile/user/*"
            ]
        }
    ]
}
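
The same JSON can be registered as a customer managed policy from the CLI instead of the console. A sketch, assuming the document is saved as ‘s3_hr_policy.json’ (a hypothetical filename); the other two policies are created the same way:

#create the customer managed policy 'S3_HR' from the JSON document
aws iam create-policy --policy-name S3_HR --policy-document file://s3_hr_policy.json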

The ‘S3_USER’ policy enforces the following rules:

  • S3_USER members have a home folder under ‘altairxfile/user/’, named after their username.
  • S3_USER members can only upload files to and delete files from their home folder.
  • S3_USER members cannot access any other buckets or folders.
S3_USER Policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGroupToSeeBucketListInTheConsole",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "AllowRootAndHomeListingOfCompanyBucket",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "user/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfUserFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "user/${aws:username}/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::altairxfile/user/${aws:username}/*"
            ]
        }
    ]
}

2.2 Create Group

Create the user groups under ‘IAM > Groups’ and attach the respective policies.

S3_IAM_group.png
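
If you prefer to script this step, the equivalent CLI calls look like the following sketch; the account ID in the policy ARN is a placeholder:

#create the group and attach its customer managed policy
aws iam create-group --group-name S3_HR
aws iam attach-group-policy --group-name S3_HR \
    --policy-arn arn:aws:iam::111122223333:policy/S3_HR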

2.3 Manage Password Policy

Password policy can be managed under ‘IAM > Account settings’, as below:

S3_IAM_pwdpolicy.png

If users are expected to change their password upon first sign-in, you need to enable ‘Allow users to change their own password’.
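
The account password policy can also be set from the CLI. A sketch of a reasonably strict policy; adjust the length and age to your own standard:

#enforce password complexity, rotation and self-service password change
aws iam update-account-password-policy \
    --minimum-password-length 12 \
    --require-uppercase-characters --require-lowercase-characters \
    --require-numbers --require-symbols \
    --max-password-age 90 \
    --allow-users-to-change-password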

2.4 Create User

Users are created under ‘IAM > Users’. Users are assigned to groups (created in 2.2) and therefore inherit the respective group policies (created in 2.1).

S3_IAM_user.png
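
For completeness, the same user setup can be scripted. A sketch using the lab username ‘u12fx’ and a placeholder temporary password:

#create the user and a console login profile that forces a password change at first sign-in
aws iam create-user --user-name u12fx
aws iam create-login-profile --user-name u12fx --password 'TempPassw0rd!' --password-reset-required

#add the user to a group so it inherits the group policy
aws iam add-user-to-group --user-name u12fx --group-name S3_USER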

2.5 Notify User

Upon completing user creation, we can directly send an email to the user.

S3_IAM_email.png

 

The AWS auto-generated email content includes logon details, as below:

S3_email.png

2.6 Configure MFA (Optional)

Multifactor authentication can be enabled from ‘IAM > Users > [select user] > Security credentials > Assigned MFA device’, as below:

S3_IAM_MFA.png

If users are to use a soft token for two-factor authentication, they can install Google Authenticator on their mobile phone and follow the AWS virtual MFA instructions to complete the configuration.

To Be Continued

In the next article, I will demonstrate how a user uploads files to their home folder in the AWS S3 bucket, along with a few tests of the security policies.

 


Build Secure File Transfer Solution Using AWS S3 (1)

We All Need Secure File Transfer

It is not unusual for companies to protect their commercial and client information. It is not unusual for government agencies to protect national security and personal information.

However, during a job application or another kind of assessment, our personal information may be transferred to the recruiting agent and/or the employer in a less secure way – via public email. Many employers these days require far more than just a CV: a passport copy, driver’s licence, birth certificate, citizenship proof, social welfare card and academic certificates, to list just a few.

If such information leaks, someone may impersonate us, gain our access and privileges, even endanger our company or country – OK, I might watch too many movies 🙂

It can be really easy and inexpensive to secure our file transfer. A file encryption tool can be a simple and free answer, such as 7-Zip for Windows and Keka for macOS.

Following is an encryption example from Keka on my Mac. The job applicant can then email the encrypted file to the agent/employer and advise the password via text message or phone. Keeping the file and the password separate helps enhance security.
Keka_Encrypt.png

If files are too large for email to handle, or more comprehensive security is required, then AWS S3, described below, can be an easy and inexpensive solution.

Why AWS S3 Storage?

AWS S3 has received IRAP accreditation and is a cloud service certified by the Australian federal government.

Some benefits of AWS S3 include, but are not limited to:

  • Regional storage is available, which meets the government requirement for onshore storage
  • The physical hardware, hosting environment etc. have passed IRAP assessment
  • Central authentication and authorisation, including two-factor authentication, are available
  • User access policies, whitelisting, file and transport encryption can be enforced
  • Log information is available
  • Versioning is available in case of accidental deletion and for auditing purposes
  • High availability and tape backup are available – please refer to the AWS S3 Product Page
  • Inexpensive, especially when using Reduced Redundancy Storage (RRS); the service is charged based on ongoing storage usage

Lab Solution Design

I built a secure file transfer solution over the weekend for personal and small-group use, though it is not fully polished yet. The example organisation is called AltairX.

Design diagram is as below:
S3_SecureTransfer_Design.jpg
Security considerations are as below; the classification follows AWS functions:

1. Authentication – User and Credential

  • The username must not reflect the user’s actual name, to enhance security; e.g. ‘u12fx’ is used.
  • The user must be assigned to group(s) for access permissions.
  • The user can only access the file storage via a browser, i.e. API access is not allowed in our case – though it can be designed in if required.
  • The user is assigned an initial auto-generated password and must change it at the next sign-in.
  • Password complexity and expiration/renewal requirements are enforced.
  • Privileged users, HR in our case, must use two-factor authentication to log in.

2. Authorisation – Group Policies

Three groups are created: S3_USER, S3_HR and S3_LOG. Each group is associated with a group policy, and each user is assigned to the required group.

2.1 S3_USER Policy

  • Users can only access their home folder, e.g. user ‘umezh’ can only access ‘altairxfile/user/umezh’, not ‘altairxfile/user/u12fx’.
  • Users can upload and delete files in their home folder, but cannot download files from it.

2.2 S3_HR Policy

  • HR users can access all users’ home folders, i.e. ‘altairxfile/user/*’, but no other folders, whether inside or outside the ‘altairxfile’ bucket.
  • HR users can upload, delete and download files in any user’s home folder, e.g. download files from ‘altairxfile/user/u12fx’ and ‘altairxfile/user/umezh’.

2.3 S3_LOG Policy

  • Log users can only access log files stored in the ‘altairxlog’ bucket, not other buckets.
  • Log users can only read and download logs; they cannot delete, modify or upload logs.

3. Resource Access Control – Bucket Policies

3.1 ‘altairxfile’ Bucket Policy

  • Any documents stored in this bucket must use server-side AES256 encryption. The encryption is handled by AWS with AWS-managed keys, so users do not have to encrypt on their side (see the sketch after this list).
  • File download is only allowed from whitelisted IPs, e.g. AltairX’s public IP in our case.
  • Private access is enforced on all files and folders; public access without authentication is not allowed.
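
Purely to illustrate the encryption condition (the lab itself limits end users to the browser console, where the same requirement applies), an upload has to carry the server-side encryption header or it is denied. A hypothetical CLI example:

#denied by the bucket policy: no server-side encryption header
aws s3 cp passport.pdf s3://altairxfile/user/u12fx/passport.pdf

#accepted: '--sse AES256' adds the x-amz-server-side-encryption header
aws s3 cp passport.pdf s3://altairxfile/user/u12fx/passport.pdf --sse AES256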

3.2 ‘altairxlog’ Bucket Policy

  • Same requirements as applied to ‘altairxfile’ bucket.

4. Logging and Auditing

  • User access and activity logs are stored in a separate bucket, i.e. ‘altairxlog’.
  • Versioning is enabled to track object changes (folders and files in our case).
  • Event alerts can be configured for email and/or message notifications if required.

5. File Transmission Encryption – HTTPS(TLS)

The AWS S3 service dropped SSL support a few years ago and enforces TLS. I used SSL Labs to assess AWS S3 HTTPS security. We can see the overall rating is pretty good.
s3_ssllab
TLS and SSL support information is as below. It shows that SSL is no longer supported. End users can also force a TLS 1.2-only connection by restricting the browser security settings to TLS 1.2.

s3_ssllab_tls

6. Other processes and policies

  • Files should be downloaded from AWS S3 within 24 hours of being received and, if required, stored in the company’s secured on-premises storage.
  • Files are deleted from AWS S3 once downloaded.

To be continued…

In the next article, I will test the secure file transfer setup, include a user manual, and share some policy scripts written in JSON.

Deploy Scalable and Reliable WordPress Site on LEMP(3)

Introduction

In the previous post Deploy Scalable and Reliable WordPress Site on LEMP(2), we successfully set up the Linux+Nginx+PHP+MySQL (LEMP) stack to host the WordPress site. However, the Nginx and PHP services were running on the same server, WEB01.

In this lab, we will separate PHP onto an external server, PHP01, and leave WEB01 as the Nginx web server only. This adds flexibility to the scalability strategy. For performance enhancement details, please refer to Scaling PHP apps via dedicated PHP-FPM nodes, a test post I found online.

Deployment Steps

This lab involves deploying PHP01 and changing the configuration on WEB01 to forward PHP requests to PHP01. The topology is as below:
nginx_env

Step 1 – Configure PHP01 as php-fpm node

Boot another $5 Ubuntu server from DigitalOcean; details are available in Deploy Scalable and Reliable WordPress Site on LEMP(1).

#log onto PHP01
#update and install glusterfs client, php-fpm and php-mysql services on PHP01
sudo apt-get update
sudo apt-get -y install glusterfs-client php-fpm php-mysql
#make a folder called 'gluster' under root
sudo mkdir /gluster
#mount the 'file_store' volume on FS01 to '/gluster' on PHP01. '10.132.43.212' is FS01's private IP. 'glusterfs' is the filesystem type.
sudo mount -t glusterfs 10.132.43.212:/file_store /gluster

# add the mount to fstab so it mounts automatically at boot time ('sudo' does not apply to a shell redirection, so use tee instead)
echo "10.132.43.212:/file_store /gluster glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab

Verify that PHP01 can access the same WordPress folder we created on FS01 earlier by executing the following command on PHP01.

sudo ls -la /gluster/www

Step 2 – WEB01 forward PHP requests to PHP01

Log onto WEB01, edit the nginx site default file by executing the following command:

sudo nano /etc/nginx/sites-enabled/default

Update the file as below, where ‘10.132.19.6’ is PHP01’s private IP and 9000 is the port used by FastCGI. Comment out the local FastCGI socket ‘unix:/run/php/php7.0-fpm.sock’.

	# pass the PHP scripts to FastCGI server listening on the php-fpm socket
        location ~ \.php$ {
                try_files $uri =404;
                #fastcgi_pass unix:/run/php/php7.0-fpm.sock;
                fastcgi_pass 10.132.19.6:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                
        }
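
Before restarting, it is worth validating the edited configuration, since an invalid file would take the site down:

#check nginx configuration syntax
sudo nginx -t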

Restart Nginx service by executing the following command:

service nginx restart

Let’s now browse to the WordPress site and see whether it still works as it did in Deploy Scalable and Reliable WordPress Site on LEMP(2).

Unfortunately, we get a ‘502 Bad Gateway’ message this time.
nginx_badgateway
Fortunately, Nginx provides an error log. Execute the following command on WEB01 to view it.

cat /var/log/nginx/error.log

Nginx error log reveals the following:

2016/11/04 10:25:21 [error] 18867#18867: *1 connect() failed (111: Connection refused) while connecting to upstream, client:x.x.x.x, server:y.y.y.y, request: "GET / HTTP/1.1", upstream: "fastcgi://10.132.19.6:9000", host: "y.y.y.y"

OK…it appears WEB01 passed on the request to PHP01, but PHP01 refused the connection. Step 3 will help resolve the issue.

Step 3 – Allow PHP01 to Listen to WEB01

On PHP01, edit the PHP ‘www.conf’ file to accept connections from WEB01.

sudo nano /etc/php/7.0/fpm/pool.d/www.conf

Perform the following changes in ‘www.conf’.

#Add the following line to allow WEB01's private IP
listen.allowed_clients = 10.132.84.104

#Comment the following line by adding ";" in the front.
;listen = /run/php/php7.0-fpm.sock

#Add the following line to have php-fpm listen on port 9000
listen = 9000

Restart PHP on PHP01 and Nginx on WEB01.

#On PHP01 restart php service
service php7.0-fpm restart
#On WEB01 restart nginx service
service nginx restart
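
To confirm php-fpm is now listening on the TCP port rather than the local socket, a quick check on PHP01 (using 'ss' from iproute2, which ships with Ubuntu 16.04):

#php-fpm should show a listener on port 9000
sudo ss -tlnp | grep 9000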

Let’s now access the WordPress site again. ‘Hello world!’ – it’s working!

nginx_resumed.png

The End

Deploy Scalable and Reliable WordPress Site on LEMP(2)

Deploy Scalable and Reliable WordPress Site on LEMP(1) introduced the LEMP design, the lab setup and the MySQL configuration. This post will further introduce how to deploy the Gluster distributed file system and the Nginx web server. PHP will initially be enabled on the Nginx web server to prove the WordPress site is working. The next post will cover hosting PHP on a separate server.

Step 2 – Gluster Distributed File System

Gluster is a scale-out network-attached file system. It aggregates storage servers, known as ‘storage bricks’, into one large parallel network file system. Virtual volumes are created across the member bricks. Servers with the GlusterFS client service installed can mount the remote virtual volume.

There are 3 types of virtual volumes. Please refer to ‘GlusterFS Current Features & Roadmap‘ for details.

  • Distributed Volume: similar to RAID 0 without replica; files are evenly spread across bricks.
  • Replicated Volume: similar to RAID 1, which copies files to multiple bricks.
  • Distributed Replicated Volume: Distributes files across replicated bricks.

In this lab, we will deploy 1 node in the GlusterFS cluster, which means ‘Distributed Volume’ mode is used. Additional bricks can be added later.

Configuration is as below. Refer to the Gluster installation guide for details: https://gluster.readthedocs.io/en/latest/Install-Guide/Install/.

#update and install gluster on FS01. '-y' means automatically answering yes to all prompts.
sudo apt-get update

#ubuntu Personal Package Archive(PPA) requires 'software-properties-common' to be installed first.
sudo apt-get install -y software-properties-common

#add the community GlusterFS PPA
sudo add-apt-repository -y ppa:gluster/glusterfs-3.8

#update again
sudo apt-get update

#install GlusterFS server
sudo apt-get install -y glusterfs-server

#create a volume called 'file_store'. If replica is required, add 'replica n' after the volume name. 
#'10.132.43.212' is the brick's private IP. If multiple bricks exist, all member IPs are required in the command.
gluster volume create file_store transport tcp 10.132.43.212:/gluster force

#start the 'file_store' volume. The volume is ready to use now.
gluster volume start file_store
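
Before mounting the volume anywhere, it is worth confirming it is actually up. A quick check on FS01:

#show volume details and confirm 'Status: Started'
sudo gluster volume info file_store
sudo gluster volume status file_store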

Step 3 – Nginx Web Server

In this step, we will create an Nginx web server with PHP integrated initially; mount the virtual volume created in Step 2 to the web server; download the WordPress files to the mounted folder; and then update the WordPress config file to point to the mounted folder and connect to the database created in Step 1.

The following configuration shows which services are to be installed and how to mount the external volume as a partition.

#update and install nginx, glusterfs client, php-fpm and php-mysql services on WEB01
sudo apt-get update
sudo apt-get -y install nginx glusterfs-client php-fpm php-mysql

#Nginx used to require setting php-fpm pathinfo to false ('cgi.fix_pathinfo=0') in 'php.ini'; the default is true ('1'). It was a security issue related to Nginx and older versions (5.0) of php-fpm.
#the change is not required as we are using PHP 7.0

#make a folder called 'gluster' under root
sudo mkdir /gluster
#mount the 'file_store' volume on FS01 to '/gluster' on WEB01. '10.132.43.212' is FS01's private IP. 'glusterfs' is the filesystem type.
sudo mount -t glusterfs 10.132.43.212:/file_store /gluster

# add the mount to fstab so it mounts automatically at boot time ('sudo' does not apply to a shell redirection, so use tee instead)
echo "10.132.43.212:/file_store /gluster glusterfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab

# create a folder 'www' under '/gluster'. '/gluster/www' will be the web root.
sudo mkdir /gluster/www

We now need to modify the default Nginx server block to point to our new web root, ‘/gluster/www’.

sudo nano /etc/nginx/sites-enabled/default

Modify the ‘/etc/nginx/sites-enabled/default’ file content as below. The key changes are the ‘root’, ‘index’, ‘server_name’ and ‘try_files’ directives and the PHP location block.

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;

	root /gluster/www;
	index index.php index.html index.htm;

	# Make site accessible from http://localhost/
	server_name autrunk.com;

	location / {
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		#try_files $uri $uri/ =404; 
                #the following config sends everything through to index.php and keeps the appended query intact.
                try_files $uri $uri/ /index.php?q=$uri&$args;
	}
	# pass the PHP scripts to FastCGI server listening on the php-fpm socket
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/run/php/php7.0-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;              
        }
}

Download and configure WordPress.

#download WordPress files and unzip
wget https://wordpress.org/latest.tar.gz -O /root/wp.tar.gz
tar -zxf /root/wp.tar.gz -C /root/

#copy the WordPress files to our new web root on WEB01. After the copy, the files are also shown on FS01, as it is the storage destination mounted to WEB01.
cp -Rf /root/wordpress/* /gluster/www/.

#copy sample WordPress config file to 'wp-config.php', where we can define the database connection.
cp /gluster/www/wp-config-sample.php /gluster/www/wp-config.php

We now update ‘/gluster/www/wp-config.php’ with the ‘wordpress1’ database and ‘wpuser1’ information, which was created in Step 1.

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress1');

/** MySQL database username */
define('DB_USER', 'wpuser1');

/** MySQL database password */
define('DB_PASSWORD', 'password');

/** MySQL hostname */
define('DB_HOST', '10.132.88.196');

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');

/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');

We now make the web root ‘/gluster/www’ owned by the user ‘www-data’ and the group ‘www-data’, which the Nginx process will use. Finally, restart the Nginx and php-fpm services to apply our previous config changes.

chown -Rf www-data:www-data /gluster/www
service nginx restart
service php7.0-fpm restart

Step 4 – WordPress Site

We can now access the WordPress site via WEB01’s public IP, or its DNS name if one has been set up. The following page should appear and lead to the initial setup wizard.
wp_config.png

Self-hosted WordPress provides much more flexibility and many more features than a site hosted on WordPress.com.
wordpress_selfhost.png

To be Continued

Deploy Scalable and Reliable WordPress Site on LEMP(1)

Introduction

In this lab, we will develop and deploy a scalable and reliable WordPress site on the LEMP stack. Unlike the LAMP stack, LEMP uses nginx (“engine x”) instead of Apache as the web server. Compared with Apache, nginx can handle many more concurrent hits. I found an article on the Internet demonstrating the performance difference: ‘Web server performance comparison‘.

Please refer to nginx official site https://nginx.org/en/ for further details.

Design Rationale

A design proposed for a production environment is as below. A firewall is not illustrated in the diagram but is required in a production environment.
nginx_env

Design Key Points:

  • A pair of load balancers in an active/passive HA cluster, sharing a single virtual IP (VIP). The DNS name will be associated with the VIP.
  • The separation of the web, application and backend layers into different subnets allows flexibility in enforcing security.
  • Web, PHP, database and file server clusters are created to achieve high availability and horizontal scalability.
  • PHP servers, file servers and database servers are external to the web servers, which helps scalability. The file server cluster will store the WordPress files and be mounted on the web nodes and PHP nodes.

Lab Description

Lab Scope

Due to the limits of my time and computing resources (= ‘money’), the boxes highlighted in RED in the above diagram are in the current lab scope. If time allows, I will deploy the LVS load balancers as well. If you are interested in pfSense as a load balancer, please refer to my posts Use pfSense to Load Balance Web Servers (1) and Use pfSense to Load Balance Web Servers (2).

Lab Servers and Software

I used DigitalOcean for cloud servers. It is cheaper than AWS. The cheapest instance is only US$5/month.

DigitalOcean $5 deal provides: 1 CPU, 512MB RAM, 20GB SSD, and 1000GB transfer. All servers used in this lab are $5 servers.

If you decide to use DigitalOcean, please use my referral link http://www.digitalocean.com/?refcode=81650e396096. You will get US$10 in credit immediately; and I may get some referral benefits as well, so win-win.

The downside is that DigitalOcean provides fewer features. It allows private IPs… but the private IP is automatically generated based on the selected datacentre (DC). For example, all my servers are in the New York 3 DC, and their private IPs are all in the same subnet. This means I cannot manipulate IP allocation and routing as I did in AWS. In addition, it provides few ready-to-use security mechanisms, though we can use the Linux native firewall and install additional security services.

It is good and bad: good for quick DevOps testing with no hassle over networking; bad for a production environment or if you particularly like networking… myself for example 🙂

Web Service Layer              Application/Service           Platform
Load Balancer Layer            Linux Virtual Server (LVS)    Not deployed yet
Web Server Layer               Nginx                         Ubuntu 16.04.1 x64
Application Layer              php-fpm, php-mysql            Ubuntu 16.04.1 x64
Backend Layer - Database       MySQL                         Ubuntu 14.04.5 x64
Backend Layer - File System    Gluster                       Ubuntu 16.04.1 x64

Deployment Steps

Step 1 – MySQL Database

#update and install mysql server on DB01. A GUI window will appear to assign mysql root password
sudo apt-get update
sudo apt-get -y install mysql-server

#create the 'wordpress1' database. '-u' is followed by the username; '-p' prompts for the password on a separate line.
sudo mysqladmin -u root -p create wordpress1

#change root password if required.
sudo mysqladmin -u root -p password 

#enter mysql shell and enter password in separate line.
sudo mysql -u root -p

#create a user (CREATE USER 'wpuser1'), who can access from any host (@'%'), and with a password ('password'). Remember to add ';' at the end of each command under mysql shell to complete a command.
CREATE USER 'wpuser1'@'%' IDENTIFIED BY 'password';

#grant user 'wpuser1' full privileges on the 'wordpress1' database only, not globally.
GRANT ALL PRIVILEGES ON wordpress1.* TO 'wpuser1'@'%';

#verify the existence of 'wordpress1' database and 'wpuser1'
show databases;
select User from mysql.user;

#Update database permissions and exit mysql shell
flush privileges;
exit

#Edit mysql config file to update the bind address from 127.0.0.1(loopback) to the actual private address. 
#Refer 'Note' section for details.
sudo nano /etc/mysql/my.cnf

#restart mysql service
sudo service mysql restart

Note – MySQL Bind Address: 

If the bind address in ‘my.cnf’ remains loopback, the database will not allow remote database access. When accessing the website, the following will show:
db_error.png
We will need to edit ‘/etc/mysql/my.cnf‘ as follows, where ‘10.132.88.196’ is the DB server’s private IP.
mycnf.png

Then we execute ‘service mysql restart‘ to restart the mysql service. The following screenshot shows the DB server listening on localhost port 3306 before restarting the mysql service, and listening on DB01’s IP after restarting it.

I also checked the firewall status to make sure no traffic is accidentally blocked.

db_error2.png
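
To reproduce those checks from the shell, something like the following works on DB01 (assuming ufw is the firewall in use):

#confirm mysql is listening on the private IP rather than 127.0.0.1
sudo netstat -lnt | grep 3306

#confirm the firewall is not blocking port 3306
sudo ufw status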

To be Continued

Next Deploy Scalable and Reliable WordPress Site on LEMP(2)

AWS Exam Preparation: Product Mindmap

Finally I got a chance to update the blog before October. It was a busy month: I got a new job that will hopefully be more focused on cloud, I am preparing for the AWS exam, starting Linux/OpenStack architecture training, and travelling 🙂

As a fan of sharing, I put my exam prep into the following mind map, which not only helps with the exam but also outlines the AWS products, if you are interested in what they do.

Enjoy!

(https://autrunk.files.wordpress.com/2016/09/aws_product_mindmap.jpg)