Enhancements to the Cloud Deployment architecture for web applications

In this chapter, we will discuss enhancements to our previous server deployment architecture. The previous setup was a very basic implementation; with these enhancements, we focus on scalability, high availability, and security.

The diagram below shows our enhanced architecture. Since we are using the AWS platform for this series, some of the terms and services are specific to AWS; other cloud providers offer equivalent services under different names.

Cloud Deployment architecture for web applications with high availability


Key points of our enhanced server deployment architecture

  • Port 22 for SSH access is closed on all server instances. This helps narrow the attack surface available to attackers, and it also removes the hassle of keeping SSH keys secure (see the firewall and Session Manager sketch after this list).
  • Introduction of Session Manager, a fully managed AWS Systems Manager capability. Session Manager provides an interactive shell and CLI for secure, access-controlled, and audited management of Windows and Linux instances without opening inbound ports.
  • Implementation of CloudWatch, a monitoring and management service that provides data and actionable insights. We can use CloudWatch to centralize the logs from all of our systems, applications, and AWS services (a log-tailing sketch follows the list).
  • Implementation of the load balancer provided by Lightsail. It routes web traffic across our multiple Lightsail instances so that the application can accommodate variations in traffic and gracefully handle server outages by routing requests to the remaining healthy servers (sketched after this list). The Lightsail load balancer is a simplified offering and comes with several limitations.
  • Usage of multiple Availability Zones to host the instances of the same application, providing high availability.
  • Implementation of MySQL database replication to maintain a primary server as well as multiple secondary read replicas. If the primary server goes down, one of the secondary replicas is promoted to primary (a replica-setup sketch follows the list).
  • Implementation of Redis replication with a primary server and multiple read replicas. If the primary server goes down, one of the read replicas is promoted to primary (sketched below).
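
To illustrate a couple of these points, here is a minimal sketch of closing port 22 on a Lightsail instance's firewall and opening a shell through Session Manager instead. The instance name and target ID are placeholders, and the Session Manager step assumes the SSM Agent is running and the instance has been registered with Systems Manager as a managed node.

```bash
# Close port 22 on the Lightsail instance firewall ("web-1" is a placeholder name).
aws lightsail close-instance-public-ports \
  --instance-name web-1 \
  --port-info fromPort=22,toPort=22,protocol=TCP

# Open an audited, access-controlled interactive shell via Session Manager instead of SSH.
# The target ID is a placeholder; the instance must already be an SSM managed node.
aws ssm start-session --target mi-0123456789abcdef0
```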
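
Once the CloudWatch agent (or the application itself) ships logs to CloudWatch Logs, they can be inspected centrally from the CLI. The log group name below is hypothetical.

```bash
# Follow a centralized log group in near real time (requires AWS CLI v2).
aws logs tail /webapp/nginx-access --follow --since 1h

# Search the same group for recent error events.
aws logs filter-log-events \
  --log-group-name /webapp/nginx-access \
  --filter-pattern ERROR
```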
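
Creating the Lightsail load balancer and attaching instances that live in different Availability Zones might look roughly like this; the load balancer and instance names are placeholders.

```bash
# Create a Lightsail load balancer that forwards web traffic to port 80 on the instances.
aws lightsail create-load-balancer \
  --load-balancer-name web-lb \
  --instance-port 80

# Attach instances hosted in different Availability Zones; the load balancer
# health-checks them and sends traffic only to the healthy ones.
aws lightsail attach-instances-to-load-balancer \
  --load-balancer-name web-lb \
  --instance-names web-1 web-2
```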
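
For the database tier, pointing a MySQL read replica at the primary can be sketched as follows, assuming MySQL 8.0.23 or later with GTID-based replication enabled and a replication user already created on the primary (older versions use CHANGE MASTER TO / START SLAVE). The host, user, and password are placeholders.

```bash
# Run on the replica server; connection details are placeholders.
mysql -u root -p -e "
  CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = '10.0.1.10',
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = 'replica-password',
    SOURCE_AUTO_POSITION = 1;
  START REPLICA;
"

# Confirm the replica is connected to the primary and applying changes.
mysql -u root -p -e "SHOW REPLICA STATUS\G"
```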
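
Similarly, a Redis read replica can be attached to the primary with a single command; hosts and ports are placeholders, and automatic promotion on primary failure is normally handled by Redis Sentinel or a managed service rather than by hand.

```bash
# Run against the replica node (Redis 5+); the primary's address is a placeholder.
redis-cli -h 10.0.2.21 -p 6379 replicaof 10.0.2.20 6379

# Check the replication role and link status on the replica.
redis-cli -h 10.0.2.21 -p 6379 info replication
```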

We implemented our previous server deployment architecture through a manual process. A manual process takes a lot of time to set up the infrastructure, and it leaves no versioning information on how the infrastructure evolved over time. Setting up multiple environments (dev, stage, and prod) also takes an almost identical amount of time for each one. With the help of Infrastructure as Code (IaC) tools, we can automate the provisioning of server infrastructure easily and reliably. With IaC, we write code to manage the infrastructure and host those code files in a Git repository, which gives us a history of how each change was implemented. Terraform, developed by HashiCorp, is one of the most popular IaC tools. In later chapters, we will set up our entire infrastructure using Terraform.
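
As a preview, the day-to-day Terraform workflow looks roughly like this minimal sketch; the infrastructure/ directory is a hypothetical project layout, and the actual resource definitions will be covered in the later chapters.

```bash
# Hypothetical layout: infrastructure/ holds the *.tf files, tracked in Git
# so that every change to the environment is versioned and reviewable.
cd infrastructure

terraform init                 # download providers and initialize the backend
terraform fmt                  # normalize formatting of the .tf files
terraform validate             # catch syntax and reference errors early
terraform plan -out=tfplan     # preview the changes before applying them
terraform apply tfplan         # provision or update the infrastructure
```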

Lightsail instances come with many limitations that can be overcome by using EC2 instances on the AWS platform. In later chapters, we will discuss the server deployment architecture using an AWS VPC and EC2 instances.

In the next chapter, we will set up load balancing using the NGINX web server.
