Deploy Node.js application in Cloud Server

In this chapter, we will run our Node.js application inside a Docker container on the application server instance. To configure the server instance with all the necessary software dependencies, we first need to connect to it over SSH. There are multiple options for connecting to the instance. Some of them are as follows:

  • Using a terminal/command prompt with an SSH connection string. It looks like below:
   
   	ssh -i "travel-app-vpn.cer" ec2-user@100.21.180.236
   

The key file can have a .pem extension as well. We need to modify the file permissions before we can successfully connect to the instance over SSH.

The chmod 400 command sets the file permissions so that only the current user/owner can read the file. No other actions (write and execute) are allowed, and apart from the owner, no one else (group and others) has any access at all.

   
   	chmod 400 travel-app-vpn.cer
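   

To see the effect of this change, list the file permissions after running chmod 400. The owner, group, size, and date in the sample output below are placeholders; the important part is the leading -r--------, which means read-only access for the owner and no access for anyone else:

   
   	ls -l travel-app-vpn.cer
   	-r-------- 1 user user 1692 Jan 10 10:00 travel-app-vpn.cer
   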
   
  • Using the Lightsail browser-based SSH client for Linux instances and the Lightsail browser-based RDP client for Windows instances.

    Click on the Connect using SSH button to connect to the instance from the browser
Lightsail browser-based SSH client for linux instances
  • Using the Session Manager feature, in the case of AWS.

    We will look into this in detail in later chapters, as part of enhancements to the server architecture.

While configuring the server instances, we restricted the SSH port so that it is accessible only from the static IP associated with the VPN server. Let's keep that as it is. For this chapter, let's use the Lightsail browser-based SSH client. Click on the Connect using SSH button in the application server details section as shown in the above image. It will open a browser-based terminal in another tab, which looks like below:

Lightsail browser-based SSH client terminal

Now, we are ready to start configuring the server. Please note that Amazon Linux 2 server instances use the yum package manager.

The first thing we should do with a new server is update all the installed packages to their latest available versions and remove obsolete packages. Issue the following commands:

   
   	sudo yum -y update
	sudo yum -y upgrade
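   

If you want to double-check that nothing is left to update, you can optionally run yum's check-update command; an empty result means the system is up to date:

   
   	sudo yum check-update
   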
   

On cloud servers, we usually install the Docker Engine to work with Docker containers. You can refer to the following link for installing the Docker Engine for your Linux operating system.

https://docs.docker.com/engine/install/

In AWS, the Docker installation process is slightly different from other platforms. Amazon and Docker have partnered to make using Docker on AWS infrastructure very simple. For an Amazon Linux 2 server instance, let's follow the steps below to install the Docker Engine and the Docker CLI.

  • Issue the following command to install Docker and its dependencies on instances using the Amazon Linux 2 image.
   
   	sudo amazon-linux-extras install docker -y

   

Now that Docker is successfully installed, check the Docker version.

   
   	docker --version

   

If the output is similar to what is shown below, we are ready to use Docker.

   
	Docker version 20.10.13, build a224086
   

Start the docker service

   
   	sudo service docker start

   

Restart the docker service

   
   	sudo service docker restart

   

Stop the docker service

   
   	sudo service docker stop

   

Check the status of docker service

   
   	sudo service docker status

   
Check docker status


To make sure that the Docker daemon starts after each system reboot, issue the following command:

   
   	sudo systemctl enable docker
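   

To verify that the Docker service is now enabled at boot, you can optionally run the following command; it should print enabled:

   
   	sudo systemctl is-enabled docker
   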

   

We have been using the sudo command to run Docker so far. To execute docker commands without the sudo prefix, let's add our current user, ec2-user in our case, to the docker group.

   
   	sudo usermod -a -G docker ec2-user

   

Now, restart your SSH session and you will be able to run docker commands without the sudo prefix. This is the recommended approach. Group membership only takes effect in new sessions, which is why the restart is needed. To restart the session, simply refresh the page in the case of the browser-based SSH client; in the case of a terminal, exit the current session and start a new one.
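
After starting the new session, you can optionally confirm that ec2-user is now part of the docker group; the output should include docker:

   
   	id -nG ec2-user
   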

Run the following commands to verify the changes:

   
   	docker info

	docker ps
   

Now that Docker is ready for use, let's clone our Node.js application's git repository onto our cloud server.

Create a folder called workspace in the current working directory, which in our case is /home/ec2-user, and then navigate into it.

   
   	mkdir workspace && cd workspace/

   

We now need to download our project files onto the cloud server. For that, let's use git. Install git:

   
   	sudo yum install git -y

   

Clone our application git repository:

   
   	git clone https://github.com/nodexplained/travel-application.git

   

Since this is a public repository, it will be downloaded without any credential prompts. For private repositories, git will prompt for credentials (note that GitHub requires a personal access token instead of the account password for HTTPS access).

Now, let's build the docker image from the root of our project directory. Navigate to the project root directory.

   
   	cd travel-application/

   
git project assets

Issue the following command to build a docker image:

   
   	docker build -t travel-app .
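   

You can confirm that the image was built by listing the local images; an entry with the repository name travel-app should appear:

   
   	docker images
   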

   

Now that our Docker image is ready with the tag travel-app, issue the following command to start our Node.js application server.

   
   	docker run -d --name travel-app -p 3000:3000 travel-app

   

Check for running containers:

   
   	docker ps

   
running docker containers

Now, test one of the API endpoints to verify that the API is working fine.

   
   	curl --location --request GET 'http://localhost:3000/api/hotel' -w '\n'

   
curl command with response in new line
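
If the curl request does not return a response, a quick way to troubleshoot is to inspect the container logs:

   
   	docker logs travel-app
   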

Note:

The way we have cloned the git repository and then built the Docker image from inside that repository on the server is only good for basic applications. It works, but it is not the recommended approach. There is a much better way to handle this process: we build an image, push it to a container registry service that hosts our Docker images, and then use that hosted image to run a container on a cloud server, all in an automated way using a CI/CD pipeline. Every cloud service provider offers a container registry service; for AWS, it is the Amazon Elastic Container Registry (ECR).
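
As a rough sketch of what this registry-based flow looks like, the commands below authenticate against ECR, then tag and push our image. This assumes the AWS CLI is configured and an ECR repository named travel-app already exists; the account ID and region shown are placeholders. We will cover the full CI/CD setup in later chapters.

   
   	# Authenticate the Docker CLI against the ECR registry (account ID and region are placeholders)
   	aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
   	
   	# Tag the locally built image with the ECR repository URI and push it
   	docker tag travel-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/travel-app:latest
   	docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/travel-app:latest
   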

Since our application server instance is not reachable from the internet, we will have to wait until our web server instance is ready before everyone can test it from their own computers.

That is all for running the Node.js application server using Docker on a cloud server instance. Do you see one thing we could implement here that would massively improve our workflow?

If you are thinking of a way in which we could automatically configure our servers with all the necessary software dependencies and then run our application, with no manual effort required, you are heading in a very good direction.

We can automate everything that we have done manually in this chapter with the help of a bash script. A bash script is a plain text file containing a series of commands. It is similar to a Dockerfile, except that in a bash script we write a series of Linux commands, and when we run the script file, those commands are executed line by line.

A bash script file conventionally uses the .sh file extension. Let's create one.

   
   	touch configure_application_server.sh
   

At the top of the file, we need to have the following shebang line, which tells the system which interpreter to use to run the script. It must be the very first line of the file.

   
   	#!/usr/bin/bash
   

Copy into the file all the commands that we executed manually to configure this server.

The echo command is used to print helpful progress messages.

The final content of configure_application_server.sh looks like below:

   
	#!/usr/bin/bash

	echo "Updating presently installed packages and removing obsolete packages"
	sudo yum -y update
	sudo yum -y upgrade

	echo "Installing docker"
	sudo amazon-linux-extras install docker -y
	docker --version

	echo "Starting docker"
	sudo service docker start
	sudo service docker status
	sudo systemctl enable docker
	sudo usermod -a -G docker ec2-user
	# Note: the docker group change only applies to new login sessions. As a Lightsail
	# launch script this runs as root, so the docker commands below still work.
	docker info

	echo "Cloning application git repository"
	mkdir workspace && cd workspace/
	sudo yum install git -y
	git clone https://github.com/nodexplained/travel-application.git

	echo "Building docker images"
	cd travel-application/
	docker build -t travel-app .

	echo "Running docker images to start application server"
	docker run -d --name travel-app -p 3000:3000 travel-app

	docker ps
   

To make the bash script executable, we need to issue the following command:

   
   	chmod u+x configure_application_server.sh
   

Here, u+x grants execute permission to the file's owner (user).
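
You can verify the change by listing the file again. The size and date in the sample output below are placeholders, and the group/other bits depend on your umask; the important part is the x in the owner's permission set:

   
   	ls -l configure_application_server.sh
   	-rwxrw-r-- 1 ec2-user ec2-user 735 Jan 10 10:05 configure_application_server.sh
   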

To run/execute the bash script:

   
   	./configure_application_server.sh
   

Now, when creating Lightsail instances, we can place the contents of this bash script in the launch script section, and our server instance will be configured automatically with no manual effort at all. This will save us a lot of time. We can use bash scripts to automate much more complex tasks as well.
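
For reference, the same launch script can also be supplied from the command line when creating the instance with the AWS CLI. This is only a sketch: the instance name, availability zone, blueprint ID, and bundle ID below are example values (you can list the valid ones with aws lightsail get-blueprints and aws lightsail get-bundles):

   
   	aws lightsail create-instances \
   	  --instance-names app-server-1 \
   	  --availability-zone us-east-1a \
   	  --blueprint-id amazon_linux_2 \
   	  --bundle-id small_2_0 \
   	  --user-data file://configure_application_server.sh
   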

In our next chapter, we will set up the OpenVPN server.
