
Mar 27, 2020

Cloud Native Approach For Software Engineering. Part 2 - Containerization

A Concept of Unified Environments Where a Program That Runs Locally Will Run in Production Too.
Vasuki Vardhan G
Tech Lead - II

The previous article in the series discussed what the Cloud Native approach is and the infrastructure behind it. Those concepts serve as a foundation for what is discussed in this article.

If you missed the previous part, read it here.

So, picking up from where I left off, we know what exactly cloud-native applications are. Let's focus our attention on a specific layer of the Cloud Native stack, namely the scheduling and orchestration layer, or to be even more specific, a component of this layer.

Welcome to the World of Containers.

You have probably heard about Dockerization or Containerization. Yes, that's what we will be talking about in this article. To understand containerization, we will go through a comparative scenario between VMs and containers.

Figure 1 - Containerisation

Let's consider building a NodeJS application. In the VM realm, on the bare metal we have our OS and our hypervisor (a software layer which acts as a bridge between your host and the VMs), which take up a certain set of resources. So, in the traditional way, let's deploy our NodeJS application on a VM.

Figure 2 - Containerisation

As you can see, in the VM, when we deploy the app, we also have an OS, known as the Guest OS, being deployed along with the required libraries and the application itself. This makes the deployment very heavy; I have not seen even a small NodeJS VM take less than 400MB. As per the cloud-native approach, this deployment must be scalable, so let's try and horizontally scale it.

Figure 3 - Containerisation

As we can see, when we horizontally scale the application, the server's resources are completely utilised and there is nothing left to work with. Thus, we need a better way of doing things.

That is where containerization comes into the picture.

Figure 4 - Containerisation

As evident in the above image, in the container realm we have an OS and a Container Runtime (e.g. Docker Engine) on the bare metal server, which take up a certain set of resources. But running a container is not a straight-up process like spinning up a VM image and deploying code in it. It's a three-step process, as shown in the image. First, we create a manifest describing our deployment; this can be your Dockerfile in the case of Docker or manifest.yaml in the case of Cloud Foundry.

A sample Dockerfile looks like this:

FROM node:lts-slim
# 1. Create the app directory
WORKDIR /usr/src/app
# 2. Install app dependencies.
# The package*.json wildcard ensures both
# package.json and package-lock.json are copied.
COPY package*.json ./
# 3. Install only the production dependencies
RUN npm install --only=production
# 4. Bundle the app source
COPY . .
EXPOSE 4007
CMD [ "npm", "start" ]

Then, using this manifest, we create an image. This is a Docker image in the Docker realm, and an ACI (App Container Image) in the case of rkt (Rocket). From the image, we get the actual container itself, which contains all the libraries and environment configs required to run your application. Basically, no matter whether your choice of solution is Docker, Rocket or even Cloud Foundry, this process remains the same. Now, let's go ahead and deploy our JS container onto the server.
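To make the manifest → image → container flow concrete, here is a minimal sketch with the Docker CLI, assuming the sample Dockerfile above sits in the current directory (the tag name my-node-app is just a placeholder):

# Build an image from the Dockerfile (the manifest) in the current directory
$> docker build -t my-node-app .
# Create and start a container from that image,
# mapping host port 4007 to the port exposed in the Dockerfile
$> docker run -d -p 4007:4007 my-node-app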

Figure 5 - Containerisation

As you can see, the deployment doesn't have an OS in it but only has the required libraries. This makes containers very lightweight, and they consume far fewer resources. So, as per the cloud-native approach, let us proceed to horizontally scale the deployment.

Figure 6 - Containerisation

Even after scaling, we have plenty of resources to play with. Let's say you need to integrate with a cognitive API like Google Image Recognition, with a Python wrapper built around it. In this case, if we are using VMs, we will be pushed towards having that Python wrapper and the NodeJS application in the same VM to manage resources. But if the demand is to create a composite service out of the Python wrapper, we would want to create a separate VM for that Python application.

Figure 7 - Containerisation

In our case, we don't have any resources left on the VM server. But if we club the Python and JS applications together, we will lose the ability to scale the JS or Python app up or down individually, which is not truly cloud-native. In order to adhere to the cloud-native infrastructure, we have to clear out certain resources on our server and then deploy the Python app. That basically means we have to bring down a JS instance and spin up a Python instance.

Now let's consider the same deployment in the container realm.

Figure 8 - Containerisation

In the container realm, we can deploy the Python application alongside all the JS applications and still have the ability to scale every application independently, while having a bit of resources left over. And the best part is that if we have some resources freely available, containers can share them for optimisation; even if a container is not using its resources, they'll be made available to other containers on the server.

With Containerization, we can truly take advantage of the cloud native architecture.

We spoke about the portability of containers and the ease of scaling with them. Overall, the three-step way of pushing a container onto a server allows us to have a much more agile DevOps function with continuous integration and delivery.

Demo

Now that we know about containerization, let's see a few scenarios of containerization with Docker as our containerization tool.

Prerequisites to follow this set of demos (you can verify your setup with the version checks after the list):

  • Docker CE
  • Docker Hub Account
  • Text/Code Editor (I personally prefer VSCode but any editor that makes you comfortable is good to go)
  • NodeJS
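If you'd like to confirm your setup before starting, the usual version checks should each print a version string:

$> docker --version
$> node --version
$> npm --version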

We will be going through two scenarios:

Scenario 1: Containerizing a React Application with NGINX as our server

This is how our React project would look:

Project Structure

But to follow along with this post, you can just create a basic React app with Create React App:

$> npx create-react-app my-docker-demo

Next, we create a folder named nginx and inside that, we create a file named default.conf. This file holds the configuration required for nginx to host our React build.
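For reference, a quick way to create these from the project root:

$> mkdir nginx
$> touch nginx/default.conf

The default.conf looks like this: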

server {
  listen 80;

  sendfile on;

  default_type application/octet-stream;

  gzip on;
  gzip_http_version 1.1;
  gzip_disable      "MSIE [1-6]\.";
  gzip_min_length   1100;
  gzip_vary         on;
  gzip_proxied      expired no-cache no-store private auth;
  gzip_types        text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
  gzip_comp_level   9;

  root /usr/share/nginx/html;

  location / {
    try_files $uri $uri/ /index.html =404;
  }
}

This file specifies the port to listen on (port 80 in our case), the root directory the server should serve from, and the server-side gzip compression of the payload before it is transferred.

The next step is to create a Dockerfile for your project. This Dockerfile describes your container.

### STAGE 1: Build ###

# We label our stage as 'builder'
FROM node:11-alpine as builder

COPY package.json package-lock.json ./

## Storing node modules on a separate layer will prevent unnecessary npm installs at each build

RUN npm ci && mkdir /react-app && mv ./node_modules ./react-app

WORKDIR /react-app

COPY . .

## Build the react app in production mode and store the artifacts in build folder

RUN npm run build


### STAGE 2: Setup ###

FROM nginx:1.14.1-alpine

## Copy our default nginx config
COPY nginx/default.conf /etc/nginx/conf.d/

## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*

## From the 'builder' stage, copy the artifacts in the build folder to the default nginx public folder
COPY --from=builder /react-app/build /usr/share/nginx/html

CMD ["nginx", "-g", "daemon off;"]

In the above example, we describe a two-stage build for our React application, where the Stage 1 container is used for building and the Stage 2 container is used for serving.

In Stage 1, the container is built from a prebuilt Alpine Linux image with Node 11 already installed. First, we copy package.json and package-lock.json into the container and run npm ci to install all the required node_modules. Storing the node_modules in a separate layer prevents unnecessary installs on every build. Then we switch our working directory, copy all our application code into the container, and run npm run build to create a production-optimised build of our React app.

Stage 2 is the container that will be handling our delivery. This container is built from a prebuilt Alpine Linux image with only nginx in it (note that this container has nothing to do with NodeJS). First, we copy our nginx configuration into the container and then remove the default nginx website from it. Next, we copy all the build artefacts from our builder container into this container, and then we issue the command to start the nginx server.

To build this with Docker, navigate to your project root and run:

$> docker build -t {dockerUsername}/reactdemo .

This will build your image and, with the -t option provided, tag it with the given name. The Docker Hub username prefix is important if you want to push the image to Docker Hub. So, to push your image to Docker Hub, we run:

$> docker login
$> docker push {dockerUsername}/reactdemo

docker login just logs you into Docker Hub and docker push pushes the image to your Docker Hub account. Next, we check if that worked with the following command:

$> docker images

You should see {dockerUsername}/reactdemo listed.

Now, to run this image basically anywhere, just run the command:

$> docker run -d -p 8080:80 {dockerUsername}/reactdemo

Here, -d means the process will run detached, as a daemon process, and -p gives the port binding. In this case, port 8080 of the host is bound to port 80 of the container. If you are running this on your local machine, you can just hit http://localhost:8080 and it'll point to your React app running in the container.
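If you want to double-check from the terminal, the standard commands below will list the running container and fetch the page served by nginx through the mapped port:

$> docker ps
$> curl http://localhost:8080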

Scenario 2: Multi-Container Setup with a NodeJS Backend and MongoDB

In this scenario, your project would look something like this:

Multi Container Project Structure

In the root, create a folder for your Node application; inside that folder, run npm init and create a file named app.js.
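A minimal sketch of those steps, assuming the folder is named demo to match the build path used in the docker-compose.yml later, and installing the packages that app.js requires:

$> mkdir demo && cd demo
$> npm init -y
$> npm install express body-parser cors mongoose

With that in place, app.js looks like this: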

// 1. load all required packages for the application
var express = require("express");
var path = require("path");
var bodyParser = require("body-parser");
var mongoose = require("mongoose");
mongoose.Promise = global.Promise;
var cors = require("cors");
// 2. router, body parser and cors
var instance = express();
var router = express.Router();
instance.use(router);
instance.use(bodyParser.urlencoded({
  extended: false
}));
instance.use(bodyParser.json());
instance.use(cors());
// 3. connect to mongodb. since we will be configuring
// mongodb on docker image, the port must be specified.
// This will map from the docker-compose.yml file 
mongoose.connect(
  "mongodb://db:27017/ProductsAppsDb", {
    useNewUrlParser: true
  }
);

// 4. connection with the mongodb database
var dbConnect = mongoose.connection;
if (!dbConnect) {
  console.log("Sorry Connection is not established");
  return;
}

//  5. schema for the collection
var productsSchema = mongoose.Schema({
  ProductId: Number,
  ProductName: String,
  CategoryName: String,
  Manufacturer: String,
  Price: Number
});
// 6 mapping with the collection
var productModel = mongoose.model("Products", productsSchema, "Products");
// 7. REST Apis for get/post
instance.get("/api/products", function (request, response) {
  productModel.find().exec(function (err, res) {
    if (err) {
      response.statusCode = 500;
      // return here so we don't send a second response below
      return response.send({
        status: response.statusCode,
        error: err
      });
    }
    response.send({
      status: 200,
      data: res
    });
  });
});

instance.post("/api/products", function (request, response) {
  // parsing posted data into JSON
  var prd = {
    ProductId: request.body.ProductId,
    ProductName: request.body.ProductName,
    CategoryName: request.body.CategoryName,
    Manufacturer: request.body.Manufacturer,
    Price: request.body.Price
  };
  // pass the parsed object to "create()" method
  productModel.create(prd, function (err, res) {
    if (err) {
      response.statusCode = 500;
      // return here so we don't send a second response below
      return response.send(err);
    }
    response.send({
      status: 200,
      data: res
    });
  });
});
// 8. listen on the port
instance.listen(4007, function () {
  console.log("started listening on port 4007");
});

Now if you try to run this project, it would throw an error like this:

(node:3361) UnhandledPromiseRejectionWarning: MongoNetworkError: failed to connect to server [db:27017] on first connect [Error: getaddrinfo ENOTFOUND 
db
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:64:26) {
  name: 'MongoNetworkError',
  [Symbol(mongoErrorContextSymbol)]: {}
}]

This is because the db host that the app tries to connect to doesn't exist yet. So let's add the DB in a containerized way.

We will first add a Dockerfile.

FROM node:lts-slim
# 1. Create the app directory
WORKDIR /usr/src/app
# 2. Install app dependencies.
# The package*.json wildcard ensures both
# package.json and package-lock.json are copied.
COPY package*.json ./
# 3. Install only the production dependencies
RUN npm install --only=production
# 4. Bundle the app source
COPY . .
EXPOSE 4007
CMD [ "npm", "start" ]

This Dockerfile describes a container built on the node:lts-slim image. We switch the working directory, copy package.json and package-lock.json, and then run npm install. This layering prevents module installation on every build.

Then we copy the source files into the container, expose port 4007, and start the app with npm start.

In the root directory add a docker-compose.yml file.

# 1. specify docker-compose version
version: "3.0"

# 2. Defining the application containers to be run
services:
  #  2a. name of the application
  express:
    # 2b. specify the directory of the application containing the Dockerfile
    build: demo
    ports:
      # 2c. specify ports mapping
      - "4007:4007"
    links:
      # 2d. link this service to the database service
      - db
    image: {dockerUsername}/dockerdemo
  # 2e. name of the database service
  db:
    # 2f. specify the prebuilt mongo image to run the container from
    image: mongo
    ports:
      # 2g. specify port mapping for the database
      - "27017:27017"

This file describes the composition of the 2 containers. We have defined 2 services here, namely express and db.

In express, we specify what to build. Here, we just specify the name of the directory containing the Dockerfile. Next, we provide the host-to-container port bindings and link the express service to the db service. The image key is used to tag the generated image.

In db, using the image key we specify the prebuilt image with mongo and then we provide port binding.

Now, just run:

$> docker-compose up

This will start the service with the DB dependency. Now you will see:

db_1       | 2020-03-22T19:08:54.428+0000 I  NETWORK  [listener] Listening on 0.0.0.0
db_1       | 2020-03-22T19:08:54.431+0000 I  NETWORK  [listener] waiting for connections on port 27017
express_1  | 
express_1  | > dockerdemo@1.0.0 start /usr/src/app
express_1  | > node app.js
express_1  | 
db_1       | 2020-03-22T19:08:55.001+0000 I  SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
express_1  | (node:25) DeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.
express_1  | started listening on port 4007
db_1       | 2020-03-22T19:08:57.717+0000 I  NETWORK  [listener] connection accepted from 172.18.0.3:36282 #1 (1 connection now open)
db_1       | 2020-03-22T19:08:57.746+0000 I  NETWORK  [conn1] received client metadata from 172.18.0.3:36282 conn1: { driver: { name: "nodejs|Mongoose", version: "3.4.1|5.8.11" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.19.76-linuxkit" }, platform: "Node.js v12.15.0, LE" }
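A few other standard Compose commands you may find handy while working with this setup:

# List the containers started by this compose file
$> docker-compose ps
# Follow the logs of just the express service
$> docker-compose logs -f express
# Stop and remove the containers (and the default network)
$> docker-compose down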

Now you can hit your service on http://localhost:4007 

POST

GET
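If you prefer the terminal over a REST client, roughly equivalent requests with curl would look like this (the payload values are just examples; the field names come from the schema in app.js):

# Create a product
$> curl -X POST http://localhost:4007/api/products -H "Content-Type: application/json" -d '{"ProductId": 1, "ProductName": "Keyboard", "CategoryName": "Peripherals", "Manufacturer": "Acme", "Price": 1500}'
# Fetch all products
$> curl http://localhost:4007/api/products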

I hope this gives a clear idea of how containerization can be implemented in two different scenarios. With containers, it is quite easy to unify our development and production environments while increasing portability.

That's it, guys. I hope you liked the article and learned something new today. 

See you soon.
