In this post, let’s collate some insights into building distributed applications with a microservices architecture on the Spring Boot platform. We will also delve into a cost–benefit analysis of different AWS deployment models, with the objective of achieving horizontal scalability, high availability, and better performance.
Building microservices on the Spring Boot stack addresses many problems of monolithic applications and offers the following advantages:
- Greater agility
- Faster time to market
- Better scalability
- Faster development cycles (easier deployment and debugging)
- Easier to create a CI/CD pipeline for single-responsibility services
- Isolated services have better fault tolerance
- Platform- and language-agnostic services
- Cloud readiness (easy integration with cloud platforms)
Spring Boot Micro-service Application Flow Architecture
In the above architecture flow, the individual layers are described below:
- HTTPS/API: REST APIs exposed by each individual microservice
- Controller: Spring Boot controllers that map REST resources for each microservice
- Service Layer: holds the main computation/business logic and is wired into the controller through Spring's autowiring
- Model: plain POJOs for the different entities, with getters/setters and entity mappings declared via annotations
- Repository: Spring Data JPA abstraction over the chosen ORM, providing easy CRUD and native query operations without the boilerplate of session factories or manual transaction boundaries
- Persistence: local MySQL or AWS RDS, keeping the stack compatible with the AWS cloud
- Packaging: Spring Boot's Maven plugin produces the deployment artifact (an executable JAR or a Docker image)
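To make the layering concrete, here is a minimal, framework-free sketch of how the layers fit together. The names (`Order`, `OrderService`, `OrderRepository`) are illustrative; in the real application the controller would carry Spring's `@RestController`/`@GetMapping` annotations, the repository would be an interface extending Spring Data's `JpaRepository`, and the wiring would be done by Spring's autowiring rather than manual construction:

```java
import java.util.*;

// Model: a plain POJO entity (in Spring it would carry @Entity/@Id annotations)
class Order {
    private final long id;
    private final String item;
    Order(long id, String item) { this.id = id; this.item = item; }
    public long getId() { return id; }
    public String getItem() { return item; }
}

// Repository: CRUD abstraction (Spring Data JPA would generate this from an interface)
class OrderRepository {
    private final Map<Long, Order> store = new HashMap<>();
    Order save(Order o) { store.put(o.getId(), o); return o; }
    Optional<Order> findById(long id) { return Optional.ofNullable(store.get(id)); }
}

// Service layer: business logic, injected into the controller (@Autowired in Spring)
class OrderService {
    private final OrderRepository repo;
    OrderService(OrderRepository repo) { this.repo = repo; }
    Order create(long id, String item) { return repo.save(new Order(id, item)); }
    String describe(long id) {
        return repo.findById(id).map(Order::getItem).orElse("not found");
    }
}

// Controller: maps a "request" to the service (a @RestController in Spring)
public class OrderController {
    private final OrderService service = new OrderService(new OrderRepository());

    String get(long id) { return service.describe(id); }

    public static void main(String[] args) {
        OrderController controller = new OrderController();
        controller.service.create(1L, "coffee");
        System.out.println(controller.get(1L));  // prints "coffee"
        System.out.println(controller.get(2L));  // prints "not found"
    }
}
```

The in-memory `Map` stands in for the MySQL/RDS persistence layer; swapping it for a real datastore changes only the repository, which is the point of the layering.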
Deployment Architecture Models using AWS
We can use the following AWS deployment models for our JAR/Docker image:
- Direct deployment to EC2 Instance
- Deployment to ECS
- Deployment to EKS
Deployment to AWS EC2
Direct deployment to EC2 offers maximum flexibility, but it also means doing all the configuration yourself to achieve scalability, high availability, security, and load balancing of requests (choosing the right load balancer and algorithm). This approach suits workloads that do not expect high traffic or horizontal scaling, where a couple of EC2 instances is enough.
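In the simplest form, the deployment amounts to copying the JAR onto the instance and registering it as a service so it restarts on failure or reboot. A minimal illustrative systemd unit (all paths and names here are assumptions, not fixed conventions):

```ini
# /etc/systemd/system/orders.service -- illustrative unit file
[Unit]
Description=Spring Boot microservice
After=network.target

[Service]
User=ec2-user
ExecStart=/usr/bin/java -jar /opt/app/orders.jar
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl enable --now orders`, the JVM process is supervised by systemd, but everything beyond that single instance (scaling, health checks, load balancing) remains your responsibility, which is exactly the trade-off described above.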
Deployment to AWS-ECS (Elastic Container Service)
Amazon Elastic Container Service (Amazon ECS) is a fast, highly scalable, high-performance service that manages Docker container orchestration using the compute capabilities of Amazon. Some of Amazon ECS' characteristics are the following:
- Orchestrates Docker containers as a service.
- Supports Docker Compose.
- Integrates with other Amazon services (IAM, security groups, Amazon CloudWatch Logs, VPCs, etc.).
- Allows you to manage the infrastructure behind the containers with an EC2 Launch Type model.
- Uses Amazon ECS task definitions to describe the containers to run inside the cluster.
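A task definition is a JSON document describing the containers. A minimal illustrative example for the Spring Boot container (family name, image URI, and resource values are assumptions):

```json
{
  "family": "orders-service",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
      "memory": 512,
      "cpu": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 8080 }
      ]
    }
  ]
}
```

An ECS service then keeps the desired number of such tasks running in the cluster and registers them with the load balancer.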
ECS Architecture Model
The architecture diagram above shows an application running two clusters. An API Gateway receives requests from users and sends them to a load balancer, which forwards each request to the corresponding service running inside containers monitored by AWS Auto Scaling and Amazon ECS. Amazon ECS ensures that the containers are healthy and replaces them when needed, using Docker images stored in Amazon ECR. The complete architecture is deployed inside a VPC (Virtual Private Cloud).
Deployment to AWS-EKS (Elastic Kubernetes Service)
Amazon Elastic Kubernetes Service (Amazon EKS) is Amazon's managed Kubernetes offering. It allows you to deploy, manage, and scale containers using Kubernetes on Amazon's cloud infrastructure. Amazon EKS runs its control-plane infrastructure across three Availability Zones to increase reliability and eliminate single points of failure. Some of EKS' characteristics are the following:
- Manages the availability and scalability of the Kubernetes nodes.
- Integrates with AWS network and security services.
- Automatically detects and replaces unhealthy nodes and monitors container status.
EKS Architecture Model
The above architecture example shows an application running inside an EKS cluster with the following configuration:
- kubectl is used to manage the EKS cluster.
- The Amazon EKS control plane connects to the worker nodes where the containers are running.
- Users connect to a load balancer when they want to use the application.
- The load balancer forwards the request to one of the worker nodes.
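On EKS, the Spring Boot container is described by ordinary Kubernetes manifests. A minimal illustrative Deployment plus LoadBalancer Service (image URI, names, and replica count are assumptions), applied with `kubectl apply -f orders.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  type: LoadBalancer
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
```

The `LoadBalancer` Service is what provisions the AWS load balancer that users connect to in the flow described above, with Kubernetes spreading the two replicas across worker nodes.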
Deployment to AWS-ECS using Fargate
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate represents Containers as a Service (CaaS): it gives you the benefit of an orchestration model such as Kubernetes, because you can set and tune CPU and memory requirements for your containers, combined with the benefit of a serverless model such as Lambda, in that you don't need to worry about the underlying servers it runs on.
Our Spring Boot JAR file will be containerized using Docker and deployed to AWS ECS.
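Containerizing the JAR is a short Dockerfile; the base image, JAR path, and port below are illustrative choices, not requirements:

```dockerfile
# Illustrative Dockerfile for the Spring Boot artifact produced by the Maven build
FROM eclipse-temurin:17-jre
COPY target/orders-0.0.1-SNAPSHOT.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

The resulting image is pushed to Amazon ECR (or Docker Hub), from where the Fargate tasks pull it.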
ECS Fargate Architecture Model
The above architecture example shows different tasks running in two different ECS services using the AWS Fargate launch type with the following configuration:
- The tasks are spread across two different Availability Zones in the same AWS Region.
- Each service instantiates containers using ECS task definitions.
- The Fargate tasks pull the Docker images (defined in the ECS task definitions) from Amazon ECR or Docker Hub.
- Each Fargate task gets its own Elastic Network Interface (ENI), which provides it with an IP address so it can communicate with the network.
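Compared with the earlier EC2-launch-type example, a Fargate task definition additionally declares `awsvpc` networking (which is what gives each task its own ENI) and task-level CPU/memory. An illustrative fragment, with assumed names and values:

```json
{
  "family": "orders-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "orders",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

Note that there is no `hostPort` or instance-level configuration: with Fargate there is no EC2 instance for you to manage.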
Comparison – Amazon ECS, AWS Fargate, and Amazon EKS
The comparison above helps in deciding which model suits which business case, along with the advantages and disadvantages of each option.