Introduction
As organizations rapidly move toward cloud adoption, they are also embracing microservices and an agile way of developing, deploying, and testing software. With such rapid adoption, large organizations face many challenges. A large organization typically has multiple projects running simultaneously, each with a large number of microservices. The result is hundreds of microservices under development, worked on by individuals and teams with varying degrees of skill, expertise, and experience.
Monolithic vs. Microservices Architecture
With monolithic architectures, all processes are tightly coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Adding or improving a monolithic application’s features becomes more complex as the code base grows. This complexity limits experimentation and makes it difficult to implement new ideas. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure.
Microservices architecture
With a microservices architecture, an application is built as independent components that run each application process as a service. These services communicate via a well-defined interface using lightweight APIs. Services are built for business capabilities and each service performs a single function. Because they are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an application.
Characteristics of Microservices
Autonomous
Each component service in a microservices architecture can be developed, deployed, operated, and scaled without affecting the functioning of other services. Services do not need to share any of their code or implementation with other services. Any communication between individual components happens via well-defined APIs.
Specialized
Each service is designed for a set of capabilities and focuses on solving a specific problem. If developers contribute more code to a service over time and the service becomes complex, it can be broken into smaller services.
Automating Microservices on AWS
Developing a utility tool that creates microservices and takes care of deployment is an excellent way to manage and monitor resources on the cloud and provide ongoing support. It can be used as a command-line tool or uploaded and distributed through AWS Service Catalog.
The utility is a one-click solution that generates the structure of a microservice, creates all the necessary resources on AWS along with a CI/CD pipeline, commits the skeleton code, and deploys it onto ECS after a successful build. It is a command-line tool that takes the project, application, and service names as input and auto-generates everything else.
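As a rough illustration of how such a tool orchestrates its work, the sketch below derives the steps from the three inputs. The class name, step wording, and naming convention are hypothetical assumptions, not the tool's actual code:

```java
import java.util.List;

// Hypothetical sketch of the tool's command-line entry point; the real
// tool's class names and internals are not described in this article.
public class ServiceGenerator {

    // Derives the ordered steps the tool performs for one service
    static List<String> plan(String project, String application, String service) {
        String qualifiedName = project + "-" + application + "-" + service;
        return List.of(
                "generate Spring Boot skeleton for " + service + " from the Maven archetype",
                "provision AWS resources (CodePipeline, ECR, ECS Fargate) for " + qualifiedName,
                "commit the skeleton so the pipeline builds and deploys it");
    }

    public static void main(String[] args) {
        if (args.length != 3) {
            System.err.println("usage: service-gen <project> <application> <service>");
            System.exit(1);
        }
        // Print the plan for the requested service
        plan(args[0], args[1], args[2]).forEach(System.out::println);
    }
}
```

The only user-facing surface is the three names; everything else is derived, which is what keeps the tool one-click.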
Generate a spring-boot project
To generate a skeleton Spring Boot project, a Maven archetype is used.
It generates the structure of the project, complete with proper method signatures and request and response classes, along with the associated unit test classes. The generated code results in a clean build. Going forward, failing unit tests or low coverage will fail the build.
package com.example.springboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @GetMapping("/")
    public String index() {
        return "Greetings from Spring Boot!";
    }
}
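To give a flavor of the generated unit tests, the standalone sketch below asserts the handler's response body. The handler logic is re-declared inline so the example runs without Spring; this is an assumption for illustration, as the actual generated tests would use the Spring test harness:

```java
// Standalone sketch of the kind of unit test generated alongside the
// controller above; real generated tests would use @SpringBootTest/MockMvc.
public class HelloControllerTest {

    // Mirrors HelloController.index() from the skeleton above
    static String index() {
        return "Greetings from Spring Boot!";
    }

    public static void main(String[] args) {
        // The generated test asserts the handler's response body
        if (!"Greetings from Spring Boot!".equals(index())) {
            throw new AssertionError("unexpected greeting");
        }
        System.out.println("HelloControllerTest passed");
    }
}
```

Because tests like this ship with the skeleton, the coverage gate is meaningful from the very first build.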
Create the required AWS resources
Here, we leverage the AWS Cloud Development Kit (CDK), which lets us define infrastructure as code and provision it through CloudFormation stacks. CDK is used to create a CodePipeline comprising CodeCommit, CodeBuild, and CodeDeploy, an Amazon Elastic Container Registry (ECR) repository, and an Amazon Elastic Container Service (ECS) cluster with Fargate provisioning.
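A minimal CDK (v2, Java) sketch of such a stack is shown below. Construct IDs and resource names are illustrative assumptions, not the tool's actual output, and the pipeline stages (CodeCommit source, CodeBuild, ECS deploy) are elided to keep the sketch short:

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.codecommit.Repository;
import software.amazon.awscdk.services.ecs.Cluster;

// Sketch of a CDK stack wiring up the resources described above.
public class MicroserviceStack extends Stack {

    public MicroserviceStack(final App scope, final String id) {
        super(scope, id);

        // Git repository that receives the generated skeleton code
        Repository source = Repository.Builder.create(this, "SourceRepo")
                .repositoryName("demoservice")
                .build();

        // Image registry that CodeBuild pushes Docker images to
        software.amazon.awscdk.services.ecr.Repository images =
                software.amazon.awscdk.services.ecr.Repository.Builder
                        .create(this, "ImageRepo")
                        .repositoryName("demoservice")
                        .build();

        // ECS cluster that hosts the Fargate service
        Cluster cluster = Cluster.Builder.create(this, "ServiceCluster").build();
    }

    public static void main(String[] args) {
        App app = new App();
        new MicroserviceStack(app, "demo-microservice");
        app.synth(); // emits the CloudFormation template
    }
}
```

Running `cdk deploy` on a stack like this provisions everything through a single CloudFormation changeset.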
version: 0.2
phases:
  pre_build:
    commands:
      - mvn clean install
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=123456789123.dkr.ecr.us-east-2.amazonaws.com/demoservice
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=build-$(echo $CODEBUILD_BUILD_ID | awk -F":" '{print $2}')
  build:
    commands:
      - echo Building the docker image...
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing the docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"demo-service","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
artifacts:
  files: imagedefinitions.json
All resources that are created follow the same standards and are properly tagged with values for the project name, application name, and so on.
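A hedged sketch of how such a convention could be derived from the tool's three inputs is shown below. The exact naming pattern and tag keys are assumptions for illustration; the real tool defines its own:

```java
import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Hypothetical naming and tagging convention derived from the tool's inputs.
public class ResourceNaming {

    // One deterministic name shared by the repo, pipeline, service, etc.
    public static String resourceName(String project, String application, String service) {
        return String.join("-", project, application, service).toLowerCase(Locale.ROOT);
    }

    // Standard tags applied to every AWS resource the tool creates
    public static Map<String, String> standardTags(String project, String application, String service) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("project", project);
        tags.put("application", application);
        tags.put("service", service);
        return tags;
    }
}
```

Because every resource name and tag flows from the same three inputs, searching logs or attributing costs becomes a simple string match.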
Once the tool executes successfully, the code is available in the code repository, an initial build has been completed, and the corresponding Docker image has been pushed to ECR. The image is deployed onto an ECS Fargate instance and can be accessed by external or internal clients. Developers can now start adding business logic to the skeleton code without being concerned about anything else.
What are some of the advantages of automating microservices on AWS?
- Achieve standardization across projects in terms of naming, versioning, etc. This increases the observability of the system: with standard nomenclature, monitoring becomes simple, and it’s easy to search for resources in logs.
- Proper tags are put in place for all resources created in AWS. This helps in categorizing resources and attributing costs to projects or accounts.
- Build and deployment are automated from the beginning. This ensures proper DevOps practices and opens up scope for enforcing best practices, such as failing a build on low unit test coverage.
- The structure of the project is generated, enabling developers to focus on business logic. Developers generally don’t like to work on boilerplate code. Generating the structure and removing the need to write boilerplate code makes things exciting for developers, thereby increasing productivity.
- It’s a one-click solution that ensures that services are up and running within minutes. This drastically reduces the time to start developing microservices on AWS.
- Reduces development and testing cycles for services. The tool creates the structure and enforces strict unit test coverage, resulting in high code quality with minimal effort.
Final thoughts
Automating the creation and deployment of microservices gives teams standardization, built-in DevOps practices, and a drastically shorter path from idea to a running service. Within minutes of executing the tool, the code is in the repository, a Docker image has been built and pushed to ECR, and the service is live on ECS Fargate, leaving developers free to focus entirely on business logic.