Introduction
Kubernetes is a powerful DevOps tool for deploying scalable software applications. It automates deploying multiple instances of an application, scaling them, rolling out updates, and monitoring the health of deployments.
This article will help you deploy your REST API in Kubernetes. First, you’ll need to set up a local Kubernetes cluster, then create a simple API to deploy.
Set Up Local Kubernetes
There are a couple of options for running Kubernetes locally; the most popular include minikube, kind, microk8s, and k3s. Any of these will work for this guide, but we will use k3s because of its lightweight installation.
Install k3d, a utility that runs k3s inside Docker, so make sure you have Docker installed as well.
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
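The install script only provides the k3d binary; you still need to create a cluster. A minimal sketch, where the cluster name and the host port are arbitrary choices of mine (mapping the cluster load balancer's port 80 to a local port is optional, but it lets you reach the Traefik ingress used later directly from your machine):

```
# Create a single-node k3s cluster running in Docker.
# "-p 8081:80@loadbalancer" publishes the cluster's port 80
# (where the Traefik ingress controller listens) on localhost:8081.
k3d cluster create dev -p "8081:80@loadbalancer"

# Verify that kubectl is pointed at the new cluster and it is up
kubectl cluster-info
```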
Create a Simple API
Create a simple API using Express.js.
mkdir my-backend-api && cd my-backend-api
touch server.js
npm init
npm i express --save
// server.js
const express = require("express");
const app = express();

app.get("/user/:id", (req, res) => {
  const id = req.params.id;
  res.json({
    id,
    name: `John Doe #${id}`
  });
});

app.listen(80, () => {
  console.log("Server running on port 80");
});
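Kubernetes pulls container images, so the API has to be packaged into an image and pushed to a registry before it can be deployed. A minimal Dockerfile sketch, assuming Docker and a Docker Hub account (replace andyy5 with your own registry username; the image name here matches the one used in the deploy command below):

```
# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY server.js .
EXPOSE 80
CMD ["node", "server.js"]
```

Build and push it with `docker build -t andyy5/my-backend-api .` followed by `docker push andyy5/my-backend-api`.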
Deploy
Now, deploy the image to your local Kubernetes cluster. Use the default namespace.
Create a deployment:
kubectl create deploy my-backend-api --image=andyy5/my-backend-api
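Note that `kubectl create deploy` creates only the Deployment. The Ingress defined later routes traffic to a Service named my-backend-api, which has to be created separately; a quick way is `kubectl expose`:

```
# Create a ClusterIP Service (named after the deployment) that
# forwards port 80 to the pods of the my-backend-api deployment
kubectl expose deploy my-backend-api --port=80
```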
Check that everything was created and the pod is running:
kubectl get deploy -A
kubectl get svc -A
kubectl get pods -A
Once the pod is running, the API is accessible within the cluster only. One quick way to verify the deployment from our localhost is by doing port forwarding:
# Replace the pod name below with the one in your cluster
kubectl port-forward my-backend-api-84bb9d79fc-m9ddn 3000:80
Now you can send a curl request from your machine:
curl localhost:3000/user/123
Manage external access in a cluster
To correctly manage external access to the services in a cluster, we need to use an Ingress. Stop the port forwarding and let's expose the API by creating an Ingress resource.
An ingress controller is also required, but k3d deploys the cluster with a Traefik ingress controller by default (listening on port 80).
Create the following ingress.yaml file:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-backend-api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /user/
            pathType: Prefix
            backend:
              service:
                name: my-backend-api
                port:
                  number: 80

Then create the Ingress resource and check that it exists:

kubectl create -f ingress.yaml
kubectl get ing -A
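With the Ingress in place, requests reach the API through Traefik instead of a port-forward. A quick check, assuming the cluster load balancer's port 80 is published on a local port (for example via k3d's -p flag at cluster creation; adjust the port to your setup):

```
# Request is routed: host -> k3d load balancer -> Traefik -> Service -> pod
curl http://localhost:8081/user/123
# Expected response body: {"id":"123","name":"John Doe #123"}
```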
Final thoughts
As you can see, it is quite quick to let traffic into a local cluster, and the same approach carries over to an autoscaling cluster on Google Cloud Platform. There are some caveats, mostly resulting from the differences between a production cluster and the cluster you run locally to test your applications. Without an ingress controller such as NGINX, there are a number of extra steps to bear in mind: with minikube you need to enable the ingress addon, and with Docker Desktop you have to make sure the Service resource is of type LoadBalancer. Using an ingress controller is usually the most suitable option, as there is little overhead when moving from local deployments to production deployments.