Kubernetes Project: Deploying a Kubernetes Cluster and Running Applications
Hello! I just completed an exciting project: deploying a Kubernetes cluster and running a sample application. I wanted to share my journey with you, so here's a detailed guide along with screenshots for each step. Let's dive in!
Kubernetes is an open-source platform for automating the deployment, scaling, and management of containerized applications. It is a popular tool for container orchestration and provides a way to manage large numbers of containers as a single unit rather than having to manage each container individually.
Importance of Kubernetes
Kubernetes has become an essential tool for managing and deploying modern applications. Its importance lies in its ability to provide a unified platform for automating the deployment, scaling, and management of applications. With Kubernetes, organizations can achieve increased efficiency and agility in their development and deployment processes, resulting in faster time to market and reduced operational costs. Kubernetes also provides a high degree of scalability, allowing organizations to easily scale their applications as their business grows and evolves.
Additionally, Kubernetes offers robust security features, ensuring that applications are protected against potential threats and vulnerabilities. With its active community and extensive ecosystem, Kubernetes provides organizations with access to a wealth of resources, tools, and services that can help them to improve and enhance their applications continuously. Overall, the importance of using Kubernetes lies in its ability to provide a flexible, scalable, and secure platform for managing modern applications and enabling organizations to stay ahead in a rapidly evolving digital landscape.
Here's a basic overview of how to use Kubernetes:
- Set up a cluster:
To use Kubernetes, you need to set up a cluster, which is a set of machines that run the Kubernetes control plane and the containers. You can set up a cluster on your own infrastructure or use a cloud provider such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure.
- Package your application into containers:
To run your application on Kubernetes, you need to package it into one or more containers. A container is a standalone executable package that includes everything needed to run your application, including the code, runtime, system tools, libraries, and settings.
- Define the desired state of your application using manifests:
Kubernetes uses manifests, which are files that describe the desired state of your application, to manage the deployment and scaling of your containers. The manifests specify the number of replicas of each container, how they should be updated, and how they should communicate with each other.
- Push your code to an SCM platform:
Push your application code to an SCM platform such as GitHub.
- Use a CI/CD tool to automate:
Use a specialised CI/CD platform such as Harness to automate the deployment of your application. Once it is set up, you can deploy your application code in small increments whenever new code is pushed to the project repository.
- Expose the application:
Once you deploy your application, you need to expose it to the outside world by creating a Service of type NodePort or LoadBalancer. This allows users to access the application through a stable IP address or hostname.
- Monitor and manage your application:
After your application is deployed, you can use the kubectl tool to monitor the status of your containers, make changes to the desired state, and scale your application up or down.
These are the general steps to deploy an application on Kubernetes. Depending on the application's complexity, additional steps may be required, such as configuring storage, network policies, or security. However, this should give you a good starting point for deploying your application on Kubernetes.
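To make the manifest idea above concrete, here is a minimal sketch of a Deployment manifest, written to a file from the shell. The names, image, and replica count are illustrative assumptions, not taken from the project:

```shell
# Write a minimal Deployment manifest (names and image are illustrative)
cat <<'EOF' > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                # desired number of pod copies
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25    # container image to run
        ports:
        - containerPort: 80
EOF

# Apply it to a running cluster (requires kubectl and a cluster):
# kubectl apply -f nginx-deployment.yaml
```

With a manifest like this, Kubernetes continuously reconciles the actual state (running pods) toward the declared state (two nginx replicas).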
Today, we will see how to automate simple application deployment on Kubernetes using Harness.
Setting Up the Environment
To start, I used Ubuntu OS on a t2.medium EC2 instance. Make sure you have sudo privileges and internet access. I opted for kubeadm to set up the Kubernetes cluster.
# Update and install prerequisites
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo apt update
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo apt-get install -y apt-transport-https ca-certificates curl
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo apt install docker.io -y
# Enable and start Docker
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo systemctl enable --now docker
# Add GPG keys and Kubernetes repository
ubuntu@ip-172-31-11-56:~/projects/todo-app$ curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
ubuntu@ip-172-31-11-56:~/projects/todo-app$ echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo apt update
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
Initializing the Kubernetes Master Node
Next, I initialized the Kubernetes master node and set up the local kubeconfig.
# Initialize master node
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo kubeadm init
# Set up local kubeconfig
ubuntu@ip-172-31-11-56:~/projects/todo-app$ mkdir -p $HOME/.kube
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
Applying Weave Network and Joining Worker Nodes
I applied the Weave network add-on for pod-to-pod communication and generated a token for worker nodes to join the cluster.
# Apply Weave network
ubuntu@ip-172-31-11-56:~/projects/todo-app$ kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
# Generate token for worker nodes
ubuntu@ip-172-31-11-56:~/projects/todo-app$ sudo kubeadm token create --print-join-command
Deploying NGINX and Creating a Todo Application
On the worker node, I deployed NGINX using the following command:
ubuntu@ip-172-31-11-56:~/projects/todo-app$ kubectl run mera-nginx --image=nginx --port=80
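The imperative `kubectl run` command above can also be expressed declaratively. A rough equivalent manifest, assuming the same pod name and image, would be:

```shell
# Declarative equivalent of `kubectl run mera-nginx --image=nginx --port=80`
cat <<'EOF' > mera-nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mera-nginx
spec:
  containers:
  - name: mera-nginx
    image: nginx
    ports:
    - containerPort: 80
EOF

# kubectl apply -f mera-nginx-pod.yaml   # requires a running cluster
```

The declarative form is generally preferred in practice because the manifest can be version-controlled and reapplied.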
For a more complex application, I created a todo application with pods, deployments, and services. Check out the YAML files for the configurations.
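The project's actual YAML files aren't reproduced here, but based on the names used later in this post (namespace `todo-app`, deployment `todo-deployment`, NodePort 30007), a minimal sketch might look like the following. The image name, labels, and container port are placeholder assumptions:

```shell
# Sketch of a Deployment plus NodePort Service for the todo app
cat <<'EOF' > todo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-deployment
  namespace: todo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: todo
  template:
    metadata:
      labels:
        app: todo
    spec:
      containers:
      - name: todo
        image: todo-app:latest   # placeholder; use your own image
        ports:
        - containerPort: 8000    # placeholder container port
---
apiVersion: v1
kind: Service
metadata:
  name: todo-service
  namespace: todo-app
spec:
  type: NodePort
  selector:
    app: todo
  ports:
  - port: 80
    targetPort: 8000
    nodePort: 30007              # matches the port used later in this post
EOF

# kubectl create namespace todo-app   # requires a running cluster
# kubectl apply -f todo-app.yaml
```

The Service's label selector (`app: todo`) is what ties it to the Deployment's pods, and the NodePort makes the app reachable on every node at port 30007.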
Scaling Deployments and Autohealing
I experimented with scaling deployments and witnessed the magic of autohealing. Deleting a pod didn't affect the application due to Kubernetes' self-healing capabilities.
# Delete a pod
ubuntu@ip-172-31-11-56:~/projects/todo-app$ kubectl delete pod todo-deployment-69bccb75d4-nfvzs -n todo-app
pod "todo-deployment-69bccb75d4-nfvzs" deleted
# Scale deployment
ubuntu@ip-172-31-11-56:~/projects/todo-app$ kubectl scale deployment todo-deployment --replicas=3 -n todo-app
deployment.apps/todo-deployment scaled
ubuntu@ip-172-31-11-56:~/projects/todo-app$ kubectl get deployment -n todo-app
NAME READY UP-TO-DATE AVAILABLE AGE
todo-deployment 3/3 3 3 31m
Accessing the Application
After setting up services and allowing the necessary ports, I accessed my application using the worker node's public IP and the assigned NodePort.
# Access application
curl http://172.31.1.9:30007
Congratulations! We successfully deployed our application on Kubernetes. Now, we can easily automate the deployment using the Harness CD module.
You can automate your CD process by adding Triggers. When any authorised person pushes new code to your repository, your pipeline gets triggered and runs the CD stage. Let's see how to do that.
In the pipeline studio, you can click the "Triggers" tab and add your desired trigger.
Happy Kubernetes-ing!
GitHub: https://github.com/Trushid
Trushid Hatmode