Introduction

In this article we will build a simple voting app in Python, deploy it to a k8s cluster with Helm, and write a single bash script that automates the whole process. The app uses Flask with HTML and CSS for the frontend and MongoDB for the backend, and we will use kind for our k8s environment. kind gives us a full k8s cluster running locally as Docker containers. So let’s get started.

Prerequisites

Note: everything in this article is aimed at Linux machines, especially Red Hat based distributions. You should already have Docker and Python 3 installed, plus sudo access; the script later in the article takes care of kubectl, kind, Helm, yq and mailx.

Python Voting App

For this demo, our app will be a simple voting app. The app will contain three pages: home, questions and results.
On the home page the user enters their name and casts a vote; they then answer questions about the two presidents, and finally see the results. We will use MongoDB for our backend. The db will store the voting results and the users’ answers.

Setting up a virtual environment

Before diving into our source code, it is best practice to develop the app inside a virtual environment, so it will be easy for us to deal with our app’s dependencies later.

# To create a new environment run the command:
$ python3 -m venv .venv

# To activate and enter the environment run:
$ source .venv/bin/activate

The requirements for the app are:

# For developing the app
$ pip install Flask

# For the connection between the app and MongoDB
$ pip install pymongo
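
The Dockerfile in the next section copies a requirements.txt into the image, so it is worth capturing the dependencies now. A quick sketch (the exact version numbers will vary on your machine):

# Freeze the installed packages into the file the Dockerfile expects
$ pip freeze > requirements.txt

# The file should list at least Flask and pymongo, for example:
# Flask==3.0.0
# pymongo==4.6.0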

Source Code

Our source code will contain two parts. First we will run a script that connects to the db and creates the necessary collections, one for the votes and one for the users’ answers. Add the HTML and CSS templates as you like.

import os
from pymongo import MongoClient

def initialize_database():
    # Connection to MongoDB
    username = os.getenv('MONGO_ROOT_USERNAME')
    password = os.getenv('MONGO_ROOT_PASSWORD')
    database = os.getenv('MONGO_DATABASE')
    ip = os.getenv('MONGO_IP')

    mongo_uri = f'mongodb://{username}:{password}@{ip}:27017/'
    client = MongoClient(mongo_uri)
    db = client[database]

    # Check if the collections already exist
    if 'votes' not in db.list_collection_names():
        # Create a collection for votes
        votes_collection = db['votes']

        # Insert initial data for Trump and Biden
        # (the field is named 'votes' to match what the app increments and reads)
        initial_data = [
            {'candidate': 'Trump', 'votes': 0},
            {'candidate': 'Biden', 'votes': 0}
        ]
        votes_collection.insert_many(initial_data)

    if 'answers' not in db.list_collection_names():
        # Create a collection for answers
        answers_collection = db['answers']

        # Insert initial data for test_user
        initial_answers_data = [
            {'username': 'test_user', 'right_answers': 0, 'wrong_answers': 0}
        ]
        answers_collection.insert_many(initial_answers_data)

    print("Database initialized successfully.")

if __name__ == "__main__":
    initialize_database()

And for our main app code:

import os
from flask import Flask, render_template, request, redirect, url_for, abort
from pymongo import MongoClient

# Getting the environment variables from the secrets file
mongo_username = os.getenv('MONGO_ROOT_USERNAME')
mongo_password = os.getenv('MONGO_ROOT_PASSWORD')
mongo_ip = os.getenv('MONGO_IP')
mongo_db = os.getenv('MONGO_DATABASE')

# Establishing connection to mongo database
mongo_uri = f'mongodb://{mongo_username}:{mongo_password}@{mongo_ip}:27017/{mongo_db}?authSource=admin'
client = MongoClient(mongo_uri)
db = client[mongo_db]

app = Flask(__name__)

# Home page
@app.route('/')
def home():
    return render_template('home.html')

# Collecting the username and the vote and adding the vote to the database
@app.route('/vote', methods=['POST'])
def vote():
    data = request.get_json()
    candidate = data['candidate']
    username = data['username']
    if candidate not in ['Trump', 'Biden']:
        abort(400)
    db.votes.update_one({'candidate': candidate}, {'$inc': {'votes': 1}}, upsert=True)
    return redirect(url_for('questions', username=username))

# Displaying the questions page according to the username
@app.route('/questions', methods=['GET'])
def questions():
    username = request.args.get('username')
    return render_template('questions.html', username=username)

# Submitting the answers and adding the answers and the username to the database
@app.route('/submit-answers', methods=['POST'])
def submit_answers():
# actual correct answers
    correct_answers = ['Trump', 'Trump', 'Trump', 'Trump', 'Biden', 'Trump', 'Biden', 'Trump', 'Biden', 'Biden']  
    user_answers = request.form.to_dict()
    username = user_answers.pop('username')  # retrieve and remove the username from the answers
    right_answers = sum(user_answer == correct_answer for user_answer, correct_answer in zip(user_answers.values(), correct_answers))
    wrong_answers = len(user_answers) - right_answers
    db.answers.insert_one({'username': username, 'right_answers': right_answers, 'wrong_answers': wrong_answers})
    return redirect(url_for('results'))

# Getting the results of the votes and the answers
@app.route('/results', methods=['GET'])
def results():
    trump_doc = db.votes.find_one({'candidate': 'Trump'})
    biden_doc = db.votes.find_one({'candidate': 'Biden'})

    # Getting the votes for each candidate or setting it to 0 if it is the first vote
    trump_votes = trump_doc['votes'] if trump_doc else 0
    biden_votes = biden_doc['votes'] if biden_doc else 0

    users_data = list(db.answers.find({'username': {'$ne': 'test_user'}}))
    return render_template('results.html', trump_votes=trump_votes, biden_votes=biden_votes, users_data=users_data)

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the app is reachable from outside the container
    app.run(host='0.0.0.0', debug=True)
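
To try the app locally before containerizing it, you can export the variables the code reads and run both scripts against a reachable MongoDB. A minimal sketch, assuming the two files are saved as init_db.py and app.py (the names are illustrative) and a local MongoDB listens on 127.0.0.1:27017:

# Connection details for a local MongoDB (placeholder values)
$ export MONGO_ROOT_USERNAME=root
$ export MONGO_ROOT_PASSWORD=root
$ export MONGO_DATABASE=vote
$ export MONGO_IP=127.0.0.1

# Create the collections, then start Flask on port 5000
$ python init_db.py
$ python app.py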

Docker

Docker Engine is an open source containerization technology for building and running your applications.
In our project we will use Docker to containerize our app and make use of the image later in our k8s environment. We will use Docker Hub to store our image.

Creating Dockerfile

FROM python:slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV MONGO_ROOT_USERNAME=your-username
ENV MONGO_ROOT_PASSWORD=your-password
ENV MONGO_IP=your-mongodb-ip
ENV MONGO_DATABASE=your-db-name

EXPOSE 5000

CMD ["python", "./app.py"]

Note: hard-coding environment variables in the Dockerfile isn’t best practice, but we use it here to simplify the demonstration. In the cluster, the values injected by the Deployment from the Secret take precedence over these defaults.

Now we need to build the image.

# Run the command:
$ docker build -t vote-app-image .

# To check the new image run:
$ docker images

To push the image, run:

$ docker tag vote-app-image:latest dshwartzman5/vote-app
$ docker push dshwartzman5/vote-app
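
Before relying on the image in the cluster, it can help to smoke-test the container locally. A sketch, assuming a MongoDB instance is reachable at the address you pass in (all values here are placeholders):

# Run the container and publish Flask's port
$ docker run --rm -p 5000:5000 \
    -e MONGO_ROOT_USERNAME=root \
    -e MONGO_ROOT_PASSWORD=root \
    -e MONGO_IP=172.17.0.2 \
    -e MONGO_DATABASE=vote \
    vote-app-image

# Then browse to http://localhost:5000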

Configuring K8S and Helm

Helm is a package manager for k8s. It is our way to package all the YAML files we need to run our app, keep everything in one place, and install it as a single unit. Our Helm chart will contain both the mongo config files and the app’s. First of all, let’s start with the files for our MongoDB.
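
The chart’s directory layout isn’t shown in full in this article; the sketch below is one plausible structure, with illustrative file names, under the k8s/app path the automation script uses later:

$ tree k8s/app
k8s/app
├── Chart.yaml        # chart name, version and apiVersion
├── values.yaml       # the values file shown at the end of this section
└── templates         # every manifest below lives here
    ├── mongo-namespace.yaml
    ├── mongo-storage.yaml
    ├── mongo-configmap.yaml
    ├── mongo-secrets.yaml
    ├── mongo-service.yaml
    ├── mongo-statefulset.yaml
    ├── app-namespace.yaml
    ├── app-service.yaml
    ├── app-deployment.yaml
    └── app-ingress.yaml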

MongoDB Config Files

Namespace

This file creates a namespace for us. A namespace is a logical environment that we can use to scope rules and group our related k8s resources.

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.db.namespace }}

Storage

For our db we will need a PV (PersistentVolume) and a PVC (PersistentVolumeClaim).
PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses the PV.
A PVC is a request for storage by a user.

# Creating a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
  namespace: {{ .Values.db.namespace }}
  labels:
    type: local
spec:
  capacity:
    storage: {{ .Values.storage.capacity }} # Storage capacity
  accessModes:
    - {{ .Values.storage.accessMode }} # This specifies that the volume can be mounted as read-write by a single node
  hostPath:
    path: "/data/db" # specifies a directory on the host node's filesystem to use for the storage (inside the container)
  persistentVolumeReclaimPolicy: Retain # This specifies that the volume should not be deleted when the claim is deleted
  storageClassName: {{ .Values.storage.storageClassName }}
  volumeMode: Filesystem # specifies that the volume should be mounted to the Pod as a filesystem
---
# Creating a PersistentVolumeClaim
# The info should match the PersistentVolume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
  namespace: {{ .Values.db.namespace }}
spec:
  storageClassName: {{ .Values.storage.storageClassName }}
  accessModes:
    - {{ .Values.storage.accessMode }}
  resources:
    requests:
      storage: {{ .Values.storage.capacity }}

ConfigMap

ConfigMaps enable you to change your application’s configuration without having to rebuild your application’s image and without exposing sensitive data in your application code. In our case, the ConfigMap creates a new user so we will be able to log in to the db from the app.

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.cm.name }}
  namespace: {{ .Values.db.namespace }}

# The data section holds key:value pairs; here the key is the name of a JS script and the value is the script's content
data:
  init.js: |
    db = db.getSiblingDB('admin');
    db.createUser({
      user: 'root',
      pwd: 'root',
      roles: [{ role: 'readWriteAnyDatabase', db: 'admin' }],
    });

Secrets

Secrets in Kubernetes are a way to store sensitive information like passwords, but they are not truly secure, because the data is only base64-encoded rather than encrypted. In our case, we use a Secret so our app can read the db connection values from there.

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.secrets.name }}
  namespace: {{ .Values.db.namespace }}
type: Opaque
data:
  MONGO_ROOT_USERNAME: # your base64 encoded username
  MONGO_ROOT_PASSWORD: # your base64 encoded password
  MONGO_DATABASE: # your base64 encoded db name
  MONGO_IP: # your base64 encoded db ip address
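
To produce the encoded values, pipe each one through base64. Use echo -n so a trailing newline doesn’t end up inside the secret:

# Encode each credential before pasting it into the Secret
$ echo -n 'root' | base64        # cm9vdA==
$ echo -n 'vote' | base64        # dm90ZQ==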

Service

A Service in Kubernetes enables network communication: it holds one stable IP address for all the pods that share a given label. Our service type is LoadBalancer so it can get its IP from MetalLB. MetalLB assigns IP addresses to services (applications) within Kubernetes, providing load balancing functionality similar to what cloud providers offer.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.db.service.name }}
  namespace: {{ .Values.db.namespace }}
  labels:
    app: {{ .Values.db.label }}
spec:
  type: LoadBalancer
  ports:
  - port: {{ .Values.db.service.port }}
    name: {{ .Values.db.label }}
  selector:
    app: {{ .Values.db.label }}

StatefulSet

There are different deployment workloads in k8s, for example Deployment, StatefulSet and DaemonSet. In our case we are using a StatefulSet. The StatefulSet ensures that each Pod gets a unique and stable hostname; if we delete a pod, it comes back in the same place with the same identity.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .Values.db.label }}
  namespace: {{ .Values.db.namespace }}
spec:
  serviceName: {{ .Values.db.service.name }}
  replicas: {{ .Values.db.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.db.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.db.label }}
    spec:
      containers:
      - name: {{ .Values.db.label }}
        image: {{ .Values.db.image }}
        ports:
        - containerPort: {{ .Values.db.service.port }}
          name: {{ .Values.db.label }}
        env: # This section is defining environment variables for the containers in the pods(Taken from secrets.yaml)
        - name: {{ .Values.secrets.username }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.username }}
        - name: {{ .Values.secrets.password }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.password }}
        - name: {{ .Values.secrets.database }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.database }}
        - name: {{ .Values.secrets.ip }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.ip }}
        volumeMounts: # This section is defining where volumes (persistent storage) should be mounted within the container's filesystem
        - name: mongodb-data
          mountPath: /data/db
        - name: {{ .Values.cm.name }}
          mountPath: /docker-entrypoint-initdb.d # A special directory the MongoDB image checks at startup for init scripts
      volumes: # This section is defining the volumes that can be mounted by containers in the pod
      - name: {{ .Values.cm.name }}
        configMap:
          name: {{ .Values.cm.name }}
  volumeClaimTemplates: # This section is used to define the pvc for each pod
  - metadata:
      name: mongodb-data
    spec:
      accessModes: [ "{{ .Values.storage.accessMode }}" ]
      storageClassName: {{ .Values.storage.storageClassName }}
      resources:
        requests:
          storage: {{ .Values.db.storage.capacity }}

Application Config Files

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.app.namespace }}

Service

apiVersion: v1 # Setting the version of the API
kind: Service # The kind of resource we are creating
metadata:
  namespace: {{ .Values.app.namespace }} # The namespace where the service will be created (from the values.yaml file)
  name: {{ .Values.app.service.name }} # The name of the service
  labels:
    app: {{ .Values.app.label }} # The label of the service (used to match the service with the deployment)
spec:
  ports:
  - port: {{ .Values.app.service.port }} # The port that the service will listen on
    targetPort: {{ .Values.app.service.targetPort }}  # The port that the service will forward requests to  
  selector:
    app: {{ .Values.app.label }} # The label of the service (used to match the service with the deployment)

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote-app-deployment
  namespace: {{ .Values.app.namespace }}
  labels:
    app: {{ .Values.app.label }}
spec:
  replicas: {{ .Values.app.replicas }}
  selector:
    matchLabels: 
      app: {{ .Values.app.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.app.label }}
    spec:
      containers:
      - name: {{ .Values.app.label }}
        image: "{{ .Values.app.image }}:{{ .Values.app.tag }}"
        ports:
        - containerPort: {{ .Values.app.service.targetPort }}
        env: # Define environment variables
        - name: {{ .Values.secrets.username }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.username }}
              optional: true
        - name: {{ .Values.secrets.password }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.password }}
              optional: true
        - name: {{ .Values.secrets.ip }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.ip }}
              optional: true
        - name: {{ .Values.secrets.database }}
          valueFrom:
            secretKeyRef:
              name: {{ .Values.secrets.name }}
              key: {{ .Values.secrets.database }}
              optional: true

Creating Ingress

This file defines the rules for the ingress: for example, the accessible domain and the port traffic will be forwarded to.

apiVersion: networking.k8s.io/v1 # Setting the API version
kind: Ingress # Setting the kind
metadata:
  namespace: {{ .Values.app.namespace }} # Setting the namespace
  name: vote-app-ingress # Setting the name of the ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # Setting the ingress class
spec:
  rules:
  - host: ds-vote-app.octopus.lab # Setting the domain (taken from /etc/hosts)
    http:
      paths:
      - path: / # Setting the path(homepage)
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.app.service.name }} # Ensuring the service matches the service name
            port:
              number: {{ .Values.app.service.port }} # Setting the port number traffic will be sent to
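
The automation script later appends this hostname to /etc/hosts for you; if you install the chart by hand, map the ingress address yourself (the IP below is only an example from the MetalLB range):

# Find the external IP in the ADDRESS column, then map the hostname to it
$ kubectl get ing -n vote-app
$ echo "172.18.255.200 ds-vote-app.octopus.lab" | sudo tee -a /etc/hosts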

Values.yaml

This file stores the values repeated across the templates, so we can manage them all from one central place.
You can modify this file as needed.

storage:
  capacity: 5Gi
  storageClassName: standard
  accessMode: ReadWriteOnce
cm:
  name: mongo-init-db
secrets:
  name: mongodb-credentials
  username: MONGO_ROOT_USERNAME
  password: MONGO_ROOT_PASSWORD
  database: MONGO_DATABASE
  ip: MONGO_IP
db:
  namespace: mongo
  image: mongo:4.4.6
  label: mongodb
  replicas: 1
  storage:
    capacity: 5Gi
  service:
    name: mongodb-service
    port: 27017
app:
  namespace: vote-app
  image: dshwartzman5/vote-app
  tag: latest
  label: vote-app
  replicas: 1
  service:
    name: vote-app-service
    port: 80
    targetPort: 5000
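
Before wiring the chart into the automation below, it can be useful to sanity-check it locally. A sketch, assuming the chart directory is k8s/app as in the script’s paths:

# Render all templates with values.yaml, without touching the cluster
$ helm template app ./k8s/app | less

# Run helm's static checks on the chart
$ helm lint ./k8s/app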

Bash Script

Bash is a shell that is widely used on Unix/Linux systems, and this is where the automation part begins. To deploy our app and make use of our Helm chart, we need a few things to be set up and installed first. We need to make sure we have kubectl and kind for our cluster environment. Our script will make all of that happen, so let’s dive into the script and explain each part.

Installation Functions

In this part, we first create a log file so every error will be sent there. Next, we move on to the functions that install our dependencies. We start with a function that updates and upgrades the system, then we install kubectl, kind, helm, mail and the yq command that we will use later on. In each function we first check whether the package exists, and only if not do we perform the installation.

#!/bin/bash

# Creating log file
log_file="errors.log"

# System Update && Upgrade
system_update() {
    echo "Updating system..."
    sudo dnf -y update > /dev/null 2> $log_file
    echo "Upgrading system..."
    sudo dnf -y upgrade > /dev/null 2> $log_file
}

# Kubectl installation
kubectl_install() {
    # Check if kubectl is installed and if the user wants to install it
    if ! which kubectl &> /dev/null; then
        read -r -p "kubectl is not installed. Do you want to install it? [y/n] " answer
        if [ "$answer" == "y" ]; then
            echo "Installing kubectl..."
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" > /dev/null 2> $log_file
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256" > /dev/null 2> $log_file

            # Verifying the checksum, then checking for a successful installation
            if echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check --quiet; then
                sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
                if ! which kubectl &> /dev/null; then
                    echo "kubectl installation failed." | tee -a "$log_file"
                    exit 1
                else
                    echo "kubectl installed successfully."
                fi
            else
                echo "sha256sum check failed for kubectl. Not installing." | tee -a "$log_file"
                exit 1
            fi
        else
            echo "Exiting..."
            exit 1
        fi
    fi
}

# Kind installation
kind_install() {
    # Check if kind is installed and if the user wants to install it
    if ! which kind &> /dev/null; then
        read -r -p "kind is not installed. Do you want to install it? [y/n] " answer
        if [ "$answer" == "y" ]; then
            echo "Installing kind..."
            [ "$(uname -m)" = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64 > /dev/null 2> $log_file
            chmod +x ./kind
            sudo mv ./kind /usr/local/bin/kind

            # Checking for successful installation
            if ! which kind &> /dev/null; then
                echo "Kind installation failed." | tee -a "$log_file"
                exit 1
            else
                echo "Kind installed successfully."
            fi
        else
            echo "Exiting..."
            exit 1
        fi
    fi
}

# Helm installation
helm_install() {
    # Check if helm is installed and if the user wants to install it
    if ! which helm &> /dev/null; then
        read -r -p "helm is not installed. Do you want to install it? [y/n] " answer
        if [ "$answer" == "y" ]; then
            echo "Installing helm..."
            sudo dnf -y install helm > /dev/null 2> $log_file

            # Checking for successful installation
            if ! which helm &> /dev/null; then
                echo "Helm installation failed." | tee -a "$log_file"
                exit 1
            else
                echo "Helm installed successfully."
            fi

        else
            echo "Exiting..."
            exit 1
        fi
    fi
}

# Yq command installation
yq_install() {
    # Check if yq is installed
    if ! which yq &> /dev/null; then
        pip install yq > /dev/null 2> $log_file

        # Checking for successful installation
        if ! which yq &> /dev/null; then
            echo "Yq installation failed." >> "$log_file"
            exit 1
        fi
    fi
}

# Mail installation
mail_install() {
    # Check if mail is installed
    if ! which mail &> /dev/null; then
        sudo dnf -y install mailx sendmail > /dev/null 2> $log_file

        # Checking for successful installation
        if ! which mail &> /dev/null; then
            echo "Mail installation failed." >> "$log_file"
            exit 1
        fi
    fi
}

Creating the cluster

In this part we will create the k8s cluster, which contains one control-plane (master) node and one worker node. Then we install MetalLB and the Nginx ingress controller. An ingress controller is the component that enforces ingress rules and manages traffic, making sure requests reach the right destination.

# Creating Cluster
create_cluster() {
    echo "Creating cluster..."

    # Check if the cluster already exists
    if kind get clusters 2>/dev/null | grep -q 'cluster1'; then
        echo "Cluster 'cluster1' already exists."
        return
    fi

    # Create the kind config file
    cat <<EOF > ~/kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster1 
nodes:
- role: control-plane
  image: kindest/node:v1.25.3@sha256:f52781bc0d7a19fb6c405c2af83abfeb311f130707a0e219175677e366cc45d1
- role: worker
  image: kindest/node:v1.25.3@sha256:f52781bc0d7a19fb6c405c2af83abfeb311f130707a0e219175677e366cc45d1
EOF

    # Creating the cluster
    kind create cluster --config ~/kind-config.yaml > /dev/null 2> $log_file
    sleep 5
    
    # Checking for successful creation
    if ! kubectl get nodes &> /dev/null; then
        echo "Cluster creation failed." | tee -a "$log_file"
        exit 1
    else
        echo "Cluster created successfully."
    fi
}

# Installing MetalLB
metalLB_install() {
    echo "Installing MetalLB..."
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml > /dev/null 2> $log_file
    sleep 90

    # Getting the subnet of the kind network for the MetalLB config (ipv4)
    subnet=$(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' kind | grep -oP '(\d{1,3}\.){3}\d{1,3}/\d{1,2}')

    # Extract the network base (first two octets) and define the IP range for MetalLB
    network_base=$(echo "$subnet" | cut -d'.' -f1-2)
    ip_range="$network_base.255.200-$network_base.255.250"

    # Create the MetalLB config file
    cat <<EOF > metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - $ip_range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF

    # Apply the MetalLB config
    kubectl apply -f metallb-config.yaml > /dev/null 2> $log_file
    sleep 5

    # Checking for successful installation (kubectl get exits 0 even with no pods, so look for a Running pod)
    if ! kubectl get pods -n metallb-system 2>/dev/null | grep -q 'Running'; then
        echo "MetalLB installation failed." | tee -a "$log_file"
        exit 1
    else
        echo "MetalLB installed successfully."
    fi
}

# Installing Nginx Ingress Controller
nginx_ingress_install() {
    echo "Installing Nginx Ingress Controller..."
    helm pull oci://ghcr.io/nginxinc/charts/nginx-ingress --untar --version 1.1.3 &> /dev/null
    cd nginx-ingress || exit 1
    kubectl apply -f crds/ > /dev/null 2> $log_file
    helm install my-release oci://ghcr.io/nginxinc/charts/nginx-ingress --version 1.1.3 > /dev/null 2> $log_file
    sleep 20

    # Checking for successful installation (helm list exits 0 even when empty, so grep for the release)
    if ! helm list -f my-release | grep -q 'my-release'; then
        echo "Nginx Ingress Controller installation failed." | tee -a "$log_file"
        exit 1
    else
        echo "Nginx Ingress Controller installed successfully."
    fi
}

System check

This is the function that calls and activates the whole installation process above. In addition, it checks whether the script is being run by the root user.

# Prerequisites check
system_check() {
    # Check if the script is running as root ($EUID is more reliable than $USER under sudo)
    if [ "$EUID" -ne 0 ]; then
        echo "Please run the script as root."
        exit 1
    fi

    # Preparing for installations
    system_update

    # Checking for kubectl
    kubectl_install

    # Checking for kind
    kind_install

    # Checking for yq
    yq_install

    # Checking for helm
    helm_install

    # Creating cluster
    create_cluster

    # Installing MetalLB
    metalLB_install

    # Installing Nginx Ingress Controller
    nginx_ingress_install

    # Installing mail
    mail_install
}

Dealing with replicas

In our script, we give the user the opportunity to choose how many replicas of the app and the db they want. To make the necessary changes, we use two functions, one for the app and one for the db.
Here we make use of the yq command, which lets us modify the values file in the Helm chart and change the number of replicas accordingly.

# Changing mongo replicas
change_mongo() {
    # Check if the app is already installed
    # grep -q '^app\s' is a regex that checks if the output of helm list starts with "app"
    if helm list | grep -q '^app\s'; then
        echo "App is already installed. Cannot install the app with the same name."
        exit 1
    fi

    # Checking that the passed arg isn't another flag or empty
    file="/home/ortuser19/Desktop/ex5/k8s/app/values.yaml"
    answer=$1
    if [[ -z "$answer" || "$answer" == -* ]]; then
        echo "Option -m requires an argument."
        exit 1
    
    # Changing the number of replicas
    else
        echo "Changing mongo replicas to $answer..."
        sleep 2
        yq -Y -i ".db.replicas = $answer" "$file"
    fi
}

# Changing app replicas
change_app() {
    # Check if the app is already installed
    if helm list | grep -q '^app\s'; then
        echo "App is already installed. Cannot install the app with the same name."
        exit 1
    fi

    # Checking that the passed arg isn't another flag or empty
    file="/home/ortuser19/Desktop/ex5/k8s/app/values.yaml"
    answer=$1
    if [[ -z "$answer" || "$answer" == -* ]]; then
        echo "Option -a requires an argument."
        exit 1
    
    # Changing the number of replicas
    else
        echo "Changing app replicas to $answer..."
        sleep 2
        yq -Y -i ".app.replicas = $answer" "$file"
    fi
}

First check on the running pods

During the process of creating the app we want to make sure the app pods and the mongo pods are up, and that the application as a whole is ready for first use.

# Checking for running app and mongo pods
check_namespace() {
    namespace=$1
    pods=$(kubectl get pods -n "$namespace")

    # Checking if the pods are running
    if echo "$pods" | grep -q "Running"; then
        if [ "$namespace" == "mongo" ]; then
            echo "MongoDB is initialized."
        elif [ "$namespace" == "vote-app" ]; then
            echo "App is initialized."
        fi
    else
        if [ "$namespace" == "mongo" ]; then
            echo "MongoDB is not initialized." >> "$log_file"
        elif [ "$namespace" == "vote-app" ]; then
            echo "App is not initialized." >> "$log_file"
        fi
    fi
}

Creating the app

In this function we bring it all together and deploy our application, using the functions above to deploy the app according to the user’s choices and to make sure everything went well.

# Creating app
create_app() {
    # Check if the app is already installed
    if helm list | grep -q '^app\s'; then
        echo "App is already installed. Cannot install the app with the same name."
        exit 1
    fi

    helm_chart="/home/ortuser19/Desktop/ex5/k8s/app/"
    echo "Creating app..."
    helm install app "$helm_chart" > /dev/null 2> $log_file
    sleep 20
    # Checking mongo
    check_namespace mongo
    sleep 20
    #Checking app
    check_namespace vote-app
    sleep 3

    ingress_ip=$(kubectl get ing -n vote-app | awk '{print $4}' | tail -n1)
    echo "$ingress_ip ds-vote-app.octopus.lab" | sudo tee -a /etc/hosts > /dev/null
    echo "App created successfully."; echo "You can access the app at http://ds-vote-app.octopus.lab."
}

Monitoring

We need a function that runs periodically, in our case every hour, and checks the logs within the pods to see whether any errors occurred. In addition, we want to be notified of any errors; for that we use a mail function that sends an email whenever the error file isn’t empty.

# Sending Mail With The Erros
send_mail() {
    address="dshwartzman5@gmail.com"
    # Sends mail only if the log file is not empty
    if [ -s "$log_file" ]; then
        echo "Sending mail with the errors..."
        mail -s "Errors" "$address" < "$log_file"
    fi
}

# Creating monitoring function
monitor() {
    while true; do
        # Check if the application is running (the Deployment is named vote-app-deployment in the chart)
        if ! kubectl get deployment vote-app-deployment -n vote-app > /dev/null 2>&1; then
            echo "The application is not running. Stopping the monitoring function."
            break
        fi

        # Get the names of the pods
        pods=$(kubectl get pods -n vote-app -o jsonpath='{.items[*].metadata.name}')

        # Loop over the pods and append any error lines from their recent logs
        for pod in $pods; do
            kubectl logs "$pod" -n vote-app --since=1h 2>/dev/null | grep -i "error" >> "$log_file"
        done

        # Mail the log file if it has new entries
        send_mail

        # Truncate the log file (echo "" would leave a newline, making the -s test always true)
        : > "$log_file"

        sleep 3600  # wait for 1 hour
    done
}
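
Since the monitor depends on outgoing mail actually working, it is worth a quick manual test on the host first, using the same address the script mails to:

# Send a test message through the mailx/sendmail setup installed earlier
$ echo "test body" | mail -s "test subject" dshwartzman5@gmail.com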

Help

It is very important for any bash script to include a help function that explains to the user how to use the script, which flags can be used, and the general usage.

# Help
help() {
    echo "Usage: $0 [ -m <mongo_replicas> ] [ -a <app_replicas> ] [ -d ]"
    echo "Options:"
    echo "  -m <mongo_replicas>  Change the number of mongo replicas."
    echo "  -a <app_replicas>    Change the number of app replicas."
    echo "  -d                   Delete the app."
    exit 1

}

Cleanup

Let’s say that after using the app for a while, we want to delete it. For that case, we have the cleanup function that deletes the cluster and everything in it.

# Cleanup
cleanup() {
    # Delete the cluster
    kind delete cluster --name cluster1
    sleep 5
    echo "All components uninstalled successfully."
    exit 0
}
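
One thing cleanup does not undo is the /etc/hosts entry that create_app added. A sketch for removing it as well:

# Remove the hosts entry added during app creation
$ sudo sed -i '/ds-vote-app.octopus.lab/d' /etc/hosts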

Command line args and flags

This is the part of the script that deals with the user’s input. We check the input delivered via flags and arguments. To handle flags we use getopts, which is a very convenient way to deal with flags in bash scripts. We have a while loop that runs as long as there are flags to parse, and a case statement that activates different commands or functions according to the given flag. In addition, in this part we call the create_app, monitor and system_check functions.

# main

# Checking for help flag or empty input
if [ "$1" == "--help" ] || [ -z "$1" ]; then
    help
fi

# Check if the first argument is -d for cleanup
if [ "$1" == "-d" ]; then
    cleanup
fi

# Running system check
system_check

# Parsing the flags for other options
while getopts ":m:a:" opt; do
  case ${opt} in
    m)
        change_mongo "$OPTARG"     
        ;;
    a)
        change_app "$OPTARG"
        ;;
    *)
      echo "Invalid option: -$OPTARG"
      help
      ;;
  esac
done

# Running app
create_app
monitor &

To run the script, enter:

# Give permissions
$ sudo chmod 700 ./app-creation.sh

# Run the script
$ sudo ./app-creation.sh

Expected Output

$ sudo ./app-creation.sh -m 1 -a 1
Updating system...
Upgrading system...
Creating cluster...
Cluster created successfully.
Installing MetalLB...
MetalLB installed successfully.
Installing Nginx Ingress Controller...
Nginx Ingress Controller installed successfully.
Changing mongo replicas to 1...
Changing app replicas to 1...
Creating app...
App is initialized.
App created successfully.
You can access the app at http://ds-vote-app.octopus.lab.

Summary

In this article, you learned how to take an app, deploy it with Kubernetes and Helm, and automate the whole process using a bash script. I hope this article was helpful to you; I wish you as few bugs as possible and happy coding 🙂