Project 01: Employee Attrition Prediction

1.9 - Model Deployment

After developing and testing our employee attrition prediction model locally, let's explore how to deploy it to production.

Deploy the model into a production environment using tools like Docker, Kubernetes, or cloud platforms. Expose the model as a REST API, microservice, or integrate it directly into the application.

Docker - Containerise the Application

To containerise the application, first make sure all the required files are available.

Alongside the application files, add the following Dockerfile, requirements.txt, and .dockerignore files.

a. Dockerfile

Create a Dockerfile and copy the below content into it.

# Use a minimal Python 3.11.6 base image
FROM python:3.11.6-slim

# Set the working directory inside the container
WORKDIR /app

# Copy and install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . /app

# Train the model at build time so the image ships with the model artefact
RUN python3 model.py

# The Flask app listens on port 5000
EXPOSE 5000

ENV FLASK_APP=app.py
CMD ["flask", "run", "--host=0.0.0.0"]

The Dockerfile uses the minimal python:3.11.6-slim image as its base. It sets the working directory inside the container to /app and copies requirements.txt into it.

pip then installs the Python packages listed in requirements.txt, after which all files and folders from your local directory are copied into the container's /app directory. Copying requirements.txt and installing dependencies before the rest of the code lets Docker cache the dependency layer between builds.

The RUN python3 model.py instruction executes the training script during the image build, and EXPOSE 5000 documents that the container listens on port 5000. Finally, the CMD instruction specifies the command to run when the container starts: the Flask server, bound to all interfaces.
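For reference, the model.py invoked during the build might look something like the sketch below: a minimal training script that fits a classifier and pickles it for the Flask app to load. The feature names and model choice here are illustrative assumptions, not the project's actual code, and the synthetic DataFrame stands in for reading employee_attrition_train.csv.

```python
# model.py -- minimal sketch; the real script would read
# employee_attrition_train.csv instead of the synthetic stand-in below.
import pickle

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Stand-in for pd.read_csv("employee_attrition_train.csv");
# the column names here are hypothetical.
train = pd.DataFrame({
    "age": [25, 40, 31, 52, 29, 45],
    "monthly_income": [3000, 8000, 4200, 9500, 3500, 7000],
    "years_at_company": [1, 10, 3, 20, 2, 12],
    "attrition": [1, 0, 1, 0, 1, 0],
})

X = train.drop(columns=["attrition"])
y = train["attrition"]

model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)

# Persist the trained model so app.py can load it at request time.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```

Because the script runs at build time, the trained model.pkl is baked into the image and the container never needs the training data at runtime.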

b. requirements.txt

Create a file requirements.txt and copy the below contents into it.

flask
flask_cors
scikit-learn
pandas

c. .dockerignore

This file tells Docker which files to exclude from the build context. Create a file named .dockerignore and copy the below contents into it.

# .dockerignore file
Dockerfile

In our project we use it to stop the Dockerfile itself from being copied into the image when COPY . /app runs.

And the final directory structure will look like this:

.
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ app.py
β”œβ”€β”€ employee_attrition_test.csv
β”œβ”€β”€ employee_attrition_train.csv
β”œβ”€β”€ model.py
β”œβ”€β”€ requirements.txt
β”œβ”€β”€ .dockerignore
β”œβ”€β”€ static
└── templates

Once all the files are ready, run the following command to build the Docker image.

docker build -t employee-attrition-app:1.0 .

To test the image run it locally using the following command.

docker run -p 5000:5000 employee-attrition-app:1.0
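With the container running, you can send it a test request from another terminal. The /predict endpoint and payload below are assumptions about how app.py is written; adjust them to match your actual routes.

```shell
# Hypothetical request against the locally running container
curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"age": 30, "monthly_income": 4000, "years_at_company": 2}'
```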

If it runs without any issues, tag the image and push it to a Docker registry so that we can deploy it on Kubernetes.

docker tag employee-attrition-app:1.0 techiescamp/employee-attrition-app:1.0

docker push techiescamp/employee-attrition-app:1.0

Deploy it on Kubernetes

Create a manifest file deploy.yaml and copy the below content.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-attrition-app
  labels:
    app: employee-attrition
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee-attrition
  template:
    metadata:
      labels:
        app: employee-attrition
    spec:
      containers:
      - name: employee-attrition
        image: techiescamp/employee-attrition-app:1.0
        ports:
        - containerPort: 5000

To expose the application as a service, create a file service.yaml and copy the below contents into it.

apiVersion: v1
kind: Service
metadata:
  name: employee-attrition-service
spec:
  selector:
    app: employee-attrition
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
  type: ClusterIP
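Note that a ClusterIP service is only reachable from inside the cluster. For a quick check from your workstation, you can port-forward to the service (assuming the service name above):

```shell
# Forward local port 8080 to the service's port 80,
# which targets port 5000 on the pod
kubectl port-forward svc/employee-attrition-service 8080:80

# Then, in another terminal:
curl http://localhost:8080/
```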

Apply both manifest files by running the following commands.

kubectl apply -f deploy.yaml

kubectl apply -f service.yaml

To check if the pods are up and running, use the following command.

kubectl get po

With that, the Employee Attrition Prediction service is deployed on Kubernetes. Since the service is of type ClusterIP, it is reachable only within the cluster; to make it available to external users, expose it through an Ingress, a LoadBalancer service, or kubectl port-forward.