
Project 02: LLM using Hugging Face for Beginners

LLM and Hugging Face: Project Overview

Imagine creating a web application where users can interact with cutting-edge AI models to perform tasks like text summarization, sentiment analysis, question answering, and even text generation.

In this blog, we'll walk you through the process of building an LLM web application from scratch.

Project Overview

Our goal is to develop a web application where users interact with an LLM-powered backend to perform the following tasks:

  • Text Summarization: Condense long articles or documents into concise summaries.
  • Sentiment Analysis: Analyze the sentiment of a given text (positive, negative, or neutral).
  • Question Answering: Answer user questions based on a provided context.
  • Text Generation: Generate creative or informative text based on user prompts.
  • Table Question Answering: Extract answers from structured data (e.g., tables).

Technology Stack Used

  • Frontend: React.js (for building a dynamic and responsive user interface).
  • Backend: Node.js and Express.js (to create API endpoints for interacting with LLMs).
  • ML Integration: Hugging Face Transformers (to leverage pre-trained LLMs for NLP tasks)
  • Database(Optional): MongoDB (to store user data, queries, and results).
  • Hosting(Optional): Vercel/Netlify for frontend, and Render/Heroku for backend.

Step 1: Setting Up the MERN Stack

Before integrating Hugging Face models, let's set up the basic structure of our MERN application.

1.1 Set Up the Backend

This project was built with the Visual Studio Code editor, but any editor will do. Open a terminal in your editor and run the commands below.

mkdir llm-project && cd llm-project
npm init -y
npm install express cors body-parser dotenv @huggingface/inference

Here @huggingface/inference is Hugging Face's JavaScript client for the Inference API.

Set up the Express server by creating a server.js file:

// server.js file

const express = require('express')
const cors = require('cors')
const bodyParser = require('body-parser')
require('dotenv').config()   // note: config is a function and must be called

const PORT = process.env.PORT || 5000;
const app = express()

app.use(cors())              // cors() must be invoked to return the middleware
app.use(bodyParser.json())

// define summarization
app.post('/api/summarize', (req, res) => {})

// define text-generation
app.post('/api/text-generation', (req, res) => {})

// define sentiment-analysis
app.post('/api/sentiment-analysis', (req, res) => {})

// define question-answer
app.post('/api/question-answer', (req, res) => {})

// define table-question-answer
app.post('/api/table-question-answer', (req, res) => {})

app.listen(PORT, () => {
    console.log(`Server is running at ${PORT}`)
});
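Before wiring models into these stubs, each handler should check its incoming JSON body. The helper below is a sketch of one way to do that; the field names (text, prompt, question, context, table) are assumptions for illustration, not fixed by any API:

```javascript
// Minimal payload validation for the route stubs above.
// Returns an error message string, or null if the payload is valid.
// Required field names per route are illustrative assumptions.
function validatePayload(route, body) {
  const required = {
    '/api/summarize': ['text'],
    '/api/text-generation': ['prompt'],
    '/api/sentiment-analysis': ['text'],
    '/api/question-answer': ['question', 'context'],
    '/api/table-question-answer': ['question', 'table'],
  };
  const fields = required[route];
  if (!fields) return `Unknown route: ${route}`;
  const missing = fields.filter((f) => body[f] === undefined);
  return missing.length ? `Missing field(s): ${missing.join(', ')}` : null;
}

// Example use inside a handler:
// app.post('/api/summarize', (req, res) => {
//   const err = validatePayload('/api/summarize', req.body);
//   if (err) return res.status(400).json({ error: err });
//   // ... call the model here
// });
```

Returning a 400 early like this keeps the model-calling code in each route focused on the happy path.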

(Optional) Connect to a MongoDB Database

If you want to store this data in a database, copy your MongoDB connection string and save it in a .env file:

// .env

MONGODB_URL=your_mongodb_url

Then add the connection to server.js:

const mongoose = require('mongoose')

const mongo_url = process.env.MONGODB_URL

mongoose.connect(mongo_url, { useNewUrlParser: true, useUnifiedTopology: true })
  .then(() => console.log('MongoDB is connected'))
  .catch(err => console.log(err));
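If you do persist queries and results, it helps to settle on a document shape up front. The fields below are one hypothetical layout (not prescribed by this post); with Mongoose you would mirror the same fields in a Schema:

```javascript
// One possible shape for a stored query log (field names are assumptions).
// With Mongoose you'd mirror this in e.g.
//   new mongoose.Schema({ task: String, input: String, output: String, createdAt: String })
function makeQueryLog(task, input, output) {
  return {
    task,                                 // e.g. 'summarize'
    input,                                // the user's raw input
    output,                               // the model's response
    createdAt: new Date().toISOString(),  // timestamp for sorting/history views
  };
}
```

Each route handler could then save `makeQueryLog('summarize', req.body.text, result)` after a successful model call.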

1.2 Build Frontend

Initialize the React app and install axios for API calls and react-router-dom for routing between components:

// on terminal

cd .. // to come out of the llm-project directory
npx create-react-app llm-client
cd llm-client
npm install axios react-router-dom

Design a simple UI with forms and buttons for each task (e.g., text summarization, sentiment analysis). Use a library like axios to make API calls to the backend. Route your components as shown in the App.js file below (make sure index.js wraps <App /> in a <BrowserRouter> so the routes work).

import { Routes, Route, Link } from 'react-router-dom'
import './App.css'
import Summarize from "./components/Summarize";
import SentimentAnalysis from './components/SentimentAnalysis';
import TextGeneration from './components/TextGeneration';
import QuestionAnswer from './components/QuestionAnswer';
import TableQA from './components/TableQA';

function App() {
  return (
    <div className="App">
      <div id='header'>
        <h1>LLM Hugging Face Starter Projects</h1>
      </div>

      <div className="tabBar">
        <ul>
          <li><Link to='/summarize'>Summarize</Link></li>
          <li><Link to='/sentiment-analysis'>Sentiment Analysis</Link></li>
          <li><Link to='/text-generation'>Text Generation</Link></li>
          <li><Link to='/question-answer'>Question Answer</Link></li>
          <li><Link to='/table-question-answer'>Table Question Answer</Link></li>
        </ul>
      </div>

      <div className="main">
        <Routes>
          <Route path='/summarize' element={<Summarize />} />
          <Route path='/sentiment-analysis' element={<SentimentAnalysis />} />
          <Route path='/text-generation' element={<TextGeneration />} />
          <Route path='/question-answer' element={<QuestionAnswer />} />
          <Route path='/table-question-answer' element={<TableQA />} />
        </Routes>
      </div>
    </div>
  );
}

export default App;
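Each of these components will POST to the backend from step 1.1. Since the request shape is the same for every task, a small shared helper avoids repetition. This is a sketch: the base URL and route names assume the backend above is running locally on port 5000.

```javascript
// Shared helper the React components can use to build backend requests.
// API_BASE is an assumption: the Express server from step 1.1 on port 5000.
const API_BASE = 'http://localhost:5000';

function buildRequest(task, payload) {
  return {
    url: `${API_BASE}/api/${task}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    },
  };
}

// In a component:
//   const { url, options } = buildRequest('summarize', { text });
//   const data = await (await fetch(url, options)).json();
// (or pass the same url and payload to axios.post)
```

Keeping the URL construction in one place means a single change when you later deploy the backend to Render or Heroku.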

Style the components however you like, or follow this link to copy the CSS styles.

1.3 Integrating Hugging Face API Models

Here comes the exciting part: integrating Hugging Face models into your backend. First, install the Hugging Face library for your chosen approach.

  • For transformer-pipeline based integration:
npm install @huggingface/transformers
  • For API Inference based integration:
npm install @huggingface/inference

Sign up for Hugging Face and generate an API token. Add it to .env:

// .env file
HF_API_KEY=your_hugging_face_token_key

Continue in the same server.js backend:

require('dotenv').config()
// Hugging Face Inference client
const { HfInference } = require('@huggingface/inference')
const hf = new HfInference(process.env.HF_API_KEY)

Then, in the upcoming sections, we'll create a Hugging Face Inference API route for each task.

💡
We can integrate Hugging Face models either with the transformers pipeline, as discussed previously, or with the Inference API. Here we use the Inference API to show an alternative way of integrating models.
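As a preview of those routes, note that the argument object handed to the Inference client differs per task. The sketch below shows one way to build it; the model names and request field names are illustrative assumptions, and the client exposes sentiment analysis via its textClassification method rather than a dedicated one:

```javascript
// Build the argument object for a @huggingface/inference call per task.
// Model choices are illustrative assumptions, not recommendations.
function buildInferenceArgs(task, body) {
  switch (task) {
    case 'summarization':            // hf.summarization(...)
      return { model: 'facebook/bart-large-cnn', inputs: body.text };
    case 'text-generation':          // hf.textGeneration(...)
      return { model: 'gpt2', inputs: body.prompt };
    case 'sentiment-analysis':       // hf.textClassification(...)
      return { model: 'distilbert-base-uncased-finetuned-sst-2-english', inputs: body.text };
    case 'question-answering':       // hf.questionAnswering(...)
      return {
        model: 'deepset/roberta-base-squad2',
        inputs: { question: body.question, context: body.context },
      };
    case 'table-question-answering': // hf.tableQuestionAnswering(...)
      return {
        model: 'google/tapas-base-finetuned-wtq',
        inputs: { query: body.question, table: body.table },
      };
    default:
      throw new Error(`Unknown task: ${task}`);
  }
}

// e.g. inside a route handler:
// const out = await hf.summarization(buildInferenceArgs('summarization', req.body));
```

Centralizing this mapping makes it easy to swap models later without touching the route handlers.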