DocuMancer Workflow
When a user sends a query to the FastAPI backend, the backend passes the query to the RAG pipeline and returns the generated response to the user.
Under the Hood: The DocuMancer Workflow

Here's how it flows:
- You ask a question in the chatbot.
- The backend:
  - Loads your `.md` files.
  - Breaks them into small chunks (for better LLM understanding).
  - Stores them in a vector database (FAISS).
- When a query comes in:
  - It retrieves the top relevant document chunks.
  - Feeds them to the LLM along with your question.
  - The LLM generates an accurate, context-based response.
- You get the answer + source files + token cost.
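The chunk-retrieve-answer steps above can be sketched end to end. This is a toy illustration, not the project's code: a bag-of-words overlap stands in for real embeddings and the FAISS index, a fixed chunk size is assumed, and the LLM call is replaced by a canned string:

```python
from collections import Counter

def chunk(text: str, size: int = 200) -> list[str]:
    # Break a document into small, roughly fixed-size chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use an
    # embedding model and store the vectors in FAISS.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> int:
    # Count shared words between the two bags.
    return sum((a & b).values())

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Return the top-k chunks most similar to the question.
    q = embed(question)
    return sorted(chunks, key=lambda c: similarity(q, embed(c)), reverse=True)[:k]

def answer(question: str, chunks: list[str]) -> str:
    # A real pipeline would send `context` plus the question to the LLM;
    # here we just echo the retrieved context.
    context = "\n".join(retrieve(question, chunks))
    return f"Based on:\n{context}\nQ: {question}"
```

Swapping `embed`/`retrieve` for a real embedding model plus FAISS, and the body of `answer` for an LLM call, yields the pipeline described above.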