This directory contains the RAG (Retrieval-Augmented Generation) Chatbot example for the Atomic Agents project. It demonstrates how to build, with the Atomic Agents framework, an intelligent chatbot that uses document retrieval to provide context-aware responses.
- Document Chunking: Automatically splits documents into manageable chunks with configurable overlap (see the sketch after this list)
- Vector Storage: Uses ChromaDB for efficient storage and retrieval of document chunks
- Semantic Search: Generates and executes semantic search queries to find relevant context
- Context-Aware Responses: Provides detailed answers based on retrieved document chunks
- Interactive UI: Rich console interface with progress indicators and formatted output
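For illustration, here is a minimal sketch of that chunking step in plain Python; the `chunk_size` and `overlap` defaults are hypothetical, not the example's actual settings:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks, with consecutive chunks sharing `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping `overlap` characters of context
    return chunks
```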
To get started with the RAG Chatbot:
1. Clone the main Atomic Agents repository:

   ```bash
   git clone https://github.com/BrainBlend-AI/atomic-agents
   ```

2. Navigate to the RAG Chatbot directory:

   ```bash
   cd atomic-agents/atomic-examples/rag-chatbot
   ```

3. Install the dependencies using Poetry:

   ```bash
   poetry install
   ```

4. Set up environment variables by creating a `.env` file in the `rag-chatbot` directory with the following content:

   ```env
   OPENAI_API_KEY=your_openai_api_key
   ```

   Replace `your_openai_api_key` with your actual OpenAI API key.

5. Run the RAG Chatbot:

   ```bash
   poetry run python rag_chatbot/main.py
   ```
The example is organized around a few core components (a schema sketch follows this list):

- Query Agent: Generates semantic search queries based on user questions to find relevant document chunks.
- QA Agent: Analyzes retrieved chunks and generates comprehensive answers to user questions.
- ChromaDB store: Manages the vector database for storing and retrieving document chunks.
- Context provider: Supplies retrieved document chunks as context to the agents.
- Main script (`rag_chatbot/main.py`): Orchestrates the entire process, from document processing to user interaction.
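To make the division of labour between the two agents concrete, here is a rough sketch of what their output schemas might look like, written with plain pydantic for illustration; the actual example uses Atomic Agents' own schema base classes, and the field names below are assumptions:

```python
from pydantic import BaseModel, Field


class QueryAgentOutput(BaseModel):
    """Hypothetical Query Agent output: a search query derived from the user's question."""
    reasoning: str = Field(..., description="Why this query should surface the relevant chunks")
    query: str = Field(..., description="Semantic search query to run against the vector store")


class QAAgentOutput(BaseModel):
    """Hypothetical QA Agent output: an answer grounded in the retrieved chunks."""
    reasoning: str = Field(..., description="Thought process over the retrieved chunks")
    answer: str = Field(..., description="Final answer to the user's question")
```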
The system initializes by:
- Downloading a sample document (State of the Union address)
- Splitting it into chunks with configurable overlap
- Storing chunks in ChromaDB with vector embeddings (see the sketch below)
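A minimal sketch of that indexing step, using the `chromadb` client directly; the collection name, variable names, and the use of ChromaDB's default embedding function are assumptions for illustration:

```python
import chromadb

# `document_text` holds the downloaded document and `chunk_text` is the overlap-aware
# splitter sketched earlier; both names are illustrative.
client = chromadb.Client()  # in-memory; a persistent client could be used instead
collection = client.get_or_create_collection("rag_chunks")

chunks = chunk_text(document_text, chunk_size=1000, overlap=200)
collection.add(
    ids=[f"chunk_{i}" for i in range(len(chunks))],
    documents=chunks,  # ChromaDB embeds these with the collection's embedding function
)
```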
For each user question:
- The Query Agent generates an optimized semantic search query
- Relevant chunks are retrieved from ChromaDB (see the sketch after this list)
- The QA Agent analyzes the chunks and generates a detailed answer
- The system displays the thought process and final answer
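A minimal sketch of the retrieval half of that loop, again querying `chromadb` directly; `search_query` stands in for the Query Agent's generated query, and the number of results is a hypothetical value (see the customization options below):

```python
# `collection` is the ChromaDB collection built during initialization.
results = collection.query(
    query_texts=[search_query],  # embedded with the collection's embedding function
    n_results=5,                 # hypothetical top-k; adjustable per query
)
retrieved_chunks = results["documents"][0]  # top-k chunks for the single query

# These chunks are then supplied to the QA Agent as context, e.g. joined into one string.
context = "\n\n".join(retrieved_chunks)
```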
You can customize the RAG Chatbot by:
- Modifying chunk size and overlap in `config.py` (see the sketch after this list)
- Adjusting the number of chunks to retrieve for each query
- Using different documents as the knowledge base
- Customizing the system prompts for both agents
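As a rough example, the tunable values might look something like this; the names and defaults below are hypothetical, so check `config.py` in the example for the actual settings:

```python
# Hypothetical configuration values; the real names and defaults live in config.py.
CHUNK_SIZE = 1000           # characters per chunk
CHUNK_OVERLAP = 200         # characters shared between consecutive chunks
NUM_CHUNKS_TO_RETRIEVE = 5  # top-k chunks fetched from ChromaDB per query
```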
The chatbot can answer questions about the loaded document, such as:
- "What were the main points about the economy?"
- "What did the president say about healthcare?"
- "How did he address foreign policy?"
Contributions are welcome! Please fork the repository and submit a pull request with your enhancements or bug fixes.
This project is licensed under the MIT License. See the LICENSE file for details.