This repository contains a web application for Medical Q&A. The frontend is built with React, and the backend is powered by FastAPI.
- Frontend: React
- Backend: FastAPI
- LLM: llama3 (Ollama)
- Python 3.10+
- venv (for creating virtual environments)
- Clone the repository:

  ```bash
  git clone https://github.com/Mubashirshariq/MEDIC_CHAT.git
  ```

- Navigate to the `frontend` directory:

  ```bash
  cd frontend
  ```

- Install the dependencies:

  ```bash
  npm install
  ```

- Start the frontend development server:

  ```bash
  npm run dev
  ```
Now open a second terminal to run the backend:
- Navigate to the `backend` directory:

  ```bash
  cd backend
  ```

- Create a virtual environment:

  ```bash
  python -m venv myenv
  ```

- Activate the virtual environment:
  - On Windows:

    ```bash
    myenv\Scripts\activate
    ```

  - On macOS and Linux:

    ```bash
    source myenv/bin/activate
    ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the FastAPI server:

  ```bash
  python -m app.main
  ```
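Running `python -m app.main` implies the server is started from the module itself. Below is a minimal sketch of what `app/main.py` might look like; the app structure, host, and port are assumptions, not taken from this repository:

```python
# app/main.py -- hypothetical sketch; the actual module may differ.
from fastapi import FastAPI
import uvicorn

app = FastAPI(title="Medical Q&A API")

if __name__ == "__main__":
    # `python -m app.main` runs this block and starts the ASGI server.
    uvicorn.run(app, host="0.0.0.0", port=8000)  # port is an assumption
```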
Load and process medical documents (clinical guidelines, patient education materials).
- Reads `.txt` files from predefined directories (`clinical_guidelines`, `patient_education`).
- Splits the text into manageable chunks using `CharacterTextSplitter` (see the loading sketch below).
- Extracts metadata (title, source, file path) for reference.
- Converts the chunks into embeddings using `HuggingFaceEmbeddings`.
- Stores them in a FAISS vector database for efficient retrieval.
```python
self.vectorstore = FAISS.from_texts(
    texts=texts,
    embedding=HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
    metadatas=metadatas
)
```
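To show how the `texts` and `metadatas` passed to `FAISS.from_texts` might be produced, here is a minimal loading-and-splitting sketch; the directory layout, chunk sizes, and metadata keys are assumptions rather than the repository's actual code:

```python
from pathlib import Path
from langchain.text_splitter import CharacterTextSplitter

# Hypothetical sketch: read .txt files and split them into chunks with metadata.
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)  # sizes assumed

texts, metadatas = [], []
for directory in ["clinical_guidelines", "patient_education"]:
    for path in Path(directory).glob("*.txt"):
        raw_text = path.read_text(encoding="utf-8")
        for chunk in splitter.split_text(raw_text):
            texts.append(chunk)
            metadatas.append({
                "title": path.stem,      # title derived from the file name (assumption)
                "source": directory,
                "file_path": str(path),
            })
```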
Retrieve the most relevant medical information based on user queries.
- Uses FAISS to fetch the top K (default = 5) most relevant text chunks.
- Retrieves metadata (title, author, link) for citations.
- Sends the retrieved context to the Ollama LLM for answer generation.
```python
retriever = self.vectorstore.as_retriever(search_kwargs={"k": self.TOP_K_RESULTS})
```
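As an illustration, such a retriever could also be queried directly; the example question and print logic below are assumptions, not the repository's code:

```python
# Hypothetical usage: fetch the top-K chunks most relevant to a user question.
docs = retriever.get_relevant_documents("What are the symptoms of type 2 diabetes?")

for doc in docs:
    # Each document carries the metadata captured during loading.
    print(doc.metadata.get("title", "Unknown"), "->", doc.page_content[:80])
```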
Generate human-like responses based on retrieved medical data.
- Uses Ollama (llama3 model) for text generation.
- Processes user questions and contextual information.
- Ensures the answer follows predefined guidelines:
  - Uses only the provided context.
  - If uncertain, acknowledges limitations.
  - Provides relevant citations.
```python
llm = Ollama(model="llama3")
response = self.conversation_chain({'query': question})
```
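A chain like `self.conversation_chain` could be assembled roughly as follows; the use of `RetrievalQA` and the prompt wording are assumptions about how the guidelines above are enforced:

```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain_community.llms import Ollama

# Hypothetical prompt encoding the answer guidelines listed above.
prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Answer the question using only the context below. "
        "If the context is insufficient, say so and acknowledge the limitation.\n\n"
        "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
)

conversation_chain = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3"),
    retriever=retriever,
    chain_type="stuff",              # concatenate retrieved chunks into the prompt
    return_source_documents=True,    # keeps the documents needed for citations
    chain_type_kwargs={"prompt": prompt},
)
```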
Provide sources for medical responses to ensure credibility.
- Extracts metadata from retrieved documents.
- Formats citations with: title, author, link (if available), page number (if applicable), and a content snippet.
- Displays citations in the API response.
```python
citations = [
    Source(
        title=source.metadata.get("title", "Unknown"),
        author=source.metadata.get("author", "Unknown"),
        link=source.metadata.get("link", None),
        page=source.metadata.get("page", None),
        content=source.page_content
    )
    for source in sources
]
```
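`Source` is presumably a Pydantic model that is serialized into the API response; a sketch of how it might be declared (field names follow the snippet above, types are assumptions):

```python
from typing import Optional
from pydantic import BaseModel

# Hypothetical model mirroring the fields used in the citation list above.
class Source(BaseModel):
    title: str
    author: str
    link: Optional[str] = None
    page: Optional[int] = None
    content: str
```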
Ensure users understand the system's limitations.
- Includes a medical disclaimer in every response.
- Assesses the confidence level based on the number of sources:
  - No sources found → Low confidence
  - 1-2 sources → Medium confidence
  - 3+ sources → High confidence
```python
def assess_confidence(self, sources: List[Source]) -> str:
    if not sources:
        return "Low - No sources found"
    elif len(sources) >= 3:
        return "High - Multiple consistent sources"
    else:
        return "Medium - Limited sources"
```


Once both the frontend and backend servers are running, you can access the application by navigating to http://localhost:5173
in your web browser.