Medical Q&A

This repository contains a web application for Medical Q&A. The frontend is built with React, and the backend is powered by FastAPI.

Tech Stack

  • Frontend: React
  • Backend: FastAPI
  • LLM: llama3 (via Ollama)

Setup Instructions

Prerequisites

  • Node.js and npm (for the React frontend)
  • Python 3 (for the FastAPI backend)
  • Ollama installed locally with the llama3 model pulled (ollama pull llama3)

Frontend Setup

  1. Clone the repository:

    git clone https://github.com/Mubashirshariq/MEDIC_CHAT.git
  2. Navigate to the frontend directory:

    cd frontend
  3. Install the dependencies:

    npm install
  4. Start the frontend development server:

    npm run dev

Backend Setup

In a separate terminal, set up and run the backend:

  1. Navigate to the backend directory:

    cd backend
  2. Create a virtual environment:

    python -m venv myenv
  3. Activate the virtual environment:

    • On Windows:
      myenv\Scripts\activate
    • On macOS and Linux:
      source myenv/bin/activate
  4. Install the dependencies:

    pip install -r requirements.txt
  5. Run the FastAPI server:

     python -m app.main
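
The contents of app/main.py are not shown in this README; a minimal sketch of what an entry point started with python -m app.main might look like is below (the app title and port are assumptions, not taken from the repository):

# Hypothetical app/main.py entry point (names and port are assumptions).
import uvicorn
from fastapi import FastAPI

app = FastAPI(title="Medical Q&A API")

if __name__ == "__main__":
    # Running `python -m app.main` executes this block and starts the server.
    uvicorn.run(app, host="0.0.0.0", port=8000)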

Documentation

1. Data Ingestion

Purpose:

Load and process medical documents (clinical guidelines, patient education materials).

Steps:

  1. Reads .txt files from predefined directories (clinical_guidelines, patient_education).
  2. Splits the text into manageable chunks using CharacterTextSplitter.
  3. Extracts metadata (title, source, file path) for reference.
  4. Converts the chunks into embeddings using HuggingFaceEmbeddings.
  5. Stores them in a FAISS vector database for efficient retrieval.

Key Code:

self.vectorstore = FAISS.from_texts(
    texts=texts,
    embedding=HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
    metadatas=metadatas
)   
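
For context, a fuller sketch of the ingestion path described above, assuming LangChain community imports and the directory names from step 1, might look like the following; the repository's actual code, chunk sizes, and metadata keys may differ:

# Illustrative ingestion sketch (imports, chunk sizes, and variable names are assumptions).
from pathlib import Path

from langchain.text_splitter import CharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

texts, metadatas = [], []
for directory in ("clinical_guidelines", "patient_education"):
    for path in Path(directory).glob("*.txt"):
        raw = path.read_text(encoding="utf-8")
        for chunk in splitter.split_text(raw):
            texts.append(chunk)
            metadatas.append({
                "title": path.stem,      # title derived from the file name
                "source": directory,     # which corpus the chunk came from
                "file_path": str(path),
            })

vectorstore = FAISS.from_texts(
    texts=texts,
    embedding=HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
    metadatas=metadatas,
)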

2. Retrieval System

Purpose:

Retrieve the most relevant medical information based on user queries.

Steps:

  1. Uses FAISS to fetch the top K (default = 5) most relevant text chunks.
  2. Retrieves metadata (title, author, link) for citations.
  3. Sends retrieved context to Ollama LLM for answer generation.

Key Code:

retriever = self.vectorstore.as_retriever(search_kwargs={"k": self.TOP_K_RESULTS})
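
A sketch of how this retriever can feed a RetrievalQA-style LangChain chain, continuing from the ingestion sketch above (the exact chain the repository uses may differ):

# Illustrative retrieval sketch; `vectorstore` is the FAISS index built in the ingestion sketch above.
from langchain.chains import RetrievalQA
from langchain_community.llms import Ollama

TOP_K_RESULTS = 5  # default number of chunks fetched per query

retriever = vectorstore.as_retriever(search_kwargs={"k": TOP_K_RESULTS})

qa_chain = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3"),
    retriever=retriever,
    return_source_documents=True,  # keep matched chunks so citations can be built
)

result = qa_chain({"query": "What are the warning signs of a stroke?"})
print(result["result"])             # generated answer
print(result["source_documents"])   # chunks used as context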

3. LLM Integration

Purpose:

Generate human-like responses based on retrieved medical data.

Steps:

  1. Uses Ollama (llama3 model) for text generation.
  2. Processes user questions and contextual information.
  3. Ensures the answer follows predefined guidelines:
     • Uses only the provided context.
     • If uncertain, acknowledges limitations.
     • Provides relevant citations.

Key Code:

llm = Ollama(model="llama3")
response = self.conversation_chain({'query': question})
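
The prompt text itself is not shown in this README; a hypothetical prompt template that would enforce the three guidelines above could look like this, and could be wired into a "stuff"-type RetrievalQA chain via chain_type_kwargs={"prompt": QA_PROMPT}:

# Hypothetical prompt enforcing the guidelines above (wording is an assumption).
from langchain.prompts import PromptTemplate

QA_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are a medical assistant. Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, say that you are not certain.\n"
        "Cite the titles of the documents you relied on.\n\n"
        "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    ),
)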

4. References & Citations

Purpose:

Provide sources for medical responses to ensure credibility.

Steps:

  1. Extracts metadata from retrieved documents.
  2. Formats each citation with: title, author, link (if available), page number (if applicable), and a content snippet.
  3. Displays citations in the API response.

Key Code:

citations = [
    Source(
        title=source.metadata.get("title", "Unknown"),
        author=source.metadata.get("author", "Unknown"),
        link=source.metadata.get("link", None),
        page=source.metadata.get("page", None),
        content=source.page_content
    ) for source in sources
]
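
The Source model referenced above is not defined in this README; a minimal Pydantic sketch consistent with the fields used when building citations would be:

# Assumed Pydantic model matching the fields used in the citation code above.
from typing import Optional
from pydantic import BaseModel

class Source(BaseModel):
    title: str
    author: str
    link: Optional[str] = None
    page: Optional[int] = None
    content: str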

5. Disclaimers & Confidence Level

Purpose:

Ensure users understand the system's limitations.

Steps:

  1. Includes a medical disclaimer in every response.
  2. Assesses the confidence level based on the number of sources retrieved:
     • No sources found → Low confidence
     • 1–2 sources → Medium confidence
     • 3+ sources → High confidence

Key Code:

def assess_confidence(self, sources: List[Source]) -> str:
    if not sources:
        return "Low - No sources found"
    elif len(sources) >= 3:
        return "High - Multiple consistent sources"
    else:
        return "Medium - Limited sources"

Usage

Example Response

(Screenshot: example response)

Example Citations

(Screenshot: example citations)

Example Follow-ups

(Screenshot: example follow-up questions)

Once both the frontend and backend servers are running, you can access the application by navigating to http://localhost:5173 in your web browser.
