Multi Modal Prompt Management #166
Conversation
Walkthrough: The pull request introduces modifications to `athina/steps/llm.py`.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
athina/steps/llm.py (1)
Line range hint 116-196: Reduce complexity of PromptTemplate.resolve method.
The method's complexity (score: 20) exceeds the maximum allowed (10). Consider breaking it down into smaller, focused methods.
Here's a suggested refactoring approach:
```python
class PromptTemplate(BaseModel):
    def _setup_environment(self) -> Environment:
        """Initialize and return Jinja2 environment."""
        return Environment(
            variable_start_string="{{",
            variable_end_string="}}",
            undefined=PreserveUndefined,
        )

    def _handle_import_message(self, message: PromptMessage, kwargs: dict) -> List[PromptMessage]:
        """Process import messages and return resulting messages."""
        import_key = message.content.strip("{}")
        if import_key not in kwargs:
            return []
        value = kwargs[import_key]
        if not isinstance(value, list):
            return []
        return [self._create_message_from_item(item, kwargs) for item in value]

    def _create_message_from_item(self, item: dict, kwargs: dict) -> PromptMessage:
        """Create a PromptMessage from a dictionary item."""
        if not isinstance(item, dict):
            return None
        if "tool_call" in item:
            return self._create_tool_call_message(item, kwargs)
        return PromptMessage(**item)

    def _create_tool_call_message(self, item: dict, kwargs: dict) -> PromptMessage:
        """Create a PromptMessage with tool_call."""
        try:
            return PromptMessage(
                role=item["role"],
                tool_call=self.env.from_string(item.get("tool_call")).render(**kwargs),
            )
        except Exception as e:
            print(f"Error parsing tool_call: {e}")
            return None

    def _resolve_content(self, message: PromptMessage, kwargs: dict) -> PromptMessage:
        """Resolve message content based on its type."""
        if message.content is None:
            return message
        if isinstance(message.content, str):
            return self._resolve_string_content(message, kwargs)
        if isinstance(message.content, list):
            return self._resolve_list_content(message, kwargs)
        return message

    def resolve(self, **kwargs) -> List[PromptMessage]:
        """Render the template with given variables."""
        self.env = self._setup_environment()

        # Process messages
        final_messages = []
        for message in self.messages:
            if message.role == "import":
                final_messages.extend(self._handle_import_message(message, kwargs))
            else:
                final_messages.append(message)

        # Resolve content
        return [
            msg
            for msg in (self._resolve_content(message, kwargs) for message in final_messages)
            if msg
        ]
```

This refactoring:
- Breaks down the complex method into smaller, focused methods
- Improves readability and maintainability
- Makes testing individual components easier
- Reduces the overall complexity score
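For context, here is a hypothetical usage sketch of the refactored `resolve()`: an `import` message whose content names a kwarg holding a list of message dicts gets expanded inline. The import path and the exact `PromptTemplate`/`PromptMessage` constructor arguments are assumptions based on the snippet above, not verified against the repository.

```python
# Hypothetical usage of the refactored resolve(); the import below assumes
# PromptTemplate and PromptMessage are importable from athina/steps/llm.py.
from athina.steps.llm import PromptMessage, PromptTemplate

template = PromptTemplate(
    messages=[
        PromptMessage(role="system", content="You answer questions about {{topic}}."),
        # "import" messages are expanded from the kwarg named in their content.
        PromptMessage(role="import", content="{{history}}"),
    ]
)

resolved = template.resolve(
    topic="geography",
    history=[{"role": "user", "content": "What is the capital of France?"}],
)
# Expected: [system message with topic rendered, user message pulled from history]
```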
🧹 Nitpick comments (3)
athina/steps/llm.py (3)
55-69: Consider adding error logging for invalid image URLs.
The validation logic for image URLs is robust, but silently skips invalid URLs. Consider logging these cases to help with debugging and monitoring.
```diff
 if image_data.get("image_url") and (
     isinstance(image_data["image_url"], str)
     or (isinstance(image_data["image_url"], dict) and image_data["image_url"].get("url"))
 ):
     formatted_content.append(image_data)
+else:
+    print(f"Warning: Skipping invalid image URL in content: {image_data}")
```
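As a variant of the suggestion above, here is a minimal sketch that uses the `logging` module instead of `print`, consistent with the logging comment later in this review. The helper name and the warning wording are illustrative and not part of the PR.

```python
# Minimal sketch: keep only items whose image_url is a non-empty string or a
# dict carrying a "url" key, and log the ones that get skipped.
import logging
from typing import Any, Dict, List

logger = logging.getLogger(__name__)


def filter_valid_image_items(items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    valid = []
    for image_data in items:
        image_url = image_data.get("image_url")
        if isinstance(image_url, str) and image_url:
            valid.append(image_data)
        elif isinstance(image_url, dict) and image_url.get("url"):
            valid.append(image_data)
        else:
            logger.warning("Skipping invalid image URL in content: %s", image_data)
    return valid
```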
79-80: Consider consolidating streaming options.
Having both `stream` and `streaming` fields might lead to confusion. Consider deprecating one field to maintain a single source of truth.

```diff
-    stream: Optional[bool] = None
-    streaming: Optional[bool] = None
+    stream: Optional[bool] = None
+    # TODO: Remove in next major version
+    streaming: Optional[bool] = None  # Deprecated: Use 'stream' instead
```
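If the deprecation route is taken, one possible shape is sketched below, assuming Pydantic v2 (which the `model_dump` override implies). The class and field names mirror the review comment; the warning text is illustrative.

```python
import warnings
from typing import Optional

from pydantic import BaseModel, model_validator


class ModelOptions(BaseModel):
    stream: Optional[bool] = None
    streaming: Optional[bool] = None  # Deprecated: use 'stream' instead

    @model_validator(mode="after")
    def _warn_on_deprecated_streaming(self) -> "ModelOptions":
        # Warn whenever the legacy field is set explicitly so callers can migrate.
        if self.streaming is not None:
            warnings.warn(
                "'streaming' is deprecated; use 'stream' instead",
                DeprecationWarning,
                stacklevel=2,
            )
        return self
```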
82-99: Simplify streaming logic in model_dump.
The current implementation could be simplified while maintaining the same behavior.

```diff
 def model_dump(self, *args, **kwargs):
     data = super().model_dump(*args, **kwargs)
-
-    # If stream is explicitly set (True or False), use it
-    if self.stream is not None:
-        data['stream'] = self.stream
-    # If streaming is set, use its value for stream
-    elif self.streaming is not None:
-        data['stream'] = self.streaming
-    # If neither is set, default stream to False
-    else:
-        data['stream'] = False
-
-    # Always remove streaming from output
-    if 'streaming' in data:
-        del data['streaming']
-
+    # Use stream if set, fall back to streaming, default to False
+    data['stream'] = self.stream if self.stream is not None else (self.streaming if self.streaming is not None else False)
+    data.pop('streaming', None)  # Remove streaming field if present
     return {k: v for k, v in data.items() if v is not None}
```
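The following is a quick, self-contained check of the precedence the simplified `model_dump` encodes: an explicit `stream` wins, then `streaming`, then `False`. The `ModelOptions` class name is assumed here for illustration; the real options class lives in `athina/steps/llm.py`.

```python
from typing import Optional

from pydantic import BaseModel


class ModelOptions(BaseModel):
    stream: Optional[bool] = None
    streaming: Optional[bool] = None

    def model_dump(self, *args, **kwargs):
        data = super().model_dump(*args, **kwargs)
        # Use stream if set, fall back to streaming, default to False.
        data["stream"] = self.stream if self.stream is not None else (
            self.streaming if self.streaming is not None else False
        )
        data.pop("streaming", None)
        return {k: v for k, v in data.items() if v is not None}


# Explicit stream wins over streaming; streaming is never emitted.
assert ModelOptions(stream=True, streaming=False).model_dump() == {"stream": True}
assert ModelOptions(streaming=True).model_dump() == {"stream": True}
assert ModelOptions().model_dump() == {"stream": False}
```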
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
athina/steps/llm.py (3 hunks)
🧰 Additional context used
🪛 GitHub Actions: Python Linter
athina/steps/llm.py
[error] 120-120: Method 'PromptTemplate.resolve' is too complex (complexity score: 20, max allowed: 10)
```diff
@@ -223,6 +258,7 @@ def execute(self, input_data: dict, **kwargs) -> str:
         messages = self.template.resolve(**input_data)
         # Convert messages to API format
         api_formatted_messages = [msg.to_api_format() for msg in messages]
+        print(f"**self.model_options.model_dump(): {self.model_options.model_dump()}")
```
🛠️ Refactor suggestion
Replace print statement with proper logging.
Debug print statements should use proper logging mechanisms for better control and formatting in production environments.
```diff
-        print(f"**self.model_options.model_dump(): {self.model_options.model_dump()}")
+        import logging
+        logging.debug("Model options: %s", self.model_options.model_dump())
```
Committable suggestion skipped: line range outside the PR's diff.
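A minimal sketch of what the logging suggestion could look like in practice: a module-level logger in `athina/steps/llm.py`, with the debug line emitted through it. The surrounding method is only referenced in a comment, since the full class is outside this diff.

```python
import logging

logger = logging.getLogger(__name__)

# Inside the step's execute() method, replacing the debug print:
# logger.debug("Model options: %s", self.model_options.model_dump())

# Callers opt in to the output explicitly, e.g. while debugging locally:
logging.basicConfig(level=logging.DEBUG)
```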
THIS REQUIRES: https://github.com/athina-ai/athina-dashboard/pull/987
THIS REQUIRES: https://github.com/athina-ai/gateway/pull/230
REQUIRES: https://github.com/athina-ai/athina-api/pull/1157
Summary by CodeRabbit
New Features
- `stream` and `streaming` attributes for model options

Refactor