
🐛 Bug Report: Whole message history is stringified into a single 'user prompt' #2513

Open
alexmojaki opened this issue Jan 14, 2025 · 3 comments
Labels
bug Something isn't working

Comments

@alexmojaki

Which component is this bug for?

VertexAI Instrumentation

📜 Description

Passing a list of messages to generate_content causes the instrumentation to combine them all into a single string and record them as one user prompt, instead of keeping the messages separate.

👟 Reproduction steps

import vertexai
from traceloop.sdk import Traceloop
from vertexai.generative_models import Content, GenerativeModel, Part

Traceloop.init()
vertexai.init()
model = GenerativeModel('gemini-1.5-flash-002')
response = model.generate_content(
    [
        Content(
            role='user',
            parts=[
                Part.from_text("What's 2+2?"),
            ],
        ),
        Content(
            role='assistant',
            parts=[
                Part.from_text('5'),
            ],
        ),
        Content(
            role='user',
            parts=[
                Part.from_text('really?'),
            ],
        ),
    ]
)

👍 Expected behavior

Attributes look something like:

{
  "gen_ai.prompt.0.role": "user",
  "gen_ai.prompt.0.content": "What's 2+2?\n",
  "gen_ai.prompt.1.role": "assistant",
  "gen_ai.prompt.1.content": "5\n",
  "gen_ai.prompt.2.role": "user",
  "gen_ai.prompt.2.content": "really?\n",
  "gen_ai.completion.3.role": "assistant",
  "gen_ai.completion.3.content": "Oops!  My apologies. 2 + 2 = 4.  I'm still under development and learning to perform these calculations correctly.\n"
}

👎 Actual Behavior with Screenshots

Attributes look like:

{
  "gen_ai.prompt.0.user": "role: \"user\"\nparts {\n  text: \"What\\'s 2+2?\"\n}\n\nrole: \"assistant\"\nparts {\n  text: \"5\"\n}\n\nrole: \"user\"\nparts {\n  text: \"really?\"\n}\n\n",
  "gen_ai.completion.0.role": "assistant",
  "gen_ai.completion.0.content": "Oops!  My apologies. 2 + 2 = 4.  I'm still under development and learning to perform these calculations correctly.\n"
}
[Screenshot of the trace UI showing the single combined prompt attribute, 2025-01-14]
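
The role: "user" / parts { ... } text inside the combined attribute is the protobuf text-format dump that Python produces when a Content object is interpolated into a string, which appears to be what the instrumentation does (see the snippet in the additional context below). An illustrative check:

from vertexai.generative_models import Content, Part

# Interpolating a Content object into a string yields the protobuf
# text-format dump, matching the combined attribute value above.
msg = Content(role='user', parts=[Part.from_text("What's 2+2?")])
print(f"{msg}\n")
# Expected output (roughly):
# role: "user"
# parts {
#   text: "What's 2+2?"
# }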

🤖 Python Version

No response

📃 Provide any additional context for the Bug.

def _set_input_attributes(span, args, kwargs, llm_model):
    if should_send_prompts() and args is not None and len(args) > 0:
        prompt = ""
        for arg in args:
            if isinstance(arg, str):
                prompt = f"{prompt}{arg}\n"
            elif isinstance(arg, list):
                for subarg in arg:
                    prompt = f"{prompt}{subarg}\n"
        _set_span_attribute(
            span,
            f"{SpanAttributes.LLM_PROMPTS}.0.user",
            prompt,
        )
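
For reference, a sketch of how this loop could emit one attribute pair per message instead of a single concatenated string (an illustration only, not an actual patch; it assumes the list items are vertexai Content objects exposing .role and .parts with text parts, and reuses the module's existing should_send_prompts / _set_span_attribute / SpanAttributes helpers):

def _set_input_attributes(span, args, kwargs, llm_model):
    if should_send_prompts() and args is not None and len(args) > 0:
        index = 0
        for arg in args:
            if isinstance(arg, str):
                # A bare string has no explicit role; record it as a user message.
                _set_span_attribute(
                    span, f"{SpanAttributes.LLM_PROMPTS}.{index}.role", "user"
                )
                _set_span_attribute(
                    span, f"{SpanAttributes.LLM_PROMPTS}.{index}.content", arg
                )
                index += 1
            elif isinstance(arg, list):
                for content in arg:
                    # Keep each message's own role and join its text parts,
                    # rather than stringifying the whole protobuf.
                    text = "".join(part.text for part in content.parts)
                    _set_span_attribute(
                        span, f"{SpanAttributes.LLM_PROMPTS}.{index}.role", content.role
                    )
                    _set_span_attribute(
                        span, f"{SpanAttributes.LLM_PROMPTS}.{index}.content", text
                    )
                    index += 1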

👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

None

Copy link

dosubot bot commented Jan 14, 2025

Hey there, @alexmojaki! I'm Dosu, a bot here to help you with bug fixes, questions, and contributions. I'll assist you while we wait for a human maintainer.

The issue you're encountering is due to the _set_input_attributes function in the VertexAI Instrumentation, which combines multiple messages into a single string. This function iterates over the args parameter, concatenating each message or sub-message with a newline character, resulting in a single prompt string. This is why the attributes are not separated as expected [1].

To address this, you would need to modify the logic in _set_input_attributes to handle each message separately, ensuring that each message is stored as a distinct attribute rather than being combined into one.


dosubot added the bug label on Jan 14, 2025
@galkleinman
Contributor

Thanks for reporting @alexmojaki! We'll prioritize the fix internally.

@kowshik24
Contributor

@galkleinman I have created a PR; could you please check it?
