
Conversation


@KrshnKush commented Jan 29, 2026

Please ensure you have read the contribution guide before creating a pull request.

Link to Issue or Description of Change

  1. Link to an existing issue (if applicable):
  2. Or, if no issue exists, describe the change:

Problem:
When using LiteLLM with OpenAI/Azure models, the fallback user message "Handle the requests as specified in the System Instruction." is injected into the conversation after tool calls. This phrasing triggers Azure OpenAI's content management policy (jailbreak detection), resulting in:

openai.BadRequestError: ... 'code':'content_filter','innererror': {'code':'ResponsibleAIPolicyViolation','content_filter_result': {'jailbreak': {'filtered': True,'detected': True} ...

The flow is: user message → model tool call → tool response (function_response) → _append_fallback_user_content_if_missing adds the fallback text → that text is sent as a user message and is flagged as prompt injection.
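
For reference, the failing request looks roughly like the sketch below (field names follow the OpenAI chat-completions schema; the exact payload ADK builds may differ):

  # Illustrative messages list after a tool round-trip; the final entry is
  # the injected fallback that Azure's jailbreak classifier flags.
  messages = [
      {"role": "user", "content": "What is 12 * 7?"},
      {
          "role": "assistant",
          "tool_calls": [{
              "id": "call_1",
              "type": "function",
              "function": {
                  "name": "calculator",
                  "arguments": '{"expression": "12 * 7"}',
              },
          }],
      },
      {"role": "tool", "tool_call_id": "call_1", "content": "84"},
      {
          # Appended by _append_fallback_user_content_if_missing:
          "role": "user",
          "content": "Handle the requests as specified in the System Instruction.",
      },
  ]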

Solution:
Replace the fallback text with neutral wording that does not trigger content filters: "Handle the incoming request according to the provided requirements." This keeps the fallback behavior for backends that need a user message with content, while avoiding the jailbreak/self-referential phrasing that causes OpenAI/Azure to reject the request. Changes are limited to the two occurrences in _append_fallback_user_content_if_missing() in src/google/adk/models/lite_llm.py (Part.from_text inline append and the new Content role="user" branch).

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Please include a summary of passed pytest results.

Manual End-to-End (E2E) Tests:

  1. Install google-adk (e.g. 1.22.1 or later) with this change.
  2. Create an agent with at least one tool using LiteLLM with an OpenAI or Azure OpenAI model (see the sketch below).
  3. Run a conversation that triggers a tool call (e.g. ask a math question if the tool is a calculator).
  4. Verify that no ContentPolicyViolationError / content_filter error occurs.
  5. Optionally inspect the request payload to confirm the fallback user message now uses "Handle the incoming request according to the provided requirements." instead of the previous text.
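
For step 2, a minimal setup could look like the following sketch (hedged: the Agent and LiteLlm imports follow the public ADK surface, while the agent name, model string, and calculator tool are placeholders):

  from google.adk.agents import Agent
  from google.adk.models.lite_llm import LiteLlm

  def calculator(expression: str) -> str:
      """Evaluates a basic arithmetic expression such as "12 * 7"."""
      return str(eval(expression))  # demo only; never eval untrusted input

  root_agent = Agent(
      name="calc_agent",
      model=LiteLlm(model="azure/<your-deployment>"),  # placeholder model
      instruction="Use the calculator tool for any arithmetic.",
      tools=[calculator],
  )
  # Asking "What is 12 * 7?" forces a tool call; before this fix, the
  # follow-up request after the tool response failed with content_filter.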

Please provide logs or screenshots to help reviewers better understand the fix.

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.

Additional context


Code changes (for reference)

File: src/google/adk/models/lite_llm.py

  1. Around lines 491–494 (inline append):
    Replace:
    text="Handle the requests as specified in the System Instruction."
    With:
    text="Handle the incoming request according to the provided requirements."

  2. Around lines 502–505 (new Content branch):
    Replace:
    text=(
    "Handle the requests as specified in the System"
    " Instruction."
    )
    With:
    text="Handle the incoming request according to the provided requirements."

@gemini-code-assist
Contributor

Summary of Changes

Hello @KrshnKush, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical issue where a default fallback user message, injected by LiteLLM when interacting with OpenAI/Azure models after tool calls, was erroneously triggering content filter violations. By replacing the original, problematic phrasing with a more neutral alternative, the change ensures that tool-enabled conversations can proceed without interruption from jailbreak detection, thereby improving the robustness and reliability of the system.

Highlights

  • Content Filter Bypass: Replaced a specific fallback user message in LiteLLM that triggered Azure OpenAI's content filters (jailbreak detection) with a neutral phrase.
  • Backward Compatibility: The change is minimal and backward-compatible, only updating the problematic string without altering API or control flow.
  • Test Update: Updated unit tests to reflect the new fallback message, ensuring the fix is correctly verified.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot added the models label ([Component] Issues related to model support) on Jan 29, 2026
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request addresses an issue where a fallback user message was triggering content filters in Azure OpenAI. The fix involves replacing the problematic text with a more neutral phrase. The changes are applied consistently in src/google/adk/models/lite_llm.py and the corresponding unit tests are updated.

My main feedback is to define the new fallback message as a constant to improve maintainability and avoid duplicating the string literal in both the implementation and the tests. This will make future changes easier and less error-prone.

@adk-bot
Collaborator

adk-bot commented Jan 29, 2026

Response from ADK Triaging Agent

Hello @KrshnKush, thank you for your contribution!

Could you please provide the logs or screenshots from your manual end-to-end tests? This will help the reviewers to better understand and verify the fix.

Thank you!

@KrshnKush
Author

> Response from ADK Triaging Agent
>
> Hello @KrshnKush, thank you for your contribution!
>
> Could you please provide the logs or screenshots from your manual end-to-end tests? This will help the reviewers to better understand and verify the fix.
>
> Thank you!

Issue: [screenshot]

Working fine: [screenshot]

@ryanaiagent self-assigned this on Jan 29, 2026
@ryanaiagent
Collaborator

/gemini review

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request addresses an important issue where a fallback user message was triggering content filters. The approach of replacing the problematic text with a more neutral phrase and extracting it into a constant _FALLBACK_USER_CONTENT_TEXT is a good one. The changes in lite_llm.py and the corresponding test updates are well-implemented for the cases they cover. However, the fix appears to be incomplete. As detailed in my comment, the BaseLlm._maybe_append_user_content method, which is called by LiteLlm, still contains the old problematic string. This means the bug will persist when a request is made with an empty contents list. I've suggested adding a test case that would expose this bug and help ensure a complete solution.
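
A regression test along the lines suggested might look like the sketch below (hedged: _FALLBACK_USER_CONTENT_TEXT is the constant named above, while the LlmRequest import path and the _maybe_append_user_content signature are assumptions based on this thread):

  from google.adk.models.lite_llm import LiteLlm, _FALLBACK_USER_CONTENT_TEXT
  from google.adk.models.llm_request import LlmRequest

  def test_empty_contents_gets_neutral_fallback():
      llm = LiteLlm(model="openai/gpt-4o")
      llm_request = LlmRequest(contents=[])
      # Exercises the BaseLlm path the review flags; while
      # BaseLlm._maybe_append_user_content still carries the old string,
      # the second assertion fails and exposes the incomplete fix.
      llm._maybe_append_user_content(llm_request)
      assert llm_request.contents[-1].role == "user"
      assert llm_request.contents[-1].parts[0].text == _FALLBACK_USER_CONTENT_TEXT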

@ryanaiagent added the needs review label ([Status] The PR/issue is awaiting review from the maintainer) on Jan 29, 2026
@ryanaiagent
Collaborator

Hi @KrshnKush, thank you for your contribution! We appreciate you taking the time to submit this pull request. Your PR has been received by the team and is currently under review. We will provide feedback as soon as we have an update to share.

@ryanaiagent
Collaborator

Hi @wukath, can you please review this?


Labels

  • models: [Component] Issues related to model support
  • needs review: [Status] The PR/issue is awaiting review from the maintainer


Development

Successfully merging this pull request may close these issues.

Unintended user message injection breaks tool calling with LiteLLM + OpenAI/Azure
