A2A Orchestrator agent fails with AWS Bedrock #6

@servicerpavanguard-eth

Description

Hello,

This is my first post on this forum, but I’m not new to DLAI. The A2A Protocol short course is clearly a winner in my book! I’ve never had such a clear understanding of A2A from a “short course”. So, thank you!

I’ve worked through all the exercises on my own, writing agents that, instead of invoking LLMs, invoke fully functional Langflow and Flowise workflows powered by SLMs on an Ollama backend. I’m happy to report that each of these agents is accessible via its own A2A server container.

SLMs have their limits, though, so I have been trying to build the Orchestrator agent using AWS Bedrock, and this is where I’m hitting a wall.

I’m able to use both ChatModel and AwsBedrockChatModel to instantiate a model and run inference. However, when I use that llm object within a RequirementAgent, I run into this error: “unsupported model or your request does not allow prompt caching”.
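For reference, here’s roughly what I’m doing (a minimal sketch, not my exact code: the model ID is a placeholder, and the import paths and the response.answer.text access are what I believe the current BeeAI Python API looks like, so they may differ in your version):

```python
import asyncio

from beeai_framework.agents.experimental import RequirementAgent
from beeai_framework.backend import ChatModel, UserMessage


async def main() -> None:
    # Direct inference works (same result whether I use the generic
    # factory below or AwsBedrockChatModel directly).
    llm = ChatModel.from_name("amazon_bedrock:meta.llama3-1-70b-instruct-v1:0")  # placeholder model ID
    out = await llm.create(messages=[UserMessage("Say hello")])
    print(out.get_text_content())

    # Wrapping the same llm in a RequirementAgent is where it fails:
    agent = RequirementAgent(llm=llm, tools=[])
    response = await agent.run("Route this task to the right downstream agent.")
    # -> raises "unsupported model or your request does not allow prompt caching"
    print(response.answer.text)


asyncio.run(main())
```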

To be clear, I’ve confirmed the models are accessible and supported. From the Bedrock documentation, it seems that prompt caching (aptly named) needs to be requested inside the prompt itself. Because of that, I’m unsure whether configuring UnlimitedCache or SlidingCache as ChatModelOptions will make things better, since those look like client-side response caches rather than Bedrock’s server-side prompt caching.
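For context, this is how I read the Bedrock docs: with the raw Converse API, caching is requested by embedding a cachePoint block inside the prompt itself, and models that don’t support caching reject any request that contains one, which would explain the error above. A minimal boto3 sketch of that understanding (the model ID is just a placeholder for a caching-capable model):

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-7-sonnet-20250219-v1:0",  # placeholder; must support prompt caching
    system=[
        {"text": "You are an orchestrator that routes tasks to downstream A2A agents."},
        {"cachePoint": {"type": "default"}},  # marks the system prompt above as cacheable
    ],
    messages=[{"role": "user", "content": [{"text": "Route this task."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```

If the framework injects cachePoint blocks like this on my behalf, that would explain why the RequirementAgent call fails while plain inference succeeds.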

Question(s)

  1. How do I package prompt caching instructions into the prompt of a RequirementAgent?
  2. If that isn’t possible, what is the fix for this issue?
  3. I looked for AWS Bedrock examples for the BeeAI framework but did not find any beyond what I’ve already done. Am I missing something?

Thank you!

Andy
