Research artifact, paper, and frozen evaluation outputs for selective revocation and replay after persistent indirect prompt injection in memory-augmented LLM agents.
Updated Mar 11, 2026 - Python
Modular API for a Recurrent Language Model (RLM) with LangGraph, supporting text preprocessing, sequential inference, structured REST responses, and vector store integration for scalable, context-aware, multi-step text generation pipelines.