Problem
According to this comment, CrateDB's llms-full.txt weighs in rather heavily, with a token usage on OpenAI of:
{"input": 212817, "output": 1139, "total": 213956}
Solution
Instead of feeding the full, uncompressed knowledge context, either serve individual elements on demand via MCP, or try to compress it.
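The on-demand idea can be sketched roughly like this (a minimal illustration with hypothetical helper names, not CrateDB's actual MCP server): split the knowledge file into sections keyed by heading, and return only the section a client asks for, instead of the full ~213k-token context.

```python
def split_sections(text: str) -> dict[str, str]:
    """Split a knowledge file into sections keyed by their '# ' headings."""
    sections: dict[str, str] = {}
    title, lines = "preamble", []
    for line in text.splitlines():
        if line.startswith("# "):
            # Flush the previous section before starting a new one.
            if lines:
                sections[title] = "\n".join(lines).strip()
            title, lines = line[2:].strip(), []
        else:
            lines.append(line)
    if lines:
        sections[title] = "\n".join(lines).strip()
    return sections


def serve_section(sections: dict[str, str], name: str) -> str:
    """Return one section on demand -- a small fraction of the full context."""
    return sections.get(name, f"Unknown section: {name!r}")


# Toy example: a client asking for "Query" gets a few tokens, not 200k+.
knowledge = "# Install\npip install cratedb\n# Query\nSELECT 1;"
sections = split_sections(knowledge)
print(serve_section(sections, "Query"))
```

The same section map could back an MCP tool that lists available section names and fetches one by name, keeping each model turn's input tokens proportional to what is actually needed.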