feat: add instrument-app skill for orq.ai observability #12
Conversation
New skill that guides users through instrumenting LLM applications with orq.ai tracing — covering AI Router proxy, OpenTelemetry integrations, the @Traced decorator, and trace enrichment with metadata. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
RES-545 Create instrument-app skill for orq.ai observability
Goal: Create an instrument-app skill inspired by Langfuse's instrumentation skill, adapted for orq.ai's two integration modes.
What Was Done: Created 4 files:
Updated README.md with Key Design Decisions
What's Left
The claude-plugins repo is still a work in progress, so the mention is deferred until it's ready. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Rename skill to better reflect what it does. Update README skills table and add "Instrument an Existing App" workflow example. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review Findings: Bugs
- Fix import: opentelemetry.instrumentation.openai → openinference.instrumentation.openai - Rename heading from "Instrument App" to "Setup Observability" Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
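The import fix above points at the OpenInference package rather than an `opentelemetry.instrumentation.*` path. A minimal sketch of the corrected usage, with a guarded import so it degrades gracefully when `openinference-instrumentation-openai` is not installed (the instrumentor class name is the one the OpenInference project ships; everything else here is illustrative):

```python
# Corrected import path: the OpenAI auto-instrumentor the skill uses lives in
# the openinference package, not under opentelemetry.instrumentation.openai.
try:
    from openinference.instrumentation.openai import OpenAIInstrumentor
    # Patches the openai client so calls emit OpenTelemetry spans.
    OpenAIInstrumentor().instrument()
except ImportError:
    # pip install openinference-instrumentation-openai
    OpenAIInstrumentor = None
```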
All good now, can be merged.
PR Review Summary
PR: feat: add instrument-app skill for orq.ai observability
Critical Issues (3)
Important Issues (5)
Suggestions (6)
Strengths
Recommended Action
🤖 Generated with Claude Code
- Fix @Traced import path: orq_ai_sdk.tracing → orq_ai_sdk.traced (verified against official docs)
- Fix LangChain model format: gpt-4o → openai/gpt-4o (provider/model format)
- Replace hardcoded service.name=my-app with <your-app-name> placeholder
- Soften unsubstantiated "10x more metadata" claim
- Add warning about overwriting existing OTEL config (Datadog, Jaeger, etc.)
- Add auto-formatter guidance (isort/noqa) for critical import ordering
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
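The LangChain fix above enforces orq.ai's `provider/model` string convention (e.g. `openai/gpt-4o`). A tiny stdlib helper — hypothetical, not part of the orq SDK, the real routing happens server-side — just to make the naming convention concrete:

```python
def split_model(model: str) -> tuple[str, str]:
    """Split an orq.ai-style 'provider/model' string into its two parts.

    Illustration only: encodes the naming convention the review fix
    enforces (e.g. 'openai/gpt-4o', not a bare 'gpt-4o').
    """
    provider, sep, name = model.partition("/")
    if not sep or not provider or not name:
        raise ValueError(f"expected '<provider>/<model>', got {model!r}")
    return provider, name

print(split_model("openai/gpt-4o"))  # -> ('openai', 'gpt-4o')
```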
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Review Finding:
Baukebrenninkmeijer left a comment:
Some tools don't have the right name, but looks good otherwise imo.
Review: Context Gaps for AI Coding Assistants
Evaluated whether the skill files provide enough context for an AI coding assistant to generate working code without hallucinating. The AI Router happy path is solid, but several gaps would cause failures: 1.
Review Finding: Node.js/TypeScript coverage gap in
… context
- Replace non-existent `get_evaluator_llm`/`get_evaluator_python` with `evaluator_get` across 4 skills
- Add SDK init prerequisite to @Traced guide (silent failure without Orq client)
- Document capture_input/capture_output defaults as True (PII risk)
- Add missing `import os` to framework-integrations code snippets
- Explain Control Tower column in framework integrations table
- Scope @Traced and OTEL examples as Python-only, add Node.js pointers
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
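The capture_input/capture_output note above matters because both default to True. A toy stdlib-only decorator — not the orq SDK's `@traced`, just an illustration of the semantics, with `TRACE_LOG` standing in for a trace backend — shows why the default can leak PII into trace storage and how opting out looks:

```python
import functools

TRACE_LOG: list = []  # stand-in for an exporter / trace backend


def traced(capture_input: bool = True, capture_output: bool = True):
    """Toy re-implementation of a @traced-style decorator (illustration only)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": fn.__name__}
            if capture_input:   # default True: call arguments land in the trace
                span["input"] = {"args": args, "kwargs": kwargs}
            result = fn(*args, **kwargs)
            if capture_output:  # default True: return value lands in the trace
                span["output"] = result
            TRACE_LOG.append(span)
            return result
        return wrapper
    return decorator


@traced(capture_input=False)  # opt out when arguments may contain PII
def lookup_user(email: str) -> str:
    return f"profile-for-{email}"


lookup_user("alice@example.com")
```

With the defaults, `alice@example.com` would have been written into the span; disabling `capture_input` keeps it out while the span name and output are still recorded.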
Re-review: Remaining Issues (after latest commits)
Most feedback from earlier comments has been addressed. Three items still open: 1.
- Replace non-existent get_evaluator_llm/get_evaluator_python with evaluator_get in mcp-tools tests
- Remove compare-agents test scenarios (should ship with compare-agents PR, not this one)
- Remove compare-agents from Critical Files list
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
- `setup-observability` skill that guides users through instrumenting LLM applications with orq.ai tracing
- `@traced` decorator guide, baseline checklist
- Closes RES-545
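For the OpenTelemetry path the skill covers, the standard OTLP exporter environment variables are the usual wiring. Only the variable names below are standard OTel; the endpoint and header values are placeholders — check the orq.ai docs for the real ones — and `<your-app-name>` echoes the placeholder fix from the review:

```shell
# Standard OTLP exporter env vars; values are placeholders, not orq.ai specifics.
export OTEL_EXPORTER_OTLP_ENDPOINT="<orq-otlp-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <ORQ_API_KEY>"
export OTEL_SERVICE_NAME="<your-app-name>"
```

Note the review warning above: exporting these can clobber an existing OTEL setup (Datadog, Jaeger, etc.), so the skill should call that out before suggesting them.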
Test plan
- Run `claude --plugin-dir .` and confirm the `setup-observability` skill appears
- Check `@traced` code examples against the latest orq.ai Python SDK