---
marp: true
theme: tessl-cheatsheet
paginate: false
size: 1400px 1200px
---
## Agent

An AI-powered tool that autonomously writes, edits, and executes code. Unlike chatbots, agents take actions: reading files, running commands, and iterating on solutions.
## LLM (Large Language Model)

The AI brain powering the agent, trained on code and text to understand and generate code.
## Model

A specific version of an LLM with defined capabilities:

```
claude-sonnet-4-20250514
gpt-4o-2024-08-06
gemini-2.5-pro
```
## Context Window

The maximum number of tokens the model can process at once. Larger windows fit more code context. Ranges from 8K to over 1M tokens.
## Tokens

How LLMs measure text: ~4 characters ≈ 1 token. Code typically uses more tokens than prose.
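The ~4 characters per token rule of thumb can be turned into a quick estimator (a rough sketch; `estimate_tokens` is a hypothetical helper, not a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    # Heuristic only: ~4 characters per token; real tokenizers vary by model.
    return max(1, len(text) // 4)

prose = "The quick brown fox jumps over the lazy dog."
print(estimate_tokens(prose))  # 11 by this heuristic (44 characters)
```

For exact counts, use the tokenizer that ships with the model's API.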
## Agent Capabilities

| Capability | Description |
|---|---|
| Read | Access files in codebase |
| Write | Create and modify files |
| Execute | Run shell commands, tests |
| Search | Find code patterns |
| Browse | Fetch web documentation |
| MCP | Connect to external tools |
## System Prompt

Hidden instructions defining the agent's behavior. Sets tone, capabilities, and constraints.
## Tool Calling

How agents take actions: the model outputs structured calls that the runtime executes.
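A minimal sketch of the runtime side of tool calling, assuming the model emits JSON calls of the shape `{"name": ..., "arguments": ...}` (the registry and function names here are hypothetical):

```python
import json

def read_file(path: str) -> str:
    # One concrete tool the runtime exposes to the model.
    with open(path) as f:
        return f.read()

# Hypothetical registry mapping tool names to implementations.
TOOLS = {"read_file": read_file}

def execute_tool_call(raw_call: str) -> str:
    # The model outputs a structured call; the runtime parses and runs it.
    call = json.loads(raw_call)
    return TOOLS[call["name"]](**call["arguments"])
```

Real agent runtimes add schema validation, sandboxing, and error reporting back to the model.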
## Agent Loop

The core cycle that powers autonomous work:

```
think → act → observe → repeat
```
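The cycle can be sketched as a loop over hypothetical `model` and `tools` callables (a simplification; real agents track richer state and handle errors):

```python
def agent_loop(task, model, tools, max_steps=10):
    history = [task]
    for _ in range(max_steps):
        action = model(history)                  # think: model picks the next action
        if action["name"] == "done":             # model decides the task is finished
            return action["arguments"]["result"]
        observation = tools[action["name"]](**action["arguments"])  # act
        history.append(observation)              # observe, then repeat
    return None                                  # give up after max_steps
```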
## Context Management

Fitting relevant code into the limited context window via summarization, chunking, and semantic search.
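As one illustration, chunking can be as simple as splitting a file on blank lines under a size budget (a naive sketch using characters as a token proxy; `chunk_file` is a hypothetical name):

```python
def chunk_file(source: str, max_chars: int = 2000) -> list:
    # Split on blank lines so each chunk stays under the context budget.
    chunks, current = [], ""
    for block in source.split("\n\n"):
        if current and len(current) + len(block) > max_chars:
            chunks.append(current)
            current = ""
        current += block + "\n\n"
    if current:
        chunks.append(current)
    return chunks
```

Production agents chunk on syntactic boundaries (functions, classes) and rank chunks by relevance instead of taking them in order.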
## Prompting Tips

- Be specific about what you want
- Provide relevant context upfront
- Break complex tasks into steps
- Let the agent ask clarifying questions
## Instruction Files

Project-specific instructions that persist across sessions:

```
CLAUDE.md     # Claude Code
GEMINI.md     # Gemini CLI
AGENTS.md     # OpenAI Codex
OPENCODE.md   # OpenCode
```
## MCP (Model Context Protocol)

A standard for connecting agents to external data sources and tools (databases, APIs, services).
## Workflow Patterns

| Pattern | Description |
|---|---|
| Plan-then-Execute | Create plan, get approval, implement |
| Iterative Refinement | Change, test, fix, repeat |
| Code Review Mode | Analyze without modifying |
## Popular Models

| Model | Provider | Strengths |
|---|---|---|
| Claude Sonnet/Opus | Anthropic | Coding, reasoning |
| GPT-4o / o1 | OpenAI | Broad knowledge |
| Gemini 2.5 | Google | Long context |
| DeepSeek R1 | DeepSeek | Open weights |
## Token Pricing

- Input tokens: What you send to the model
- Output tokens: What the model generates
- Cached tokens: Reused context (cheaper)
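These three rates combine into a simple cost formula (the per-million-token prices below are made-up placeholders, not any provider's real pricing):

```python
def estimate_cost(input_tokens, output_tokens, cached_tokens=0,
                  in_rate=3.00, out_rate=15.00, cache_rate=0.30):
    # Rates are $ per million tokens; cached input is billed at the cheaper cache rate.
    billed_input = (input_tokens - cached_tokens) * in_rate
    cached = cached_tokens * cache_rate
    output = output_tokens * out_rate
    return (billed_input + cached + output) / 1_000_000
```

At these placeholder rates, a 1M-input / 100K-output call costs $4.50 with no cache and $3.15 with half the input cached.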