A Project Scaffold for AI Vibecoding
"You do not need to be a professional software engineer to keep AI work durable, traceable, and handoff-friendly."
English README • Architecture • Remember Boundary • Usage Guide • Chinese Introduction (中文介绍)
A daemonless, project-local scaffold for AI vibecoding.
Evo-Lite is a project-local scaffold for Agentic Workflows, especially useful for people who come from automation, controls, hardware, testing, ops, or other non-pure-software backgrounds but still want to build real projects with AI. Instead of asking you to run an external RAG stack, it keeps rules, explicit context, implicit memory, and CLI tooling inside the repo, so an AI agent inherits not only code, but also discipline.
Important
Current structure: Evo-Lite uses a two-layer model.
- `AGENTS.md` / `CLAUDE.md` are generated at the project root as host adapters for Codex and Claude Code. `.claude/commands/` provides thin Claude Code-native command wrappers without replacing the canonical workflow semantics in `.agents/workflows/`.
- Generated asset rule: these host adapter files are generated Evo-Lite assets and may be overwritten during template upgrades; the canonical long-term semantic source of truth remains `.agents/` and `.evo-lite/`.
- Workflow protocols such as `/commit`, `/mem`, and `/wash` live in `.agents/workflows/`.
- Executable behavior lives in `.evo-lite/cli/` via `memory.js` and the generated `mem` wrappers (`./.evo-lite/mem` on Unix/Bash, `.\.evo-lite\mem.cmd` on Windows PowerShell/CMD). After upgrading an existing project, run `node .evo-lite/cli/memory.js verify` before continuing work.
Tip
Codex menu expectation:
In Codex, Evo-Lite currently provides semantic workflows, not native commands automatically registered into the navigation menu or slash-command picker.
So you should usually not expect to see /evo, /commit, /mem, or /wash appear as built-in menu items.
The normal Codex usage is to ask for the workflow explicitly in natural language, for example:
- "Run Evo-Lite's `/evo` workflow"
- "Close this change using the `/commit` protocol"
- "Run the lightweight `/mem` flow and only write the next focus"

Claude Code's `.claude/commands/` wrappers do not imply that Codex will expose the same native menu entries.
Evo-Lite now uses a canonical semantics layer + host adapter layer model:
- Canonical semantics layer: `.agents/` and `.evo-lite/`. This is where the actual workflows, rules, state machine, and long-term memory flow are defined.
- Codex host adapter layer: root-level `AGENTS.md`. This is the Codex-facing entry summary, not a second canonical rule tree.
- Claude Code host adapter layer: root-level `CLAUDE.md` plus `.claude/commands/`. These are Claude-facing entrypoints and thin wrappers, not a replacement for the canonical Evo-Lite source.
The intended mental model is:
- `AGENTS.md` / `CLAUDE.md` / `.claude/commands/`: the host-facing navigation surface
- `.agents/` / `.evo-lite/`: the actual Evo-Lite semantics and runtime truth
That is why the host adapter assets are allowed to be regenerated during upgrades, while .agents/ and .evo-lite/ remain the canonical sources of truth.
- Codex: primarily uses `AGENTS.md` + `.agents/workflows/` + the local CLI. By default, `/evo`, `/commit`, `/mem`, and `/wash` should be treated as semantic workflow names, not guaranteed host-native menu commands.
- Claude Code: in addition to `CLAUDE.md`, it may also consume `.claude/commands/` as thin command wrappers. That makes Claude Code closer to a command-file model, but it still does not replace `.agents/workflows/` as the canonical semantic source.
If you are not a full-time software engineer and rely on AI to turn ideas, domain knowledge, and field problems into small tools or products quickly, the hardest part is usually not shipping version one. It is making sure the AI can still continue the project tomorrow without losing the thread.
As AI coding assistants become increasingly powerful, we often encounter these engineering-level pain points:
- Long-Tail Memory Loss: AI loses context during long conversations, forgetting critical bug fixes from yesterday.
- People-Pleasing Personality: AI assistants lack professional opinions: they add five random npm dependencies just because you asked for a simple feature, or mix CommonJS and ES Modules at random.
- Heavy Management Costs: Most RAG solutions for memory require running Docker or microservices. We need it simple!
- Host Pollution: No one wants to pollute a clean Java or Rust project root directory just for an AI script.
Evo-Lite is not trying to turn you into a full software team. It is trying to give you a project skeleton that AI can keep working with over time.
- 🏗️ Governance via Rules (`.agents/rules`)
  Protocols are no longer just suggestions in a chat. They live as project assets, can be versioned, reviewed, and upgraded, and serve as durable constraints for the next agent taking over.
- 🌐 In-Tree RAG (Pure Local Vector Engine)
  Built on `sqlite-vec`, with the whole runtime living under `.evo-lite/`. No daemon, no separate memory service, no extra deployment tier.
- 🧠 Dual-Stage Retrieval
  - Embedding for coarse candidate retrieval
  - Reranker for better semantic ordering

  Both are designed around local ONNX inference, with downgrade paths when the environment is constrained.
- 🛡️ Explicit + Implicit Memory
  - Explicit state machine (`active_context.md`) for focus, backlog, and trajectory handover
  - Implicit memory store (`memory.db`, `raw_memory`, `vect_memory`) for long-term searchable recall and rebuildable archives
- 🛠️ Rebuildable Archive Pipeline
  `archive`, `sync`, and `rebuild` keep memory maintainable over time, instead of turning it into a one-shot write-only cache.
- ⚓ Space-Time Traceability (Git Anchoring)
  `remember` writes are stamped with `[Time]` and Git `[Commit Hash]`, while `archive`/`track` artifacts keep their traceability in frontmatter plus structured Markdown sections. The goal is durable provenance, without pretending every memory path uses the exact same envelope.
- 🔄 Upgradeable Runtime
  Existing projects can be re-initialized and verified without treating the first scaffold as the only valid moment of setup.
- ⚡ Workflow Protocols + CLI Commands
  - Workflow layer: `/evo`, `/commit`, `/mem`, `/wash`
  - Execution layer: `remember`, `recall`, `export`, `import`, `archive`, `sync`, `rebuild`, `context`
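The dual-stage retrieval idea above can be illustrated with a minimal sketch. This is plain JavaScript with toy three-dimensional vectors, not the actual `sqlite-vec`/ONNX pipeline, and the function names (`coarseRetrieve`, `rerank`) are hypothetical:

```javascript
// Stage 1 narrows candidates with cheap embedding similarity;
// Stage 2 reorders them with a finer scorer. Toy vectors stand in
// for real ONNX embeddings, and keyword overlap stands in for a
// real cross-encoder reranker.

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const memories = [
  { text: "login API failed: proxy strips XYZ header", vec: [0.9, 0.1, 0.0] },
  { text: "switched build to ESM only",                vec: [0.1, 0.9, 0.0] },
  { text: "auth relies on XYZ header, not ABC cookie", vec: [0.8, 0.2, 0.1] },
];

// Stage 1: coarse retrieval by embedding similarity (top-k candidates).
function coarseRetrieve(queryVec, k) {
  return memories
    .map(m => ({ ...m, score: cosine(queryVec, m.vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Stage 2: rerank the surviving candidates against the query text.
function rerank(queryText, candidates) {
  const words = new Set(queryText.toLowerCase().split(/\W+/));
  return candidates
    .map(c => ({
      ...c,
      rerankScore: c.text.toLowerCase().split(/\W+/)
        .filter(w => words.has(w)).length,
    }))
    .sort((x, y) => y.rerankScore - x.rerankScore);
}

const queryVec = [0.85, 0.15, 0.05]; // pretend-embedded query
const ranked = rerank("why did the login API fail", coarseRetrieve(queryVec, 2));
console.log(ranked[0].text); // → "login API failed: proxy strips XYZ header"
```

The design point is that the expensive scorer only ever sees the small candidate set, which is what keeps millisecond-level local inference realistic on pure CPU.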
Evo-Lite currently uses an explicit dual-lane memory model:
- `active_context.md`: the live state panel, only for `META`, `FOCUS`, `BACKLOG`, `TRAJECTORY`, and other "what is happening right now" signals.
- `archive` / `track`: long-lived structured assets for closed-loop bug reviews, implementation conclusions, architecture decisions, and reusable project knowledge.
- `remember`: a lightweight implicit recall cache for quick searchable hints, not the primary rebuild-guaranteed closure path.
The intended mental model is:
- `active_context`: cockpit
- `archive`: black box
- `context track`: the only compliant transition bridge
- Work in progress lives in `active_context.md`
- Closed-loop progress is persisted through `.\.evo-lite\mem.cmd context track ...`
- Long-term experience belongs in structured archives under `raw_memory/`
- Lightweight searchable hints may use `remember`
The default main lane is:
active_context -> context track -> archive
This means:
- do not keep large retrospectives inside `active_context.md`
- do not manually duplicate records from `active_context.md` into the archive
- if `track` did not succeed, the loop is not considered reliably closed
This is a Node.js CLI tool. You can run it against any empty directory, or at the same level as an existing project:
Choose between a one-time execution or global installation.
Option A: Temporary Run (Ideal for sharing)
```bash
npx create-evo-lite ./MyAwesomeProject
```

Option B: Global Installation (Recommended for daily use)
```bash
# 1. Clone the source and link it globally
cd create-evo-lite
npm link

# 2. Use it as a native command anywhere!
create-evo-lite ./MyAwesomeProject
```

During setup, Evo-Lite initializes a local ONNX-based runtime and keeps the memory stack inside `.evo-lite/`. No separate service tier is required.
Tip
Built-in Dual-Core Engines:
- Embedding: `Xenova/bge-small-zh-v1.5` (millisecond inference even on pure CPU)
- Reranker: `Xenova/bge-reranker-base` (quantized for minimal memory footprint)
After setup, the first thing to run is:
```bash
node .evo-lite/cli/memory.js verify
```

This checks the memory runtime, model availability, context freshness, offline-memory residue, and whether the current workspace is still safe to hand over.
If this is your first time using AI as a real project partner, do not try to learn every command at once. Start with this minimal loop:
- Run `/evo` so the agent loads context and performs a self-check.
- Tell the AI your single immediate goal in plain language.
- After finishing one small closed loop, run `/commit` so the result becomes trajectory plus archive.
- When you are ready to stop, run `/mem` for low-frequency handover.
If `verify` reports archive or rebuild issues, follow the concrete next-step commands printed by the CLI instead of guessing.
In a healthy setup, the /evo first response should immediately tell you: health status, current focus, current risks, and the most actionable next step.
Likewise, when you run /wash or rebuild, the closing message should clearly tell you whether damaged archives remain, what was actually rebuilt, which memories are outside the rebuild guarantee, and whether you should continue coding or first repair the remaining issues.
When a small feature or bug fix is complete, enter the command:
/commit
/commit is a workflow contract, not magic by itself. In practice it should drive the agent to:
- complete the real `git commit`
- run `.\.evo-lite\mem.cmd context track --mechanism="..." --details="..." [--resolve="xxxx"]`
- convert the code action into trajectory, archive, and backlog updates
- end with an explicit closure summary: whether the commit is done, whether closure is complete, whether backlog was resolved, and what the next step should be
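The closure summary at the end of a `/commit` run can be thought of as a small checklist over exactly those four questions. A hypothetical sketch, with illustrative field names (the real workflow is whatever `.agents/workflows/commit.md` defines):

```javascript
// Hypothetical /commit closure checklist. The field names are
// illustrative, not part of the real protocol.
function summarizeClosure({ commitDone, trackSucceeded, backlogResolved, nextStep }) {
  return [
    `commit: ${commitDone ? "done" : "MISSING"}`,
    // Per the dual-lane rule: if `track` did not succeed, the loop
    // is not considered reliably closed.
    `closure: ${commitDone && trackSucceeded ? "complete" : "INCOMPLETE"}`,
    `backlog: ${backlogResolved ? "resolved" : "still open"}`,
    `next: ${nextStep}`,
  ].join("\n");
}

console.log(summarizeClosure({
  commitDone: true,
  trackSucceeded: true,
  backlogResolved: false,
  nextStep: "tighten the upgrade notes in README",
}));
```

The useful property is that closure is a conjunction: a commit without a successful `track` still reports `INCOMPLETE`, which matches the rule that `track` is the only compliant transition bridge.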
When the iteration is complete and you need to end the session:
/mem
/mem is the low-frequency handover protocol for version bumps, release tagging, and explicit session suspension.
Ideally, /mem should also end in a stable summary: whether backlog is truly clear, whether the next-session focus was written, whether a version snapshot was created, and whether the user should rest, sync, or first finish the remaining handover work.
Whenever you need, you or your AI agent can query the memory directly:
```bash
# Recall historical struggle
./.evo-lite/mem recall "Why did the login API integration fail last time?"

# Imprint a new memory
./.evo-lite/mem remember "The user verification relies on XYZ header, do not use ABC cookie anymore, and this only broke in CI after the proxy layer was introduced."

# Create a structured archive stub
./.evo-lite/mem archive "Core conclusions from the login pipeline refactor"

# Add a new backlog item into active_context.md
./.evo-lite/mem context add "Tighten the upgrade notes in README"

# Run a self-check to see if the model is actually loaded
./.evo-lite/mem verify

# Rebuild the structured archive path when raw_memory needs to be re-indexed
# Note: this does not guarantee preservation of remember-only cache entries stored only in memory.db
node .evo-lite/cli/memory.js rebuild
```

When Evo-Lite releases a new version (e.g., introducing new `memory.js` skills), simply run the following in your existing project's root directory:
```bash
npx create-evo-lite@latest ./ --yes
```

The upgrade flow will:
- preserve the existing `active_context.md`
- refresh `.agents/` and `.evo-lite/cli/` templates
- attempt migration / washing paths when an older memory store is detected
After upgrading, run:
```bash
node .evo-lite/cli/memory.js verify
```

```text
MyAwesomeProject/                      <-- (Your Project)
├── .agents/                           <-- (Agent Governance Area)
│   ├── rules/                         <-- Hard Constraints (Core Rules)
│   │   ├── evo-lite.md                    - Boot Sequence Interceptor
│   │   ├── project-archive.md             - Archiving Protocol
│   │   └── memory-distillation.md         - Quality Gatekeeper
│   └── workflows/                     <-- Slash Commands
│       ├── evo.md                         - /evo Script
│       ├── commit.md                      - /commit Script
│       ├── mem.md                         - /mem Script
│       └── wash.md                        - /wash Script
│
└── .evo-lite/                         <-- (Memory & Dependency Sandbox)
    ├── cli/                               - Vector DB CLI scripts
    ├── mem.cmd                            - CLI Entry (Win)
    ├── mem                                - CLI Entry (Unix)
    ├── active_context.md                  - Explicit Progress Sheet
    ├── memory.db                          - Implicit Vector Database
    ├── raw_memory/                        - Structured source archives
    ├── vect_memory/                       - Vectorized archive markers
    └── .cache/                            - Local model cache
```
Why keep this as project-local infrastructure instead of turning it into another heavyweight external system?
In the age of AI, context is expensive, and mental clarity is fragile. Traditional RAG solutions tend to be "heavy," requiring Docker, microservices, and complex sync logic. This not only clutters your project but also adds a significant maintenance burden.
The core philosophy of Evo-Lite is "use project-local order to resist AI context drift":
- Zero-Intrusion is True Respect: a good tool should be like a ghost, present only when summoned. That is why we insist on a daemonless architecture.
- Sandboxing as the Last Line of Defense: we would rather increase the scaffold size slightly (with offline fallbacks) than let a developer's memory fail just because they lack a C++ compiler.
- Memory must be rebuildable, not merely writable: durable AI memory is not about recording one event once; it is about being able to migrate, re-vectorize, verify, and keep using it later.
"Humans hold reverence for business and code assets; Evo-Lite is the golden thread that places the necessary constraints on AI."