CoreLink AI is a modular reasoning engine for evidence-grounded analytical tasks. It is designed for workflows where correctness depends on retrieving the right evidence, applying the right strategy, and producing auditable outputs instead of unconstrained model guesses.
The runtime combines adaptive retrieval, structured evidence extraction, deterministic compute, bounded model-controlled recovery, and explicit answerability policy. It is built for tool-using agents that need to search, compute, validate, and stop safely.
Most agent systems break in one of two ways:
- they over-trust model recall and answer without enough evidence
- they add tools, but lack policy around when to search, retry, compute, recover, or stop
CoreLink AI is built to close that gap. It favors evidence over recall, deterministic execution over free-form math, and typed recovery paths over unbounded loops.
- Adaptive retrieval strategies: The runtime selects and rotates retrieval strategies such as table-first, text-first, hybrid, and multi-document search based on question shape and runtime feedback.
- LLM-guided evidence arbitration: Models are used as bounded selectors over shortlisted candidates instead of as unconstrained answer generators.
- Structured evidence extraction: Retrieved material is normalized into typed evidence that can be validated and computed over.
- Deterministic compute first: Numeric answers are produced through deterministic logic whenever the evidence supports it.
- Lightweight capability acquisition for compute: When native deterministic compute is insufficient, the runtime can synthesize a small deterministic function, validate it in a constrained sandbox, and use it as a bounded fallback.
- Cross-task strategy journal: The runtime records strategy outcomes and uses recent success and failure patterns as priors for later tasks in the same process.
- Repair and regime mutation: When a path fails, the system can rotate strategies, widen the search regime, or restart from a different evidence path instead of repeating the same loop.
- Answerability policy: Failure answers are treated as a controlled terminal state, not as a casual fallback.
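To make the repair and answerability ideas above concrete, here is a minimal Python sketch, not the CoreLink AI API, with hypothetical names throughout: strategy rotation under a fixed attempt budget, with failure as an explicit terminal state rather than an endless retry loop.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Attempt:
    """One strategy attempt; answer is None when no grounded answer emerged."""
    strategy: str
    answer: Optional[str]

def solve(
    strategies: list[str],
    run_strategy: Callable[[str], Optional[str]],
    max_attempts: int = 3,
) -> tuple[str, list[Attempt]]:
    """Rotate strategies under a fixed budget; stop safely instead of looping."""
    history: list[Attempt] = []
    for strategy in strategies[:max_attempts]:
        result = run_strategy(strategy)
        history.append(Attempt(strategy, result))
        if result is not None:
            return result, history        # strong result: emit the final answer
    return "UNANSWERABLE", history        # controlled terminal state, never a guess

# Usage: a fake runner where only the hybrid strategy succeeds.
answer, trace = solve(
    ["table-first", "text-first", "hybrid"],
    lambda s: "42" if s == "hybrid" else None,
)
```

The key design point mirrored here is that exhausting the budget returns a typed failure value, so callers can distinguish "answered" from "declined to answer".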
At a high level, CoreLink AI follows this flow:
1. Plan the task
   - Parse the query into a semantic contract: what needs to be found, computed, compared, or aggregated.
2. Select a retrieval strategy
   - Choose the best initial strategy based on question shape, constraints, and prior journal outcomes.
3. Generate and shortlist candidates
   - Search for relevant documents, tables, or evidence units.
4. Arbitrate the evidence
   - Use bounded model selection to choose the best visible candidate set.
5. Extract structured evidence
   - Normalize the chosen material into a form suitable for deterministic reasoning.
6. Compute
   - Run native deterministic compute when supported.
   - If needed, synthesize a constrained compute function and validate it before use.
7. Validate
   - Check evidence fit, compute validity, and answerability policy.
8. Recover or finish
   - If the result is weak, rotate strategy or mutate the retrieval regime.
   - If the result is strong, emit the final answer.
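The steps above can be sketched as a linear pipeline. This is an illustrative toy with placeholder bodies and hypothetical names, not the actual implementation:

```python
def plan(query: str) -> dict:
    # Parse the query into a semantic contract (what to find or compute).
    return {"need": query}

def retrieve(contract: dict) -> list:
    # Choose a strategy and shortlist candidate evidence units.
    return ["candidate evidence"]

def arbitrate(candidates: list):
    # Bounded model selection over the visible candidate set only.
    return candidates[0]

def extract(chosen) -> dict:
    # Normalize the chosen material into typed evidence.
    return {"value": 2, "other": 3}

def compute(evidence: dict) -> int:
    # Deterministic logic first; synthesized fallbacks only after validation.
    return evidence["value"] + evidence["other"]

def validate(result) -> bool:
    # Check evidence fit, compute validity, and answerability policy.
    return result is not None

def run(query: str):
    contract = plan(query)
    evidence = extract(arbitrate(retrieve(contract)))
    result = compute(evidence)
    return result if validate(result) else None
```

In the real runtime a weak validation result routes back into strategy rotation rather than returning None, but the stage ordering is the same.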
The runtime is organized around five implementation boundaries:
- Problem formalization extracts the answer contract, task intent, benchmark overrides, and source bundle.
- Capability and evidence planning builds the tool registry, resolves a safe tool plan, and derives retrieval intent.
- Grounded reasoning curates context, invokes deterministic compute or tools, and normalizes evidence into auditable facts.
- Verification and repair reviews answer quality, detects missing evidence or contract gaps, and routes bounded revise cycles.
- Learning and observability records memory, execution traces, budget/cost summaries, and failure diagnostics.
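One plausible way to model these boundaries, shown here purely as an illustration with hypothetical names and signatures, is as stages that run in order over a shared state:

```python
from typing import Protocol

class Stage(Protocol):
    """Minimal interface a boundary could expose (illustrative only)."""
    name: str
    def run(self, state: dict) -> dict: ...

class ProblemFormalization:
    name = "problem_formalization"
    def run(self, state: dict) -> dict:
        # Extract the answer contract and task intent from the raw task.
        state["contract"] = {"intent": state["task"]}
        return state

class VerificationAndRepair:
    name = "verification_and_repair"
    def run(self, state: dict) -> dict:
        # Flag contract gaps so a bounded revise cycle can be routed.
        state["ok"] = "contract" in state
        return state

def execute(stages: list, task: str) -> dict:
    """Run the boundaries in order over a shared mutable state."""
    state: dict = {"task": task}
    for stage in stages:
        state = stage.run(state)
    return state
```

Keeping each boundary behind a narrow interface like this is what makes the traces and failure diagnostics in the last boundary straightforward to record.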
The editable diagram source is available at assets/architecture/architecture.mmd.
- Python 3.13+
- uv
- Git
- an OpenAI-compatible API key
Clone the repository and install dependencies:

```bash
git clone https://github.com/krishna-dhulipalla/CoreLink-AI.git
cd CoreLink-AI
uv sync
```

Create a local environment file:

```bash
cp .env.example .env
```

At minimum, set your provider credentials:

```bash
OPENAI_API_KEY=your_key_here
OPENAI_BASE_URL=your_base_url_if_needed
```

Start the server:

```bash
uv run python -m engine.a2a.server --host 127.0.0.1 --port 9009
```

Run the live smoke test:

```bash
uv run python scripts/run_live_engine_smoke.py
```

Run the test suite:

```bash
uv run pytest tests -q -p no:cacheprovider
```

CoreLink AI is configured through environment variables in .env.
Common settings include:
- Provider credentials
  - OPENAI_API_KEY
  - OPENAI_BASE_URL
- Model routing
  - solver, reviewer, arbitration, and compute-capability model overrides
- Runtime limits
  - tool-call budgets
  - revise budgets
  - context limits
- Optional behavior flags
  - policy switches for retrieval, repair, and bounded fallback paths
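As an example, a minimal .env might look like the following. Only the two provider variables are documented in this section; every commented-out name below is an illustrative placeholder, so consult .env.example for the actual keys.

```bash
# Provider credentials (documented above)
OPENAI_API_KEY=your_key_here
OPENAI_BASE_URL=your_base_url_if_needed

# Runtime limits -- illustrative names only, not real keys; see .env.example
# MAX_TOOL_CALLS=...
# MAX_REVISE_CYCLES=...
```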
The repository includes benchmark and smoke-test harnesses used to harden the runtime under document-heavy analytical tasks.
Example:
```bash
uv run python scripts/run_officeqa_regression.py --smoke
```

These evaluations are useful for regression testing, but CoreLink AI is not tied to a single benchmark. The architecture is intended to generalize to other evidence-grounded reasoning tasks.
```text
CoreLink-AI/
├── src/
│   ├── engine/
│   │   ├── a2a/       # Server and messaging layer
│   │   ├── mcp/       # MCP integration
│   │   ├── runtime/   # Shared runtime utilities
│   │   └── agent/     # Core reasoning engine
├── scripts/           # Smoke, eval, and maintenance scripts
├── tests/             # Unit and regression tests
├── assets/            # Static assets
├── pyproject.toml
└── Dockerfile
```
See LICENSE.