**File: `README.md`**
# MEE Node

MEE Node is the main node in the [Modular Execution Environment (MEE)](https://www.biconomy.io/post/modular-execution-environment-supertransactions) protocol. It issues cryptographically signed quotes for supertransactions and executes them across multiple chains.

## Table of contents

- [Overview](#overview)
- [Architecture](#architecture)
- [Prerequisites](#prerequisites)
- [Dependencies](#dependencies)
- [Redis](#redis)
- [Token Storage Detection Service](#token-storage-detection-service)
- [Quick start](#quick-start)
- [Configuration](#configuration)
- [Running the node](#running-the-node)
- [Docker](#docker)
- [API](#api)
- [Health and operations](#health-and-operations)
- [Further documentation](#further-documentation)
- [Contact](#contact)

## Overview

The node:

- **Quotes** user intents (supertransactions) and returns signed quotes with gas limits, deadlines, and fees.
- **Executes** signed quotes on-chain: it simulates, batches, and submits transactions via worker processes.
- Uses **Redis** for job queues (BullMQ), quote/userOp storage, and caching.
- Uses a **Token Storage Detection** service to resolve ERC20 balance storage slots for simulation.

## Architecture

- **Master process**: Initializes chains, RPC manager, gas manager, batcher, health checks, and spawns workers.
- **API workers** (cluster): Serve HTTP API (quote, execute, info, explorer).
- **Simulator workers** (threads, per chain): Process simulation jobs from the queue.
- **Executor workers** (threads, per chain): Process execution jobs from the queue.

Quote flow: **Quote API** → **Storage (Redis)** → **Simulator queue** → **Batcher** → **Executor queue** → **Chain RPC**.

See [docs/architecture.md](docs/architecture.md) for details.

## Prerequisites

- [Bun](https://bun.sh) (runtime and package manager)
- [Docker](https://www.docker.com) (optional, for Redis and token-storage service)
- [Rust toolchain](https://rustup.rs) (only if you build the token-storage-detection service from source)

## Dependencies

The node requires two external services to run.

### Redis

Redis is used for:

- **Job queues** (BullMQ): simulator and executor queues per chain
- **Storage**: quotes and userOps (by hash), and custom fields
- **Caching**: e.g. token slot detection, price feeds

**Configuration** (see [.env.example](.env.example)):

- `REDIS_HOST` (default: `localhost`)
- `REDIS_PORT` (default: `6379`)

**Run Redis locally (Docker):**

```bash
docker run -d --name redis -p 6379:6379 redis:7-alpine
```

Or use the project’s Compose file (includes Redis Stack):

```bash
docker compose up -d redis-stack
```

**Eviction**: Quote and userOp keys are not set with TTL, so Redis can grow over time. For production, configure an eviction policy (e.g. `maxmemory` + `maxmemory-policy allkeys-lru`). See [docs/dependencies.md](docs/dependencies.md#eviction-policy-recommended) for details.
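
For example, a minimal `redis.conf` fragment along those lines (values are illustrative; size `maxmemory` to your workload):

```
maxmemory 2gb
maxmemory-policy allkeys-lru
```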

See [docs/dependencies.md](docs/dependencies.md#redis) for more detail.

### Token Storage Detection Service

A separate HTTP service that returns the **ERC20 balance storage slot** for a given token and chain. The node calls it during simulation to build correct state overrides (e.g. for `balanceOf`).

**Configuration:**

- `TOKEN_SLOT_DETECTION_SERVER_BASE_URL` (default: `http://127.0.0.1:5000`)

The service is implemented in Rust in `apps/token-storage-detection`. It exposes:

- `GET /{chainId}/{tokenAddress}` → `{ success, msg: { slot } }`

You can run your own instance or use a hosted one. See [docs/dependencies.md](docs/dependencies.md#token-storage-detection-service) and [apps/token-storage-detection/README.md](apps/token-storage-detection/README.md).

## Quick start

1. **Clone and install**

```bash
git clone <repo-url>
cd mee-node
bun i
```

2. **Start Redis**

```bash
docker run -d --name redis -p 6379:6379 redis:7-alpine
```

3. **Start Token Storage Detection** (see [apps/token-storage-detection](apps/token-storage-detection))

```bash
cd apps/token-storage-detection
cp .env.example .env # set RPC URLs for chains you need
cargo run --release --bin token-storage-detection
```

Default: `http://127.0.0.1:5000`. Adjust port in the app’s `.env` if needed (e.g. `SERVER_PORT`).

4. **Configure the node**

```bash
cp .env.example .env
# Set at least:
# - NODE_ID (required)
# - NODE_PRIVATE_KEY (required)
# - REDIS_HOST / REDIS_PORT if not localhost:6379
# - TOKEN_SLOT_DETECTION_SERVER_BASE_URL if not http://127.0.0.1:5000
# - CUSTOM_CHAINS_CONFIG_PATH or use built-in chains
```

5. **Run the node**

```bash
bun run start # production
bun run start:dev # development (watch mode)
```

API listens on `PORT` (default `4000`). Check [http://localhost:4000/v1/info](http://localhost:4000/v1/info) (or your `PORT`) for version and health.

## Configuration

All options are documented in [.env.example](.env.example). Key groups:

| Area | Main variables |
|------|-----------------|
| **Server** | `PORT`, `NODE_ENV`, `ENV_ENC_PASSWORD` (production/staging secrets) |
| **Node identity** | `NODE_ID`, `NODE_PRIVATE_KEY`, `NODE_NAME`, `NODE_FEE_BENEFICIARY` |
| **Chains** | `CUSTOM_CHAINS_CONFIG_PATH`, batch gas limits, simulator/executor concurrency |
| **Redis** | `REDIS_HOST`, `REDIS_PORT` |
| **Token slot service** | `TOKEN_SLOT_DETECTION_SERVER_BASE_URL` |
| **Workers** | `NUM_CLUSTER_WORKERS`, `MAX_EXTRA_WORKERS`, queue attempts/backoff |
| **Logging** | `LOG_LEVEL`, `PRETTY_LOGS` |

For production/staging, the node can load encrypted secrets from `keystore/key.enc` (see `ENV_ENC_PASSWORD` and [src/common/setup.ts](src/common/setup.ts)).
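
As a sketch, a minimal `.env` for local development might look like this (values are placeholders; [.env.example](.env.example) is authoritative):

```bash
NODE_ID=my-local-node
NODE_PRIVATE_KEY=0x...   # dev-only key; never commit a real key
REDIS_HOST=localhost
REDIS_PORT=6379
TOKEN_SLOT_DETECTION_SERVER_BASE_URL=http://127.0.0.1:5000
PORT=4000
LOG_LEVEL=debug
PRETTY_LOGS=1
```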

## Running the node

| Command | Description |
|--------|--------------|
| `bun run start` | Run with Bun (uses `src/main.ts`); cluster + workers. |
| `bun run start:dev` | Watch mode; single process, all modules loaded. |
| `bun run build && bun run start:prod` | Build to `dist/` and run `dist/main.js`. |

Ensure Redis and the token-storage-detection service are up and reachable; otherwise quote/execute and health may fail. See [docs/operations.md](docs/operations.md) for runbooks.

## Docker

- **Node image**: [bcnmy/mee-node](https://hub.docker.com/r/bcnmy/mee-node). Use with your own Redis and token-storage service.
- **Token Storage Detection**: See [apps/token-storage-detection/Dockerfile](apps/token-storage-detection/Dockerfile). Build and run with the same env vars as the Rust app (RPC URLs, optional Redis, etc.).

Example (node only):

```bash
docker run -e NODE_ID=... -e NODE_PRIVATE_KEY=... \
-e REDIS_HOST=host.docker.internal \
-e TOKEN_SLOT_DETECTION_SERVER_BASE_URL=http://host.docker.internal:5000 \
-p 4000:4000 bcnmy/mee-node
```

## API

Public HTTP API (see also live [docs](https://mee-node.biconomy.io/docs)):

| Method | Path | Description |
|--------|------|-------------|
| GET | `/v1/info` | Node version, supported chains, health (Redis, token-slot, queues, etc.) |
| GET | `/v1/explorer/:hash` | Get quote by hash (optional `confirmations`) |
| POST | `/v1/quote` | Request a quote (intent → signed quote) |
| POST | `/v1/quote-permit` | Request a quote with permit flow |
| POST | `/v1/exec` | Execute a signed quote |

The **quote** endpoint returns a signed quote (node’s commitment). The **execute** endpoint accepts the user-signed quote, validates it, and runs the intent on the configured chains.

## Health and operations

- **`/v1/info`**: Returns node info and health for Redis, token-slot detection, chains, simulator, executor, and workers.
- **Logs**: Structured (e.g. Pino). Level via `LOG_LEVEL`; `PRETTY_LOGS=1` for development.
- **Graceful shutdown**: Use SIGTERM; the process uses `tini` in Docker.

See [docs/operations.md](docs/operations.md) for runbooks (startup, dependency checks, scaling, troubleshooting).

## Further documentation

- [docs/architecture.md](docs/architecture.md) — Process model, queues, and data flow
- [docs/dependencies.md](docs/dependencies.md) — Redis (including eviction) and Token Storage Detection in detail
- [docs/chain-configuration.md](docs/chain-configuration.md) — Adding and configuring chains: all config fields, price oracles (native and payment), and the requirement that any chain referenced by an oracle must itself be present in the chain config
- [docs/operations.md](docs/operations.md) — Runbooks and operations
- [.env.example](.env.example) — All configuration options

## Contact

Reach out: connect@biconomy.io
---

**File: `apps/token-storage-detection/README.md`**
# Token Storage Detection service

HTTP service that returns the **ERC20 balance storage slot** for a given token contract and chain. Used by the MEE Node during simulation to build correct state overrides (e.g. for `balanceOf`).

## API

- **GET /{chainId}/{tokenAddress}**
- `chainId`: chain id (e.g. `1`, `8453`)
- `tokenAddress`: ERC20 contract address
- Response: `{ success: true, msg: { slot: "0x3" } }` or `{ success: false, error: "..." }`

## Configuration

See `.env.example` in this directory. Main options:

- **Server**: `SERVER_HOST` (default `127.0.0.1`), `SERVER_PORT` (default `3000` in code; `.env.example` uses `5000` to match the node’s default)
- **Chains / RPCs**: For each chain you need, set either `{CHAIN}_RPC` (primary RPC with debug/trace) or `{CHAIN}_FORK_RPC` (e.g. for Anvil fork). Examples: `ETHEREUM_RPC`, `BASE_RPC`, `ETHEREUM_FORK_RPC`, etc. If the RPC does not support the debug/trace APIs required for token detection, use **fork mode** (`{CHAIN}_FORK_RPC`); **Anvil** is a good choice for such chains.
- **Redis** (optional): `REDIS_ENABLED=1`, `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`, `REDIS_IS_TLS` for response caching
- **Anvil** (optional): `ANVIL_ENABLED=1` and related options when using fork RPCs
- **Timeouts**: `TIMEOUT_MS`, `LOGGING_ENABLED`

## Adding a new chain

Unlike the MEE Node (where adding a standard EVM chain is usually configuration-only), this service requires **code changes**:

1. Add a variant to the **`Chain` enum** in `src/state.rs`.
2. Extend **`FromStr`** in the same file so the chain id (e.g. `"8453"`) parses to that variant.
3. Set the corresponding **RPC env var** (e.g. `BASE_RPC` or `BASE_FORK_RPC`) in `.env`.

See the main repo’s [Chain configuration](../../docs/chain-configuration.md) and [Dependencies — Token Storage](../../docs/dependencies.md#adding-new-chains-token-storage-service) for the full picture.

## Run locally

```bash
cp .env.example .env
# Set at least one chain RPC, e.g. ETHEREUM_RPC or ETHEREUM_FORK_RPC
cargo run --release --bin token-storage-detection
```

By default the node expects this service at `http://127.0.0.1:5000`. Either set `SERVER_PORT=5000` in `.env` or set the node’s `TOKEN_SLOT_DETECTION_SERVER_BASE_URL` to your URL (e.g. `http://127.0.0.1:3000`).

## Docker

Build and run with the same env vars (RPCs, optional Redis, `SERVER_PORT`, etc.):

```bash
docker build -t token-storage-detection .
docker run -p 5000:5000 -e SERVER_PORT=5000 -e ETHEREUM_RPC=... token-storage-detection
```

## Operational notes

- **RPC at boot**: The service builds one RPC provider per configured chain at startup. If **any** chain’s RPC (or Anvil fork) fails during init, the process can **exit** and may restart in a loop until the RPC is fixed. Use stable RPCs; for exotic or unreliable chains, consider a minimal instance with only the chains you need. See [Dependencies — RPC and boot behavior](../../docs/dependencies.md#rpc-and-boot-behavior).
- **When this service fails**: The MEE Node still executes supertransactions using the **default gas limit from the SDK**, which is sufficient for many flows. Complex flows may fail with insufficient gas. When the service is unavailable, the node can fall back to a **Redis-backed cache** of balance storage slots: tokens that were successfully resolved at least once (to detect their storage slot) are cached. This cache is **persistent** (stored in Redis). See [Dependencies — Impact on execution](../../docs/dependencies.md#impact-on-execution-when-the-token-service-fails).

## Relation to MEE Node

The MEE Node calls this service when simulating userOps that involve ERC20 balances. If the service is down or returns errors, those simulations can fail. See the main repo’s [docs/dependencies.md](../../docs/dependencies.md#token-storage-detection-service) for details.
---

**File: `docs/architecture.md`**
# MEE Node architecture

This document describes how the MEE Node is structured and how data flows from quote to execution.

## Process model

The node uses Node.js **cluster** and **worker threads**:

1. **Primary (master)**
- Runs once.
- Initializes: chains config, RPC manager, gas manager, batcher, health checks.
- Spawns API workers (cluster) and, per chain, simulator and executor workers (threads).
- Pushes config and health results to workers via IPC.

2. **API workers (cluster)**
- One or more HTTP server processes.
- Handle `/v1/quote`, `/v1/quote-permit`, `/v1/exec`, `/v1/info`, `/v1/explorer/:hash`.
- Receive chain settings, RPC config, gas info, and health results from the master.

3. **Simulator workers (threads, per chain)**
- Consume jobs from the **simulator queue** for that chain (async batch simulation after quote).
- Run **execution simulation**: they simulate userOps against **on-chain state** (no state overrides). Their role is to confirm that on-chain conditions are met before the execution phase.

4. **Executor workers (threads, per chain)**
- Consume jobs from the **executor queue** (BullMQ) for that chain.
- Submit signed transactions to the chain RPC.
- Use node-owned EOA wallets (master + optional extra workers from mnemonic/keys).

Entry points:

- **Master**: `src/master/bootstrap.ts`
- **API**: `src/api/bootstrap.ts`
- **Simulator**: `src/workers/simulator/main.ts`
- **Executor**: `src/workers/executor/main.ts`

All started from `src/main.ts` (cluster primary runs master, workers run API).

## Data flow

### Quote → storage → simulation → batching → execution

1. **Quote** — Request comes in. The API runs **pre-simulation** for gas estimation and calldata validity: it fills the on-chain state gap using **state overrides** (e.g. ERC20 balances) and uses the **Token Storage Detection** service to get balance storage slots when needed. Pre-simulation produces gas estimates and validates the batch. The node then stores quote and userOps in Redis and enqueues simulator jobs per chain.

2. **Simulator** — Workers process simulator jobs: they run **execution simulation** against current on-chain state (no state overrides), so that execution only runs when on-chain conditions are satisfied. They do not use the Token Storage Detection service. The batcher listens for completed jobs.

3. **Batcher** — Groups simulated userOps per chain into batches under the chain's batch gas limit and enqueues executor jobs.

4. **Executor** — Workers pick executor jobs, sign and send batch transactions using the node's RPC and EOA, then complete the job.

5. **Execute** — The client sends the user-signed quote. The node loads the stored quote from Redis, validates it (signature, deadline), and the same simulator → batcher → executor pipeline drives execution until the job completes.
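
The batching step (3) can be sketched as a greedy grouping under the chain's batch gas limit. This is illustrative only; the names are not from the codebase, and the real batcher also tracks chains, deadlines, and other constraints:

```typescript
interface SimulatedUserOp {
  hash: string;
  gasLimit: bigint;
}

// Greedily pack userOps into batches whose total gas stays under the limit.
function batchUserOps(
  ops: SimulatedUserOp[],
  batchGasLimit: bigint,
): SimulatedUserOp[][] {
  const batches: SimulatedUserOp[][] = [];
  let current: SimulatedUserOp[] = [];
  let used = 0n;
  for (const op of ops) {
    // An op that cannot fit even alone is skipped here; real code would reject it.
    if (op.gasLimit > batchGasLimit) continue;
    if (used + op.gasLimit > batchGasLimit && current.length > 0) {
      batches.push(current);
      current = [];
      used = 0n;
    }
    current.push(op);
    used += op.gasLimit;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```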

**Redis** backs queues, quote/userOp storage, and cache. **Token Storage Detection** is used only in the **pre-simulation and gas estimation phase** (in the API during quote), to build state overrides for ERC20 balances; it is not used by simulator workers.

## Redis usage

Redis is used for job queues (simulator and executor per chain), quote and userOp storage, and caching. Connection is configured via `REDIS_HOST` and `REDIS_PORT`.

## Health checks

The **HealthCheckService** (master) periodically runs:

- **Redis**: e.g. `CLIENT LIST` to ensure connectivity.
- **Chains / RPC**: per-chain checks.
- **Simulator / Executor**: per-chain queue presence/job counts.
- **Node**: wallet/account status per chain.
- **Token Slot Detection**: per-chain request to the token-storage service (soft: does not mark chain unhealthy).

Results are sent to API workers. `/v1/info` aggregates them so operators can see status of Redis, token-slot, queues, and chains.
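The soft-versus-hard distinction above can be sketched as follows (illustrative; the type and function names are not from the codebase):

```typescript
type CheckResult = { name: string; healthy: boolean; soft?: boolean };

// Soft checks (e.g. token-slot detection) are reported in /v1/info but do
// not flip the overall status; hard checks must all pass.
function overallHealthy(checks: CheckResult[]): boolean {
  return checks.filter((c) => !c.soft).every((c) => c.healthy);
}
```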

## Configuration flow

- **Chains**: Loaded from config (or `CUSTOM_CHAINS_CONFIG_PATH`). Master initializes `ChainsService` and passes chain settings to API workers.
- **RPC**: Master builds RPC chain configs, calls `RpcManagerService.setup()`, then pushes config to API and thread workers (simulator/executor).
- **Gas**: Gas manager runs in master; gas info is synced to API and thread workers.
- **Node wallets**: Node service runs in master; wallet states are pushed to executor workers for signing.

All of this ensures API and workers see the same chains, RPCs, gas, and wallet state.