Releases: hyperspaceai/agi

v1.7.7 — Sui-style txpool admission control

29 Apr 03:18

Source: hyperspaceai/a1-blockchain@b7a6f57

v1.7.6 — Priority-lane consensus broadcasts (TPS scaling per hyperpaper)

29 Apr 02:34

Source: hyperspaceai/a1-blockchain@cee27a7

v1.7.5 — Mysticeti force-commit-at-frontier fix

28 Apr 17:18

v1.7.5 — Mysticeti force-commit-at-frontier safety fix

Fixes a fundamental BFT safety violation that caused the chain to fork into 2-2 validator groups under any non-trivial load.

Root cause

hyperspace-edge/consensus/mysticeti/engine.go had a 2s wall-clock fallback that force-committed an empty block based on each validator's local DAG frontier when 2f+1 certs had not been reached. Each validator's frontier differs under network asynchrony, so the unilateral commits produced divergent block headers across validators → fork.

Same anti-pattern as lesson #3 of the 2026-04-09 four-way fork postmortem: stalling is the correct BFT behavior; silently producing divergent blocks is not.

Fix

Replaced the force-commit branch with a no-op stuck-leader-tracking block. Mysticeti now waits indefinitely for 2f+1 certs (matching Sui reference). Liveness becomes a network/load problem to solve at the propagation layer, not a consensus shortcut.
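A minimal before/after sketch of the change; the identifiers here
(roundStart, certs, quorum2f1, stuckLeaderRounds) are assumptions, not
the actual engine.go names:

// Before (pre-v1.7.5): a 2s wall-clock fallback committed from the local
// frontier, which differs per validator under asynchrony: unsafe.
//
//   if time.Since(roundStart) > 2*time.Second && certs < quorum2f1 {
//       e.forceCommitAtLocalFrontier() // divergent headers, i.e. a fork
//   }

// After (v1.7.5): the branch only records that the leader is stuck; no
// block is produced and the engine keeps waiting for 2f+1 certs.
if time.Since(roundStart) > 2*time.Second && certs < quorum2f1 {
    e.stuckLeaderRounds++ // observability only; the commit path is untouched
}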

Verification

7-node testnet (4 validators + 3 bootnodes), 100+ on-chain txs including 8-tx funding burst, Woppal deploy, simulator at 152 TPS, 56-channel offchain driver — all 7 nodes unanimous at H-25 throughout. Previously this load forked the chain in under 60 seconds.

Source: hyperspaceai/a1-blockchain@72c6b40

v1.7.4 — Full Static Binary + All Fixes

27 Apr 13:33

Complete binary with all crypto verification. NetworkID 808080, default bootnodes, port 443 fallback, Mysticeti 1.5s finality, BlockSTM fix.

Linux: rpath binary + .so files (no LD_LIBRARY_PATH needed)
Mac: standalone static binary

v1.7.2 — Full Binary (alias for v1.7.4)

27 Apr 13:53

Same binary as v1.7.4. This tag exists for CLI nodes with EDGE_BINARY_VERSION pinned to chain-v1.7.2.

v1.4.0 — Mysticeti Consensus (Sui uncertified DAG, 9 blocks/sec)

13 Apr 00:42

Hyperspace switches from Narwhal/Bullshark to Sui Mysticeti consensus. 5,282 lines of Sui production Rust code via CGO FFI. 4.5x block rate improvement, zero stalls. Includes libmysticeti_consensus.so in lib/ directory.
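For context, a minimal sketch of what a CGO FFI binding to that .so can
look like; the header name and the mysticeti_submit_block symbol are
assumptions, not the repo's actual interface:

package mysticeti

/*
#cgo LDFLAGS: -L${SRCDIR}/lib -lmysticeti_consensus
#include "mysticeti_consensus.h"
*/
import "C"

// SubmitBlock hands a serialized block to the Rust consensus core and
// returns the round it was placed in (hypothetical symbol and signature).
func SubmitBlock(data []byte) int {
    return int(C.mysticeti_submit_block((*C.uchar)(&data[0]), C.size_t(len(data))))
}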

v1.3.8 — Strict round-quorum (rolling-restart safety)

10 Apr 00:43

v1.3.8 — Strict round-quorum (rolling-restart safety)

Eliminates the rolling-restart fork cascade documented in the
2026-04-09 postmortem under "follow-up: rolling restart safety". After
v1.3.7 deployed via auto-update, the chain repeatedly fragmented into
2-validator pairs every time a node restarted. This release fixes the
underlying cause.

Root cause

narwhal.advanceRound() had three "emergency recovery" code paths that
let a validator advance to the next consensus round with 0 or 1
parent certs
after a wall-clock timeout:

// Bootstrap (rounds 2-10):
if stuckDuration > 15*time.Second && prevRoundCerts == 0 {
    minCertsForRecovery = 0  // <-- dangerous
}

// Round 11+:
if stuckDuration > 60*time.Second && prevRoundCerts >= 1 {
    minCertsForRecovery = 1  // <-- dangerous
}
if stuckDuration > 120*time.Second && prevRoundCerts == 0 {
    minCertsForRecovery = 0  // <-- very dangerous
}

When a validator restarted (auto-update, manual restart, crash) and
hit the bootstrap delay alone — even briefly — these paths would let
it advance through rounds with empty parent sets, producing proposals
that the rest of the network could not link to. From the network's
view, the restarted validator looked like a divergent fork.

The fix

narwhal.go advanceRound(): removed all three emergency-recovery
paths. The new rule is one line:

minCertsForRecovery := f + 1     // 2 of 4 for n=4

For n=4 / f=1 that's 2 certs (the BFT minimum for "real progress").
There is no fallback. If quorum cannot form, the validator stalls —
which is the correct BFT behaviour. Liveness without quorum is
impossible by definition; producing rounds in isolation isn't liveness,
it's silent forking.

The bootstrap peer-wait in runRounds() is also extended: max wait
60s → 120s, with a 500ms poll interval, so a validator coming up after
its peers gets more time to discover them before consensus starts.
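A sketch of that peer-wait loop; connectedValidators and minPeers are
assumed names, not necessarily the repo's identifiers:

// Bootstrap peer-wait in runRounds(): give a late-starting validator up
// to 120s (was 60s) to discover peers before consensus begins.
deadline := time.Now().Add(120 * time.Second)
for time.Now().Before(deadline) {
    if n.connectedValidators() >= minPeers { // threshold is an assumption
        break // enough peers known; start consensus normally
    }
    time.Sleep(500 * time.Millisecond) // poll interval from the notes
}
// Falling through is safe: strict round-quorum (f+1 parent certs) still
// prevents advancing rounds in isolation.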

Trade-off

Liveness is now strictly bounded by quorum. If 2 of 4 validators are
down at the same time, the chain stalls until at least one comes back:
for n=4 the quorum is 3, and since the local node counts as one of the
3, each validator needs at least 2 peers active. This is the correct
BFT behaviour.

The previous "advance with 0 certs" path traded safety for liveness in
the wrong direction — it produced a chain that appeared to make
progress but was actually four parallel single-validator chains. This
release restores the safety guarantee.

Upgrade notes

Drop-in upgrade. No chain data wipe required. Auto-updater pulls
this within 5 minutes.

Important: after deploying, do a coordinated restart so that all 4
validators come up with fresh DAG state at roughly the same time. The
strict-quorum rule means a single validator can't bootstrap alone; if
3 of 4 validators come up before the 4th, they'll start producing
together, and the 4th will join cleanly when it's ready.

Verification on testnet

Will be verified on the public 4-validator Hyperspace A1 testnet
immediately after publication. See the post-deploy report in the
release thread.

v1.3.7 — Developer Experience (SDKs, devnet, typed errors)

09 Apr 23:26

v1.3.7 — Developer Experience

Biggest DX release since genesis. No consensus changes; drop-in upgrade
from v1.3.6 with no chain data wipe required.

New: typed HSPACE-xxx error taxonomy

Every hspace_* RPC error now returns a structured error.data.code
field with a stable HSPACE-xxx code, not just a loose message string.
Clients can branch on the code instead of substring-matching — this
fixes the exact pain point that turned the 2026-04-09 fork recovery
into a multi-hour debugging session.

{
  "jsonrpc": "2.0", "id": 1,
  "error": {
    "code": -32000,
    "message": "HSPACE-101: channel expired",
    "data": {
      "code": "HSPACE-101",
      "detail": "channel abc123... expired at wall-clock 1775749200"
    }
  }
}

Codes:

  • HSPACE-100..106 — payment channel lifecycle errors
  • HSPACE-200..203 — proof-carrying transaction errors
  • HSPACE-300..301 — agent registry errors
  • HSPACE-900..999 — invalid params / unsupported / internal
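A sketch of branching on the typed code rather than the message string;
the struct mirrors the JSON above, and the handler shape is illustrative:

// rpcError mirrors the error object shown above.
type rpcError struct {
    Code    int    `json:"code"`
    Message string `json:"message"`
    Data    struct {
        Code   string `json:"code"`   // stable HSPACE-xxx code
        Detail string `json:"detail"`
    } `json:"data"`
}

func handle(e rpcError) {
    switch e.Data.Code {
    case "HSPACE-100", "HSPACE-101", "HSPACE-104":
        // channel lifecycle errors the SDKs recover from by reopening
    case "HSPACE-900":
        // 900-range: invalid params / unsupported / internal
    default:
        // pre-v1.3.7 nodes: fall back to matching e.Message
    }
}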

New: hyperspace devnet subcommand

One command to spin up a local multi-validator chain — the Hyperspace
equivalent of anvil / hardhat node.

hyperspace devnet
# 4 validators start on 127.0.0.1 ports 8545-8548
# chain ID 31337, deterministic keys, Ctrl-C to stop

Flags: --validators N, --datadir PATH, --chain-id N, --http-port N,
--p2p-port N, --reset. Deterministic keys mean every run produces the
same validator addresses — convenient for integration tests that want
to reference a known validator.
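A hypothetical invocation combining those flags (all values are
illustrative):

hyperspace devnet --validators 6 --chain-id 31337 --http-port 9545 --reset
# 6 validators from a fresh datadir; HTTP ports presumably count up from
# 9545 the way the default layout counts up from 8545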

New: enhanced hyperspace status subcommand

Rich operator-facing node health report:

Hyperspace node status — 2026-04-09T23:08:33Z

Local node:
  http://64.227.23.54:8545   block=4189  peers=16  chain=808080
    head hash: 0xa4d7ec0bf69eec98...
    consensus: NarwhalTusk (4 validators)

Drift:
  range:     0 blocks  (min 4189, max 4189)

Consensus health:
  block:     4189
  status:    CONSISTENT

Supports --json for machine parsing, --peer URL to compare arbitrary
validators, and HSPACE_STATUS_PEERS env var for scripting. The
cross-validator hash consistency check is the same one that would have
caught the 2026-04-09 four-way fork on day one.
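A minimal scripting example using the documented --json flag and
HSPACE_STATUS_PEERS variable; the peer URLs are placeholders:

HSPACE_STATUS_PEERS="http://validator-2:8545,http://validator-3:8545" \
  hyperspace status --json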

New: official SDKs

First-class TypeScript and Python SDKs in sdk/hyperspace-js/ and sdk/hyperspace-py/:

// TypeScript
import { HyperspaceClient, PaymentChannel } from '@hyperspace/sdk'
const client = new HyperspaceClient({ network: 'testnet' })
const channel = await PaymentChannel.open(client, { sender, recipient, deposit: 50_000_000 })
for (let i = 0; i < 1000; i++) await channel.pay(100)
await channel.close()

# Python
from hyperspace import HyperspaceClient, PaymentChannel
client = HyperspaceClient(network="testnet")
channel = PaymentChannel.open(client, sender=sender, recipient=recipient, deposit=50_000_000)
for i in range(1000):
    channel.pay(100)
channel.close()

Both SDKs:

  • Wrap every hspace_* method
  • Parse typed HSPACE-xxx errors
  • Auto-reopen channels on HSPACE-100 / 101 / 104
  • Expose client.consensusHealth() for cross-validator fork detection
  • Include MetaMask integration helpers (JS only)

New: example contracts

Three opinionated reference contracts at contracts/:

  • SkillRegistry.sol — discoverable catalog of skills an agent offers
  • TaskEscrow2.sol — post/claim/deliver escrow with 2% protocol fee
  • ReputationOracle.sol — composite score + tier gating for TaskEscrow2

All three follow the existing Governable pattern and ship with a
scripts/deploy-examples.js deploy script.

New: Docker image + Grafana stack

docker/fullnode/Dockerfile — multi-stage production build for
hyperspaceai/node:latest. docker/monitoring/ brings up Prometheus +
Grafana with a 9-panel dashboard, including a cross-validator
hash-consistency widget. Full docs at docker/README.md.

New: developer docs

Upgrade notes

Drop-in upgrade. No chain data wipe required. Auto-updater will
pick this up automatically; operators manually upgrading can swap the
binary in place and restart:

curl -L https://github.com/hyperspaceai/agi/releases/download/chain-v1.3.7/hyperspace-agentic-blockchain-linux-amd64.tar.gz -o hs.tar.gz
tar xzf hs.tar.gz
sudo cp hyperspace-agentic-blockchain-linux-amd64/hyperspace-agentic-blockchain /usr/local/bin/
sudo cp hyperspace-agentic-blockchain-linux-amd64/lib/* /usr/local/bin/lib/
sudo systemctl restart hyperspace-agentic-blockchain

Existing clients that substring-match error messages will keep working
(the SDKs' parseRpcError has a legacy fallback branch), but should
migrate to HyperspaceErrorCode comparisons over the next release.

v1.3.6 — Remove catch-up wave skipping (determinism fix pt.4)

09 Apr 21:41

Remove catch-up wave skipping (determinism fix pt.4)

This release removes the final piece of the cross-validator determinism
work: the isCatchingUp code paths in bullshark.tryCommitLocked() that
let each validator unilaterally skip waves once its local DAG advanced
8+ rounds past the leader round.
Because the DAG advances asynchronously across validators, different
nodes entered catch-up mode at different rounds and skipped different
waves, producing divergent committed sequences — exactly the same class
of fork the v1.3.1–v1.3.5 fixes were addressing, just via a path I
missed in earlier iterations.

Fixed paths

bullshark.go — all three former isCatchingUp branches removed (branch 1
is sketched after this list):

  1. leaderCert == nil + catching up → unconditional skip
    → now falls through to checkLeaderTimeoutLocked (DAG-based view
    change, identical across validators).

  2. support >= QuorumSize + catching up → commit bs.committedRound
    WITHOUT emitting a CommitDecision, effectively dropping the
    corresponding block on catching-up validators
    → now emits the same commit decision whether catching up or not.

  3. support < QuorumSize + catching up → unconditional skip
    → now falls through to checkInsufficientSupportTimeoutLocked
    (DAG-based, identical across validators).
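A before/after sketch of branch 1; skipWave and the surrounding shape
are assumptions, while tryCommitLocked and checkLeaderTimeoutLocked are
the names given above:

// Before (pre-v1.3.6), inside bullshark.tryCommitLocked():
//
//   if leaderCert == nil {
//       if bs.isCatchingUp() {
//           bs.skipWave(wave) // local-only decision: diverges across nodes
//           return
//       }
//   }

// After (v1.3.6): always take the DAG-based view change, which evaluates
// identically on every validator.
if leaderCert == nil {
    bs.checkLeaderTimeoutLocked(wave)
    return
}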

Testnet verification

Coordinated restart performed on the 4-validator DigitalOcean testnet;
the chain is expected to remain in full consensus across all validators
indefinitely.

Upgrade notes

Chain data MUST be wiped (consensus semantics change — existing chains
may have committed waves under the old catch-up logic that don't replay
cleanly under the new one).

v1.3.5 — Deterministic epoch-boundary gas limit

09 Apr 21:27

Deterministic epoch-boundary gas limit

Follow-up from v1.3.3, which temporarily froze the block gas limit at the
parent's value because nt.dynamicScaler.GetCurrentGasLimit() was
producing divergent headers across validators.

Gas limit can now grow and shrink under load — but only at epoch boundaries,
and only via a pure function of parent header fields (block number, parent
gas limit, parent gas used). The computation is identical on every validator
because every validator shares the same parent.

Rule

blockNumber % EpochLength == 0 (epoch boundary):
    if parent.GasUsed * 2 > parent.GasLimit:   raise by parent.GasLimit / 1024
    if parent.GasUsed * 2 < parent.GasLimit:   lower by parent.GasLimit / 1024
    (clamped to [MinBlockGas, MaxBlockGas])
otherwise:
    inherit parent.GasLimit verbatim

This is EIP-1559-style elasticity, restricted to epoch boundaries
(default: every 100 blocks). Under sustained load the gas limit climbs
~0.1%/epoch until it hits the ceiling; under light load it drops similarly
until the floor.
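A sketch of the rule as a pure function; the function name comes from
the tests below, but the signature and constant values are assumptions
(only the 1/1024 step, the epoch gate, and the clamp come from the notes):

const (
    EpochLength = 100        // default epoch length from the notes
    MinBlockGas = 5_000_000  // placeholder floor
    MaxBlockGas = 60_000_000 // placeholder ceiling
)

func ComputeDeterministicGasLimit(blockNumber, parentGasLimit, parentGasUsed uint64) uint64 {
    if blockNumber%EpochLength != 0 {
        return parentGasLimit // non-epoch blocks inherit the parent limit verbatim
    }
    limit := parentGasLimit
    step := parentGasLimit / 1024 // ~0.1% per epoch
    if parentGasUsed*2 > parentGasLimit {
        limit += step // sustained load: raise
    } else if parentGasUsed*2 < parentGasLimit {
        limit -= step // light load: lower
    }
    if limit < MinBlockGas {
        limit = MinBlockGas
    }
    if limit > MaxBlockGas {
        limit = MaxBlockGas
    }
    return limit
}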

Regression tests

  • TestComputeDeterministicGasLimit_NonEpochInheritsParent
  • TestComputeDeterministicGasLimit_EpochBoundaryRaises
  • TestComputeDeterministicGasLimit_EpochBoundaryLowers
  • TestComputeDeterministicGasLimit_Clamps
  • TestComputeDeterministicGasLimit_DeterministicAcrossNodes
  • TestComputeDeterministicGasLimit_NilParent

Upgrade notes

Drop-in upgrade. No chain data wipe needed. The new formula uses only
parent fields so existing chains continue seamlessly.