
[alchemy dev] ReactRouter dev server not torn down on alchemy.run.ts changes, causing port conflicts and recreations #1101

@agcty

Description

When running alchemy dev with a ReactRouter resource, saving the alchemy.run.ts file triggers a hot-reload that does not properly tear down the previous dev server instance, causing port conflicts and accumulating background processes.

Environment:

  • Alchemy version: v0.71.0
  • Node.js version: 23.10.0
  • OS: macOS Tahoe 26.0.1
  • Package manager: pnpm

Steps to Reproduce:

  1. Create an alchemy.run.ts file with a ReactRouter resource and upstream dependencies (Hyperdrive, R2Bucket, etc.; the Code Example below is a minimal repro)
  2. Run alchemy dev --stage local --watch
  3. Wait for the dev server to start (e.g., on port 5173)
  4. Save the alchemy.run.ts file without making any changes
  5. Observe the terminal output

Expected Behavior:

  • The old dev server instance should be torn down before starting a new one
  • The new dev server should start on the same port (e.g., 5173)
  • Only one dev server process should be running at a time

Actual Behavior:

  • Old dev server instance is not torn down
  • New dev server tries to use the same port but finds it in use: Port 5173 is in use, trying another one...
  • Port increments on each reload (5173 → 5174 → 5175, etc.)
  • Multiple dev server processes accumulate in the background
  • Eventually requires manually killing orphaned processes

Terminal Output:

Restarting '/path/to/project/alchemy.run.ts --stage local --watch --dev --adopt --app example-app --root-dir /path/to/project'
Exiting...
Alchemy (v0.71.0)
App: example-app
Phase: up
Stage: local
WARN Development mode is in beta. Please report any issues to https://github.com/sam-goodwin/alchemy/issues.
[skipped]   my-hyperdrive Skipped Resource (no changes)
[skipped]   my-r2-bucket Skipped Resource (no changes)
[skipped]   my-cached-hyperdrive Skipped Resource (no changes)
Default inspector port 9229 not available, using 9230 instead

Using vars defined in .dev.vars
Port 5173 is in use, trying another one...
  ➜  Local:   http://localhost:5174/
  ➜  Network: http://10.0.0.2:5174/
  ➜  Network: http://192.168.64.1:5174/
  ➜  Debug:   http://localhost:5174/__debug
  ➜  press h + enter to show help
[updating]  website Updating Resource...
[skipped]   website > url Skipped Resource (no changes)
[updated]   website Updated Resource
{ url: 'http://localhost:5174/' }

Note: The "Exiting..." message suggests cleanup is attempted, but the old dev server clearly isn't being killed before the new one starts.

Code Example:

import alchemy from "alchemy"
import { Hyperdrive, ReactRouter, R2Bucket } from "alchemy/cloudflare"

const app = await alchemy("example-app", {
  password: "long_dev_secret_password_here",
})

const masterKey = alchemy.secret("example_secret_key_hash")

const db = await Hyperdrive("my-hyperdrive", {
  name: "my-postgres-db",
  origin: "postgresql://user:password@example-host.com/mydb?sslmode=require",
  dev: {
    origin: "postgres://user:password@127.0.0.1:5432/devdb",
  },
  caching: { disabled: true },
})

const r2 = await R2Bucket("my-r2-bucket", {
  name: "my-bucket",
  jurisdiction: "eu",
})

const cachedDb = await Hyperdrive("my-cached-hyperdrive", {
  name: "my-postgres-db-cached",
  origin: "postgresql://user:password@example-host.com/mydb?sslmode=require",
  dev: {
    origin: "postgres://user:password@127.0.0.1:5432/devdb",
  },
})

export const worker = await ReactRouter("website", {
  bindings: {
    DB: db,
    CACHED_DB: cachedDb,
    ENCRYPTION_KEY: masterKey,
    STORAGE: r2,
  },
})

console.log({ url: worker.url })

await app.finalize()

Impact:

  • Forces developers to manually kill orphaned dev server processes
  • Causes confusion about which port the dev server is actually running on
  • Port conflicts with other services if ports keep incrementing
  • Accumulates resource usage (memory, CPU) from multiple running instances
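
Temporary workaround: orphaned listeners can be killed by port, e.g. lsof -ti :5173 | xargs kill on macOS, repeated for each leaked port (5174, 5175, ...).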

Possible Root Cause:

The dev-server teardown handler may not be awaiting the old process's termination before the new instance starts, or teardown may not be running at all despite the "Exiting..." message.
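
For illustration, here is a minimal sketch of the teardown-before-respawn pattern I'd expect in the restart path, assuming the dev server runs as a direct child process (restartDevServer, current, and prev are hypothetical names, not Alchemy's actual internals):

import { spawn, type ChildProcess } from "node:child_process"

let current: ChildProcess | undefined

// Hypothetical restart path: tear the old server down and wait for it to
// exit before spawning the replacement, so the port is free when it binds.
async function restartDevServer(cmd: string, args: string[]) {
  const prev = current
  if (prev && prev.exitCode === null) {
    const exited = new Promise<void>((resolve) => prev.once("exit", () => resolve()))
    prev.kill("SIGTERM") // ask the old server to shut down
    await exited         // block until the port is actually released
  }
  current = spawn(cmd, args, { stdio: "inherit" })
}

One subtlety: if the dev server spawns its own children (Vite, wrangler, etc.), signalling only the direct child will not reach them; the usual fix is to spawn detached and signal the whole process group (process.kill(-child.pid!, "SIGTERM")). Missing that would also explain the orphaned processes seen here.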
