
feat(cli): locks for agents running dev and build commands#1265

Open
harlan-zw wants to merge 5 commits into main from feat/dev-server-lockfile

Conversation

@harlan-zw (Contributor) commented Mar 27, 2026

🔗 Linked issue

nuxt/nuxt#34629

❓ Type of change

  • 📖 Documentation
  • 🐞 Bug fix
  • 👌 Enhancement
  • ✨ New feature
  • 🧹 Chore
  • ⚠️ Breaking change

📚 Description

Inspired by Next.js 16.2's dev server lock file & nuxt/nuxt#34629, this adds a nuxt.lock file to the build directory during nuxi dev and nuxi build. The lock file contains process info (PID, port, URL, command) so that a second invocation can detect the existing process and print an actionable error message with the running server's URL and a kill command.

This only runs for agents (using std-env agent detection). Agents can set NUXT_IGNORE_LOCK=1 to bypass the check if they have a good reason to.
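That gating can be sketched as follows. `shouldCheckLock` is a hypothetical helper name, not this PR's API, and the `isAgent` parameter stands in for the result of std-env's agent detection:

```typescript
// Hypothetical sketch: the lock check only applies in agent environments,
// and NUXT_IGNORE_LOCK=1 bypasses it. `isAgent` stands in for the result
// of std-env's agent detection.
function shouldCheckLock(isAgent: boolean, env: Record<string, string | undefined> = process.env): boolean {
  if (env.NUXT_IGNORE_LOCK === '1')
    return false // explicit opt-out always wins
  return isAgent // normal (human) users never hit the lock check
}
```

Regular interactive use is unaffected either way, since the whole code path is skipped for non-agents.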

Stale locks from crashed processes are automatically cleaned up on the next startup via PID liveness checking (process.kill(pid, 0)). Pure Node.js, zero new dependencies.
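The liveness probe described above can be sketched like this (the PR's exact implementation may differ):

```typescript
// Signal 0 delivers no signal but throws if the target PID does not exist.
// EPERM means the process exists but is owned by another user, so it still
// counts as alive.
function isProcessAlive(pid: number): boolean {
  try {
    process.kill(pid, 0)
    return true
  }
  catch (error) {
    return (error as NodeJS.ErrnoException).code === 'EPERM'
  }
}
```

If the PID recorded in `nuxt.lock` fails this check, the lock is treated as stale and removed before startup continues.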

Lock file format (<buildDir>/nuxt.lock)

{
  "pid": 12345,
  "command": "dev",
  "port": 3000,
  "hostname": "127.0.0.1",
  "url": "http://127.0.0.1:3000",
  "startedAt": 1711540800000
}
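A TypeScript shape mirroring the example above might look like this (hypothetical: the PR's actual `LockInfo` type may differ, and `port`, `hostname`, and `url` are marked optional here since a `build` lock has no server to describe):

```typescript
// Hypothetical shape of the lock file contents, mirroring the JSON above.
interface LockInfo {
  pid: number
  command: 'dev' | 'build'
  port?: number
  hostname?: string
  url?: string
  startedAt: number // epoch milliseconds
}

// Parse lock file contents, treating corrupt JSON as "no lock held".
function parseLock(raw: string): LockInfo | undefined {
  try {
    return JSON.parse(raw) as LockInfo
  }
  catch {
    return undefined
  }
}
```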

Error output when a dev server is already running

Another Nuxt dev server is already running:

  URL:     http://127.0.0.1:3000
  PID:     12345
  Dir:     /path/to/project
  Started: 3/27/2026, 2:00:00 PM

Run `kill 12345` to stop it, or connect to http://127.0.0.1:3000
Set NUXT_IGNORE_LOCK=1 to bypass this check.

Limitations

  • These aren't OS-level file locks; supporting those would require native bindings. There are edge cases around forced process exits and hard crashes, but stale locks left behind by them are recovered on the next startup.

Future considerations

Rust bindings for native OS file locking.

When running inside an AI agent environment, write a lock file to
`<buildDir>/nuxt.lock` containing process info (PID, port, URL, command).
On startup, check for existing locks and show actionable error messages
with the running server URL and kill command.

Stale locks from crashed processes are auto-cleaned via PID liveness
checking. Gated behind `isAgent` from std-env, completely invisible
to normal users.
harlan-zw requested a review from danielroe as a code owner on March 27, 2026 04:05
github-actions bot commented Mar 27, 2026

📦 Bundle Size Comparison

📈 nuxi

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 3385.35 KB | 3389.35 KB | +4.00 KB (+0.12%) |

📈 nuxt-cli

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 149.88 KB | 153.83 KB | +3.95 KB (+2.63%) |

➡️ create-nuxt

| Metric | Base | Head | Diff |
| --- | --- | --- | --- |
| Rendered | 1640.37 KB | 1640.37 KB | 0.00 KB (0.00%) |

pkg-pr-new bot commented Mar 27, 2026

  • nuxt-cli-playground

    npm i https://pkg.pr.new/create-nuxt@1265
    
    npm i https://pkg.pr.new/nuxi@1265
    
    npm i https://pkg.pr.new/@nuxt/cli@1265
    

commit: d70a76d

codspeed-hq bot commented Mar 27, 2026

Merging this PR will not alter performance

✅ 2 untouched benchmarks


Comparing feat/dev-server-lockfile (e34082b) with main (1c2ef0c)

Open in CodSpeed

harlan-zw changed the title from `feat(cli): add lock file for dev and build commands` to `feat(cli): locks for agents running dev and build commands` on Mar 27, 2026
coderabbitai bot commented Mar 27, 2026

📝 Walkthrough

Walkthrough

Adds a lockfile-based mechanism (nuxt.lock) to prevent concurrent Nuxt dev/build runs. A new utility module exports LockInfo, checkLock, writeLock, and formatLockError. The build command now checks and writes locks around execution and ensures cleanup in a finally block. The dev server checks for existing locks on init, writes a lock with server metadata on readiness, stores a cleanup handler, and invokes it on close. New unit tests cover lock behavior, writing, cleanup, and error formatting.
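The flow the walkthrough describes can be sketched with minimal stand-ins for the helpers it names (`checkLock`, `writeLock`); these are illustrative reimplementations under stated assumptions, not the PR's code:

```typescript
import { mkdirSync, readFileSync, unlinkSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'

interface LockInfo { pid: number, command: string, startedAt: number }

// Create <buildDir>/nuxt.lock; the 'wx' flag makes creation atomic and
// throws EEXIST if another process already holds the lock.
function writeLock(buildDir: string, info: LockInfo): () => void {
  mkdirSync(buildDir, { recursive: true })
  const lockPath = join(buildDir, 'nuxt.lock')
  writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
  return () => {
    try {
      unlinkSync(lockPath)
    }
    catch {
      // lock may already be gone; nothing to do
    }
  }
}

// Read an existing lock, treating a missing or corrupt file as "no lock".
function checkLock(buildDir: string): LockInfo | undefined {
  try {
    return JSON.parse(readFileSync(join(buildDir, 'nuxt.lock'), 'utf8')) as LockInfo
  }
  catch {
    return undefined
  }
}

// Build-command shaped flow: check, write, run, and release in `finally`
// so a failed build does not leave a stale lock behind.
async function runWithLock(buildDir: string, work: () => Promise<void>): Promise<void> {
  const existing = checkLock(buildDir)
  if (existing)
    throw new Error(`Another process holds the lock (PID ${existing.pid})`)
  const release = writeLock(buildDir, { pid: process.pid, command: 'build', startedAt: Date.now() })
  try {
    await work()
  }
  finally {
    release()
  }
}
```

The `finally` block mirrors the cleanup guarantee mentioned for the build command: the lock is released whether the wrapped work resolves or throws.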

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation |
| --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 42.86%, below the required threshold of 80.00%. Write docstrings for the functions missing them. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately and concisely describes the main feature: introducing locks for agents running dev and build commands, matching the changes across build.ts, dev/utils.ts, and the new lockfile.ts utility. |
| Description check | ✅ Passed | The description clearly covers the feature: a nuxt.lock file for dev and build commands that prevents concurrent execution in agent environments, with the lock file format, error output examples, and implementation details. |


coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/nuxi/src/commands/build.ts`:
- Around line 84-94: The build lock written by writeLock(...) is removed by
clearBuildDir() because clearDir(...) only preserves
['cache','analyze','nuxt.json'] and omits 'nuxt.lock', so add 'nuxt.lock' to the
preservation list inside clearBuildDir()/clearDir call or alternatively move the
call to writeLock(...) to after clearBuildDir() runs; update references in this
file to use checkLock(...)/formatLockError(...) as-is to detect existing locks
before writing, and ensure lockCleanup remains set to the writeLock(...) return
value when you move it.

In `@packages/nuxi/src/dev/utils.ts`:
- Around line 488-490: The close() method removes nuxt.lock too early allowing
new dev/build to start while the current process is still reloading; modify
NuxtDevServer.close() so it does not remove the lock up-front but instead waits
until the server/listeners and reload path are fully stopped (i.e., stop/await
the HTTP server, remove any file watchers/listeners and run `#lockCleanup` only
after those shutdown steps complete). Specifically, in NuxtDevServer.close() and
the reload path used by NuxtDevServer.#load(), reorder shutdown: first stop
listeners/servers and await their completion, then invoke this.#lockCleanup() to
remove nuxt.lock so no concurrent nuxi dev/build can slip past during reload.

In `@packages/nuxi/src/utils/lockfile.ts`:
- Around line 95-112: The exit/signal handlers added in writeLock()
(process.on('exit', cleanup) and process.once for SIGTERM/SIGINT/SIGQUIT/SIGHUP)
are never removed, causing listener accumulation; update cleanup() (and/or
writeLock()) to detach those handlers when the lock is released by calling
process.off/process.removeListener for the exact bound listener functions (store
the cleanup handler and each bound signal callback so they can be removed),
ensure unlinkSync(lockPath) still runs inside cleanup, and only add the process
listeners once per active lock to prevent duplicates when writeLock() is invoked
multiple times.
- Around line 92-93: The current writeLock flow is racy because callers do
checkLock() then writeLock() using a plain writeFileSync; change writeLock() to
acquire the lock atomically (e.g., write the JSON to the lockPath with an
exclusive flag such as 'wx' or write to a temp file and atomically rename) so
the write fails if the lock file already exists rather than relying on a prior
check; keep using dirname(lockPath) mkdir(..., { recursive: true }) before the
atomic write. Also fix cleanup() so it removes the previously registered
process.on('exit') handler instead of leaving handlers to accumulate: store the
exit handler reference when you register it in writeLock()/registerCleanup and
call process.removeListener('exit', handler) inside cleanup() (and ensure you
don't register duplicate handlers). Reference functions/idents: writeLock,
checkLock, cleanup, lockPath, and the exit handler registered via
process.on('exit').

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3c7c5c90-d9b5-4897-be94-cf65f5648f65

📥 Commits

Reviewing files that changed from the base of the PR and between 1c2ef0c and c545543.

📒 Files selected for processing (4)
  • packages/nuxi/src/commands/build.ts
  • packages/nuxi/src/dev/utils.ts
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts

coderabbitai bot left a comment

♻️ Duplicate comments (2)
packages/nuxi/src/utils/lockfile.ts (2)

97-99: ⚠️ Potential issue | 🔴 Critical

Use atomic lock creation to prevent concurrent writers.

Line 98 does a normal write after a separate pre-check, so two processes can pass checkLock() and both write nuxt.lock. Acquire the lock atomically with exclusive create (flag: 'wx') and treat EEXIST as lock contention.

Proposed fix
   await mkdir(dirname(lockPath), { recursive: true })
-  writeFileSync(lockPath, JSON.stringify(info, null, 2))
+  try {
+    writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
+  }
+  catch (error) {
+    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
+      throw new Error(`Lock file already exists: ${lockPath}`)
+    }
+    throw error
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/src/utils/lockfile.ts` around lines 97 - 99, The current
non-atomic writeFileSync(lockPath, ...) after mkdir allows race conditions;
replace the plain write with an exclusive-create atomic write (e.g. use
write/open with flag: 'wx' or equivalent) so the file is only created if it does
not already exist, and catch the EEXIST error to treat it as lock contention
(handle/retry/propagate accordingly). Keep the existing mkdir(dirname(lockPath),
{ recursive: true }) before attempting the atomic create and reference lockPath,
writeFileSync/open/write/close (or the chosen atomic API) and EEXIST handling in
your changes.

100-117: ⚠️ Potential issue | 🟠 Major

Detach exit/signal handlers in cleanup() to avoid listener accumulation.

Lines 111-116 add process listeners on every writeLock() call, but cleanup() (Lines 101-109) only unlinks the file. In repeated dev reload flows, this can accumulate listeners and eventually trigger MaxListenersExceededWarning.

Proposed fix
   let cleaned = false
+  const onExit = () => cleanup()
+  const signalHandlers: Array<[NodeJS.Signals, () => void]> = []
   function cleanup() {
     if (cleaned)
       return
     cleaned = true
+    process.off('exit', onExit)
+    for (const [signal, handler] of signalHandlers) {
+      process.off(signal, handler)
+    }
     try {
       unlinkSync(lockPath)
     }
     catch {}
   }

-  process.on('exit', cleanup)
+  process.on('exit', onExit)
   for (const signal of ['SIGTERM', 'SIGINT', 'SIGQUIT', 'SIGHUP'] as const) {
-    process.once(signal, () => {
+    const handler = () => {
       cleanup()
       process.exit()
-    })
+    }
+    signalHandlers.push([signal, handler])
+    process.once(signal, handler)
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/nuxi/src/utils/lockfile.ts` around lines 100 - 117, The cleanup
logic only unlinks the lock file but does not detach the process listeners added
in writeLock(), causing listener accumulation; modify writeLock() to register
signal handlers using named/bound functions (e.g., keep references for the exit
handler and each signal handler) and in cleanup() call
process.off/process.removeListener for 'exit' and the same signals
('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before
unlinking; alternatively ensure listeners are only added once (guarded by a
module-level flag) and still remove them in cleanup() to prevent
MaxListenersExceededWarning—refer to the existing cleanup() function and the
code that calls process.on('exit', cleanup) and process.once(signal, ...) to
implement this.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@packages/nuxi/src/utils/lockfile.ts`:
- Around line 97-99: The current non-atomic writeFileSync(lockPath, ...) after
mkdir allows race conditions; replace the plain write with an exclusive-create
atomic write (e.g. use write/open with flag: 'wx' or equivalent) so the file is
only created if it does not already exist, and catch the EEXIST error to treat
it as lock contention (handle/retry/propagate accordingly). Keep the existing
mkdir(dirname(lockPath), { recursive: true }) before attempting the atomic
create and reference lockPath, writeFileSync/open/write/close (or the chosen
atomic API) and EEXIST handling in your changes.
- Around line 100-117: The cleanup logic only unlinks the lock file but does not
detach the process listeners added in writeLock(), causing listener
accumulation; modify writeLock() to register signal handlers using named/bound
functions (e.g., keep references for the exit handler and each signal handler)
and in cleanup() call process.off/process.removeListener for 'exit' and the same
signals ('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before
unlinking; alternatively ensure listeners are only added once (guarded by a
module-level flag) and still remove them in cleanup() to prevent
MaxListenersExceededWarning—refer to the existing cleanup() function and the
code that calls process.on('exit', cleanup) and process.once(signal, ...) to
implement this.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e20f6c9d-6c34-4247-8da7-c7b4be5ce9de

📥 Commits

Reviewing files that changed from the base of the PR and between c545543 and fabedc5.

📒 Files selected for processing (2)
  • packages/nuxi/src/utils/lockfile.ts
  • packages/nuxi/test/unit/lockfile.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/nuxi/test/unit/lockfile.spec.ts

- Use atomic file creation (`wx` flag) to prevent race conditions
  between concurrent checkLock/writeLock calls
- Detach process exit/signal handlers in cleanup() to prevent
  listener accumulation during dev server reloads
- Move writeLock() after clearBuildDir() in build command so the
  lock file isn't deleted immediately after creation
- Don't remove lock during dev server reloads (only on final
  shutdown via releaseLock())
atinux (Member) commented Mar 31, 2026

I love the idea @harlan-zw

Do you think it would be possible, and useful, to also provide a way to read the logs of the current process?

EDIT: seems that Next.js has a path to a log file
