feat(cli): locks for agents running dev and build commands #1265
Conversation
When running inside an AI agent environment, write a lock file to `<buildDir>/nuxt.lock` containing process info (PID, port, URL, command). On startup, check for existing locks and show actionable error messages with the running server URL and kill command. Stale locks from crashed processes are auto-cleaned via PID liveness checking. Gated behind `isAgent` from std-env, completely invisible to normal users.
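The PID liveness check mentioned above can be sketched roughly as follows; `isProcessAlive` is an illustrative helper name, not the actual nuxi implementation:

```typescript
// Hypothetical sketch of stale-lock detection via PID liveness.
function isProcessAlive(pid: number): boolean {
  try {
    // Signal 0 performs existence/permission checks without sending a signal
    process.kill(pid, 0)
    return true
  }
  catch (error) {
    // EPERM means the process exists but belongs to another user;
    // ESRCH means it does not exist, so a lock holding this PID is stale.
    return (error as NodeJS.ErrnoException).code === 'EPERM'
  }
}
```

On startup, a lock whose recorded PID fails this probe can be deleted and ignored.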
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/nuxi/src/commands/build.ts`:
- Around line 84-94: The build lock written by writeLock(...) is removed by
clearBuildDir() because clearDir(...) only preserves
['cache','analyze','nuxt.json'] and omits 'nuxt.lock', so add 'nuxt.lock' to the
preservation list inside clearBuildDir()/clearDir call or alternatively move the
call to writeLock(...) to after clearBuildDir() runs; update references in this
file to use checkLock(...)/formatLockError(...) as-is to detect existing locks
before writing, and ensure lockCleanup remains set to the writeLock(...) return
value when you move it.
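The preservation-list fix above can be sketched as follows; `clearDir`/`clearBuildDir` here are simplified stand-ins for the real nuxi helpers, written only to show the idea:

```typescript
import { readdirSync, rmSync } from 'node:fs'
import { join } from 'node:path'

// Simplified stand-in: remove everything in `path` except excluded entries.
function clearDir(path: string, exclude: string[] = []): void {
  for (const entry of readdirSync(path)) {
    if (exclude.includes(entry))
      continue
    rmSync(join(path, entry), { recursive: true, force: true })
  }
}

function clearBuildDir(buildDir: string): void {
  // 'nuxt.lock' added alongside the previously preserved entries,
  // so clearing the build directory no longer deletes a fresh lock.
  clearDir(buildDir, ['cache', 'analyze', 'nuxt.json', 'nuxt.lock'])
}
```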
In `@packages/nuxi/src/dev/utils.ts`:
- Around line 488-490: The close() method removes nuxt.lock too early allowing
new dev/build to start while the current process is still reloading; modify
NuxtDevServer.close() so it does not remove the lock up-front but instead waits
until the server/listeners and reload path are fully stopped (i.e., stop/await
the HTTP server, remove any file watchers/listeners and run `#lockCleanup` only
after those shutdown steps complete). Specifically, in NuxtDevServer.close() and
the reload path used by NuxtDevServer.#load(), reorder shutdown: first stop
listeners/servers and await their completion, then invoke this.#lockCleanup() to
remove nuxt.lock so no concurrent nuxi dev/build can slip past during reload.
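The suggested shutdown ordering can be sketched like this; `DevServerSketch` is an illustrative class, not the real `NuxtDevServer`:

```typescript
// Hedged sketch: release the lock only after shutdown completes.
class DevServerSketch {
  #lockCleanup?: () => void

  constructor(lockCleanup: () => void) {
    this.#lockCleanup = lockCleanup
  }

  async close(stopServer: () => Promise<void>): Promise<void> {
    // 1. Stop listeners/servers and await their completion first...
    await stopServer()
    // 2. ...then release the lock, so no concurrent `nuxi dev`/`nuxi build`
    //    can slip in while this process is still winding down.
    this.#lockCleanup?.()
    this.#lockCleanup = undefined
  }
}
```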
In `@packages/nuxi/src/utils/lockfile.ts`:
- Around line 95-112: The exit/signal handlers added in writeLock()
(process.on('exit', cleanup) and process.once for SIGTERM/SIGINT/SIGQUIT/SIGHUP)
are never removed, causing listener accumulation; update cleanup() (and/or
writeLock()) to detach those handlers when the lock is released by calling
process.off/process.removeListener for the exact bound listener functions (store
the cleanup handler and each bound signal callback so they can be removed),
ensure unlinkSync(lockPath) still runs inside cleanup, and only add the process
listeners once per active lock to prevent duplicates when writeLock() is invoked
multiple times.
- Around line 92-93: The current writeLock flow is racy because callers do
checkLock() then writeLock() using a plain writeFileSync; change writeLock() to
acquire the lock atomically (e.g., write the JSON to the lockPath with an
exclusive flag such as 'wx' or write to a temp file and atomically rename) so
the write fails if the lock file already exists rather than relying on a prior
check; keep using dirname(lockPath) mkdir(..., { recursive: true }) before the
atomic write. Also fix cleanup() so it removes the previously registered
process.on('exit') handler instead of leaving handlers to accumulate: store the
exit handler reference when you register it in writeLock()/registerCleanup and
call process.removeListener('exit', handler) inside cleanup() (and ensure you
don't register duplicate handlers). Reference functions/idents: writeLock,
checkLock, cleanup, lockPath, and the exit handler registered via
process.on('exit').
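A minimal sketch of atomic lock acquisition as suggested above, assuming synchronous fs calls; `acquireLock` is an illustrative name, and a fuller version would also detach the exit/signal handlers it registers:

```typescript
import { mkdirSync, unlinkSync, writeFileSync } from 'node:fs'
import { dirname } from 'node:path'

// The 'wx' flag makes the write fail with EEXIST if the file already
// exists, closing the check-then-write race window.
function acquireLock(lockPath: string, info: object): () => void {
  mkdirSync(dirname(lockPath), { recursive: true })
  try {
    writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
  }
  catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST')
      throw new Error(`Lock file already exists: ${lockPath}`)
    throw error
  }
  // Return an idempotent release function.
  let released = false
  return () => {
    if (released)
      return
    released = true
    try {
      unlinkSync(lockPath)
    }
    catch {}
  }
}
```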
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3c7c5c90-d9b5-4897-be94-cf65f5648f65
📒 Files selected for processing (4)
- packages/nuxi/src/commands/build.ts
- packages/nuxi/src/dev/utils.ts
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
♻️ Duplicate comments (2)
packages/nuxi/src/utils/lockfile.ts (2)
97-99: ⚠️ Potential issue | 🔴 Critical: Use atomic lock creation to prevent concurrent writers.
Line 98 does a normal write after a separate pre-check, so two processes can pass `checkLock()` and both write `nuxt.lock`. Acquire the lock atomically with exclusive create (`flag: 'wx'`) and treat `EEXIST` as lock contention.
Proposed fix
```diff
 await mkdir(dirname(lockPath), { recursive: true })
-writeFileSync(lockPath, JSON.stringify(info, null, 2))
+try {
+  writeFileSync(lockPath, JSON.stringify(info, null, 2), { flag: 'wx' })
+}
+catch (error) {
+  if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
+    throw new Error(`Lock file already exists: ${lockPath}`)
+  }
+  throw error
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/src/utils/lockfile.ts` around lines 97 - 99, The current non-atomic writeFileSync(lockPath, ...) after mkdir allows race conditions; replace the plain write with an exclusive-create atomic write (e.g. use write/open with flag: 'wx' or equivalent) so the file is only created if it does not already exist, and catch the EEXIST error to treat it as lock contention (handle/retry/propagate accordingly). Keep the existing mkdir(dirname(lockPath), { recursive: true }) before attempting the atomic create and reference lockPath, writeFileSync/open/write/close (or the chosen atomic API) and EEXIST handling in your changes.
100-117: ⚠️ Potential issue | 🟠 Major: Detach exit/signal handlers in `cleanup()` to avoid listener accumulation.
Lines 111-116 add process listeners on every `writeLock()` call, but `cleanup()` (lines 101-109) only unlinks the file. In repeated dev reload flows, this can accumulate listeners and eventually trigger `MaxListenersExceededWarning`.
Proposed fix
```diff
 let cleaned = false
+const onExit = () => cleanup()
+const signalHandlers: Array<[NodeJS.Signals, () => void]> = []
 function cleanup() {
   if (cleaned)
     return
   cleaned = true
+  process.off('exit', onExit)
+  for (const [signal, handler] of signalHandlers) {
+    process.off(signal, handler)
+  }
   try {
     unlinkSync(lockPath)
   }
   catch {}
 }
-process.on('exit', cleanup)
+process.on('exit', onExit)
 for (const signal of ['SIGTERM', 'SIGINT', 'SIGQUIT', 'SIGHUP'] as const) {
-  process.once(signal, () => {
+  const handler = () => {
     cleanup()
     process.exit()
-  })
+  }
+  signalHandlers.push([signal, handler])
+  process.once(signal, handler)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/nuxi/src/utils/lockfile.ts` around lines 100 - 117, The cleanup logic only unlinks the lock file but does not detach the process listeners added in writeLock(), causing listener accumulation; modify writeLock() to register signal handlers using named/bound functions (e.g., keep references for the exit handler and each signal handler) and in cleanup() call process.off/process.removeListener for 'exit' and the same signals ('SIGTERM','SIGINT','SIGQUIT','SIGHUP') to remove those handlers before unlinking; alternatively ensure listeners are only added once (guarded by a module-level flag) and still remove them in cleanup() to prevent MaxListenersExceededWarning—refer to the existing cleanup() function and the code that calls process.on('exit', cleanup) and process.once(signal, ...) to implement this.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: e20f6c9d-6c34-4247-8da7-c7b4be5ce9de
📒 Files selected for processing (2)
- packages/nuxi/src/utils/lockfile.ts
- packages/nuxi/test/unit/lockfile.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/nuxi/test/unit/lockfile.spec.ts
- Use atomic file creation (`wx` flag) to prevent race conditions between concurrent checkLock/writeLock calls
- Detach process exit/signal handlers in cleanup() to prevent listener accumulation during dev server reloads
- Move writeLock() after clearBuildDir() in build command so the lock file isn't deleted immediately after creation
- Don't remove lock during dev server reloads (only on final shutdown via releaseLock())
I love the idea @harlan-zw! Do you think it would also be possible, and useful, to provide a way to read the logs for the current process?
EDIT: it seems Next.js exposes a path to a log file.
🔗 Linked issue
nuxt/nuxt#34629
❓ Type of change
📚 Description
Inspired by Next.js 16.2's dev server lock file & nuxt/nuxt#34629, this adds a `nuxt.lock` file to the build directory during `nuxi dev` and `nuxi build`. The lock file contains process info (PID, port, URL, command) so that a second invocation can detect the existing process and print an actionable error message with the running server's URL and a kill command.

This only runs for agents (using `std-env` agent detection). Agents can set `NUXT_IGNORE_LOCK=1` to bypass the check if they have a good reason to.

Stale locks from crashed processes are automatically cleaned up on the next startup via PID liveness checking (`process.kill(pid, 0)`). Pure Node.js, zero new dependencies.

Lock file format (`<buildDir>/nuxt.lock`)

```json
{
  "pid": 12345,
  "command": "dev",
  "port": 3000,
  "hostname": "127.0.0.1",
  "url": "http://127.0.0.1:3000",
  "startedAt": 1711540800000
}
```

Error output when a dev server is already running
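As a rough sketch of how the stale-lock check and the error message could fit together: `checkLock` and `formatLockError` are names used in this PR, but the bodies below are illustrative, not the real implementation.

```typescript
import { existsSync, readFileSync, unlinkSync } from 'node:fs'

interface LockInfo {
  pid: number
  command: string
  port: number
  hostname: string
  url: string
  startedAt: number
}

// Read the lock, drop it if the owning PID is dead, otherwise return it.
function checkLock(lockPath: string): LockInfo | undefined {
  if (!existsSync(lockPath))
    return undefined
  const info = JSON.parse(readFileSync(lockPath, 'utf8')) as LockInfo
  try {
    process.kill(info.pid, 0) // liveness probe; throws ESRCH if dead
  }
  catch {
    unlinkSync(lockPath) // stale lock from a crashed process
    return undefined
  }
  return info
}

// Build an actionable message with the running server's URL and a kill command.
function formatLockError(info: LockInfo): string {
  return [
    `A \`nuxi ${info.command}\` process (pid ${info.pid}) is already running at ${info.url}.`,
    `Stop it with \`kill ${info.pid}\`, or set NUXT_IGNORE_LOCK=1 to bypass this check.`,
  ].join('\n')
}
```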
Limitations
Future considerations
Rust bindings for native OS file locking.