Static analysis tool for detecting Go concurrency bugs at compile time.
Go ships two concurrency safety tools: go vet (limited scope) and the race detector (-race, runtime-only, data races only). Neither detects deadlocks, livelocks, starvation, missing unlocks, lock-order inversions, or WaitGroup misuse. gostall catches these at build time, before they reach production.
Built on go/analysis — works with go vet, CI pipelines, and editor integrations out of the box.
```shell
go install github.com/erfanmomeniii/gostall/cmd/gostall@latest

gostall ./...       # analyze all packages
gostall -json ./... # JSON output for CI
gostall -help       # all flags and analyzers
```

When two functions acquire the same locks in opposite orders, concurrent execution can deadlock:
```go
func transfer() { mu1.Lock(); mu2.Lock(); /* ... */ } // mu1 → mu2
func rollback() { mu2.Lock(); mu1.Lock(); /* ... */ } // mu2 → mu1 — cycle
```

```
transfer.go:5:2: potential deadlock: inconsistent lock ordering [mu1 -> mu2, mu2 -> mu1]
```
The analyzer builds a directed lock-acquisition graph per function and detects cycles via DFS.
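The cycle check can be sketched as a color-marking DFS over a per-function edge map. This is a minimal sketch, not gostall's implementation: `lockGraph` and `hasCycle` are hypothetical names, and the real analyzer builds its graph from go/analysis AST facts rather than a string map.

```go
package main

import "fmt"

// lockGraph maps each mutex to the set of mutexes acquired while it is held.
// (Hypothetical simplification: real nodes are resolved identifiers, not strings.)
type lockGraph map[string][]string

// hasCycle reports whether the lock-acquisition graph contains a cycle,
// using a three-color DFS: white = unvisited, gray = on the current DFS
// stack, black = fully explored. A gray neighbor means a back edge.
func hasCycle(g lockGraph) bool {
	const (
		white = iota
		gray
		black
	)
	color := map[string]int{} // missing keys default to white (0)
	var visit func(n string) bool
	visit = func(n string) bool {
		color[n] = gray
		for _, m := range g[n] {
			switch color[m] {
			case gray:
				return true // back edge: inconsistent lock ordering
			case white:
				if visit(m) {
					return true
				}
			}
		}
		color[n] = black
		return false
	}
	for n := range g {
		if color[n] == white && visit(n) {
			return true
		}
	}
	return false
}

func main() {
	// transfer: mu1 -> mu2; rollback: mu2 -> mu1 — a cycle, as above.
	g := lockGraph{"mu1": {"mu2"}, "mu2": {"mu1"}}
	fmt.Println(hasCycle(g)) // true
}
```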
A Lock() with no Unlock() anywhere in the function — the most common mutex leak:
```go
func fetch() {
	mu.Lock()
	if err != nil {
		return // mu never unlocked
	}
	mu.Unlock()
}
```

```
fetch.go:2:5: mu.Lock() is never unlocked in this function; consider adding defer mu.Unlock() immediately after locking
```
The analyzer is smart about common patterns: it won't false-positive when the unlock is inside a goroutine spawned by the same function (mu.Lock(); go func() { defer mu.Unlock() }()).
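The fix the diagnostic suggests looks like this. A minimal sketch: `fetchSafe` and its `fail` parameter are hypothetical names standing in for the buggy `fetch` above.

```go
package main

import (
	"errors"
	"sync"
)

var (
	mu      sync.Mutex
	errFail = errors.New("fetch failed")
)

// fetchSafe defers the unlock immediately after Lock(), so every return
// path — including the early error return — releases the mutex.
func fetchSafe(fail bool) error {
	mu.Lock()
	defer mu.Unlock()
	if fail {
		return errFail // mu is still released by the deferred Unlock
	}
	return nil
}

func main() {
	// Two consecutive calls: the second would deadlock if the error
	// path of the first had leaked the lock.
	_ = fetchSafe(true)
	_ = fetchSafe(false)
}
```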
Same mutex locked twice without unlock in between — guaranteed deadlock on sync.Mutex:
```go
func broken() {
	mu.Lock()
	mu.Lock() // deadlock: mu is not reentrant
}
```

```
broken.go:3:5: mu is locked again without unlocking first (previously locked at broken.go:2:5); this will deadlock on sync.Mutex
```
Unbuffered channel with send and receive in the same goroutine:
```go
ch := make(chan int)
ch <- 1           // blocks forever
fmt.Println(<-ch) // unreachable
```

```
main.go:3:16: unbuffered channel send and receive on ch in the same goroutine will deadlock; use a goroutine for one side or add a buffer
```
The analyzer tracks individual channels — if a function spawns goroutines, only channels not touched by those goroutines are checked, instead of skipping the entire function.
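Both fixes the diagnostic suggests can be sketched as follows; `viaBuffer` and `viaGoroutine` are hypothetical helper names for illustration.

```go
package main

import "fmt"

// viaBuffer: a capacity-1 buffer lets the send complete without a
// receiver being ready, so a single goroutine can do both sides.
func viaBuffer() int {
	ch := make(chan int, 1)
	ch <- 1
	return <-ch
}

// viaGoroutine: keep the channel unbuffered, but move one side of the
// exchange into its own goroutine so the two can rendezvous.
func viaGoroutine() int {
	ch := make(chan int)
	go func() { ch <- 2 }()
	return <-ch
}

func main() {
	fmt.Println(viaBuffer())    // 1
	fmt.Println(viaGoroutine()) // 2
}
```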
Channel receive in main() or init() with no goroutine sending on that channel — blocks forever:
```go
func main() {
	ch := make(chan int)
	v := <-ch // no goroutine sends on ch
}
```

```
main.go:3:7: channel receive in main() with no goroutine sending on this channel; this will block forever
```
Goroutines send on an unbuffered channel and call wg.Done(), but the main goroutine calls wg.Wait() before reading — workers block on send, can never call Done(), and Wait() hangs forever:
```go
ch := make(chan int)
wg.Add(1)
go func() {
	defer wg.Done()
	ch <- result // blocks: nobody reading yet
}()
wg.Wait()                 // deadlock: worker can't finish
for r := range ch { ... } // never reached
```

```
main.go:7:1: WaitGroup.Wait() blocks before receiving from unbuffered channel ch; goroutines sending on ch call Done() but block on send because nobody is receiving yet — move the receive before Wait() or buffer the channel
```
This is the exact pattern Go's runtime catches with "all goroutines are asleep — deadlock!" but gostall finds it at compile time.
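The standard fix is to move `Wait()` plus `close(ch)` into their own goroutine, so the main goroutine is already receiving when the workers send. A hedged sketch (`collect` is a hypothetical name, not part of gostall):

```go
package main

import (
	"fmt"
	"sync"
)

// collect fans values out to workers and receives them back. A separate
// goroutine waits for the workers and closes the channel, so the range
// below terminates and no sender ever blocks without a receiver.
func collect(values []int) []int {
	var wg sync.WaitGroup
	ch := make(chan int)
	for _, r := range values {
		wg.Add(1) // before the go statement
		go func(v int) {
			defer wg.Done()
			ch <- v // ok: main goroutine is already receiving below
		}(r)
	}
	go func() {
		wg.Wait() // runs concurrently with the receives, not before them
		close(ch)
	}()
	var out []int
	for r := range ch {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(len(collect([]int{1, 2, 3}))) // 3
}
```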
Calling Add() inside a goroutine races with Wait():
```go
go func() {
	wg.Add(1) // race: may execute after Wait() returns
	defer wg.Done()
}()
wg.Wait()
```

```
main.go:2:5: WaitGroup.Add() called inside goroutine; call Add() before the go statement to avoid racing with Wait()
```
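The corrected pattern calls `Add()` in the spawning goroutine, before each `go` statement, so `Wait()` can never observe a counter that hasn't been incremented yet. A sketch with a hypothetical `runWorkers` helper:

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers spawns n goroutines and waits for all of them. Add() runs
// before each go statement, so the counter is correct when Wait() starts.
func runWorkers(n int) int {
	var wg sync.WaitGroup
	done := make(chan struct{}, n) // buffered: sends never block
	for i := 0; i < n; i++ {
		wg.Add(1) // before the go statement, not inside the goroutine
		go func() {
			defer wg.Done()
			done <- struct{}{}
		}()
	}
	wg.Wait() // guaranteed to see all n increments
	return len(done)
}

func main() {
	fmt.Println(runWorkers(4)) // 4
}
```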
Goroutine references a WaitGroup but never calls Done() — Wait() blocks forever:
```go
wg.Add(1)
go func() {
	doWork() // forgot wg.Done()
}()
wg.Wait() // blocks forever
```

```
main.go:2:1: goroutine references WaitGroup "wg" but never calls Done(); this will cause Wait() to block forever
```
More Done() calls than Add() — panics at runtime:
```go
wg.Add(1)
wg.Done()
wg.Done() // panic: negative WaitGroup counter
```

```
main.go:3:1: WaitGroup "wg" counter goes negative (more Done() calls than Add()); this will panic at runtime
```
Goroutine that calls both Wait() and Done() on the same WaitGroup — Wait() blocks before Done() can run:
```go
wg.Add(1)
go func() {
	wg.Wait() // blocks forever
	wg.Done() // unreachable
}()
```

```
main.go:2:1: goroutine calls both Wait() and Done() on WaitGroup "wg"; Wait() will block before Done() can execute
```
Infinite loop with atomic CAS or TryLock but no backoff — burns CPU without progress:
```go
for {
	if atomic.CompareAndSwapInt32(&val, 0, 1) {
		break
	}
	// missing: time.Sleep, runtime.Gosched, or select
}
```

```
main.go:2:2: spin loop with atomic CAS or TryLock but no backoff (time.Sleep, runtime.Gosched, or select); this may livelock under contention
```
Add time.Sleep or runtime.Gosched() in the retry path to yield CPU.
Lock/Unlock in a tight loop without meaningful work — a polling anti-pattern:
```go
for {
	mu.Lock()
	mu.Unlock()
}
```

```
main.go:1:1: busy-wait loop: Lock()/Unlock() in a tight loop without meaningful work; use sync.Cond, a channel, or time.Sleep instead
```
Holding a mutex while performing a blocking operation starves all goroutines waiting for that lock:
```go
mu.Lock()
ch <- data // if ch blocks, every goroutine waiting for mu is starved
mu.Unlock()
```

```
main.go:2:1: channel send while holding mu (locked at main.go:1:1); this starves other goroutines waiting for the lock
```
Detected blocking operations: channel send/receive, time.Sleep, select without default.
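The usual remedy is to snapshot the shared state under the lock and release the mutex before the blocking operation. A sketch with a hypothetical `publish` helper:

```go
package main

import (
	"fmt"
	"sync"
)

var mu sync.Mutex

// publish copies the shared value while holding the lock, then unlocks
// before the potentially blocking send, so goroutines waiting on mu are
// not starved by a slow receiver.
func publish(ch chan<- int, shared *int) {
	mu.Lock()
	snapshot := *shared // copy under the lock
	mu.Unlock()
	ch <- snapshot // blocking operation happens with mu released
}

func main() {
	ch := make(chan int, 1)
	v := 7
	publish(ch, &v)
	fmt.Println(<-ch) // 7
}
```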
```yaml
# .github/workflows/gostall.yml
jobs:
  concurrency-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go install github.com/erfanmomeniii/gostall/cmd/gostall@latest
      - run: gostall ./...
```

gostall exits non-zero when diagnostics are found. Use -json for machine-parseable output.
```go
import (
	"golang.org/x/tools/go/analysis/multichecker"
	"golang.org/x/tools/go/analysis/singlechecker"

	"github.com/erfanmomeniii/gostall/pkg/analyzer/deadlock"
	"github.com/erfanmomeniii/gostall/pkg/analyzer/livelock"
	"github.com/erfanmomeniii/gostall/pkg/analyzer/starvation"
)

// All deadlock analyzers in one binary:
multichecker.Main(deadlock.Analyzers()...)

// Or a single analyzer per binary:
singlechecker.Main(livelock.Analyzer)
singlechecker.Main(starvation.Analyzer)
```

Analyzers are standard *analysis.Analyzer values.
gostall performs intra-procedural (single-function) analysis. This is an intentional trade-off: it keeps the tool fast and deterministic, but means some patterns are out of scope.
| Pattern | Why It's Missed | Workaround |
|---|---|---|
| Lock in function A, unlock in function B | No cross-function tracking | Use defer mu.Unlock() immediately after Lock() |
| arr[0].mu vs arr[1].mu ordering | Index expressions collapsed to a single ID | Use named mutex fields instead of arrays |
| Deadlock via stdlib internal locks | External code not analyzed | Use -race for runtime detection |
| Channels with dynamic buffer size | Can't resolve non-constant capacity | Use constant buffer sizes |
| Cross-goroutine deadlocks (beyond the pipeline pattern) | Would require call-graph + SSA analysis | Use -race and integration tests |
When gostall says there's a bug, there almost certainly is one. When it says nothing, it doesn't mean the code is free of concurrency bugs — it means the bugs it can detect statically weren't found. Use gostall alongside -race, code review, and integration testing for comprehensive coverage.