This document describes the internal design of dooing-sync.nvim — how it synchronizes dooing todo lists across machines via Google Drive.
- System Overview
- Module Map
- Data Model
- Google Drive API Usage
- Synchronization Flow
- Three-Way Merge Algorithm
- File Watching & Push-on-Save
- Token Management
- Error Handling & Offline Mode
- Concurrency Protection
- Testing Strategy
┌──────────────────────────────────────────────────────────────────┐
│ Neovim │
│ │
│ ┌──────────────┐ ┌──────────────────┐ │
│ │ dooing.nvim │ setup() │ dooing-sync.nvim │ │
│ │ (upstream) │◄──────────│ │ │
│ └──────┬───────┘ │ ┌──────────────┐ │ │
│ │ reads/writes │ │ merge.lua │ │ │
│ ▼ │ │ (3-way) │ │ │
│ ┌──────────┐ │ └──────────────┘ │ │
│ │save_path │◄─────────────│ │ curl + async │
│ │ .json │ write merged │ ┌──────────────┐ │───────────┐ │
│ └──────────┘ │ │ gdrive.lua │ │ │ │
│ │ │ (REST API) │ │ │ │
│ ┌──────────┐ │ └──────────────┘ │ │ │
│ │ base │◄─────────────│ │ │ │
│ │ snapshot │ save base │ ┌──────────────┐ │ │ │
│ └──────────┘ │ │ fs.lua │ │ ▼ │
│ │ │ (watcher) │ │ ┌──────────┐ │
│ │ └──────────────┘ │ │ Google │ │
│ └──────────────────┘ │ Drive │ │
│ │ v3 API │ │
│ └──────────┘ │
└──────────────────────────────────────────────────────────────────┘
lua/dooing-sync/
├── init.lua Entry point. setup(), sync lifecycle, autocmds, user commands.
├── config.lua Default options, credential validation, logging.
├── gdrive.lua Google Drive REST API: OAuth tokens, find, download, upload, create.
├── merge.lua Three-way field-level merge engine. Pure functions, no I/O.
└── fs.lua JSON file I/O, base snapshot management, file watcher (libuv).
init.lua
├── config.lua
├── fs.lua ──── config.lua
├── gdrive.lua ── config.lua
└── merge.lua ── config.lua
All modules depend on config.lua for options and logging. No circular dependencies.
Each todo is a JSON object with these fields:
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier (format: `<unix_timestamp>_<random>`). Used as the merge key. |
| `text` | string | Todo text, may contain `#tags`. |
| `done` | boolean | Completion status. |
| `in_progress` | boolean | In-progress status. |
| `category` | string | Extracted from the first `#tag` in `text`. |
| `created_at` | integer | Unix timestamp of creation. |
| `completed_at` | integer? | Unix timestamp of completion. |
| `priorities` | string[]? | Array of priority names (e.g. `["important", "urgent"]`). |
| `estimated_hours` | number? | Estimated completion time in hours. |
| `notes` | string | Freeform notes text. |
| `parent_id` | string? | ID of the parent todo (for nested tasks). |
| `depth` | integer | Nesting depth (0 = top level). |
| `due_at` | integer? | Unix timestamp of the due date. |
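For concreteness, a single item might look like the following (values are illustrative; optional `?` fields are simply omitted when unset):

```json
{
  "id": "1717000000_4821",
  "text": "Draft the sync design doc #work",
  "done": false,
  "in_progress": true,
  "category": "work",
  "created_at": 1717000000,
  "priorities": ["important"],
  "notes": "",
  "depth": 0
}
```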
| File | Location | Purpose |
|---|---|---|
| save_path | `vim.fn.stdpath('data') .. '/dooing_todos.json'` | Dooing's live data file. |
| base snapshot | `vim.fn.stdpath('data') .. '/dooing_sync_base.json'` | Last successfully synced version. Used as the common ancestor in the three-way merge. |
| remote | Google Drive (`dooing_todos.json`) | The shared copy. |
dooing-sync uses the OAuth 2.0 refresh token grant for authentication. The user performs a one-time browser-based authorization to obtain a long-lived refresh token.
┌───────────┐ ┌──────────────┐
│ Neovim │ │ Google OAuth │
│ (curl) │ │ Server │
└─────┬─────┘ └──────┬───────┘
│ │
│ POST /token │
│ grant_type=refresh_token │
│ refresh_token=xxx │
│ client_id=xxx │
│ client_secret=xxx │
│───────────────────────────────────►│
│ │
│ { access_token, expires_in } │
│◄───────────────────────────────────│
│ │
│ (cached in memory for ~1hr) │
│ │
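The exchange is a single form-encoded POST. As a minimal sketch (Python stand-in for the plugin's Lua; the credential values are placeholders), the request body is built like this:

```python
from urllib.parse import urlencode

TOKEN_URL = "https://oauth2.googleapis.com/token"

def build_token_request(refresh_token, client_id, client_secret):
    """Form-encoded body for the OAuth 2.0 refresh-token grant."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = build_token_request("xxx", "my-client-id", "my-client-secret")
# POST this body to TOKEN_URL with Content-Type: application/x-www-form-urlencoded;
# the JSON response carries { access_token, expires_in }.
```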
The plugin uses `drive.file` — the most restrictive Google Drive scope:

> Allows access only to files created or opened by the app. Does not allow access to any other files on the user's Drive.
| Method | Endpoint | Purpose |
|---|---|---|
| POST | `https://oauth2.googleapis.com/token` | Exchange refresh token for access token |
| GET | `https://www.googleapis.com/drive/v3/files?q=...` | Search for the file by name (and optional parent folder) |
| GET | `https://www.googleapis.com/drive/v3/files/{id}?alt=media` | Download file content |
| PATCH | `https://www.googleapis.com/upload/drive/v3/files/{id}?uploadType=media` | Update existing file content |
| POST | `https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart` | Create a new file with metadata + content |
The plugin finds its file on Drive by searching for the configured gdrive_filename
(default: dooing_todos.json) within the configured gdrive_folder_id. The Drive
file ID is cached in memory after the first lookup to avoid repeated searches.
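The lookup-then-cache behavior can be sketched as follows (Python for illustration; the `q` expression uses the standard Drive v3 `files.list` search syntax, and `search_fn` stands in for the actual HTTP call in gdrive.lua):

```python
_cached_file_id = None  # in-memory cache, mirroring gdrive.lua's behavior

def build_search_query(filename, folder_id=None):
    """Drive v3 files.list 'q' expression for the configured sync file."""
    q = "name = '%s' and trashed = false" % filename
    if folder_id:
        q += " and '%s' in parents" % folder_id
    return q

def find_file_id(search_fn, filename, folder_id=None):
    """First call hits the API via search_fn; later calls reuse the cached id."""
    global _cached_file_id
    if _cached_file_id is None:
        _cached_file_id = search_fn(build_search_query(filename, folder_id))
    return _cached_file_id
```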
dooing-sync is safe to use with multiple Neovim sessions on the same machine and multiple machines syncing to the same Google Drive file.
| # | Race | Without protection |
|---|---|---|
| R1 | Two sessions read/write base_path concurrently | Stale base → incorrect merge → data loss |
| R2 | Two sessions write save_path concurrently | Last writer clobbers the other's merge |
| R3 | Two sessions (or machines) push to Drive concurrently | Last push wins, silently dropping changes |
| R4 | Session reads save_path while another writes | Stale read → stale merge |
┌──────────────────────────────────────────────────────────────────┐
│ Same Machine │
│ │
│ Session A ──┐ │
│ ├── Local Lockfile ── serializes access to ──┐ │
│ Session B ──┘ (fs.lua) save_path & base_path │ │
│ │ │
│ ▼ │
│ ┌──────────┐ │
│ Machine X ──┐ │ Google │ │
│ ├── Version Conditional Push ────────►│ Drive │ │
│ Machine Y ──┘ (gdrive.lua) └──────────┘ │
│ prevents lost updates │
└──────────────────────────────────────────────────────────────────┘
A lockfile at <base_path>.lock serializes the entire sync cycle across Neovim sessions
on the same machine. The lock covers all local reads, the merge, local writes, and the
push to Drive.
- Mechanism: `O_CREAT|O_EXCL` via `vim.uv.fs_open()` for atomic creation.
- Content: the PID of the owning process.
- Stale detection: on lock failure, the lockfile's PID is read and checked with `kill(pid, 0)`. If the process is dead, the lock is removed and reacquired.
- Timeout: configurable via `lock_timeout_ms` (default: 10s). On timeout, the sync is skipped — the next trigger will retry.
- Async vs blocking: two lock functions in `fs.lua`:
  - `lock_async(timeout_ms, callback)` — non-blocking; uses a `uv_timer` to poll every 100ms. Used by all normal sync paths (startup, file watcher, periodic pull, manual).
  - `lock(timeout_ms)` — blocking; uses `vim.wait()`. Only used by `VimLeavePre`, where the sync must complete before Neovim exits.
- Reentrancy guard: a module-local `sync_in_progress` flag in `init.lua` (set eagerly, before the async lock callback) prevents reentrant sync attempts (e.g. the file watcher firing during an ongoing sync).
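The atomicity and staleness logic can be sketched as follows (a Python stand-in: the real plugin uses `vim.uv.fs_open()` and async polling, but the O_EXCL creation and the `kill(pid, 0)` probe are the same idea):

```python
import os

def try_lock(lock_path):
    """Atomic lockfile creation via O_CREAT|O_EXCL; content is our PID."""
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    except FileExistsError:
        return False
    os.write(fd, str(os.getpid()).encode())
    os.close(fd)
    return True

def remove_if_stale(lock_path):
    """Drop the lock if its recorded owner is no longer alive (kill(pid, 0))."""
    try:
        with open(lock_path) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)            # probe only; raises if the process is gone
        return False               # owner still running: lock is legitimate
    except (FileNotFoundError, ValueError, ProcessLookupError):
        pass                       # missing, corrupt, or dead owner: stale
    try:
        os.unlink(lock_path)
    except FileNotFoundError:
        pass
    return True
```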
Google Drive assigns a monotonically increasing version number to every file. dooing-sync captures the version on download and verifies it before uploading.
- Download: content and version are fetched in parallel (two API calls: `alt=media` for content, `?fields=version` for metadata).
- Upload: when a version is available, a pre-flight check fetches the current version from Drive. If it differs from the expected version, the upload is aborted with a `version_mismatch` error (another machine pushed since we last pulled).
- Retry: on mismatch, the entire sync cycle is retried (release lock → re-pull → re-merge → re-push with the fresh version). Retries are capped at `max_retries` (default: 2).
- Graceful fallback: if the version is unavailable (e.g. the metadata request failed), the push is unconditional (equivalent to pre-concurrency behavior).
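The retry loop condenses to the following shape (a Python sketch; `drive` and `pull_and_merge` are hypothetical stand-ins for the gdrive.lua calls and the lock/pull/merge steps):

```python
def guarded_push(drive, pull_and_merge, max_retries=2):
    """Conditional push: re-run the full cycle when the remote version moved."""
    for _attempt in range(max_retries + 1):
        merged, expected_version = pull_and_merge()   # lock, pull, three-way merge
        if expected_version is None:                  # version unknown: fall back
            drive.upload(merged)                      # to an unconditional push
            return True
        if drive.current_version() == expected_version:  # pre-flight check
            drive.upload(merged)
            return True
        # version_mismatch: another machine pushed since we pulled; retry
    return False
```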
The base snapshot is only updated after a successful push (or when no push is needed because merged == remote). This ensures that a failed push leaves the base in its previous state, so the next sync re-merges correctly without data loss.
There is no "push-only" code path. Every push goes through the full three-way merge cycle
(pull → merge → push), ensuring we never overwrite remote changes.
This runs during dooing-sync.setup() (or on UIEnter if no UI is attached yet).
Lock acquisition and network I/O are fully asynchronous — the main thread is never
blocked, so Neovim remains responsive during the initial sync.
dooing-sync.setup()
│
├─ Validate credentials
│ └─ Missing? → disable sync, return
│
├─ Resolve save_path
│
├─ INITIAL SYNC (async — non-blocking)
│ │
│ ├─ 1. Acquire local lock (async via uv_timer polling)
│ │
│ ├─ 2. Load base snapshot + local file (under lock)
│ │
│ ├─ 3. Pull remote (with version capture)
│ │ ├─ Find file on Drive (by name + folder)
│ │ └─ Download content + version (parallel requests)
│ │ └─ Not found? → push local as-is, save base, unlock, done
│ │
│ ├─ 4. Three-way merge(base, local, remote)
│ │ └─ See "Three-Way Merge Algorithm" below
│ │
│ ├─ 5. Write merged → save_path (if changed)
│ │ └─ Set write guard (suppress file watcher)
│ │
│ ├─ 6. Push merged → Drive (conditional: version check)
│ │ ├─ Success → save base snapshot, unlock, done
│ │ ├─ 412 Mismatch → unlock, retry from step 1 (max 2 retries)
│ │ └─ Other error → unlock, done (retry on next trigger)
│ │
│ └─ 7. Release local lock
│
├─ Start file watcher on save_path
│
├─ Start periodic pull timer
│
├─ Register VimLeavePre autocmd
│
└─ Register :DooingSync, :DooingSyncStatus commands
│
▼
dooing.setup() ← loads the merged file
Triggered by the file watcher when dooing writes to save_path. Uses the full sync
cycle (not a blind push) to prevent lost updates.
dooing saves file
│
▼
fs_event fires
│
├─ Debounce (500ms)
│
├─ Write guard active? → Yes → skip (this was our own write)
│
├─ Sync already in progress? → Yes → skip
│
└─ Full sync cycle (same as startup sync: lock → pull → merge → push → unlock)
Triggered by a repeating timer (default: every 5 minutes). Uses the same full sync cycle.
Timer fires
│
├─ Sync already in progress? → Yes → skip
│
└─ Full sync cycle (lock → pull → merge → push → unlock)
Ensures data is synced before Neovim exits. This is the only blocking sync path —
it uses the synchronous fs.lock() + vim.wait() to guarantee completion before exit.
VimLeavePre
│
└─ Full sync cycle (blocking lock + vim.wait, up to sync_on_close_timeout_ms)
The merge operates on three versions of the todo list:
- Base — the last successfully synced state (common ancestor)
- Local — the current file on this machine
- Remote — the file from Google Drive (another machine's changes)
Each todo item has a unique id field, used as the merge key.
For each id present in any of the three versions:
┌────────┬─────────┬─────────┬──────────────────────────────────┐
│ Base │ Local │ Remote │ Action │
├────────┼─────────┼─────────┼──────────────────────────────────┤
│ — │ ✓ │ — │ Added locally → KEEP │
│ — │ — │ ✓ │ Added remotely → KEEP │
│ — │ ✓ │ ✓ │ Added both → KEEP (dedup by id) │
│ ✓ │ — │ — │ Deleted both → DELETE │
│ ✓ │ — │ ✓ │ Deleted locally → DELETE │
│ ✓ │ ✓ │ — │ Deleted remotely → DELETE │
│ ✓ │ ✓ │ ✓ │ See "Modification Detection" │
└────────┴─────────┴─────────┴──────────────────────────────────┘
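The matrix reduces to one presence test per side. A Python sketch of the classification (the plugin implements this in merge.lua):

```python
def classify(base_ids, local_ids, remote_ids):
    """Presence matrix: returns (kept, needs_modification_check) id sets."""
    kept, check = set(), set()
    for tid in base_ids | local_ids | remote_ids:
        in_base = tid in base_ids
        in_local, in_remote = tid in local_ids, tid in remote_ids
        if not in_base:
            kept.add(tid)       # added locally, remotely, or both → keep
        elif in_local and in_remote:
            check.add(tid)      # present everywhere → modification detection
        # else: deleted on at least one side → drop
    return kept, check
```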
When an item exists in all three versions, compare serialized JSON (with sorted keys):
local_json == remote_json → Both same (take either)
local_json == base_json → Only remote changed → take REMOTE
remote_json == base_json → Only local changed → take LOCAL
all three differ → FIELD-LEVEL MERGE
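Sorted-key serialization makes the comparison independent of key order. A sketch of the decision:

```python
import json

def canon(item):
    """Canonical form: JSON with sorted keys."""
    return json.dumps(item, sort_keys=True)

def resolve_existing(base, local, remote):
    """Returns the merged item, or None when a field-level merge is required."""
    if canon(local) == canon(remote):
        return local                     # both sides agree (or neither changed)
    if canon(local) == canon(base):
        return remote                    # only remote changed
    if canon(remote) == canon(base):
        return local                     # only local changed
    return None                          # all three differ → field-level merge
```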
When both local and remote changed the same item differently, merge individual fields:
For each field in { text, done, in_progress, category, created_at,
completed_at, priorities, estimated_hours, notes,
parent_id, depth, due_at }:
local_val == base_val → Use remote_val (remote changed it)
remote_val == base_val → Use local_val (local changed it)
local_val == remote_val → Use either (same change)
all three differ → TRUE CONFLICT → resolve per strategy
| Strategy | Behavior |
|---|---|
| `recent` (default) | Prefer the item with the higher `completed_at` or `created_at` timestamp |
| `local` | Always prefer the local version |
| `remote` | Always prefer the remote version |
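Putting the field rules and the conflict strategies together, a Python sketch of the field-level merge (the real implementation lives in merge.lua; behavior for ties and missing timestamps is an assumption here):

```python
MERGE_FIELDS = ["text", "done", "in_progress", "category", "created_at",
                "completed_at", "priorities", "estimated_hours", "notes",
                "parent_id", "depth", "due_at"]

def item_timestamp(item):
    """Recency key for the 'recent' strategy: completed_at, else created_at."""
    return item.get("completed_at") or item.get("created_at") or 0

def merge_fields(base, local, remote, strategy="recent"):
    """Per-field three-way merge; unknown keys are carried through unchanged."""
    merged = {}
    for field in set(MERGE_FIELDS) | set(local) | set(remote):
        b, l, r = base.get(field), local.get(field), remote.get(field)
        if l == r:
            value = l                       # same value (or same change)
        elif l == b:
            value = r                       # only remote changed this field
        elif r == b:
            value = l                       # only local changed this field
        elif strategy == "local":
            value = l
        elif strategy == "remote":
            value = r
        else:                               # "recent": newer item wins the field
            value = l if item_timestamp(local) >= item_timestamp(remote) else r
        if value is not None:
            merged[field] = value
    return merged
```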
Unknown fields (added by future dooing versions) are preserved through merges. The merge engine copies any unrecognized keys from both local and remote items to the merged result.
The merged result is an unordered list of items. Dooing re-sorts todos by priority, due date, and creation time on every load, so output order is irrelevant.
When no base snapshot exists (first sync from a machine), base is treated as empty:
- All local items are classified as "added locally"
- All remote items are classified as "added remotely"
- Items with the same `id` in both are deduplicated
Uses libuv's fs_event via vim.uv.new_fs_event() to watch dooing's save_path.
Dooing may write multiple times in quick succession (e.g. sort + save). The watcher debounces with a 500ms timer — only the last event in a burst triggers a push.
Event 1 ──► start 500ms timer
Event 2 ──► reset timer
Event 3 ──► reset timer
... 500ms pass ...
──► trigger push callback
When dooing-sync itself writes to save_path (after a merge), a writing_local flag
is set to suppress the file watcher from triggering an unnecessary push. The flag is
cleared after 700ms (beyond the debounce window).
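The debounce plus write guard can be sketched like this (a Python stand-in using `threading.Timer`; the plugin uses uv timers, and `writing_local` mirrors the write guard above):

```python
import threading

class Debouncer:
    """Coalesce a burst of fs events into one callback after `delay` seconds."""
    def __init__(self, delay, callback):
        self.delay, self.callback = delay, callback
        self._timer = None
        self.writing_local = False      # write guard: set around our own writes

    def on_event(self):
        if self.writing_local:
            return                      # our own merged write: ignore
        if self._timer:
            self._timer.cancel()        # a newer event resets the timer
        self._timer = threading.Timer(self.delay, self.callback)
        self._timer.start()
```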
fs.write_json() writes to a .tmp file first, then renames atomically via
os.rename(). This prevents reading a half-written file if dooing and sync race.
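The write-then-rename pattern, sketched in Python (the plugin does the equivalent in fs.lua; the `fsync` before the rename is an assumption about durability, not confirmed by the source):

```python
import json, os

def write_json_atomic(path, data):
    """Write to '<path>.tmp', flush to disk, then rename into place."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(data, f)
        f.flush()
        os.fsync(f.fileno())   # ensure bytes hit disk before the rename
    os.rename(tmp, path)       # readers see either the old or the new file
```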
The access token is cached in a module-local variable with an `expires_at` timestamp. A 60-second safety margin is applied to avoid using a token that is about to expire:

`token_expires_at = os.time() + expires_in - 60`

`get_access_token()` checks the cache first. If the token is expired or missing, it automatically calls `refresh_access_token()`.

`invalidate_token()` clears the cache, forcing a refresh on the next request. This is useful after receiving a 401 Unauthorized response.
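The caching contract can be sketched as follows (Python stand-in for gdrive.lua; the injectable `clock` exists only to make the sketch testable):

```python
import time

class TokenCache:
    """In-memory access-token cache with a 60s early-expiry margin."""
    MARGIN = 60

    def __init__(self, refresh_fn, clock=time.time):
        self._refresh, self._clock = refresh_fn, clock
        self._token, self._expires_at = None, 0

    def get_access_token(self):
        """Serve from cache; refresh when missing or within the margin."""
        if self._token is None or self._clock() >= self._expires_at:
            self._token, expires_in = self._refresh()
            self._expires_at = self._clock() + expires_in - self.MARGIN
        return self._token

    def invalidate_token(self):
        """Force a refresh on the next request (e.g. after a 401)."""
        self._token, self._expires_at = None, 0
```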
Credentials are read from environment variables (not files), making them compatible with any secret management approach:
| Variable | Content |
|---|---|
| `DOOING_GDRIVE_CLIENT_ID` | OAuth 2.0 client ID |
| `DOOING_GDRIVE_CLIENT_SECRET` | OAuth 2.0 client secret |
| `DOOING_GDRIVE_REFRESH_TOKEN` | Long-lived refresh token |
Never break dooing's normal operation. Sync failures are logged but never throw errors or block the editor.
| Scenario | Behavior |
|---|---|
| No credentials in environment | Sync disabled silently, dooing works normally |
| Network unreachable | Push/pull skipped, logged as warning |
| Google API 5xx | Logged as warning, operation skipped |
| Token expired | Auto-refresh; if refresh fails, logged as error |
| Corrupt remote JSON | Logged as error, merge skipped, local preserved |
| Corrupt base snapshot | Treated as first sync (base = nil) |
| Initial sync timeout | Logged as warning, dooing loads local data |
| Concurrent pushes from two machines | Version mismatch → automatic retry with fresh data (up to max_retries) |
| Lock timeout (another local session syncing) | Sync skipped; next trigger retries |
| Neovim crash while holding lock | Stale lock detected by PID check on next sync, automatically removed |
| Level | When |
|---|---|
| DEBUG | Token refreshes, file operations, sync steps (only with `debug = true`) |
| INFO | Sync results, initial push, credential warnings |
| WARN | Network failures, timeouts, push failures |
| ERROR | Corrupt data, parse failures |
tests/
├── test_config.lua 13 unit tests Config merging, credential validation, path resolution
├── test_fs.lua 13 unit tests JSON I/O, atomic writes, base snapshots, file watcher
├── test_fs_lock.lua 18 unit tests File locking, PID detection, stale lock cleanup
├── test_merge.lua 18 unit tests All merge cases, field-level merge, conflict strategies
├── test_gdrive_etag.lua 14 unit tests Version-based concurrency, pre-flight checks, mismatch detection
├── test_init_sync.lua 9 unit tests Protected sync cycle, retry, lock lifecycle (mocked gdrive)
├── test_gdrive.lua 5 integration Token refresh, push/pull round-trip (requires credentials)
└── test_init.lua 10 integration Full lifecycle: setup, sync, push-on-save, teardown
──────────────
100 total
```sh
# Unit tests (offline, fast)
nvim --headless -l tests/test_config.lua
nvim --headless -l tests/test_fs.lua
nvim --headless -l tests/test_fs_lock.lua
nvim --headless -l tests/test_merge.lua
nvim --headless -l tests/test_gdrive_etag.lua
nvim --headless -l tests/test_init_sync.lua

# Integration tests (require network + OAuth credentials)
nvim --headless -l tests/test_gdrive.lua
nvim --headless -l tests/test_init.lua
```

- Integration tests use a temporary `save_path` and `base_path` in `/tmp/`
- They clean up after themselves
- `test_gdrive.lua` gracefully skips if credentials are missing
- `test_init.lua` uses `teardown()` to clean up state between tests
Tests use a minimal custom runner (no external dependencies):
- `test(name, fn)` — wraps `pcall`, prints `✓`/`✗` with error details
- `skip(name, reason)` — prints `⊘` for skipped tests
- Non-zero exit code on any failure