ProcessAce


AI-powered process discovery and documentation engine – from raw text to BPMN 2.0, SIPOC, and RACI. Self-hosted with bring-your-own LLM.

🌐 Website & Enterprise Options: processace.com

Status: Launched as of v1.3.0, with the current security, privacy, and compliance hardening shipped in the 1.3.x line.


✨ Features

ProcessAce turns raw process evidence into standard, tool-agnostic process documentation in minutes.

  • Ingest Evidence:
    • Text documents (SOPs, meeting notes, emails).
    • Planned: Audio/Video recordings, Images.
  • Analyze & Normalize:
    • Uses LLMs (OpenAI, Google Gemini, Anthropic Claude) to extract steps, actors, and systems.
    • Normalizes data into a structured evidence model.
  • Generate Artifacts:
    • BPMN 2.0 Diagrams: Auto-generated with professional layout (Manhattan routing, grid system).
    • SIPOC Tables: Supplier-Input-Process-Output-Customer matrices.
    • RACI Matrices: Responsible-Accountable-Consulted-Informed matrices.
    • Narrative Docs: Markdown-based process descriptions.
  • Interactive Editing:
    • BPMN Viewer/Editor: View and modify diagrams directly in the browser (bpmn-js v18).
    • Rich Text: Edit narrative docs with a WYSIWYG Markdown editor (EasyMDE).
    • Tables: Interactive SIPOC/RACI editing with add/delete row support.
  • Export Artifacts:
    • BPMN: Export as XML (for tools) or PNG/SVG (for presentations).
    • SIPOC/RACI: Export tables as CSV.
    • Narrative: Download as Markdown or Print/Save as PDF.
  • User Authentication & Workspaces:
    • Secure Login: Email/password with JWT (HTTP-only cookies).
    • Role-Based Access: Admin, Editor, and Viewer roles. The first registered user becomes Admin; later self-registrations become pending Editors until approved by an Admin.
    • Workspaces: Create, switch, and share workspaces for organizing projects (Admin/Editor/Viewer roles).
    • User Data Isolation: Jobs and artifacts scoped per user and workspace.
  • Multi-Provider LLM Support:
    • Choose provider and model for each processing job (OpenAI, Google GenAI, Anthropic).
    • API keys are stored encrypted (AES-256-CBC) in the database.
  • Robust Architecture:
    • Dockerized: Easy deployment with Docker Compose (App + Redis).
    • Async Processing: Redis-backed job queue (BullMQ) for long-running generative tasks.
    • Persistence: SQLite (better-sqlite3 in dev/test, SQLCipher-compatible encrypted SQLite in production).

🚀 Getting Started

Prerequisites

  • Docker & Docker Compose (Recommended)
  • An LLM API key (OpenAI, Google GenAI, or Anthropic)
  • A 32-byte Hex string (for secure API key encryption)
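
If you don't have a key handy, a 32-byte hex string can be generated with OpenSSL (a generic sketch; any cryptographically secure source of 32 random bytes works, e.g. Node's `crypto.randomBytes(32).toString('hex')`):

```shell
# Generate a 32-byte (64 hex character) random key, e.g. for ENCRYPTION_KEY.
openssl rand -hex 32
```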

Quick Start (Docker)

  1. Clone the repository:

    git clone https://github.com/jgleiser/ProcessAce.git
    cd ProcessAce
  2. Configure Environment:

    cp .env.example .env
    # Edit .env and set JWT_SECRET, ENCRYPTION_KEY, SQLITE_ENCRYPTION_KEY, CORS_ALLOWED_ORIGINS, and REDIS_PASSWORD

    Required for Docker startup:

    • JWT_SECRET: signing secret for auth cookies
    • ENCRYPTION_KEY: 32-byte hex key for encrypting stored provider API keys
    • SQLITE_ENCRYPTION_KEY: production SQLCipher key for the app database
    • CORS_ALLOWED_ORIGINS: comma-separated allowed origins, for example http://localhost:3000
    • REDIS_PASSWORD: shared secret used by the app and Redis container

    Optional:

    • MAX_UPLOAD_SIZE_MB: maximum upload size in megabytes for evidence uploads (defaults to 100)
    • CADDY_HOST and CADDY_EMAIL: required only when using the TLS overlay (docker-compose.tls.yml)
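
Putting those together, a minimal .env for the base stack might look like the following (every value below is a placeholder; generate your own secrets):

```shell
# .env -- example values only; replace every secret before use.
JWT_SECRET=replace-with-a-long-random-string
ENCRYPTION_KEY=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
SQLITE_ENCRYPTION_KEY=replace-with-your-sqlcipher-key
CORS_ALLOWED_ORIGINS=http://localhost:3000
REDIS_PASSWORD=replace-with-a-shared-secret

# Optional:
# MAX_UPLOAD_SIZE_MB=100
```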
  3. Run with Docker Compose:

    docker compose up -d --build

    Note (Windows/Mac/WSL2): If you encounter SQLITE_IOERR_SHMOPEN errors, ensure the environment variable DISABLE_SQLITE_WAL=true is set in docker-compose.yml (it is by default).

    Note (Linux bind mounts): The host data/ and uploads/ directories must be writable by the container runtime user because the app now runs as a non-root appuser.
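
Before the first start on Linux, you can create the bind-mount directories and check that they are writable (a minimal sketch that only verifies the current host user; if the container's non-root appuser has a different UID, chown the directories to that UID instead):

```shell
# Create the host directories backing the bind mounts and verify that the
# current user can write to them. Note this does not prove the container's
# appuser (which may have a different UID) can write -- chown if needed.
mkdir -p data uploads
for d in data uploads; do
  touch "$d/.write_test" && rm "$d/.write_test" && echo "$d is writable"
done
```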

    Note (encrypted DB build in Docker): You do not need to install SQLCipher manually on your Windows, macOS, or Linux host just to use Docker Compose. The image builds the production encrypted SQLite module inside the container. If you see encrypted-database load errors after updating, rebuild the image with docker compose build --no-cache app and then restart the stack.

    Note (production DB encryption): Production now expects SQLCipher and a SQLITE_ENCRYPTION_KEY. Existing plaintext production databases are not auto-migrated; follow the documented export/import migration flow before switching an existing instance.

  4. Open the Web UI: Navigate to http://localhost:3000.

  5. Create an Account: Go to /register.html to create your first user account (it becomes the active Admin account). Later self-registrations stay pending until an Admin approves them.

  6. Configure LLM Provider: Go to App Settings (/app-settings.html) to set your LLM provider and API key.

  7. Test the Magic: Drop the provided samples/sample_process.txt file into the upload zone on your dashboard to see your first BPMN diagram and SIPOC table generated instantly!

Production HTTPS

Use the TLS overlay for production deployments so ProcessAce is served behind Caddy with automatic HTTPS:

docker compose -f docker-compose.yml -f docker-compose.tls.yml up -d --build

Set these additional variables in .env before using the overlay:

  • CADDY_HOST: public hostname for the ProcessAce instance
  • CADDY_EMAIL: email address Caddy uses for ACME notifications
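
As a sketch, the additional .env entries for the overlay look like this (hostname and email are placeholders):

```shell
# Additional .env entries for the TLS overlay -- placeholder values.
CADDY_HOST=processace.example.com
CADDY_EMAIL=admin@example.com
```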

When the TLS overlay is enabled, only Caddy publishes 80/443; the app and Redis stay on the internal Compose network. See docs/tls-setup.md for the full setup, migration notes, and nginx/Traefik alternatives.

For non-Docker production installs, if no prebuilt binary is available for your platform, install the native Node.js build toolchain required to compile better-sqlite3 on the target machine before running npm ci.


🔑 Bring Your Own LLM

Ollama Deployment Modes

The base Docker stack is cloud-only by default. Bundled Ollama is now opt-in through a Compose override, and host-native Ollama remains supported through environment variables.

For the full setup and troubleshooting guide, see docs/ollama_guide.md.

Bundled CPU Ollama

Use the dedicated Ollama override:

docker compose -f docker-compose.yml -f docker-compose.ollama.yml up -d --build

In this mode, the app container uses:

  • OLLAMA_BASE_URL_DEFAULT=http://ollama:11434/v1
  • OLLAMA_PULL_HOST=http://ollama:11434

Cloud-Only Providers

If you only want OpenAI, Google GenAI, or Anthropic, the default stack stays lean:

docker compose up -d --build

No bundled ollama container is started in this mode.

The base stack still requires the standard security variables in .env:

  • JWT_SECRET
  • ENCRYPTION_KEY
  • SQLITE_ENCRYPTION_KEY
  • CORS_ALLOWED_ORIGINS
  • REDIS_PASSWORD

Windows + AMD GPU Fallback

Docker Desktop on Windows does not currently provide a stable AMD passthrough path for the bundled Ollama container. For Windows hosts with AMD GPUs, run Ollama on the host and point the app container to it:

  1. Install and start Ollama on Windows.

  2. Set the following in .env:

    CORS_ALLOWED_ORIGINS=http://localhost:3000
    OLLAMA_BASE_URL_DEFAULT=http://host.docker.internal:11434/v1
    OLLAMA_PULL_HOST=http://host.docker.internal:11434
  3. Start the stack normally:

    docker compose up -d --build

The App Settings page and Ollama model manager will use the host Ollama instance.

Linux AMD GPU Docker Mode

For Linux hosts with ROCm-capable AMD GPUs, use both the Ollama override and the AMD override:

docker compose -f docker-compose.yml -f docker-compose.ollama.yml -f docker-compose.ollama-amd.yml up -d --build

This override switches the Ollama image to ollama/ollama:rocm and passes through /dev/kfd and /dev/dri.

Host prerequisites:

  • Linux host running Docker Engine
  • ROCm-capable AMD GPU with a working host driver stack
  • Docker access to /dev/kfd and /dev/dri

Validation

Bundled or host Ollama:

  • Open /app-settings.html
  • Select Ollama (Local)
  • Use Load Models or Check Status to verify connectivity
  • Manage curated local generation models in 2.1 Local Model Manager

Important:

  • Ollama is supported for artifact generation.
  • Transcription remains on OpenAI-compatible STT providers.

Linux AMD Docker:

  • docker compose exec ollama ls /dev/kfd /dev/dri
  • Run a model and verify docker compose exec ollama ollama ps

Windows host fallback:

  • Confirm the settings page loads models through http://host.docker.internal:11434/v1
  • Verify GPU activity on the Windows host while Ollama runs

Troubleshooting

  • If the Linux AMD container cannot see /dev/kfd or /dev/dri, the host ROCm or graphics stack is not exposed to Docker correctly.
  • If you expected bundled Ollama but no ollama container exists, start the stack with docker-compose.ollama.yml.
  • If model pulls still hit the wrong Ollama endpoint, check OLLAMA_BASE_URL_DEFAULT and OLLAMA_PULL_HOST in .env.
  • If Ollama is unreachable from Docker in host mode, confirm the host Ollama service is listening on port 11434 and reachable through host.docker.internal.

Bring Your Own LLM

ProcessAce does not bundle or resell any LLM. You configure your own provider and keys via the App Settings page. The application natively supports:

  • OpenAI (default: gpt-5-nano-2025-08-07)
  • Google GenAI (default: gemini-2.5-flash-lite)
  • Anthropic (default: claude-haiku-4-5-20251001)

🧱 Architecture & Auditability

ProcessAce is built for reliability and process mining readiness:

  • Frontend: Vanilla HTML5/JS/CSS Single Page Application.
  • Backend: Node.js Express API.
  • Database: SQLite (better-sqlite3).
  • Queue & Workers: Redis (BullMQ) for background job processing.
  • Audit Trails: Structured, event-style logging (Pino) for events like job_queued, llm_call, and artifact_version_created.
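
As an illustration only, a structured audit event emitted through Pino might look like the line below. The event names come from the list above and the numeric `level`/`time` fields match Pino's defaults, but the remaining field names (`jobId`, `provider`, `durationMs`) are hypothetical, not the app's exact schema:

```json
{"level":30,"time":1700000000000,"event":"llm_call","jobId":"job_123","provider":"openai","model":"gpt-5-nano-2025-08-07","durationMs":4200}
```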

See docs/architecture.md for a deep dive.


🗺️ Documentation


📄 License

ProcessAce is source-available under the ProcessAce Sustainable Use License.

  • Free to use internally, self-host, and modify for internal use.
  • You may not run ProcessAce as a multi-tenant SaaS/platform or resell it without a commercial license.

See LICENSE.md for the full terms. For commercial/enterprise licensing, visit processace.com or see COMMERCIAL_LICENSE.md.


🤝 Contributing

Contributions are welcome! Please check CONTRIBUTING.md and CODE_OF_CONDUCT.md. By contributing, you agree that your contributions may be used in both the Sustainable Use edition and any future commercial editions of ProcessAce.
