A unified distributed actor framework for building scalable, fault-tolerant systems
Features • Quick Start • Documentation • Examples • Comparison
PlexSpaces is a distributed actor framework that unifies the best patterns from Erlang/OTP, Orleans, Temporal, and modern serverless architectures. It provides a single, powerful abstraction for building durable workflows, stateful microservices, distributed ML workloads, and edge computing applications.
- Five Foundational Pillars: TupleSpace coordination, Erlang/OTP supervision, durable execution, WASM runtime, and Firecracker isolation
- Composable Abstractions: One powerful actor model with dynamic facets instead of multiple specialized types
- Location Transparency: Actors work seamlessly across local processes, containers, and cloud regions
- Polyglot Support: Write actors in Rust, Python, TypeScript, Go, or any language that compiles to WebAssembly
- Production Ready: Built-in observability, fault tolerance, and resource-aware scheduling
- Durable Actors: Stateful actors with automatic persistence and fault recovery
- Virtual Actors: Orleans-style activation/deactivation with automatic lifecycle management
- Workflows: Temporal-style durable workflows with exactly-once execution
- TupleSpace Coordination: Linda-style associative memory for decoupled communication
- Supervision Trees: Erlang/OTP-inspired fault tolerance with restart strategies
- WASM Runtime: Deploy actors written in any language that compiles to WebAssembly
- Firecracker Isolation: Run actors in lightweight microVMs for strong isolation
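The Linda model behind TupleSpace coordination is easy to sketch. The following is a minimal plain-Python illustration of the classic `out`/`rd`/`in` operations (here `inp` to avoid the keyword) with wildcard pattern matching; it is not the PlexSpaces API, just the semantics the TupleSpace primitive generalizes:

```python
# Illustrative sketch of Linda-style associative memory (not the PlexSpaces API).
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, *tup):
        """Write a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        # None acts as a wildcard; other fields must match exactly.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup)
        )

    def rd(self, *pattern):
        """Read (non-destructively) the first tuple matching the pattern."""
        for tup in self.tuples:
            if self._match(pattern, tup):
                return tup
        return None

    def inp(self, *pattern):
        """Take (destructively) the first matching tuple."""
        for i, tup in enumerate(self.tuples):
            if self._match(pattern, tup):
                return self.tuples.pop(i)
        return None

space = TupleSpace()
space.out("task", 1, "render")
space.out("task", 2, "resize")
print(space.rd("task", None, "resize"))  # ("task", 2, "resize")
print(space.inp("task", 1, None))        # removes and returns ("task", 1, "render")
```

Because producers and consumers only share tuple patterns, never references to each other, communication stays fully decoupled in space and time.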
Behaviors (Compile-time patterns):
- GenServerBehavior: Erlang/OTP-style request/reply
- GenFSMBehavior: Finite state machine
- GenEventBehavior: Event-driven processing
- WorkflowBehavior: Durable workflow orchestration
Facets (Runtime capabilities):
- Infrastructure: VirtualActorFacet, DurabilityFacet, MobilityFacet
- Python WASM durability: Use `@actor(facets=["durability"])` and enable via `WasmConfig.durability_enabled` in release/node config — see Durability and the Bank Account example
- Capabilities: HttpClientFacet, KeyValueFacet, BlobStorageFacet
- Timers/Reminders: TimerFacet, ReminderFacet
- Observability: MetricsFacet, TracingFacet, LoggingFacet
- Security: AuthenticationFacet, AuthorizationFacet
- Events: EventEmitterFacet
Primitives:
- ActorRef: Location-transparent actor references
- ActorContext: Service access for actors
- TupleSpace: Linda-style coordination
- Channels: Queue and topic patterns (InMemory, Redis, Kafka, SQLite, NATS, UDP)
- Process Groups: Group communication
- Journaling: Event sourcing and replay
- Data-Parallel Actors (DPA-inspired): ShardGroup for data-parallel sharding with bulk updates, parallel map, and scatter-gather operations
- ShardGroup: Partition data across multiple shards/actors with hash/consistent-hash/range strategies
- BulkUpdateShardGroup: Bulk writes with eventual consistency (DPA UpdateFunction)
- MapShardGroup: Parallel queries across all shards (DPA Map operator)
- ScatterGather: Aggregation queries with fault tolerance (DPA Scatter-Gather)
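The consistent-hash strategy used for shard routing can be sketched in plain Python. The ring, virtual-node count, and shard names below are illustrative assumptions, not the ShardGroup SDK:

```python
# Hedged sketch of consistent-hash key-to-shard routing (not the SDK API).
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, shards, vnodes=64):
        # Each shard gets `vnodes` points on the ring to even out load.
        self.ring = sorted(
            (_hash(f"{shard}#{i}"), shard)
            for shard in shards
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    def route(self, key: str) -> str:
        """Map a key to the first shard clockwise from its hash."""
        idx = bisect.bisect(self.keys, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["shard-0", "shard-1", "shard-2"])
print(ring.route("account:42"))  # stable: the same key always hits the same shard
```

The point of consistent hashing over plain modulo hashing is that removing a shard only remaps the keys that lived on it; all other keys keep their placement.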
- Unified SDK: `ParallelClient` and `UnifiedShardGroupClient` for both WASM/internal and gRPC (optional feature)
- Resource-Based Routing: Labels flow through to ActorResourceRequirements for intelligent node placement
- FaaS-Style Invocation: HTTP-based actor invocation via the `InvokeActor` RPC (GET for reads, POST/PUT for updates, DELETE for deletes)
- RESTful API: `/api/v1/actors/{tenant_id}/{namespace}/{actor_type}` endpoint (or `/api/v1/actors/{namespace}/{actor_type}` without tenant_id)
- Namespace Support: Organize actors by namespace for better isolation (defaults to "default")
- Tenant Defaulting: Tenant ID defaults to "default" if not provided in path
- AWS Lambda URL Support: Ready for integration with AWS Lambda Function URLs
- Serverless Patterns: Invoke actors like serverless functions with automatic load balancing
- Resource-Aware Scheduling: Intelligent placement based on CPU, memory, and I/O profiles
- Multi-Tenancy: Two-level isolation (tenant-id from auth + namespace from application/actor) for secure multi-tenant deployments
- Event Sourcing: Complete audit trail with time-travel debugging
- Distributed Coordination: Actor groups, process groups, and distributed locks
- Observability: Built-in metrics, tracing, and health checks
- gRPC-First: All APIs defined in Protocol Buffers for type safety and multi-language support
- Capability Providers: HTTP, KeyValue, BlobStorage facets for I/O operations
- Security Facets: Authentication, authorization, and encryption support
- Event-Driven: EventEmitter facet for reactive programming patterns
- Graceful Shutdown: Actors using non-memory channels stop accepting new messages but complete in-progress work
- UDP Multicast Channels: Low-latency pub/sub for cluster-wide messaging
- Cluster Configuration: Node grouping via `cluster_name` for shared channels
- NodeService: Comprehensive node management with metrics, health checks, and capacity calculation
- NodeRegistry: TTL-based caching with gossip protocol for efficient node discovery
- Channel Factory: Priority-based backend selection (Kafka → NATS → SQS → ProcessGroup → InMemory)
- SecretMasker: Automatic masking of passwords, API keys, and tokens in API responses
- ServiceLocator: Centralized service discovery with typed accessors for all services
- BlobServiceTrait: Type-safe blob storage access via ServiceLocator
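The event-sourcing feature above rests on one idea: state is never stored directly, only derived by replaying a journal of events, which is also what makes time-travel debugging possible. A minimal plain-Python sketch (the bank-account event names are invented for illustration, not taken from PlexSpaces):

```python
# Illustrative event-sourcing sketch: rebuild state by replaying a journal.
import json

class Journal:
    def __init__(self):
        self.events = []

    def append(self, event):
        # A real journal would append durably; here we keep JSON in memory.
        self.events.append(json.dumps(event))

def apply(state, event):
    """Pure state-transition function: (state, event) -> new state."""
    if event["type"] == "deposited":
        state["balance"] += event["amount"]
    elif event["type"] == "withdrawn":
        state["balance"] -= event["amount"]
    return state

def replay(journal, upto=None):
    """Rebuild state from the journal; `upto` gives time travel to any point."""
    state = {"balance": 0}
    for raw in journal.events[:upto]:
        state = apply(state, json.loads(raw))
    return state

j = Journal()
j.append({"type": "deposited", "amount": 100})
j.append({"type": "withdrawn", "amount": 30})
print(replay(j))          # {'balance': 70}
print(replay(j, upto=1))  # {'balance': 100} - state as of the first event
```

Replaying a prefix of the journal recovers the state at any historical point, which is the essence of the "complete audit trail with time-travel debugging" claim.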
PlexSpaces follows three core principles:
- One Powerful Abstraction: A unified actor model with composable capabilities beats multiple specialized types
- Elevate Research to Production: Generalize proven research concepts (Linda, OTP, virtual actors) into production abstractions
- Composable Over Specialized: Dynamic facets enable capabilities without creating new actor types
- Proto-First: All contracts defined in Protocol Buffers for cross-language compatibility
- Location Transparency: Actors work seamlessly across local processes, containers, and cloud regions
- Fault Tolerance: "Let it crash" philosophy with automatic recovery via supervision trees
- Exactly-Once Semantics: Durable execution with deterministic replay guarantees
- Resource Awareness: Intelligent scheduling based on declared resource profiles
- Observability-First: Built-in metrics, tracing, and health checks for production operations
- TupleSpace Coordination (Linda Model): Decoupled communication via associative memory
- Erlang/OTP Philosophy: Supervision trees, behaviors, and "let it crash" fault tolerance
- Durable Execution: Restate-inspired journaling for exactly-once semantics and fault recovery
- WASM Runtime: Portable, secure actors that run anywhere
- Firecracker Isolation: MicroVM-level isolation for security and resource management
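The "let it crash" philosophy can be sketched in a few lines: a supervisor restarts a crashed child with fresh state, up to a restart budget, and escalates if the budget is exhausted. This is a plain-Python illustration of the one-for-one strategy, not the PlexSpaces Supervisor API:

```python
# Illustrative one-for-one supervision sketch (not the PlexSpaces API).
class Supervisor:
    def __init__(self, max_restarts=3):
        self.max_restarts = max_restarts

    def run_child(self, make_child):
        restarts = 0
        while True:
            child = make_child()  # fresh child (and fresh state) each restart
            try:
                return child()
            except Exception:
                restarts += 1
                if restarts > self.max_restarts:
                    raise  # budget exhausted: escalate to the parent supervisor

attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("crash")  # let it crash; the supervisor restarts us
    return "ok"

sup = Supervisor()
print(sup.run_child(lambda: flaky))  # "ok" after two restarts
```

The child contains no defensive error handling at all; recovery policy lives entirely in the supervisor, which is the core of the OTP design.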
Get PlexSpaces running in under 5 minutes:
```bash
# Using Docker (recommended)
docker run -p 8080:8080 -p 8000:8000 -p 8001:8001 plexspaces/node:latest

# Or build from source
git clone https://github.com/plexobject/plexspaces.git
cd plexspaces && make build
```

```rust
use plexspaces_sdk::{
    gen_server_actor, plexspaces_handlers, handler,
    NodeBuilder, RequestContext, ActorId,
    spawn_with_facets, call_message, json,
};
use std::sync::Arc;
use std::time::Duration;

// Define actor with SDK annotations (like Python decorators)
#[gen_server_actor]
struct Counter {
    count: i32,
}

impl Counter {
    fn new() -> Self { Self { count: 0 } }
}

// Define handlers - GenServer defaults to "call" (request-reply)
#[plexspaces_handlers]
impl Counter {
    #[handler("increment")]
    async fn increment(
        &mut self,
        _ctx: &plexspaces_sdk::ActorContext,
        msg: &plexspaces_sdk::Message,
    ) -> Result<serde_json::Value, plexspaces_sdk::BehaviorError> {
        let payload: serde_json::Value = serde_json::from_slice(&msg.payload)?;
        self.count += payload["amount"].as_i64().unwrap_or(1) as i32;
        Ok(json!({ "count": self.count }))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create and start the node
    let node = Arc::new(NodeBuilder::new("node1").build().await);
    let service_locator = node.service_locator();

    // Spawn the node in the background
    let node_clone = node.clone();
    tokio::spawn(async move { node_clone.start().await });
    tokio::time::sleep(Duration::from_millis(500)).await;

    // Create a request context (tenant isolation is required)
    let ctx = RequestContext::new_without_auth("my-tenant".into(), "default".into());

    // Spawn the actor using the SDK helper
    let actor_ref = spawn_with_facets(
        &ctx, service_locator.clone(),
        ActorId::from("counter@node1"), "default",
        Counter::new(), vec![],
    ).await?;

    // Send a request-reply message using the SDK helper
    let request = call_message(json!({ "action": "increment", "amount": 5 }));
    let reply = actor_ref.ask(request, Duration::from_secs(5)).await?;
    let result: serde_json::Value = serde_json::from_slice(&reply.payload)?;
    println!("Count: {}", result["count"]);
    Ok(())
}
```

That's it! You've created your first actor. See the Getting Started Guide for more examples.
- 📖 Concepts Guide - Learn Actors, Behaviors, Facets, and more
- 🚀 Examples - Explore real-world patterns
- 🏗️ Architecture - Understand the system design
```python
# counter_actor.py - Build to WASM with: plexspaces-py build counter_actor.py -o counter.wasm
from plexspaces import actor, state, handler

@actor
class CounterActor:
    count: int = state(default=0)

    @handler("increment")
    def increment(self, amount: int = 1) -> dict:
        self.count += amount
        return {"count": self.count}

    @handler("get")
    def get(self) -> dict:
        return {"count": self.count}
```

```bash
# Build and deploy
plexspaces-py build counter_actor.py -o counter.wasm
curl -X POST http://localhost:8094/api/v1/deploy \
  -F "namespace=default" -F "actor_type=counter" -F "wasm=@counter.wasm"

# Invoke via HTTP
curl "http://localhost:8094/api/v1/actors/default/counter/invoke?msg_type=increment" \
  -d '{"amount": 5}'
```

See the Getting Started Guide for detailed tutorials and the Concepts Guide to understand the fundamentals.
PlexSpaces excels at:
- Durable Workflows: Long-running business processes with automatic recovery
- Stateful Microservices: Services that maintain state across requests
- Distributed ML Workloads: Parameter servers, distributed training, and inference pipelines
- Event Processing: Real-time stream processing with exactly-once semantics
- Game Servers: Stateful game sessions with automatic migration and fault tolerance
- Edge Computing: Deploy actors to edge locations with automatic synchronization
- FaaS Platforms: Build serverless platforms with durable execution
- HTTP-Based Invocation: Invoke actors via REST API (`GET /api/v1/actors/{tenant_id}/{namespace}/{actor_type}` or `/api/v1/actors/{namespace}/{actor_type}`)
- Namespace Support: Organize actors by namespace within tenants for better isolation
- AWS Lambda Integration: Ready for AWS Lambda Function URLs and API Gateway
- Serverless Functions: Treat actors as serverless functions with automatic scaling
See Use Cases for detailed examples.
```text
┌─────────────────────────────────────────────────────────┐
│                    PlexSpaces Node                      │
├─────────────────────────────────────────────────────────┤
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │    Actors    │  │  Workflows   │  │ TupleSpaces  │   │
│  │  (GenServer) │  │  (Durable)   │  │   (Linda)    │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
│         │                 │                 │           │
│  ┌──────────────────────────────────────────────────┐   │
│  │           Actor Runtime & Supervision            │   │
│  └──────────────────────────────────────────────────┘   │
│         │                 │                 │           │
│  ┌──────────────────────────────────────────────────┐   │
│  │   Journaling │ WASM Runtime │ Firecracker        │   │
│  └──────────────────────────────────────────────────┘   │
│         │                 │                 │           │
│  ┌──────────────────────────────────────────────────┐   │
│  │          gRPC Services & Service Mesh            │   │
│  └──────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
```
See Architecture Guide for detailed design.
PlexSpaces includes comprehensive examples organized by complexity:
- Timers: Scheduled tasks and periodic operations
- Durable Actors: State persistence and recovery
- WASM Calculator: Polyglot actors in Python/JavaScript
- Actor Groups: Sharding and load distribution
- Matrix Multiply: Distributed computation
- Heat Diffusion: Multi-actor simulation
- MPI Patterns: Scatter-gather coordination
- Byzantine Generals: Consensus algorithms
- N-Body Simulation: Complex physics simulation
- Order Processing: Real-world workflow orchestration
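The scatter-gather coordination used in the MPI-style examples follows a simple shape: fan a query out to workers concurrently, aggregate whatever replies arrive within a deadline, and drop stragglers rather than block the aggregate. A plain-asyncio sketch under assumed worker behavior (not the PlexSpaces ScatterGather API):

```python
# Illustrative scatter-gather with straggler tolerance (not the PlexSpaces API).
import asyncio

async def worker(shard_id, delay):
    # Stand-in for a per-shard query; `delay` simulates shard latency.
    await asyncio.sleep(delay)
    return shard_id * 10  # pretend partial result

async def scatter_gather(delays, timeout=0.5):
    tasks = [asyncio.create_task(worker(i, d)) for i, d in enumerate(delays)]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for t in pending:
        t.cancel()  # drop slow or failed shards instead of blocking the result
    return sorted(t.result() for t in done)

results = asyncio.run(scatter_gather([0.01, 0.02, 2.0]))
print(results)  # [0, 10] - the slow third shard was dropped
```

Trading completeness for bounded latency this way is what makes the aggregation fault-tolerant: one dead shard degrades the answer instead of hanging it.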
Real-world use cases deployed as WASM actors (Python, TypeScript, Go):
- Ray Parameter Server (Python): Distributed ML training with gradient aggregation
- Erlang/OTP Rate Limiter (Go): Sliding window rate limiting service
- Bank Account (Python): Durable actors with checkpointing
- Orleans Batch Predictor (TypeScript): Virtual actor ML inference
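The sliding-window algorithm behind the rate-limiter example can be sketched in a few lines. This is a plain-Python illustration of the pattern (timestamps are injected for determinism), not the Go example's code:

```python
# Illustrative sliding-window rate limiter (not the example's actual code).
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, limit, window_secs):
        self.limit = limit
        self.window = window_secs
        self.hits = deque()  # timestamps of allowed requests

    def allow(self, now):
        # Evict hits that have slid out of the window.
        while self.hits and now - self.hits[0] >= self.window:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=2, window_secs=1.0)
print(limiter.allow(0.0))  # True
print(limiter.allow(0.1))  # True
print(limiter.allow(0.2))  # False: 2 requests already in the last second
print(limiter.allow(1.1))  # True: the earlier hits have expired
```

Running this logic inside a single actor gives the same serialized-access guarantee an Erlang gen_server provides: no locks are needed because the actor processes one message at a time.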
Side-by-side comparisons with 24+ frameworks:
- Erlang/OTP, Orleans, Temporal, Restate
- Ray, Cloudflare Workers, Azure Durable Functions
- AWS Step Functions, wasmCloud, Dapr
- And many more...
See Examples for the complete list.
- Actor System: Comprehensive guide to the unified actor system - actors, supervisors, applications, facets, behaviors, lifecycle, linking/monitoring, and observability
- Getting Started: Quick start guide and tutorials
- Concepts: Core concepts explained (Actors, Behaviors, Facets, TupleSpace, FaaS-Style Invocation, etc.)
- Architecture: System design, abstractions, and primitives (including FaaS Invocation)
- Detailed Design: Comprehensive component documentation with all facets, behaviors, APIs, and primitives (including InvokeActor Service)
- Security: Authentication (JWT for HTTP, mTLS for gRPC), tenant isolation, JWT claims and CLI token creation, middleware, and local testing (`PLEXSPACES_DISABLE_AUTH`)
- Installation: Docker, Kubernetes, and manual setup
- Testing: How to run unit tests, integration tests, and example tests
- WASM Deployment: Deploy polyglot WASM applications (Rust, Python, TypeScript, Go)
- Use Cases: Real-world application patterns and use cases (including FaaS Platforms)
- Examples: Example gallery with feature matrix
- CLI Reference: Command-line tools and operations
- API Reference: Full API documentation
Complete Documentation: All documentation is in the docs/ directory. Crate-specific documentation is in crates/*/README.md with references to main docs.
PlexSpaces unifies patterns from multiple frameworks:
| Framework | Pattern | PlexSpaces Abstraction |
|---|---|---|
| Erlang/OTP | GenServer, Supervision | GenServerBehavior, Supervisor |
| Akka | Actor Model, Message Passing | Actor, ActorRef, tell()/ask() |
| Orleans | Virtual Actors | VirtualActorFacet |
| Temporal | Durable Workflows | WorkflowBehavior, DurabilityFacet |
| Restate | Durable Execution | DurabilityFacet, Journaling |
| Ray | Distributed ML | GenServerBehavior, TupleSpace |
| Cloudflare Workers | Durable Objects | VirtualActorFacet, DurabilityFacet |
See Framework Comparisons for detailed side-by-side examples.
```bash
# Docker
docker pull plexspaces/node:latest
docker run -p 8080:8080 plexspaces/node:latest

# Kubernetes
kubectl apply -f k8s/deployment.yaml

# From source
git clone https://github.com/plexobject/plexspaces.git
cd plexspaces
cargo build --release
```

See Installation Guide for detailed instructions.
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
```bash
# Clone and build
git clone https://github.com/plexobject/plexspaces.git
cd plexspaces
make build

# Run tests
make test            # Includes unit tests and integration tests (see docs/testing.md)

# Run examples
make test-examples

# Check code coverage
cargo tarpaulin --lib --fail-under 95
```

PlexSpaces logs detailed information when initializing storage backends. To see these logs:
```bash
# Start a node with INFO logging enabled
RUST_LOG=info cargo run -p plexspaces-cli -- start --node-id test-node --listen-addr 0.0.0.0:8090

# Run an example with logging
cd examples/rust/embedded/durable_actor
RUST_LOG=info cargo run
```

Example output showing backend initialization:

```text
INFO plexspaces_keyvalue: KeyValue storage initialized db_path="/tmp/plexspaces-test-node.db" table="kv_store" backend="SQLite"
INFO plexspaces_locks: Locks storage initialized db_url="sqlite:///tmp/plexspaces.db" table="locks" backend="SQLite"
INFO plexspaces_journaling: Journal storage initialized backend="InMemory"
INFO plexspaces_blob: Blob storage initialized backend="minio" bucket="plexspaces" endpoint="http://localhost:9000"
```
Supported backends: SQLite, PostgreSQL, Redis, DynamoDB, MinIO/S3/GCP/Azure (blob), InMemory.
PlexSpaces is licensed under the GNU Lesser General Public License v2.1.
PlexSpaces is the evolution of JavaNow, a comprehensive parallel computing framework (based on the Actors/Linda memory model and MPI) developed for my post-graduate research in the late 1990s.
PlexSpaces incorporates patterns from:
- Erlang/OTP: Supervision trees and fault tolerance
- Akka: Actor model and message passing
- Microsoft Orleans: Virtual actors and activation patterns
- Temporal: Durable workflows and exactly-once execution
- Restate: Durable execution and journaling
- Ray: Distributed ML and resource scheduling
- Cloudflare Workers: Edge computing and Durable Objects
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share ideas
- Documentation: Full documentation
Built with ❤️ by the PlexSpaces team
Website • Documentation • Examples • GitHub