Everything you need to build distributed AI systems, with zero external dependencies
Pure Rust + Tokio implementation. No etcd, NATS, or Consul required. Deploy anywhere with minimal footprint.
Built-in gossip-based node discovery and failure detection. Nodes automatically find each other and form clusters.
ActorRef provides unified access to local and remote actors. The API is the same regardless of where an actor runs.
Native support for streaming requests and responses. Perfect for LLM token generation and real-time data processing.
Full Python API via PyO3. Use the @as_actor decorator to turn any class into a distributed actor.
Built on Tokio async runtime with HTTP/2 transport. Handle thousands of concurrent actors efficiently.
# Install from source
pip install maturin
maturin develop
# Or using uv
uv pip install -e .
import asyncio

from pulsing.actor import as_actor, create_actor_system, SystemConfig

@as_actor
class Calculator:
    def __init__(self, initial: int = 0):
        self.value = initial

    def add(self, n: int) -> int:
        self.value += n
        return self.value

    def get(self) -> int:
        return self.value

async def main():
    system = await create_actor_system(SystemConfig.standalone())
    calc = await Calculator.local(system, initial=100)

    result = await calc.add(50)  # 150
    result = await calc.add(25)  # 175
    value = await calc.get()     # 175
    print(f"Final value: {value}")

    await system.shutdown()

asyncio.run(main())
Built for high-throughput distributed computing
* Benchmarks measured on Apple M2 MacBook Air. Results may vary based on workload and hardware.
Build scalable LLM inference backends with streaming token generation. Native integration with Transformers, vLLM, and MLX.
Replace Ray for lightweight distributed workloads. Perfect for ML pipelines, data processing, and microservices.
Designed for cloud-native deployments. Service discovery works seamlessly with K8s Service IPs and rolling updates.
Pulsing was created to fill the gap between heavyweight distributed systems (like Ray) and simple async programming. It provides just enough infrastructure for building distributed AI applications without the complexity of external coordination services. The Actor model makes reasoning about distributed state simple, while the SWIM protocol ensures automatic cluster management.
Send messages to actors without knowing their physical location. Pulsing automatically routes messages across the cluster using efficient HTTP/2 transport.
Actors can be local or remote - your code stays the same. Scale from single-node to distributed deployment without changing a line of application logic.
# Node 1 - Start seed node with public actor
config = SystemConfig.with_addr("0.0.0.0:8000")
system = await create_actor_system(config)
await system.spawn(Worker(), "worker", public=True)
# Node 2 - Find and use remote actor
config = SystemConfig.with_addr("0.0.0.0:8001") \
    .with_seeds(["192.168.1.1:8000"])
system = await create_actor_system(config)
worker = await system.find("worker")
result = await worker.process(data) # Same API!
Built-in SWIM protocol enables automatic node discovery and failure detection. No external coordination service required - nodes find each other and form clusters automatically.
Gossip-based protocol ensures consistency across the cluster with minimal network overhead. Nodes join and leave gracefully with automatic state reconciliation.
# Kubernetes deployment - just configure Service IP
config = SystemConfig.with_addr("0.0.0.0:8080") \
    .with_seeds(["actor-cluster.svc:8080"])
# Nodes automatically discover each other
# via K8s load balancing + SWIM protocol
system = await create_actor_system(config)
# Monitor cluster health
members = await system.get_members()
print(f"Cluster size: {len(members)}")
Native streaming support for token-by-token generation. Built-in OpenAI-compatible API router for seamless integration with existing tooling.
Integrates with popular frameworks including Transformers, vLLM, and MLX (for Apple Silicon). Deploy production LLM services with a single command.
# Start OpenAI-compatible Router
pulsing actor router --addr 0.0.0.0:8000 \
    --http_port 8080 --model_name my-llm

# Start vLLM Worker
pulsing actor vllm --model Qwen/Qwen2.5-0.5B \
    --addr 0.0.0.0:8001 --seeds 127.0.0.1:8000
# Test with curl
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "my-llm", "messages": [...]}'
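When `"stream": true` is set, an OpenAI-compatible endpoint returns Server-Sent Events: one `data: {...}` line per chunk, terminated by `data: [DONE]`. The sketch below shows one way to consume such a stream from Python; it assumes the router above is running on localhost:8080, that the model is registered as `my-llm`, and that the `requests` package is installed. The helper names are illustrative, not part of the Pulsing API.

```python
import json

def parse_sse_chunk(line: str):
    """Parse one Server-Sent Events line from a streaming chat
    completion and return the token delta, if any."""
    if not line.startswith("data: "):
        return None  # keep-alive or blank line
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":  # end-of-stream sentinel
        return None
    data = json.loads(payload)
    # Each chunk carries an incremental delta; "content" is absent
    # on role-only chunks, so .get() returns None for those.
    return data["choices"][0]["delta"].get("content")

def stream_tokens(prompt: str):
    """Yield generated tokens one by one from the local router
    (hypothetical setup: router listening on localhost:8080)."""
    import requests  # third-party: pip install requests
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "my-llm",
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        },
        stream=True,
    )
    for raw in resp.iter_lines(decode_unicode=True):
        token = parse_sse_chunk(raw or "")
        if token:
            yield token
```

Because the router speaks the standard OpenAI wire format, the official `openai` client library can be pointed at it as well by overriding its base URL.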
Pulsing is open source and community-driven. Get involved!