mesh-llm — Decentralised LLM Inference
February 26, 2026
Turn spare GPU capacity into a shared inference mesh. Serve many models across machines, run models larger than any single device, and scale capacity to meet demand. OpenAI-compatible API on every node.
Long, long time friend of WebDirections, Mike Neal, has a new project that enables distributed inference: you and others each share a small amount of your compute to run inference for open source models.
You can try it now, whether by sharing your own resources or by taking advantage of others’.
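Because every node exposes an OpenAI-compatible API, any standard client should be able to talk to a node directly. Here is a minimal sketch using only the Python standard library; the node URL and model name are placeholders, not mesh-llm specifics.

```python
import json
import urllib.request

# Placeholder endpoint — point this at a real mesh node's
# OpenAI-compatible chat completions route.
NODE_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send(url: str, payload: dict) -> dict:
    """POST the payload as JSON and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_chat_request("llama-3.1-8b", "Hello from the mesh!")
# send(NODE_URL, payload)  # uncomment to run against a live node
```

Because the request shape matches the OpenAI API, existing tooling (SDKs, proxies, eval harnesses) should work against a node without changes.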
