OpenGPU Mesh – Decentralized GPU Compute Sharing Network
A decentralized open-source platform that allows students to share idle GPU power securely and request distributed compute time using a token-based scheduling system.
🚩 Problem
High-performance GPUs are expensive and out of reach for many students and open-source contributors. Meanwhile, thousands of GPUs sit idle on personal systems during off-hours.
This leads to:
Limited AI/ML research access
Slower innovation in open-source
Inefficient hardware utilization
💡 Solution
OpenGPU Mesh is a decentralized compute-sharing network that enables:
Users to share idle GPU resources
Students to request temporary compute access
Token-based fair usage tracking
Secure, containerized execution of jobs
It transforms unused GPUs into a distributed open compute grid.
🏗 How It Works
1️⃣ Node Registration
Users install a lightweight GPU agent:
Detects available GPUs (via `nvidia-smi`)
Registers node with central scheduler
Reports idle capacity
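The agent flow above can be sketched roughly as follows. The scheduler URL, endpoint path, and payload fields are illustrative assumptions, not the project's actual API:

```python
# Minimal node-agent sketch: probe the local GPU and register with the scheduler.
import json
import subprocess
import urllib.request

def probe_gpu() -> dict:
    """Query the local GPU via nvidia-smi and report idle capacity."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.free",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    name, mem_free = [field.strip() for field in out.split(",")]
    return {"gpu": name, "free_mem_mb": int(mem_free)}

def build_registration(node_id: str, capacity: dict) -> dict:
    """Assemble the registration payload sent to the central scheduler."""
    return {"node_id": node_id, **capacity}

if __name__ == "__main__":
    payload = build_registration("node-01", probe_gpu())
    req = urllib.request.Request(
        "http://scheduler.local/api/nodes/register",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

In practice the agent would re-report capacity on a timer so the scheduler sees nodes go idle and busy in near real time.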
2️⃣ Job Submission
Students:
Submit an ML training job (packaged as a Docker container)
Specify required GPU memory & time
Receive a token-cost estimate
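A job submission might look like the sketch below. The pricing formula, field names, and image URL are assumptions for illustration:

```python
# Toy job payload plus a simple token-cost estimate.
def estimate_token_cost(gpu_mem_gb: int, hours: float,
                        rate_per_gb_hour: float = 0.5) -> float:
    """Cost scales with reserved GPU memory and requested wall-clock time."""
    return round(gpu_mem_gb * hours * rate_per_gb_hour, 2)

job = {
    "image": "ghcr.io/example/train:latest",  # hypothetical training image
    "gpu_mem_gb": 8,
    "max_hours": 2.0,
}
job["estimated_cost"] = estimate_token_cost(job["gpu_mem_gb"], job["max_hours"])
```

Quoting the estimate before dispatch lets the student confirm they can afford the run before any GPU time is consumed.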
3️⃣ Scheduler Allocation
Backend:
Matches job to available GPU node
Deploys container securely
Monitors execution via WebSockets
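A minimal first-fit version of the matching step might look like this; the node data model is an assumption, not the project's actual schema:

```python
# First-fit job-to-node matching over the registered node pool.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    node_id: str
    free_mem_gb: int
    busy: bool = False

def match_job(nodes: list[Node], required_mem_gb: int) -> Optional[Node]:
    """Return the first idle node with enough free GPU memory, else None."""
    for node in nodes:
        if not node.busy and node.free_mem_gb >= required_mem_gb:
            return node
    return None
```

First-fit is the simplest policy; a production scheduler would likely also weigh queue fairness and node reliability.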
4️⃣ Token System
Contributors earn tokens by sharing GPU time
Users spend tokens to run jobs
Tokens keep usage fair and the ecosystem sustainable
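The earn/spend accounting can be illustrated with a toy in-memory ledger; a real deployment would need persistent, tamper-resistant storage:

```python
# Toy token ledger: contributors earn, job submitters spend.
class TokenLedger:
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def earn(self, user: str, tokens: float) -> None:
        """Credit a contributor for shared GPU time."""
        self.balances[user] = self.balances.get(user, 0.0) + tokens

    def spend(self, user: str, tokens: float) -> bool:
        """Debit a user for a job; refuse if the balance is insufficient."""
        if self.balances.get(user, 0.0) < tokens:
            return False
        self.balances[user] -= tokens
        return True
```

Refusing overdrafts up front keeps the system closed: tokens spent on compute must first be earned by contributing it.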
🛠 Tech Stack
Backend
FastAPI (API server)
Python scheduler engine
WebSockets (real-time monitoring)
Docker (secure job isolation)
Orchestration
Kubernetes (future scalability)
Distributed node registry
GPU Management
`nvidia-smi` monitoring
CUDA compatibility check
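As a sketch, the compatibility check could query the card's compute capability and compare it against a job's minimum. The `compute_cap` query field requires a reasonably recent NVIDIA driver, and the threshold below is illustrative:

```python
# CUDA compatibility check via nvidia-smi compute-capability query.
import subprocess

def parse_compute_cap(raw: str) -> tuple[int, int]:
    """Turn nvidia-smi output like '8.6' into a comparable (major, minor)."""
    major, minor = raw.strip().split(".")
    return int(major), int(minor)

def meets_requirement(cap: tuple[int, int], minimum: tuple[int, int]) -> bool:
    """Tuple comparison orders (major, minor) correctly."""
    return cap >= minimum

if __name__ == "__main__":
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=compute_cap", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(meets_requirement(parse_compute_cap(out), (7, 0)))
```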
Security
Container sandboxing
Resource limits (CPU & memory)
Time-based job kill switch
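The three measures above map fairly directly onto standard `docker run` flags plus a watchdog timeout. The image name and limits below are illustrative assumptions:

```python
# Build a sandboxed `docker run` invocation with caps and a kill switch.
def docker_command(image: str, mem_gb: int, cpus: int,
                   timeout_s: int) -> list[str]:
    """Compose the command the scheduler would launch for one job."""
    return [
        "timeout", str(timeout_s),   # time-based kill switch (coreutils)
        "docker", "run", "--rm",
        "--gpus", "all",             # expose the shared GPU to the container
        "--memory", f"{mem_gb}g",    # cap host RAM
        "--cpus", str(cpus),         # cap CPU shares
        "--network", "none",         # no network from inside the sandbox
        image,
    ]
```

Disabling the network is the strictest choice; jobs needing to fetch datasets would instead use an allow-listed proxy.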