Every Stera machine is a node. Together, they form
the world's first sovereign distributed AI network.
A Stera machine has serious compute — 24 to 128 GB of GPU memory, built for local AI inference. But no one runs inference 24 hours a day. Most of the time, the GPU sits idle. That idle compute is wasted potential.
Stera Grid changes this. When your machine is not working for you, it can contribute its GPU to the Grid — a distributed network of Stera machines that collectively provide AI training compute. You earn credits for every cycle your GPU contributes. Those credits reduce the cost of owning a Stera machine to zero — or below.
On the other side, AI researchers and companies need GPU compute to train models. Today, they pay enormous prices to cloud providers for access to centralized data centers. The Grid offers an alternative: distributed training across thousands of sovereign machines, at a fraction of the cost.
Stera Grid
─────────────────────────────────────────────────
GPU Dedicators                        AI Trainers
(Stera owners)                        (Researchers, companies)
┌──────────┐                            ┌──────────┐
│ Stera    │──┐                   ┌──│ Submit   │
│ Machine  │  │                   │  │ Training │
└──────────┘  │                   │  │ Job      │
┌──────────┐  │  ┌──────────┐    │  └──────────┘
│ Stera    │──┼─▶│ Grid     │◀───┤
│ Machine  │  │  │ Router   │    │  ┌──────────┐
└──────────┘  │  └──────────┘    └──│ Submit   │
┌──────────┐  │        │            │ Training │
│ Stera    │──┘        │            │ Job      │
│ Machine  │           ▼            └──────────┘
└──────────┘    ┌──────────────┐
                │ Distribute   │
┌──────────┐    │ Fragments    │
│ Stera    │────│ Coordinate   │
│ Machine  │    │ Aggregate    │
└──────────┘    └──────────────┘
┌──────────┐           │
│ Stera    │───────────┘
│ Machine  │
└──────────┘
Credits flow: Trainers ──▶ Grid ──▶ Dedicators
The Grid Router receives training jobs, fragments them into workloads that can run across distributed GPUs, coordinates execution, and aggregates results. Each Stera machine processes its fragment locally — training data is encrypted in transit and at rest. The machine owner controls when the GPU is available: specific hours, idle-only, or always.
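The cycle the Router runs resembles data-parallel training: split a job into fragments, let each machine compute an update on its own fragment, then average the results into one global update. A minimal sketch (the function names and the placeholder math are illustrative, not the actual Grid protocol):

```python
# Illustrative fragment/aggregate cycle, modeled on data-parallel
# training: split a batch across nodes, let each node compute an
# update on its fragment, then average the per-node results.
# All names and the placeholder math are hypothetical.

def fragment(batch, num_nodes):
    """Split a training batch into one fragment per available node."""
    size = len(batch) // num_nodes
    return [batch[i * size:(i + 1) * size] for i in range(num_nodes)]

def local_update(frag):
    """Stand-in for the work a single Stera machine does locally."""
    return sum(frag) / len(frag)  # placeholder for a real gradient step

def aggregate(updates):
    """Average per-node results into one global update."""
    return sum(updates) / len(updates)

batch = list(range(8))                      # 8 training examples
fragments = fragment(batch, 4)              # spread across 4 machines
global_update = aggregate([local_update(f) for f in fragments])
```

In real distributed training the per-node result would be a gradient tensor and the aggregation an all-reduce, but the shape of the cycle is the same.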
This is not a blockchain. It is not a token economy. It is a practical compute network where real GPU cycles produce real training results, and real credits flow from the people who need compute to the people who provide it.
Your Stera Machine
─────────────────────────────────────────────────
Your work (priority)
┌─────────────────────────────────────────────┐
│ Your tasks always come first.               │
│ Grid work pauses instantly when you need    │
│ your GPU. Zero impact on your experience.   │
└─────────────────────────────────────────────┘
                      │
                      │ Idle GPU
                      ▼
Grid contribution
┌─────────────────────────────────────────────┐
│ Encrypted training fragments                │
│ No access to your data                      │
│ You control the schedule                    │
│ Disconnect any time                         │
└─────────────────────────────────────────────┘
                      │
                      ▼
Credits earned
┌─────────────────────────────────────────────┐
│ Per-hour rate based on your GPU             │
│ Redeemable for Marketplace products         │
│ Convertible to cash                         │
│ Offset: machine pays for itself over time   │
└─────────────────────────────────────────────┘
Your work always takes priority. The moment you need your GPU, Grid processing pauses instantly and your machine returns to full personal performance. Grid work only runs when you are not using the machine — or during hours you designate.
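That priority rule reduces to a single decision per scheduling slice. An illustrative sketch only, not the actual Stera scheduler:

```python
# Illustrative priority rule, not the actual Stera scheduler:
# the owner's workload always wins the GPU, Grid work runs only
# in slices the owner's schedule allows, and otherwise the GPU idles.

def gpu_assignment(owner_active, schedule_allows):
    """Decide who gets the GPU for the next scheduling slice."""
    if owner_active:
        return "owner"   # owner work preempts everything, instantly
    if schedule_allows:
        return "grid"    # contribute the idle cycles
    return "idle"        # outside the owner's designated hours

# The owner starting a local job reclaims the GPU at once:
gpu_assignment(owner_active=True, schedule_allows=True)   # "owner"
```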
Privacy is absolute. Grid training fragments are encrypted. The training workload cannot access your local data, your models, your files, or your Stera intelligence. It runs in a sandboxed partition with no visibility into your system. Your machine is contributing compute, not access.
Credits accumulate automatically. Use them to purchase Talent Pods and models from the Marketplace, convert them to cash, or let them compound. A Stera machine running Grid contribution during idle hours can offset its own cost within months.
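Whether a machine offsets its cost "within months" depends on three numbers: machine price, credit rate, and idle hours per day. A back-of-envelope calculator, with every figure an illustrative placeholder rather than a published Stera price or rate:

```python
# Back-of-envelope payback estimate. Every number below is an
# illustrative placeholder, not a published Stera price or rate.

def months_to_offset(machine_price, rate_per_hour, idle_hours_per_day):
    """Months of Grid contribution needed to earn back the machine price."""
    monthly_earnings = rate_per_hour * idle_hours_per_day * 30
    return machine_price / monthly_earnings

# e.g. a 3,000 EUR machine earning 0.60 EUR/hr over 14 idle hours a day
print(round(months_to_offset(3000, 0.60, 14), 1))  # about 11.9 months
```

Halving the idle hours doubles the payback period, which is why the schedule controls matter economically.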
Path 1: Direct Purchase
─────────────────────────────────────────────────
Trainer pays credits ──▶ GPU Dedicators earn credits
┌─────────────────────────────────────────────┐
│ Pay per GPU-hour                            │
│ Fraction of data center pricing             │
│ Distributed: slower, more fragmented        │
│ Best for: non-urgent training, fine-tuning, │
│ research experiments, small teams           │
└─────────────────────────────────────────────┘
Path 2: Revenue Share
─────────────────────────────────────────────────
Trainer publishes project ──▶ GPU Dedicators
Shares future revenue         invest idle
in exchange for compute       compute now
┌─────────────────────────────────────────────┐
│ No upfront cost                             │
│ Publish your training project openly        │
│ Offer future revenue shares to dedicators   │
│ Dedicators choose which projects to back    │
│ When model ships: revenue flows to backers  │
│ Best for: startups, open research,          │
│ independent developers, ambitious projects  │
└─────────────────────────────────────────────┘
Path 1: Direct Purchase. You buy GPU credits and spend them on training. The cost per GPU-hour is significantly lower than centralized cloud providers — because the hardware already exists, it is already paid for, and the owners are monetizing idle time. Training is distributed and may be slower than a dedicated cluster, but for fine-tuning, experimentation, and research workloads, the economics are transformative.
Path 2: Revenue Share. You have a model worth training but no budget for compute. You publish your training project on the Grid — what the model does, what data it trains on, what market it targets. GPU dedicators review published projects and choose to back the ones they believe in by contributing compute. In return, they receive a share of future revenue when the trained model ships on the Marketplace. This is venture funding expressed as GPU cycles — the network itself becomes your investor.
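The settlement implied by this model is pro-rata: a backer's share of revenue matches their share of contributed GPU-hours. A hypothetical sketch of that split:

```python
# Hypothetical pro-rata settlement: each backer's payout is
# proportional to the GPU-hours they contributed to the project.

def settle(revenue_pool, contributions):
    """Split a revenue pool by contributed GPU-hours.

    contributions maps dedicator -> GPU-hours contributed; the
    return value maps dedicator -> payout from the pool.
    """
    total_hours = sum(contributions.values())
    return {who: revenue_pool * hours / total_hours
            for who, hours in contributions.items()}

# A backer who supplied 60% of the hours receives 60% of the pool:
settle(1000.0, {"alice": 300, "bob": 100, "carol": 100})
# {'alice': 600.0, 'bob': 200.0, 'carol': 200.0}
```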
Cloud Training (today)
─────────────────────────────────────────────────
NVIDIA H100 (cloud)       ~€2.50 - 4.00/hr
NVIDIA A100 (cloud)       ~€1.50 - 3.00/hr
Minimum commitment        Hours to months
Availability              Waitlists, quotas
Data                      Leaves your control
Stera Grid (distributed)
─────────────────────────────────────────────────
Equivalent compute        ~€0.30 - 0.80/hr
Minimum commitment        None
Availability              Scales with network
Data                      Encrypted fragments
Revenue share path        €0 upfront
Why it's cheaper
─────────────────────────────────────────────────
Hardware is already owned (sunk cost)
No data center overhead
No corporate margin
Owners earn from idle time they'd waste anyway
Cloud GPU pricing reflects data center construction, cooling, staffing, corporate overhead, and profit margins on top of hardware cost. Stera Grid has none of these. The hardware is already in people's homes and offices, already paid for, already powered. The marginal cost of contributing idle GPU cycles is close to electricity alone.
The result: training compute at 70-80% less than cloud pricing. Not because the compute is inferior — the GPUs are the same silicon — but because the economic structure is fundamentally different. There is no landlord.
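The 70-80% figure can be sanity-checked against the rate tables above by comparing the midpoints of the quoted ranges:

```python
# Sanity check of the headline savings figure against the rate
# tables above, using the midpoint of each quoted range.

def midpoint(low, high):
    return (low + high) / 2

def savings(cloud_rate, grid_rate):
    """Fractional saving of the Grid rate versus a cloud rate."""
    return 1 - grid_rate / cloud_rate

grid = midpoint(0.30, 0.80)   # 0.55 EUR/hr
a100 = midpoint(1.50, 3.00)   # 2.25 EUR/hr
h100 = midpoint(2.50, 4.00)   # 3.25 EUR/hr

print(round(savings(a100, grid) * 100))  # 76 (% saved vs A100)
print(round(savings(h100, grid) * 100))  # 83 (% saved vs H100)
```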
Stera Grid: The Larger Vision
─────────────────────────────────────────────────
Phase 1 (current focus)
┌─────────────────────────────────────────────┐
│ Distributed AI training                     │
│ GPU credit economy                          │
│ Revenue share model                         │
└─────────────────────────────────────────────┘
Phase 2
┌─────────────────────────────────────────────┐
│ Distributed inference at scale              │
│ Run large models across multiple machines   │
│ Collective intelligence services            │
└─────────────────────────────────────────────┘
Phase 3
┌─────────────────────────────────────────────┐
│ Knowledge propagation network               │
│ Operational intelligence sharing            │
│ Collective capability growth                │
└─────────────────────────────────────────────┘
Phase 4
┌─────────────────────────────────────────────┐
│ Autonomous service economy                  │
│ Machines offering services to machines      │
│ A self-sustaining AI ecosystem              │
└─────────────────────────────────────────────┘
GPU sharing for training is the first application — the one that creates immediate value for both sides. But a network of thousands of sovereign AI machines with serious compute is capable of far more.
Distributed inference allows models too large for a single machine to run across multiple nodes — a 400B parameter model running on ten Stera machines coordinated by the Grid. Knowledge propagation lets every machine learn from the collective experience of the network. And as the network matures, machines themselves become economic actors — offering specialized services to other machines in the Grid.
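The 400B-parameter example survives a quick memory check: FP16 weights take roughly two bytes per parameter, so an even shard across ten machines must fit in each machine's GPU memory. A rough weights-only calculation (activations and KV cache would add real overhead on top):

```python
# Weights-only memory check for the 400B-parameter example.
# FP16 stores two bytes per parameter; activations and KV cache
# would add real overhead on top of this figure.

def gb_per_node(params_billions, bytes_per_param, num_nodes):
    """GPU memory (GB) each node needs for an even shard of the weights."""
    total_gb = params_billions * bytes_per_param  # 1e9 params * bytes = GB
    return total_gb / num_nodes

print(gb_per_node(400, 2, 10))  # 80.0 GB per machine
```

80 GB per machine fits within the upper Stera configurations, which is the point of the example.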
This is the long-term vision: not a network of computers, but a network of intelligences. Each sovereign, each owned by its operator, each contributing to and benefiting from the collective. The more machines that join, the more capable every machine becomes. The more capable every machine becomes, the more value it generates. The network grows because participating in it is better than not participating.
We are transparent about this: the Grid requires scale to work. A distributed training network with fifty machines is a proof of concept. A network with five thousand machines is a competitive alternative to cloud compute. A network with fifty thousand machines is a paradigm shift.
This is why we are building the Grid into Stera from the beginning — not as an afterthought, but as a core incentive for ownership. Buying a Stera machine is not just buying personal AI. It is buying a stake in a growing compute network. Your machine is an asset that earns while it sits idle. The larger the network grows, the more your machine earns, and the more capable it becomes.
We will launch Grid functionality when the network reaches sufficient density to deliver reliable training throughput. Until then, every Stera machine sold brings the network closer to the threshold where the Grid activates — and every owner's machine starts earning.
Whether you want to contribute GPU compute and earn from your Stera machine, or you need distributed training infrastructure for your AI models — the Grid is being built for you.