DOC.SG-001 / 2022
Project · STARGRID
Domain · NewSpace · Orbital Compute
Stage · Early — Defunct
Period · Sept 2021 — Mar 2022

starGRID

A computational cluster in low Earth orbit. Rethinking the space architecture to reduce its dependence on the ground as far as possible.

Overview

§ 01

STARGRID proposes an orbital automation layer — a powerful, space-located computational cluster that gives nanosatellite operators the backhaul, compute and virtualization they currently rent from a sprawling ground segment.

The premise is simple: as constellations grow into the hundreds and thousands, the ground-initiated, command-and-control style of operations cracks under its own weight. Bandwidth becomes the bottleneck. Provisioning a 1000-satellite constellation conservatively requires more than a hundred ideally positioned ground stations — and most of what they downlink is data with no features of interest.

STARGRID inverts the topology. A meshed grid of compute-heavy nanosats sits between customer satellites and the ground, exposing a virtual machine, a connectivity hook, and a modular bus as the three primitives operators build on. Customers focus on their mission payload. STARGRID handles the rest of the stack.

Concept
Compute-as-a-service in low Earth orbit
Form factor
Cluster of CubeSat-class nanosats with optical inter-links
Primitives
starBUS · starHOOK · starVM
Customer surface
Encrypted data link · Virtual machine · Service API
Target verticals
Earth observation · Sat-IoT · Deep space · In-space research
Fig. 00 · Orbital cluster · two-layer schematic · DOC.SG-001 / FIG-00. (Diagram: a 5-node STARGRID mesh with optical inter-links in the orbital layer above Earth; customer LEO satellites attach via token-authenticated starHOOK; each node runs a starBUS carrying CPU, GPU, AI and mass storage.)

The bottleneck of today’s space architecture

§ 02

The CubeSat form factor democratised access to orbit, but the operations layer didn’t keep up. We inherited a model where everything important happens on the ground.

Four converging constraints

01 · Obsolete architecture
Ground-initiated command-and-control was designed for a handful of large birds — not for finer-grained, hundred-node constellations.
02 · Limited scalability
Reconfiguring a constellation of hundreds of nanosats from the ground takes weeks to months. Uplink data volume is on the order of kilobytes per pass.
03 · Physical constraints
CubeSats can’t increase receiver gain: antenna aperture is capped by the form factor. And bottom-up decomposition fails on systems this complex.
04 · CAPEX intensity
Supporting a 1000-sat constellation conservatively requires 112 ideally positioned ground stations.
“The old ground-initiated command-and-control style systems aren’t going to work for these finer-grained systems.” · STARGRID Pitch, §04
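The station-count claim above can be reproduced with a toy capacity model. The pass duration, contact cadence and visibility window below are illustrative assumptions chosen for the sketch, not STARGRID’s own inputs; under these assumptions the count lands at 112.

```python
import math

# Toy estimate of ground stations needed for a large constellation.
# All parameters are illustrative assumptions, not STARGRID figures.

def stations_needed(n_sats: int,
                    contacts_per_sat_per_day: float,
                    pass_minutes: float,
                    station_visible_minutes_per_day: float,
                    sats_per_pass: int = 1) -> int:
    """Stations required so every satellite gets its daily contacts."""
    # Total contact-minutes the constellation demands each day.
    demand = n_sats * contacts_per_sat_per_day * pass_minutes
    # Contact-minutes a single station can serve per day.
    supply = station_visible_minutes_per_day * sats_per_pass
    return math.ceil(demand / supply)

# 1,000 sats, 4 contacts/day, 10-minute passes, ~6 h of usable
# visibility per station per day, one satellite tracked at a time.
print(stations_needed(1000, 4, 10, 6 * 60))  # → 112
```

The point of the sketch is the shape, not the exact constant: demand scales linearly with constellation size while each station’s supply is fixed, so station counts climb into the hundreds.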

Use case: a 22-antenna data-intense constellation

The math gets ugly fast. Ten-minute downlink windows. Nine sats per antenna per pass. The result: high latency from capture to receipt, scalability ceilings, an expensive shared ground segment, and a forced CAPEX-over-OPEX design choice.

Antennas required
22
For a single data-intense constellation operator.
Daily downlink volume
11 TB/day
Average across the constellation, raw.
Trash data downlinked
95%
Frames containing no features of interest, on average.
Packet loss
88%
Average across the shared ground segment.

Source · Lucia & Denby, “Orbital Edge Computing: Nanosatellite Constellations as a New Class of Computer System.”
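Taken at face value, the figures above compound into very little usable data. A naive sketch (treating packet loss and the trash fraction as independent, which is an assumption of the sketch rather than a claim from the source) makes the point:

```python
# Back-of-envelope on the use case's own figures: how much useful
# data actually survives the shared ground segment per day.

raw_downlink_tb = 11.0   # TB/day across the constellation, raw
trash_fraction = 0.95    # frames with no features of interest
packet_loss = 0.88       # average across the shared ground segment

received_tb = raw_downlink_tb * (1 - packet_loss)
useful_tb = received_tb * (1 - trash_fraction)
print(f"received: {received_tb:.2f} TB/day · useful: {useful_tb:.3f} TB/day")
```

Under these assumptions, roughly 66 GB of the 11 TB downlinked each day is both delivered and useful.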

An orbital automation layer

§ 03

STARGRID is a meshed compute cluster in LEO. It exposes computation, storage, AI co-processing and inter-satellite connectivity as services. Customer satellites connect through a single hook; their operations team interacts with a virtual machine instead of a ground station network.

Fig. 01 · Mesh topology · customer ↔ STARGRID ↔ ground · DOC.SG-001 / FIG-01. (Diagram: customer LEO satellites attach via starHOOK links; the STARGRID orbital layer provides compute, storage, AI and backhaul over optical inter-links; ground stations GS-1 and GS-2 sit below.)

The shift is architectural, not just operational. Customers stop bending their satellite design around bandwidth scarcity. They get a programmable surface: a virtualization environment in orbit, a “Firmware Over The Space” workflow, real-time delivery between adjacent and opposing satellites, and one-click reconfiguration of up to 1,000 nodes. The grid is agnostic to the customer’s architectural decisions: anything from Earth observation to deep space can plug in.

Proprietary components

§ 04

Three primitives. A modular bus to host the stack, a hook to attach a third-party satellite, and a virtual machine that exposes the grid as a programmable surface.

starBUS
Modular nanosat bus

The chassis. Hosts every STARGRID payload across the cluster.

  • On-board AI & high-performance compute
  • Optical laser inter-links between nodes
  • Sensor modules shared across the fleet — production cost down per data type
  • Optimised data production across the constellation
FIG · 02-A
starHOOK
Third-party connectivity

The interlink. The point at which a customer satellite joins the grid.

  • Highly encrypted data link with the GRID
  • Each hook labelled with a unique token, accessed by the starVM
  • Failsafe transmission, customer-side API and interface
FIG · 02-B
starVM
Virtual machine in orbit

The customer surface. A virtualization environment that runs in space.

  • Compute, storage, interfaces — exposed as a service
  • Store-forward terminal for individual SpaceVMs
  • Run programs, use AI, operate telemetry & firmware
  • Customers can sell data to other customers within the grid
FIG · 02-C
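To make the three primitives concrete, here is an illustrative sketch of how they might compose from the customer side. Every name and signature below (StarHook, StarVM, attach, run) is hypothetical: the document defines the primitives, not an API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class StarHook:
    """Token-authenticated attachment point for a customer satellite."""
    satellite_id: str
    token: str  # unique per hook, checked by the starVM

@dataclass
class StarVM:
    """In-orbit virtualization surface, keyed by hook tokens."""
    _registered: set = field(default_factory=set)

    def attach(self, hook: StarHook) -> None:
        """Register a hook's token so its satellite may use the grid."""
        self._registered.add(hook.token)

    def run(self, hook: StarHook, task: str) -> str:
        """Schedule a compute task on behalf of an attached satellite."""
        if hook.token not in self._registered:
            raise PermissionError("unknown hook token")
        return f"scheduled '{task}' for {hook.satellite_id}"

vm = StarVM()
hook = StarHook(satellite_id="EO-042", token="t-9f3c")
vm.attach(hook)
print(vm.run(hook, "cloud-mask inference"))
# → scheduled 'cloud-mask inference' for EO-042
```

The design intent carried over from the primitives: the bus is invisible to the customer, the hook is the only attachment surface, and the VM refuses any request that doesn’t carry a registered token.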

Paradigm shift · ground → orbit

§ 05

Today’s allocation of effort between ground and space is upside-down. STARGRID is built for the ratio that comes next.

Today · 4,000 sats in orbit
5% · in-space tasks
95% · ground segment tasks & operations
Tomorrow · ~×100 sats in orbit
80% · complex, automated space tasks & data processing
20% · ground segment


What customers actually get

§ 06

Hypotheses, with the math behind them. Each figure is a working assumption — to be tested against real customer data during the experiments laid out in WP1.

Latency reduction
×600
Capture-to-result, by processing in-situ instead of round-tripping through ground.
Downlink cost reduction
×20–50
Through in-orbit pre-processing and lossless compression.
Constellation scalability
×10
Per existing operations footprint.
Sats per ground station / revolution
×20
More satellites managed by a single ground station per pass.
Sat-Ops team size
÷4
Team reduction through Firmware-Over-The-Space & in-orbit virtualization.
Reconfiguration scope
1k nodes
“One-click” up to 1,000-node constellation reconfiguration.
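The ×20–50 downlink-cost hypothesis decomposes cleanly given the 95% trash-data figure from §02. The compression ratios below are assumptions for the sketch, not validated values:

```python
# How the ×20-50 downlink-cost hypothesis might decompose. The 95%
# trash figure is from the use case; the compression ratios are
# illustrative assumptions.

def downlink_reduction(trash_fraction: float, compression_ratio: float) -> float:
    """Factor by which downlinked bytes shrink when trash frames are
    filtered in orbit and the remainder is losslessly compressed."""
    kept = 1 - trash_fraction
    return compression_ratio / kept

print(downlink_reduction(0.95, 1.0))  # filtering alone: ~×20
print(downlink_reduction(0.95, 2.5))  # plus 2.5:1 lossless: ~×50
```

Filtering alone accounts for the bottom of the range; the top of the range requires the assumed lossless compression on what remains.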

Business model · who pays for what

§ 07

The grid is a horizontal layer; the verticals are anyone running fleets. Six customer groups identified during the value-proposition design loop.

EO · Earth obs.
In-situ processing · reduced downlink cost · low latency
Sat-IoT
Fast constellation reconfiguration · automated mission control
Deep space
Reduced complexity with Earth comms · exploration data pre-processing
Asteroid mining
Synchronisation of orbital and extra-orbital operations
In-space research
Aggregated calculations across a combined data pool
Space manufacturing
Real-time telemetry coordination across operators

Revenue streams (hypotheses)

01
Rent of computational power
02
Network utilisation — flat fee or per minute
03
Dedicated mission-control services
04
Consultancy & integration

Work packages framework

§ 08

We organised the validation effort into three work-package streams and one management framework. Each stream runs as a build–measure–learn loop with explicit exit criteria. The framework is shareable; the contents are not.

WP 01 · Value Proposition
  • Market analysis: segmentation, customer identification, competitor mapping.
  • Problem & customers: jobs to be done, pains and gains, problem-hypothesis testing.
  • Solution & product: pain relievers, must-have features, monetisation hypothesis.
WP 02 · System Architecture
  • Functional design: basic concept, design parameters, ground-station strategy.
  • Implementation: network architecture, satellite architecture, framework & VM, starHOOK.
  • Operational: programming & control, service management, dashboards.
WP 03 · Concept Refinement
  • Business model: revenue forecast, partner stack, financial requirements.
  • Risk assessment: investment, technical, market, mitigation strategy.
  • TRL (prod): component readiness with weighted criticality.

Methodology · build–measure–learn loop · weekly Monday focus group, Wednesday review · exit criteria gating each loop.

Roadmap · value creation steps

§ 09
Q4 2021
Core team building · Skillsets · long term
Identification, assessment, on-boarding. Term-sheet design, incorporation. Validate the idea further with the right people in the room.
Q1 2022
3D MVP demo · Privately backed
Idea feasibility & viability + visual CGI demonstration. Conceptualisation of the new architecture, technical & scientific validation, use cases, storyboard, agency discovery.
€100k
Q2 2022
Ground demo · Funding goal
PoC of optical inter-link at km distance, in motion, while computing tasks in real time through VM access. starHOOK on the ground field; bus paper design (COTS).
€6M
Q1 2023
Tech demo in space · STARGRID-1
PoC of orbital automation layer: validation of the architecture plus starHOOK in space — connectivity and data transmission with third-party objects.
€20M

Reflections

§ 10

STARGRID was a study in what it takes to make a deep-tech venture legible — to engineers, to investors, to a customer who has never bought from us before.

I came in as the venture architect on a team of two seasoned hardware engineers. My job wasn’t to design the satellite. It was to make the satellite worth designing — to ground a complex technical premise in a sequence of customer problems we could falsify, one at a time.

The hardest part was resisting the gravity of pure architecture. It’s tempting, with this kind of system, to keep zooming in on the mesh protocol, the optical link budgets, the failsafe primitives. The work that actually moved the project forward was the opposite: zooming out, naming the four customer groups, writing the value proposition canvases, finding the must-have problem.

We treated the work-package document as a working agreement between three founders and a university institute — not as a deliverable. Every package had a leader, an exit criterion, and an interview script. The build–measure–learn loop sat on top.

The pitch and the visual system came last, intentionally. By the time we drew the architecture diagram, every annotation traced back to a problem hypothesis and a customer group. That’s the discipline I’m proudest of carrying out of this project.

END OF DOCUMENT · DOC.SG-001 · 2022
Author
Jacopo Pelanda Mazza
Set in
Inter · JetBrains Mono
Sources
STARGRID Pitch v0.3 · WP Breakdown