NVIDIA proved that a single neural network can replace three DSP blocks in the 5G receiver chain. We're building the inference hardware to deploy it inside the radio unit, cutting fronthaul bandwidth 10–20× at sub-1W power.*
* Power projection based on INT8 MAC estimates for 730K params at target clock rates. Subject to FPGA validation.
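The projection in this footnote can be sanity-checked with back-of-envelope arithmetic. Every constant below (weight-reuse factor, energy per MAC, slot rate) is an illustrative assumption, not a measurement:

```python
# Back-of-envelope check on the sub-1W claim.
# All constants are illustrative assumptions, not measured values.
PARAMS = 730_000                 # neural receiver parameter count
REUSE = 70                       # hypothetical avg. weight reuse (conv weight sharing)
MACS_PER_PASS = PARAMS * REUSE   # ~51M INT8 MACs per forward pass
PASSES_PER_SEC = 2_000           # one pass per 0.5 ms slot (30 kHz SCS)
PJ_PER_INT8_MAC = 1.0            # hypothetical energy per INT8 MAC, mature node

power_w = MACS_PER_PASS * PASSES_PER_SEC * PJ_PER_INT8_MAC * 1e-12
print(f"{power_w:.3f} W")        # prints 0.102 W under these assumptions
```

Under these assumed numbers the compute power lands around 0.1 W, leaving headroom for memory access and I/O within the 1 W envelope; the FPGA prototype is what replaces these assumptions with measurements.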
Today's O-RAN 7.2x split treats the radio unit as a dumb RF frontend. All L1 intelligence sits in the DU, connected by expensive dedicated fiber.
Fronthaul: raw I/Q samples per antenna × subcarrier over dedicated eCPRI fiber, scaling linearly with antenna count. 25–50 Gbps.
Compute: LMMSE matrix inversions and K-best tree search run iteratively on every subframe, burning massive compute. 500–700 mW.
Complexity: three separate blocks (channel estimation, MIMO equalization, demapping), each requiring per-scenario reconfiguration. 3 blocks × N configs.
Our solution: a CNN + GNN architecture (730K params, INT8) jointly performs channel estimation, equalization, and demapping in a single forward pass, inside the radio unit.
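The fronthaul figure follows from simple arithmetic on raw I/Q streams. The sample rate, bit width, and antenna count below are illustrative assumptions for one plausible configuration:

```python
# Raw I/Q fronthaul load for an uncompressed 7.2x split.
# Configuration is an illustrative assumption.
SAMPLE_RATE = 122.88e6       # samples/s for a 100 MHz NR carrier
BITS_PER_SAMPLE = 2 * 16     # I + Q at 16-bit resolution, uncompressed
ANTENNAS = 8

gbps = SAMPLE_RATE * BITS_PER_SAMPLE * ANTENNAS / 1e9
print(f"{gbps:.1f} Gbps")    # prints 31.5 Gbps, inside the 25-50 Gbps range
```

At 16 antennas the same arithmetic doubles the load, which is why the figure scales linearly with antenna count and why pushing inference into the radio unit, so that only bits rather than raw samples cross the fronthaul, cuts the link budget so sharply.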
The neural receiver architecture was pioneered by NVIDIA Research (Cammerer et al., 2023) and validated with Sionna, NVIDIA's open-source link-level simulator. A CNN + GNN with ~730K parameters can jointly replace channel estimation, equalization, and demapping with near-LMMSE BER performance at significantly lower computational complexity.
Our contribution is the hardware execution layer: taking a validated neural receiver architecture and compiling it into fixed-function inference silicon that fits inside a radio unit at sub-1W power. NVIDIA validated the architecture on GPUs; we're building the silicon to deploy it at the edge.
Based on: "A Neural Receiver for 5G NR Multi-user MIMO" — NVIDIA Research, Dec 2023. Open-source implementation: NVlabs/neural_rx
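To make the "single forward pass" idea concrete, here is a toy sketch of a CNN front-end feeding a GNN message-passing stage that emits per-bit LLRs. All shapes, layer sizes, and the message-passing rule are invented for illustration; this is not the NVlabs/neural_rx architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's configuration).
S, T, A = 64, 14, 4      # subcarriers, OFDM symbols, rx antennas
U, BITS = 2, 4           # users, bits per QAM symbol
F = 16                   # features per resource element

def conv2d_relu(x, w):
    """'Same'-padded 3x3 convolution over (S, T, C_in) -> (S, T, C_out), then ReLU."""
    s, t, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((s, t, w.shape[0]))
    for i in range(s):
        for j in range(t):
            out[i, j] = np.tensordot(w, xp[i:i+3, j:j+3, :],
                                     axes=([1, 2, 3], [0, 1, 2]))
    return np.maximum(out, 0.0)

# Received resource grid: complex I/Q split into 2*A real channels.
y = rng.standard_normal((S, T, 2 * A))

# CNN front-end: two conv layers extract per-resource-element features.
w1 = rng.standard_normal((F, 3, 3, 2 * A)) * 0.1
w2 = rng.standard_normal((F, 3, 3, F)) * 0.1
feat = conv2d_relu(conv2d_relu(y, w1), w2)          # (S, T, F)

# GNN stage: one message-passing round between per-user nodes,
# so each user's detection sees the interfering users' state.
h = np.stack([feat @ rng.standard_normal((F, F)) * 0.1 for _ in range(U)])
msgs = h.sum(axis=0, keepdims=True) - h             # messages from other users
h = np.maximum(h + 0.1 * msgs, 0.0)                 # (U, S, T, F)

# Readout: LLRs for every user, resource element, and bit -- the output
# that feeds the channel decoder, with no separate equalizer or demapper.
w_out = rng.standard_normal((F, BITS)) * 0.1
llrs = h @ w_out
print(llrs.shape)                                   # prints (2, 64, 14, 4)
```

The point of the sketch is structural: one pass from raw resource grid to LLRs, with channel estimation, equalization, and demapping absorbed into the learned weights, which is exactly the shape of computation that compiles well into fixed logic.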
Validate on FPGA first, then deploy on purpose-built inference silicon that compiles the full forward pass into fixed logic.
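The FPGA validation step largely boils down to checking that an integer-only forward pass tracks the float reference. A minimal sketch of one INT8-quantized layer, with weights and scales invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# One dense layer with symmetric per-tensor INT8 quantization.
# Weights and activations here are random stand-ins for illustration.
W = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal((1, 64)).astype(np.float32)

def quantize(t):
    """Map a float tensor to int8 with a single symmetric scale."""
    scale = np.abs(t).max() / 127.0
    q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
    return q, scale

Wq, w_scale = quantize(W)
xq, x_scale = quantize(x)

# INT8 multiplies accumulate in INT32, as fixed-function MACs would.
acc = xq.astype(np.int32) @ Wq.astype(np.int32)
y_int8 = acc.astype(np.float32) * (w_scale * x_scale)

# Compare against the float reference; the FPGA run does the same
# comparison layer by layer before anything is committed to silicon.
y_ref = x @ W
err = np.abs(y_int8 - y_ref).max()
print(f"max abs error: {err:.4f}")
```

In practice per-channel scales and quantization-aware training tighten this error further; the sketch only shows the shape of the check the prototype automates.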
The biggest names in wireless just committed billions to AI in the RAN. But nobody is purpose-building the inference layer for the radio unit itself.
NVIDIA committed a $1B equity investment in Nokia to develop commercial AI-RAN products on the Aerial RAN Computer Pro platform.
Validated fully software-defined, GPU-accelerated AI-RAN delivering 16-layer massive MU-MIMO outdoors.
5G infrastructure market projected to reach $675.9B by 2034. Private 5G networks growing at 65.4% CAGR through 2030.
Big players validate at DU/cloud level. Nobody is building purpose-built neural inference for the radio unit itself. That's us.
NVIDIA invented the neural receiver and open-sourced it. Their strategy is GPU-based DU acceleration (Aerial RAN Computer Pro) — selling high-margin GPU platforms to operators and DU vendors. Building sub-1W embedded inference silicon for the radio unit is the opposite of their business model. NVIDIA is moving up-stack toward cloud-RAN. We're moving down-stack into the radio unit itself.
Both full-time and complementary: Usama ships products and exits; Yasir knows exactly what to build.
The Builder & Operator
The Domain Scientist
We're raising a pre-seed to build the FPGA prototype. Investors, partners, and engineers — we'd love to talk.