A single 730K-parameter neural receiver replaces three legacy DSP blocks — channel estimation, equalization, and demapping — directly inside the radio unit.
Today's O-RAN 7.2x split treats the radio unit as a dumb RF frontend. All L1 intelligence sits in the distributed unit (DU), connected by expensive dedicated fiber.
The 7.2x split carries three costs:

- Fronthaul: 25-50 Gbps. Raw I/Q samples per antenna × subcarrier over dedicated eCPRI fiber. Scales linearly with antennas.
- Compute: 500-700 mW. LMMSE matrix inversions and K-best tree search run iteratively on every subframe, burning massive compute.
- Complexity: 3 blocks × N configs. Three separate blocks — channel estimation, MIMO equalization, demapping — each requiring per-scenario reconfiguration.

The replacement: a CNN + GNN architecture (730K params, INT8) jointly performs channel estimation, equalization, and demapping in a single forward pass — inside the radio unit.
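For contrast, the legacy per-subcarrier pipeline being replaced can be sketched in a few lines. A minimal numpy sketch, assuming a 4×2 MIMO subcarrier and Gray-mapped QPSK; the sizes, noise model, and function names are illustrative, not the actual DU implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def lmmse_equalize(H, y, noise_var):
    """Classic LMMSE: x_hat = (H^H H + sigma^2 I)^-1 H^H y.
    One linear solve per subcarrier, repeated on every subframe."""
    n_tx = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(n_tx)
    return np.linalg.solve(A, H.conj().T @ y)

def qpsk_llrs(x_hat, noise_var):
    """Max-log soft demapping for Gray-mapped QPSK: one LLR for the
    real-axis bit and one for the imaginary-axis bit of each stream."""
    scale = 2 * np.sqrt(2) / noise_var
    return np.stack([scale * x_hat.real, scale * x_hat.imag], axis=-1)

# Illustrative 4x2 MIMO subcarrier (assumed sizes, not from the deck)
n_rx, n_tx, noise_var = 4, 2, 0.01
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
x = (rng.choice([-1, 1], n_tx) + 1j * rng.choice([-1, 1], n_tx)) / np.sqrt(2)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx)
                                  + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

x_hat = lmmse_equalize(H, y, noise_var)  # equalization
llrs = qpsk_llrs(x_hat, noise_var)       # demapping
```

The `np.linalg.solve` call is the per-subcarrier matrix inversion that burns compute on every subframe; the neural receiver folds estimation, equalization, and demapping into one quantized forward pass instead.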
Validate on FPGA first, then deploy on purpose-built inference silicon, with the full forward pass compiled into fixed logic.
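On the deployment side, INT8 means each weight tensor lives as 8-bit integers plus a scale factor. A minimal sketch of symmetric per-tensor quantization, one common scheme; the scheme and the toy weight values are assumptions, not the actual toolchain:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization (an assumed scheme):
    map float weights into [-127, 127] with a single scale factor."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(730_000).astype(np.float32) * 0.05  # ~730K toy params
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))  # bounded by scale / 2
```

At 730K parameters, INT8 weights fit in roughly 730 KB, about a quarter of the FP32 footprint, which is what makes baking the forward pass into fixed logic in the radio unit plausible.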
The biggest names in wireless just committed billions to AI in the RAN. But nobody is purpose-building the inference layer for the radio unit itself.
NVIDIA committed a $1B equity investment in Nokia for commercial AI-RAN products on the Aerial RAN Computer Pro platform.
Validated fully software-defined, GPU-accelerated AI-RAN delivering 16-layer massive MU-MIMO outdoors.
5G infrastructure market projected to reach $675.9B by 2034. Private 5G networks growing at 65.4% CAGR through 2030.
The big players validate at the DU/cloud level. Nobody is purpose-building neural inference for the radio unit itself. That's us.
Both full-time. Complementary: Usama ships products and exits. Yasir knows exactly what to build.
We're raising a pre-seed to build the FPGA prototype. Investors, partners, and engineers — we'd love to talk.