Pre-Seed · Building the future of RAN

Deploying Neural Receivers in Radio Unit Silicon

NVIDIA proved that a single neural network can replace three DSP blocks in the 5G receiver chain. We're building the inference hardware to deploy it inside the radio unit — cutting fronthaul by 10–20× at sub-1W power.

10–20× Fronthaul Reduction
~2 Gbps LLR-Only Output vs 25–50 Gbps IQ
~150 mW Power Target * INT8 inference
730K Parameters Single forward pass

* Power projection based on INT8 MAC estimates for 730K params at target clock rates. Subject to FPGA validation.
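
A back-of-envelope version of that projection, as a Python sketch. Every constant here is an illustrative assumption (MAC reuse factor, slot rate, energy per INT8 MAC), not a measured value:

# Back-of-envelope INT8 power estimate. All constants are assumptions.
MACS_PER_INFERENCE = 50e6      # assumed: the 730K conv weights are reused
                               # across the resource grid, so MACs >> params
INFERENCES_PER_SEC = 2_000     # assumed: one forward pass per 0.5 ms slot
ENERGY_PER_INT8_MAC = 0.5e-12  # joules per MAC; assumed process figure

macs_per_sec = MACS_PER_INFERENCE * INFERENCES_PER_SEC    # 100 GMAC/s
compute_power_w = macs_per_sec * ENERGY_PER_INT8_MAC      # MAC array only

print(f"{macs_per_sec / 1e9:.0f} GMAC/s -> {compute_power_w * 1e3:.0f} mW")
# 100 GMAC/s -> 50 mW for the MAC array alone; SRAM access and control
# logic typically add a multiple of that, hence the ~150 mW target.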

The RAN Baseband Stack Hasn't Changed Since 3G

Today's O-RAN 7.2x split treats the radio unit as a dumb frontend: RF, ADC, and an FFT, nothing more. All high-PHY intelligence sits in the DU, connected by expensive dedicated fiber.

📡

Massive Fronthaul

Frequency-domain I/Q samples for every antenna port × subcarrier, streamed over dedicated eCPRI fiber. Bandwidth scales linearly with antenna count.

25–50 Gbps
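
The 10–20× headline follows from simple arithmetic: I/Q bandwidth scales with antenna ports, while LLR bandwidth scales with spatial layers. A Python sketch under assumed parameters (100 MHz carrier, 30 kHz SCS, 16 antenna ports, one layer, 64-QAM, 4-bit LLRs):

# Fronthaul arithmetic: 7.2x frequency-domain I/Q vs LLR-only output.
# All parameter choices are illustrative assumptions.
SUBCARRIERS = 3276             # 273 PRBs x 12 (100 MHz @ 30 kHz SCS)
SYMBOLS_PER_SEC = 28_000       # 14 OFDM symbols per 0.5 ms slot
ANTENNA_PORTS = 16             # I/Q streams scale with antennas
LAYERS = 1                     # LLR streams scale with layers
IQ_BITS = 16                   # bits per I or Q sample, uncompressed
BITS_PER_QAM = 6               # 64-QAM
LLR_BITS = 4                   # assumed LLR quantization width

res_per_sec = SUBCARRIERS * SYMBOLS_PER_SEC                      # ~91.7M RE/s
iq_gbps = res_per_sec * ANTENNA_PORTS * 2 * IQ_BITS / 1e9        # ~47 Gbps
llr_gbps = res_per_sec * LAYERS * BITS_PER_QAM * LLR_BITS / 1e9  # ~2.2 Gbps
print(f"{iq_gbps:.0f} Gbps I/Q vs {llr_gbps:.1f} Gbps LLR "
      f"-> {iq_gbps / llr_gbps:.0f}x reduction")                 # ~21x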

⚡

Power-Hungry DSP

LMMSE matrix inversions and K-best tree search re-run on every slot, burning compute that grows with antenna and layer counts.

500–700 mW
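
For concreteness, a minimal numpy sketch of the step the DU repeats for every resource element under conventional LMMSE equalization. Dimensions and noise level are illustrative:

import numpy as np

# One LMMSE equalization: x_hat = (H^H H + sigma^2 I)^-1 H^H y.
# This small matrix inverse re-runs per resource element, every slot.
rng = np.random.default_rng(0)
n_rx, n_layers = 16, 2
H = (rng.standard_normal((n_rx, n_layers))
     + 1j * rng.standard_normal((n_rx, n_layers))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], n_layers) + 1j * rng.choice([-1.0, 1.0], n_layers)
sigma2 = 0.1
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx)
                               + 1j * rng.standard_normal(n_rx))
y = H @ x + noise

G = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_layers)) @ H.conj().T
x_hat = G @ y
print(np.round(x_hat, 2))  # close to the transmitted symbols x
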
🔧

Rigid Pipeline

Three separate blocks — channel estimation, MIMO equalization, demapping — each requiring per-scenario reconfiguration.

3 blocks × N configs

One Neural Receiver Replaces Three DSP Blocks

A CNN + GNN architecture (730K params, INT8) jointly performs channel estimation, equalization, and demapping in a single forward pass — inside the radio unit.

Traditional O-RAN 7.2x

Antenna → RF → ADC + FPGA (FFT)
↓ eCPRI 25–50 Gbps fiber
DU: Channel Estimation (LMMSE)
DU: MIMO Equalizer (K-Best)
DU: Demapper (QAM → LLR)
DU: LDPC Decoder
25–50 Gbps Fronthaul

neuraRAN Smart RU

Antenna → RF → ADC + FPGA (FFT)
↓ Resource grid
★ Neural Receiver (CNN+GNN, INT8)
  Joint ch.est + EQ + demapping
↓ LLRs only ~2 Gbps
Lightweight DU: LDPC decode only
~2 Gbps Fronthaul · 10–20× Reduction
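
For flavor, an illustrative PyTorch skeleton of that single forward pass. This is not NVIDIA's architecture (the real 730K-param CNN+GNN lives in NVlabs/neural_rx); it only shows the I/O contract: a post-FFT resource grid in, LLRs out.

import torch
import torch.nn as nn

class TinyNeuralRx(nn.Module):
    # Illustrative skeleton only: resource grid in, LLRs out, one pass.
    # The real CNN+GNN (NVlabs/neural_rx) is far more involved.
    def __init__(self, n_rx_ports=16, bits_per_sym=6):
        super().__init__()
        in_ch = 2 * n_rx_ports                 # I and Q per antenna port
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, bits_per_sym, 3, padding=1),  # one LLR per bit
        )

    def forward(self, grid):
        # grid: (batch, 2 * ports, subcarriers, ofdm_symbols)
        return self.body(grid)

rx = TinyNeuralRx()
grid = torch.randn(1, 32, 132, 14)       # one slot of an 11-PRB toy grid
llrs = rx(grid)                          # joint ch.est + EQ + demapping
print(llrs.shape)                        # torch.Size([1, 6, 132, 14])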

Research Foundation

What NVIDIA Proved

The neural receiver architecture was pioneered by NVIDIA Research (Cammerer et al., 2023) and validated using Sionna, NVIDIA's open-source link-level simulator. A CNN+GNN with ~700K parameters can jointly replace channel estimation, equalization, and demapping with near-LMMSE BER performance at significantly lower computational complexity.

What We're Building

The hardware execution layer — taking a validated neural receiver architecture and compiling it into fixed-function inference silicon that fits inside a radio unit at sub-1W power. NVIDIA validated on GPUs. We're building the silicon to deploy it at the edge.

Based on: "A Neural Receiver for 5G NR Multi-user MIMO" — NVIDIA Research, Dec 2023. Open-source implementation: NVlabs/neural_rx

FPGA Prototype → Hardcoded Inference Silicon

Validate on FPGA first, then deploy on purpose-built inference silicon that compiles the full forward pass into fixed logic.

Phase 1 · Pre-Seed

FPGA Validation

  • Reproduce NVIDIA baseline results in Sionna
  • End-to-end neural receiver on Xilinx/AMD FPGA
  • INT8 quantization impact study (sketched after this list)
  • Prove fronthaul reduction + BER parity vs LMMSE
  • Real channel data validation
  • Working demo in 6–9 months
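
A minimal sketch of what the quantization study measures: symmetric per-tensor INT8 quantization of the weights, assuming post-training quantization (the study may use quantization-aware training instead):

import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor INT8: w ~ scale * q with q in [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = 0.05 * rng.standard_normal(730_000).astype(np.float32)  # stand-in weights

q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale
snr_db = 10 * np.log10(np.mean(w**2) / np.mean((w - w_hat) ** 2))
print(f"weight quantization SNR: {snr_db:.1f} dB")
# The study tracks this end-to-end as BER degradation vs the FP32 model.
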
Phase 2 · Series A

Production Silicon

  • Hardcoded circuit-based inference silicon
  • Full forward pass compiled to fixed logic
  • Zero software overhead, sub-1W power
  • Deliverable as ASIC-ready IP core
  • Developed with experienced silicon partners

The Industry Just Validated AI-RAN

The biggest names in wireless just committed billions to AI in the RAN. But nobody is purpose-building the inference layer for the radio unit itself.

🤝

NVIDIA + Nokia: $1B Partnership

NVIDIA committed a $1B equity investment in Nokia to bring commercial AI-RAN products to market on the Aerial RAN Computer Pro platform.

📶

SoftBank: GPU-Accelerated vRAN

Validated a fully software-defined, GPU-accelerated AI-RAN delivering 16-layer massive MU-MIMO outdoors.

📈

$34B → $676B Market

5G infrastructure market projected to reach $675.9B by 2034. Private 5G networks growing at 65.4% CAGR through 2030.

🎯

Nobody Owns the RU Layer

The big players validate at the DU/cloud level. Nobody is building purpose-built neural inference for the radio unit itself. That's us.

💡

Why Not NVIDIA?

NVIDIA invented the neural receiver and open-sourced it. Their strategy is GPU-based DU acceleration (Aerial RAN Computer Pro) — selling high-margin GPU platforms to operators and DU vendors. Building sub-1W embedded inference silicon for the radio unit is the opposite of their business model. NVIDIA is moving up-stack toward cloud-RAN. We're moving down-stack into the radio unit itself.

Builder-Operator + Domain Scientist

Both full-time. Complementary: Usama ships products and takes companies to exit; Yasir knows exactly what to build.

CEO

Usama Zaidi

The Builder & Operator

  • 3× founder with exits
  • Epik → acquired by Granite Telecom (4th largest US telecom)
  • Senior Architect, Google Fiber — AI/ML for network anomaly detection
  • CTO SpeechTrans — world's first real-time phone translation
  • Deep systems: Linux internals, FPGA/SDR, VoIP/SS7
LinkedIn →
CTO

Yasir Ahmed

The Domain Scientist

  • 20+ years wireless PHY engineering
  • Virginia Tech MPRG under Ted Rappaport — Space-Time Block Codes
  • Qualcomm — physical-layer modem testing, Cloud ML
  • Founded RAYmaps — mmWave ray-tracing, RIS coverage engines
  • 12 IEEE publications, Springer book on wireless communications
LinkedIn →

Let's Build the Neural RAN Together

We're raising a pre-seed to build the FPGA prototype. Investors, partners, and engineers — we'd love to talk.