What Actually Shipped at MWC
The commercial products announced in Barcelona are real and meaningful. Quanta Cloud Technology launched AI-RAN servers built on Nvidia ARC platforms. Supermicro extended support across the full Nvidia AI-RAN portfolio. MSI unveiled a unified AI-vRAN platform with dynamic GPU allocation between 5G and AI workloads. Lanner Electronics launched its AstraEdge AI Server lineup purpose-built to co-locate AI inference, RAN functions, and packet processing at cell sites.
Every one of them is a server. They live in the DU layer, connected to the radio unit over a fronthaul link. They process baseband after the RU has already done its job. The radio unit remains a pipe: receive RF, convert to digital samples, hand upstream. What happens inside that conversion is not AI. It is the same fixed-function DSP chain it has always been.
    RU: FFT · CP removal          legacy DSP, unchanged
          ↕  eCPRI fronthaul
    DU: GPU / AI-RAN server       (Nvidia ARC, Supermicro, MSI, Lanner, Quanta)
        neural inference here
          ↕
    Core network

*AI at the DU. RU untouched.*

    RU: FFT · CP removal
        Neural Receiver (INT8 silicon)
        channel estimation + equalization + demapping in one pass
          ↕  ~2 Gbps, LLRs only, over standard Ethernet
    DU: LDPC decode only

*Intelligence at the antenna.*
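The ~2 Gbps LLR-only figure above can be sanity-checked with back-of-envelope arithmetic. The sketch below is illustrative only: the carrier configuration, stream count, modulation order, and LLR quantization are assumptions chosen for the example, not figures from any vendor announcement.

```python
# Back-of-envelope fronthaul bandwidth for a 100 MHz NR carrier
# (all parameters are illustrative assumptions).
SUBCARRIERS = 273 * 12          # 273 PRBs x 12 subcarriers (100 MHz, 30 kHz SCS)
SYMBOLS_PER_SEC = 14 * 2000     # 14 OFDM symbols per 0.5 ms slot
RES_PER_SEC = SUBCARRIERS * SYMBOLS_PER_SEC

# Split-7.2x style: frequency-domain IQ per receive stream, 16-bit I + 16-bit Q
IQ_BITS = 32
RX_STREAMS = 4                  # assumed streams shipped to the DU for combining
iq_gbps = RES_PER_SEC * IQ_BITS * RX_STREAMS / 1e9

# RU-side neural receiver: equalize + demap on-chip, ship soft bits only
LLRS_PER_RE = 6                 # 64-QAM -> 6 LLRs per resource element
LLR_BITS = 4                    # assumed LLR quantization
LAYERS = 1                      # single spatial layer after RU-side combining
llr_gbps = RES_PER_SEC * LLRS_PER_RE * LLR_BITS * LAYERS / 1e9

print(f"IQ fronthaul:  {iq_gbps:.1f} Gbps")   # -> 11.7 Gbps
print(f"LLR-only link: {llr_gbps:.1f} Gbps")  # -> 2.2 Gbps
```

Under these assumptions, demapping in the RU cuts the fronthaul rate by roughly 5x, which is what puts the link within reach of commodity Ethernet.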
The Layer MWC Skipped
The O-RAN Alliance standardized the fronthaul interface between the DU and the RU — the protocols, the planes, the timing. What it deliberately left open is the RU's internal baseband processing: the exact functions that determine receiver sensitivity, interference tolerance, and spectral efficiency. Every operator at MWC is running AI above that boundary. Nobody is running it below.
That boundary is where the physics actually lives. The channel estimate that feeds a neural receiver is computed from the same signal the RU is already handling. Propagating it upstream to a DU adds latency, burns fronthaul bandwidth, and makes tight feedback loops — the kind required for beam management in high-frequency bands — physically impossible at scale.
The industry's current approach optimizes a layer that is already well understood. The RU baseband is the layer that is not — not because the problem is too hard, but because no one has built the inference silicon to put there.
Ericsson made this architectural assumption explicit. Its MWC partnership with Intel — framed as a "software-upgradable path to 6G" — is built around Intel Xeon 6+ CPUs running AI in software at the DU layer. The RU is out of scope. Nokia's positioning is the same. So is T-Mobile's joint announcement with Ericsson and Nvidia on GPU-accelerated AI RAN. All DU-layer and above. All servers.
The OCUDU Signal
One MWC announcement deserves a closer read. On Monday evening, the OCUDU Ecosystem Foundation was formed — a public-private partnership seeded by AMD, AT&T, DeepSig, Ericsson, Nokia, Nvidia, SoftBank, and Verizon, with a mandate to build a common open-source 5G and 6G RAN software stack.
Read the founding members carefully. DeepSig is the only company in that list whose core product is neural processing for the physical layer. Its inclusion in an open-source RAN foundation is not incidental. It signals that the industry knows the PHY needs to change — and that a common software substrate is a prerequisite for changing it.
An open software stack for the RAN is the same forcing function that O-RAN was for the DU-RU split. It creates the interface. It does not create the implementation.
What the Coalition Numbers Tell You
Nvidia's 6G coalition at MWC spanned more than a dozen global operators and vendors. The AI-RAN Alliance ran 33 demos on the show floor — triple last year's count, with 26 of the 33 built on Nvidia AI Aerial. T-Mobile ran concurrent AI and 5G workloads live on a single radio network: video streaming, generative AI inference, and AI-powered captioning.
| Layer | MWC 2026 Activity | RU Baseband? |
|---|---|---|
| Core / Cloud | Large Telco Models, network automation, NOC AI | No |
| DU / vRAN | Nvidia ARC, Supermicro, MSI, Lanner AI-RAN servers | No |
| DU Software | Ericsson/Intel Xeon 6+, Nokia GPU options, OCUDU open stack | No |
| RU Baseband | — | Unaddressed |
Sources: Microwave Journal MWC 2026 Highlights, TechSpot, AI News, Techloy, Deutsche Telekom MWC announcements.
Why This Matters for 5.5G and 6G
The denser cell sites that 5.5G and 6G require make the RU baseband problem more acute, not less. More antennas, higher frequencies, tighter latency budgets — the physics of the next generation pushes the intelligence requirement down toward the radio unit, even as the industry's investment dollars flow in the other direction.
Every new AI-RAN server at the DU still demands high-bandwidth eCPRI fronthaul from each RU. At mmWave small cell densities — 40–60 sites per square mile — that fiber cost is the deployment blocker, not the compute. Moving inference to the RU is the only architecture that changes that math.
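The density claim can be put in rough numbers. In the sketch below, the site count is the midpoint of the 40–60 range above; the per-RU rates are assumptions for illustration, not measured figures.

```python
# Aggregate fronthaul demand per square mile at mmWave small-cell density
# (per-RU rates are illustrative assumptions).
SITES_PER_SQ_MILE = 50          # midpoint of the 40-60 sites quoted above
ECPRI_GBPS_PER_RU = 12          # assumed split-7.2x IQ rate per radio unit
LLR_GBPS_PER_RU = 2             # assumed LLR-only rate from an RU-side receiver

ecpri_total = SITES_PER_SQ_MILE * ECPRI_GBPS_PER_RU
llr_total = SITES_PER_SQ_MILE * LLR_GBPS_PER_RU

print(f"eCPRI:    {ecpri_total} Gbps/sq mile")  # -> 600 Gbps/sq mile
print(f"LLR-only: {llr_total} Gbps/sq mile")    # -> 100 Gbps/sq mile
```

At ~12 Gbps per site, each RU effectively needs dedicated dark fiber; at ~2 Gbps, a standard 10G Ethernet access link per site suffices, which is the math the paragraph above refers to.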
6G sensing, ISAC, and Physical AI applications require sub-millisecond feedback loops between antenna and inference. DU-layer AI, 10–20 ms of fronthaul away from the antenna, cannot close that loop. The compute has to be at the RF boundary.
GPU clusters at the DU burn kilowatts serving tens of radio units. Purpose-built INT8 inference silicon inside the RU targets sub-150 mW. At the cell site densities 5.5G and 6G require, the power delta is not incremental — it determines whether outdoor small cells can be deployed without grid upgrades.
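The power delta above can also be sketched per square mile. The cluster wattage and RU count per DU are assumptions ("kilowatts" and "tens of radio units" from the paragraph above, made concrete for arithmetic); only the sub-150 mW target comes from the text.

```python
# Power budget comparison at small-cell density (illustrative assumptions).
SITES = 50                      # sites per sq mile, midpoint of the 40-60 range
RUS_PER_DU = 20                 # "tens of radio units" served per DU GPU cluster
GPU_CLUSTER_KW = 3.0            # assumed kW draw of one DU-side GPU cluster
RU_INFERENCE_MW = 150           # sub-150 mW INT8 silicon target, per RU

# DU-side: amortize the cluster over its RUs, then scale to the square mile
du_watts = SITES * (GPU_CLUSTER_KW * 1000 / RUS_PER_DU)
# RU-side: one inference chip per radio unit
ru_watts = SITES * RU_INFERENCE_MW / 1000

print(f"DU-side GPU inference: {du_watts:.0f} W per sq mile")  # -> 7500 W
print(f"RU-side INT8 silicon:  {ru_watts:.1f} W per sq mile")  # -> 7.5 W
```

Under these assumptions the gap is three orders of magnitude, which is why the RU-side approach can run on existing pole power while the DU-side approach pushes sites toward grid upgrades.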
The O-RAN 7.2x split leaves the RU's internal processing deliberately unspecified. The OCUDU foundation is building the software substrate. The inference silicon that runs inside the RU — hardcoded, power-efficient, antenna-adjacent — does not exist as a commercial product. That window is open now, before the 6G silicon generation locks in.
Bottom Line
MWC 2026 did not change the neuraRAN thesis. It confirmed it. The industry spent a week declaring AI-native networks a multi-trillion-dollar infrastructure shift — and every dollar announced is flowing into the DU layer and above. The RU baseband pipeline, the O-RAN 7.2x stack running the same fixed DSP it ran in 3G, was not on a single stage.
The GPU clusters and AI-RAN servers shipping today are the first chapter. They validate that neural processing belongs in the RAN. They don't answer what silicon runs it at the antenna — where the physics of 5.5G and 6G actually demand it.
That's the layer we're building for. The validation from Barcelona is not that someone else is doing it. The validation is that no one is.