Research — April 2, 2026
Mobile World Congress 2026 in Barcelona in early March marked a clear pivot: communications service providers and their ecosystems are no longer treating generative AI as a conversational layer bolted onto existing operations. Instead, the event's most consequential announcements pointed to agentic AI as the next operating model for networks and network operations, and to telco infrastructure as a substrate for enterprise AI services.

MWC 2026 delivered a coherent message: Communications service providers (CSPs) are pursuing a two-front transformation, with agentic operations to run networks better and AI-era infrastructure offerings to remain relevant beyond connectivity. The shift from co-pilots to bounded autonomy is directionally correct, but value will accrue only where agents are tied to measurable outcomes, strict guardrails and credible shared-state models. Simultaneously, "infrastructure for AI" is hardening into real product and capex motion: sovereign AI factories, programmable network APIs, AI-grade data center interconnection and SGP.32-enabled device life cycle orchestration are converging into a telco-shaped role in the AI value chain. The competitive environment will remain unforgiving: Hyperscalers will likely dominate generic GPU supply and self-build interconnect at extreme scale, so telcos must win where they are structurally advantaged (regulated industries, locality/sovereignty, deterministic performance and operational trust) and where partnerships with major networking vendors can accelerate data center interconnect productization. The implication is to productize narrow, provable wins and scale them through partnerships, avoiding a repeat of earlier telco data center and edge-compute hype cycles.

From co-pilots to bounded autonomy in network operations
MWC's most important operational shift was the repositioning of agentic AI as an execution layer, not merely as an interface, a theme reinforced by Nokia Oyj and Amazon Web Services Inc. (agentic AI-powered 5G-Advanced network slicing, with du (Emirates Integrated Telecommunications Co. PJSC) and Orange SA as the first operators to test the solution), NTT Docomo Inc. and NEC Corp. (agentic AI-automated 5G core construction on AWS), and Huawei Investment & Holding Co. Ltd. and ZTE Corp. (autonomous-network operations framing), alongside workflow players such as ServiceNow Inc. Telefonaktiebolaget LM Ericsson (publ) added to the momentum with a partnership with French AI startup Mistral AI SAS to develop agentic AI tools tailored for telecom networks, with a stated long-term focus on automation, resilience and 6G evolution. The practical change is less about replacing engineers and more about formalizing closed-loop network operations patterns. The clearest early wins will be in domains where autonomy can be bounded, reversible and auditable: alarm correlation, ticket enrichment and routing, and policy tuning for differentiated services. This direction also reflects a tacit industry admission: many earlier automation programs stalled because they were brittle, siloed and overly deterministic. Agentic approaches can make operations more resilient by coordinating across tools and handling edge cases, but only if vendors keep agent actions within explicit limits, ground them in accurate network data and demonstrate improvements to real-world performance. The primary risk is "agent-washing": autonomy narratives without production-grade controls, proofs and metrics.
Implication: Agentic AI will likely not deliver fully dynamic networks in the near term, but it is reshaping assurance and service management operating models. CSPs and vendors that industrialize bounded autonomy with explicit governance and measurable KPIs should see durable advantage.
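To make "bounded, reversible and auditable" concrete, the sketch below shows one way a closed-loop assurance agent could be constrained in practice. It is a minimal illustration: the class names, guardrail thresholds and action types are assumptions for this example, not any vendor's actual framework.

```python
# Minimal sketch of a bounded-autonomy loop for network assurance.
# All names, thresholds and action types are illustrative; no vendor API is implied.
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ProposedAction:
    name: str                        # e.g., "retune_qos_policy", "reroute_traffic"
    scope: str                       # network element, slice or domain it touches
    reversible: bool                 # can the change be rolled back automatically?
    rollback: Optional[Callable[[], None]] = None

@dataclass
class Guardrails:
    allowed_actions: set             # explicit allow-list of action types
    max_actions_per_window: int = 10
    actions_taken: list = field(default_factory=list)

    def permits(self, action: ProposedAction) -> bool:
        return (action.name in self.allowed_actions
                and action.reversible
                and len(self.actions_taken) < self.max_actions_per_window)

def run_agent_step(propose: Callable[[], ProposedAction],
                   guardrails: Guardrails,
                   audit_log: list) -> str:
    """One closed-loop iteration: propose -> check guardrails -> act or escalate."""
    action = propose()                                   # agent suggests a remediation
    decision = "executed" if guardrails.permits(action) else "escalated_to_human"
    audit_log.append({"action": action.name,             # every decision is logged,
                      "scope": action.scope,             # making the loop auditable
                      "decision": decision})
    if decision == "executed":
        guardrails.actions_taken.append(action.name)
        # apply_to_network(action) would go here in a real system
    return decision
```

The point of the sketch is the shape of the control flow: the agent may only act from an allow-list, every action must be reversible, and every decision, including escalations, lands in an audit trail that can be tied back to KPIs.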
CSP AI factories and sovereign AI as the most defensible telco wedge
"Infrastructure for AI" was dominated by a specific posture: CSPs aiming to sell sovereign, policy-governed AI capacity and services to enterprises that cannot (or will not) run sensitive workloads exclusively on hyperscalers. Red Hat's AI factory narrative — exemplified by operator Telenor ASA as a proof point — captured the core thesis: multi-tenant platforms that emphasize locality, compliance, and operational control, supported by telco-grade life-cycle management. What distinguishes this from prior "edge compute" cycles is explicit commercial packaging: GPU capacity, platform software, security/governance and managed services, positioned closer to cloud economics than to bespoke professional services. The strategic logic is sound — sovereignty, data residency and regulated-industry requirements represent a defensible wedge — but execution discipline will be decisive. Competing credibly requires capacity planning, high GPU utilization, platform reliability, developer experience and ecosystem integration. The most likely winners will likely be CSPs that build repeatable offerings around regulated verticals and low-latency regional interconnect, while partnering aggressively for elements they cannot differentiate.
Implication: AI factories represent the clearest near-term path for telcos to monetize AI beyond traditional connectivity services — particularly when sovereignty and compliance matter. However, success depends on operational execution, not marketing promises: CSPs must prove utilization, reliability and repeatability. Without those proof points, "AI factories" risk replaying prior edge-compute narratives that failed to scale.
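One simple way to picture "policy-governed" capacity is a residency-aware placement check that runs before any workload is scheduled. The sketch below is hypothetical: the jurisdiction codes, workload fields and site names are invented for this example.

```python
# Hypothetical data-residency placement check for a sovereign AI factory.
# Region codes, policy fields and the workload schema are invented for illustration.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Workload:
    tenant: str
    data_classification: str         # e.g., "public", "regulated", "sovereign"
    allowed_jurisdictions: Set[str]  # where this data is legally permitted to run

@dataclass
class Site:
    region: str                      # operator facility identifier
    jurisdiction: str                # legal jurisdiction the site operates under
    gpus_free: int

def eligible_sites(workload: Workload, sites: List[Site], gpus_needed: int) -> List[Site]:
    """Return only sites that satisfy both the residency policy and capacity."""
    return [s for s in sites
            if s.jurisdiction in workload.allowed_jurisdictions
            and s.gpus_free >= gpus_needed]

# Example: a regulated workload that must stay within Norway
wl = Workload(tenant="health-trust", data_classification="regulated",
              allowed_jurisdictions={"NO"})
sites = [Site("osl-1", "NO", 64), Site("fra-2", "DE", 512)]
print([s.region for s in eligible_sites(wl, sites, gpus_needed=32)])  # ['osl-1']
```

Codifying such constraints, rather than leaving them to contract language alone, is one concrete expression of the locality and compliance emphasis described above.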
Programmable networks meet enterprise agents: Network APIs become a control surface
MWC reinforced that telco roles in the AI ecosystem extend beyond compute and transport. The network itself is being framed as a programmable instrument for AI applications and enterprise agents. Nokia's Network as Code narrative is emblematic, anchored at MWC by a new integration with Google Cloud's agentic AI stack and positioning network capabilities as software surfaces that can be requested, tuned and validated programmatically (quality, slicing, prioritization, security posture). The strategic importance is twofold. First, APIs offer a path to monetize differentiated network behavior for latency-sensitive, safety-critical or performance-variable applications. Second, network programmability becomes more compelling in an agentic world because agents can incorporate network state and policy into automated decision-making. Commercialization remains the core challenge. Network APIs tend to fail when they are difficult to consume, inconsistently implemented across operators or priced without clear linkage to business outcomes. Viable strategies emphasize a focused set of outcome-based APIs, consistent governance and security, and operator-to-operator portability where possible. This entire strategy hinges on whether initiatives such as CAMARA and the GSMA Open Gateway succeed in standardizing network APIs across operators.
Implication: Network APIs are becoming more strategic as AI agents and applications demand predictable performance and real-time adaptability. The opportunity is real but fragile: inconsistent implementation and weak productization can stall adoption. CSPs should narrow their focus to outcomes-based APIs, then prove ROI through enterprise reference workflows rather than broad "developer ecosystem" messaging.
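For a sense of what the network looks like as a control surface from an agent's side, the sketch below requests a temporary quality-of-service session for one application flow. The endpoint path and payload loosely follow the shape of CAMARA-style quality-on-demand APIs but are simplified and hypothetical; the base URL, token and profile name are placeholders rather than any operator's production API.

```python
# Illustrative request for differentiated network quality for one device/app flow.
# The endpoint and payload loosely mirror CAMARA-style quality-on-demand APIs but
# are simplified and hypothetical; the base URL, token and profile are placeholders.
import requests

API_BASE = "https://api.example-operator.com/quality-on-demand/v1"  # placeholder
TOKEN = "example-oauth-token"  # obtained out of band from the operator

def request_qos_session(device_ip: str, app_server_ip: str,
                        profile: str, seconds: int) -> dict:
    """Ask the network to apply a named QoS profile to one flow for a limited time."""
    payload = {
        "device": {"ipv4Address": device_ip},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": profile,   # e.g., a low-latency profile defined by the operator
        "duration": seconds,     # session lifetime; the network reverts afterwards
    }
    resp = requests.post(f"{API_BASE}/sessions", json=payload,
                         headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    resp.raise_for_status()
    return resp.json()           # typically returns a session id used for teardown

# An enterprise agent might call this only when its own latency budget is at risk,
# which ties the (billable) API call to a measurable business outcome.
```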
AI data center interconnection becomes a product
As AI workloads sprawl across clusters, regions and AI factories, the bottleneck is increasingly internetworking: inter-data center bandwidth, latency variance, congestion control and power efficiency. MWC 2026 made this a visible battleground not only for CSPs but also for the routing/optical/open networking ecosystem that sells into both telcos and cloud providers. Major vendors emphasized AI-era data center interconnect scaling and operational simplification as first-order requirements, including high-density routing upgrades (e.g., 800 gigabits per second/1.6 terabits per second trajectories), coherent optics advances and automation-centric optical operations (such as in NTT's IOWN optical networking project). The signal was that AI is shifting DCI from "backbone plumbing" to a strategic differentiator that determines whether distributed infrastructure behaves like a single compute domain. Hyperscalers will continue to self-build where scale and ROI justify it (dark fiber, private backbone, custom fabrics). However, a large middle market (enterprises, sovereign and regional clouds, AI service providers outside the hyperscaler orbit and many CSP-hosted AI factory models) requires AI-grade interconnect without hyperscaler capex and bespoke operations. For telcos, the opportunity in that broader market is the productization of "optics and IP as a programmable service" with measurable SLAs and automated operations.
Implication: AI elevates data center interconnection from transport to platform capability. Hyperscalers will likely dominate at extreme scale, but telcos and networking suppliers can win by productizing deterministic optical underlays, AI-aware routing and managed dark-fiber operations for the non-hyperscaler market.
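A rough back-of-envelope sketch shows why interconnect characteristics decide whether distributed sites behave like one cluster. Every figure below is an assumption chosen for illustration, not a measured or reported number.

```python
# Back-of-envelope illustration of why inter-data center bandwidth matters for
# distributed AI; all numbers are assumptions for illustration only.

model_params = 70e9            # assume a 70B-parameter model
bytes_per_param = 2            # assume fp16/bf16 gradients
sync_bytes = model_params * bytes_per_param          # ~140 GB per full gradient sync

link_gbps = 800                # assume one 800 Gbps DCI wavelength
link_bytes_per_s = link_gbps / 8 * 1e9               # ~100 GB/s of usable throughput

transfer_s = sync_bytes / link_bytes_per_s
print(f"one full gradient exchange: ~{transfer_s:.1f} s per step over a single link")
# Under these assumptions the exchange alone costs ~1.4 s per training step, before
# any compute, which is why deterministic, high-bandwidth, low-jitter interconnect
# (or compression and hierarchical sync) determines whether two sites act as one.
```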
SGP.32 and device orchestration: Quiet prerequisite for distributed physical AI
While less visible than GPU announcements, the IoT eSIM standard SGP.32 emerged as a critical enabling layer for the next wave of distributed systems: fleets of IoT devices and edge nodes feeding data into AI workflows and receiving policy/logic updates back. Telit Cinterion, Soracom Inc. and emnify positioned SGP.32 as an operational unlock for "factory-to-field" onboarding, provider independence and programmable life cycle management. The key shift is treating eSIM less as a connectivity convenience and more as an operational control plane: standardized provisioning flows, profile management as code, and resilience in multi-operator deployments. This plays into a larger opportunity: telco-supported edge compute for distributed inference. As enterprises push AI processing closer to where data is generated (sites, vehicles, retail, industrial operations), connectivity and eSIM life cycle become intertwined with placement decisions (central AI factory versus regional edge versus on-premises). This has meaningful energy and power implications as well: moving inference outward may help reduce backhaul intensity and latency and help avoid over-concentrating power demand in mega data centers, but it also creates a new operational burden — powering, cooling, and managing many smaller AI footprints with consistent security and life-cycle governance. A credible telco role is an integrated package: SGP.32-driven onboarding + secure connectivity + regional edge compute options, governed by clear policies, observability and predictable energy/power economics.
Implication: SGP.32 is becoming foundational for globally distributed, AI-enabled operations and physical AI edge inference: it simplifies onboarding, life-cycle control and resilience for device fleets and edge nodes. CSPs can amplify this by bundling connectivity with regional edge compute but must operationalize distributed power/energy constraints and deliver repeatable deployment playbooks.
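As a rough illustration of "profile management as code," the sketch below walks one device through a factory-to-field onboarding step with a fallback path. The eIM client interface, activation codes and site mapping are invented for this example and do not correspond to any specific vendor SDK, although SGP.32 does define the eIM and IPA roles the sketch assumes.

```python
# Hypothetical factory-to-field onboarding flow for an SGP.32-style eSIM fleet.
# The EIMClient interface, activation codes and site mapping are invented for
# illustration; SGP.32 defines eIM/IPA roles, but no vendor SDK is implied here.
from dataclasses import dataclass

@dataclass
class DeviceRecord:
    eid: str                   # eUICC identifier captured at the factory
    site: str                  # deployment site, used to pick a regional profile

class EIMClient:
    """Stand-in for an eSIM IoT remote manager (eIM) orchestration API."""
    def download_profile(self, eid: str, activation_code: str) -> str:
        print(f"downloading profile to {eid} via {activation_code}")
        return "placeholder-iccid"               # ICCID the RSP server would return
    def enable_profile(self, eid: str, iccid: str) -> None:
        print(f"enabling {iccid} on {eid}")
    def fallback_to_bootstrap(self, eid: str) -> None:
        print(f"reverting {eid} to its bootstrap profile")

SITE_TO_ACTIVATION_CODE = {                      # declarative mapping, kept in version control
    "plant-us-east": "LPA:1$rsp.example-us.net$US-PLAN-A",
    "plant-de-01": "LPA:1$rsp.example-eu.net$EU-PLAN-B",
}

def onboard(eim: EIMClient, device: DeviceRecord) -> None:
    """Download and enable the site-appropriate operator profile for one device."""
    code = SITE_TO_ACTIVATION_CODE[device.site]
    iccid = eim.download_profile(device.eid, code)
    try:
        eim.enable_profile(device.eid, iccid)
    except Exception:
        eim.fallback_to_bootstrap(device.eid)    # resilience in multi-operator fleets
        raise

onboard(EIMClient(), DeviceRecord(eid="example-eid-0001", site="plant-de-01"))
```

The mapping from site to activation code living in version control, rather than in a manual provisioning step, is what turns eSIM handling into the operational control plane described above.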
451 Research from S&P Global Energy Horizons provides technology industry research, data, and advisory solutions. For more information or to contact us, please visit 451 Research.
This article was published by S&P Global Market Intelligence and not by S&P Global Ratings, which is a separately managed division of S&P Global.