The jury is still out on whether artificial intelligence (AI) will eat the world, but generative AI (genAI) is certainly consuming a lot of bandwidth. ChatGPT alone has more than 300 million weekly active users. The explosive growth of genAI brings a critical challenge to both enterprise network managers and communication service providers (CSPs): the WAN must evolve to meet the needs of AI traffic, and CSPs have an opportunity to both advance their own networks and partner with their enterprise customers.

Reimagining network traffic in the AI era

Traditionally, the enterprise WAN has been designed around downstream-focused traffic. For example, typical web services follow a nine-to-one ratio of downloaded versus uploaded data, and internet services often offer download speeds five to 10 times faster than upload speeds.

AI applications dramatically upend this pattern. AI workloads require high-bandwidth, symmetrical connectivity that can handle massive, unpredictable data transfers. AI-driven technologies are also growing quickly across fields such as precision agriculture, medical imaging analysis, climate change modeling, and predictive machinery maintenance, but those workloads tend to be processed on site, at the edge location where the data is generated. GenAI applications, by contrast, act more like a highly interconnected peer-to-peer network: small language models (SLMs) and agentic AI in multiple locations interact constantly with large language models (LLMs) in data centers, generating data-heavy and highly personalized results.
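To make that shift concrete, consider a rough, back-of-the-envelope comparison, sketched here in Python. The payload size and link speeds are hypothetical round numbers, not measurements of any particular service; the point is simply that the same genAI upload that finishes in seconds on a symmetric circuit can take minutes on a consumer-grade asymmetric uplink.

```python
# Illustrative arithmetic only: the payload size and link speeds below are
# hypothetical, chosen to show why asymmetric circuits struggle with
# upload-heavy genAI traffic.

def transfer_seconds(payload_gb: float, link_mbps: float) -> float:
    """Time to move a payload over a link, ignoring protocol overhead."""
    payload_megabits = payload_gb * 8 * 1000  # GB -> megabits (decimal units)
    return payload_megabits / link_mbps

payload_gb = 2.0              # e.g., documents and images sent to an LLM service
asymmetric_uplink_mbps = 50   # typical consumer-grade uplink
symmetric_uplink_mbps = 500   # symmetric enterprise circuit

print(f"Asymmetric uplink: {transfer_seconds(payload_gb, asymmetric_uplink_mbps):.0f} s")
print(f"Symmetric uplink:  {transfer_seconds(payload_gb, symmetric_uplink_mbps):.0f} s")
# Asymmetric uplink: 320 s
# Symmetric uplink:  32 s
```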

Unique network challenges

The AI revolution introduces critical networking challenges:

  • Unprecedented traffic patterns: AI applications generate bursty, unpredictable traffic that can spike instantaneously. A single user prompt might trigger connections to dozens of edge sites and massive file uploads, potentially saturating existing WAN circuits.
  • Latency sensitivity: Unlike video conferencing, where latency is merely annoying, network delays can disrupt an AI application's entire processing pipeline. Distributed AI architectures, which spread work across multiple servers and edge locations, are particularly vulnerable to latency issues.
  • Scalability demands: Network infrastructures must now support complex peer-to-peer architectures where multiple AI agents collaborate across distributed enterprise networks. This requires robust, flexible network overlay technologies that can dynamically manage these intricate interactions.
  • Network security: Securing data and mitigating compliance risk are equally important. Sensitive information accidentally sent to an LLM may be used to train the model, potentially exposing that data in the public domain. Visibility into employees’ use of generative AI applications is key to reining in shadow AI activities.

Innovative solutions: AI-powered network management

To address these challenges, Broadcom has introduced the VeloRAIN (Robust Artificial Intelligence Networking) architecture. VeloRAIN enhances the network intelligence of VeloCloud SD-WAN so that enterprises can use AI capabilities effectively without compromising network performance or user experience. It is networking for AI (building optimized infrastructure for AI apps) and AI for networking (using generative AI to improve the market-leading VeloCloud SD-WAN).

VeloRAIN is built on four pillars:

  • AI-driven application profiling: Advanced machine learning systems can now identify and prioritize AI applications. VeloRAIN will also be able to identify encrypted application traffic that was previously unreadable and treat it appropriately. Companies will be able to customize network performance to meet the needs of the most important applications, whether or not they are AI apps.
  • Dynamic Application-Based Slicing (DABS) and AI-based network optimization: This approach focuses on application-layer optimization, guaranteeing quality of experience (QoE) across multiple network types. AI workloads receive the bandwidth and low latency they need for optimal performance, while less critical applications can be de-prioritized. DABS also uses user profiles to prioritize traffic based on a user’s identity and attributes, helping ensure that key users receive the QoE they need (a simplified policy lookup is sketched after this list).
  • AI Operations (AIOps) with real-time data: Using anonymized data from more than half a million deployed VeloCloud SD-WAN Edges, VeloRAIN applies AIOps to automate network operations and dynamically adjust policies. In retail stores, for example, computer vision applications that analyze in-store traffic and customer behavior could benefit from real-time policy adjustments, helping shoppers have a seamless experience. This dynamic approach lets companies continually adjust network configurations as needed, ensuring the best possible experience for users and applications. AI also augments VeloCloud’s Dynamic Multipath Optimization™ (DMPO). Feeding packet capture data into an LLM will allow end users to ask natural-language questions such as “Why is the network running slowly?” and receive an immediate response.
  • AI-based network security: Using AI and machine learning (ML), the Symantec Global Intelligence Network analyzes over one billion organic threat signals daily and fine-tunes security policies for AI applications. The results of this analysis also help predict evolving threats. Reducing the attack surface through segmentation, and minimizing exposure through visibility, control, and data-leakage prevention, makes AI adoption safer.
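
To picture how application- and identity-aware prioritization like DABS might work conceptually, the short Python sketch below maps a flow's inferred application and user attributes to a traffic class. It is a hypothetical illustration only: the class names, attributes, and rules are invented for the example and do not represent VeloRAIN's actual policy model.

```python
# Hypothetical sketch of application- and user-aware traffic classification.
# The classes, attribute names, and priority labels are invented for
# illustration; they do not reflect VeloRAIN's actual policy model.
from dataclasses import dataclass

@dataclass
class FlowProfile:
    application: str      # e.g., inferred by ML-based traffic profiling
    is_ai_workload: bool  # flagged by the application-profiling engine
    user_role: str        # identity attribute, e.g., "radiologist", "analyst"

# Ordered policy table: the first matching rule wins.
POLICY_RULES = [
    (lambda f: f.is_ai_workload and f.user_role == "radiologist", "realtime"),
    (lambda f: f.is_ai_workload,                                  "high"),
    (lambda f: f.application in {"voip", "video-conference"},     "high"),
    (lambda f: True,                                              "best-effort"),
]

def classify(flow: FlowProfile) -> str:
    """Return the QoE class for a flow based on application and user attributes."""
    for matches, qoe_class in POLICY_RULES:
        if matches(flow):
            return qoe_class
    return "best-effort"

print(classify(FlowProfile("imaging-inference", True, "radiologist")))  # realtime
print(classify(FlowProfile("file-sync", False, "analyst")))             # best-effort
```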

VeloRAIN’s techniques become even more important as wireless connectivity such as 5G fixed-wireless access (FWA) and satellite grows more popular. Although they are gaining market share, wireless links are still not as stable as wired connections and could feel the network disruptions of generative AI even more sharply. VeloCloud machine learning algorithms can now accurately predict network conditions across connection types, from satellite to 5G, allowing dynamic bandwidth and latency management. VeloCloud SD-WAN can also bond multiple wireless links, creating one logical link that can handle the traffic loads of generative AI; a simplified illustration of that idea follows below.
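
Conceptually, bonding amounts to spreading offered load across links in proportion to each link's predicted capacity. The minimal Python sketch below illustrates that idea with invented throughput figures; a production SD-WAN implementation such as DMPO also weighs loss, jitter, and latency and handles per-packet reordering.

```python
# Hypothetical sketch of weighted traffic distribution across bonded links.
# The predicted capacities are invented; a real SD-WAN also weighs loss,
# jitter, and latency, and reorders packets at the far end.

def split_traffic(demand_mbps: float, predicted_capacity_mbps: dict[str, float]) -> dict[str, float]:
    """Spread offered load across links in proportion to predicted capacity."""
    total = sum(predicted_capacity_mbps.values())
    return {
        link: demand_mbps * capacity / total
        for link, capacity in predicted_capacity_mbps.items()
    }

# Example: a genAI upload burst of 400 Mbps spread across three wireless links.
links = {"5g_fwa": 300.0, "satellite": 150.0, "lte_backup": 50.0}
print(split_traffic(400.0, links))
# {'5g_fwa': 240.0, 'satellite': 120.0, 'lte_backup': 40.0}
```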

Looking forward: The intelligent, adaptive network

The telecommunications industry stands at a critical juncture. The networks of tomorrow must be intelligent, adaptive, and purpose-built for AI workloads. This means moving beyond traditional static network architectures to create dynamic, AI-powered infrastructure that can anticipate and respond to changing application demands.

For CSPs and enterprise network managers, the message is clear: the future of networking is not just about connecting devices, but about creating intelligent, responsive ecosystems that can support the next generation of computational complexity.