Configuring the AI-enabled router in 5G networks

Support for Quality of Service (QoS) is required in 5G to deliver differentiated services to multiple types and tiers of customers. For example, the latency and reliability requirements of a connected robotic control application are very different from those of a music streaming application. Yet these two applications can share the same network resources, which requires efficient planning and prioritization.

Network slicing is one way to meet the demands of particularly demanding use cases. Slicing has been proposed in the context of 5G to meet the varied requirements of Ultra-Reliable Low-Latency Communication (URLLC), Massive Machine-Type Communication (mMTC), and Enhanced Mobile Broadband (eMBB).

5G use cases will have varying traffic conditions: the traffic distribution per network slice and the number of users per service can vary dynamically over time. Overprovisioning resources is not always advisable, since operators must keep deployment costs low and use spectrum efficiently. To provide the strict QoS support specified for 5G differentiated services, queue management and port configurations must be resilient to changes in traffic patterns.

A key resource for meeting these requirements is routers that perform efficient queue management, congestion control, and flow prioritization for efficient network slicing. Configuring routers has traditionally been an expert-driven process with static or rule-based configurations for individual flows. However, under the dynamically varying traffic conditions expected in 5G use cases, these traditional approaches can generate suboptimal configurations.

Problems with current approaches include:

  • Hard-coded rules do not scale
  • Policies may be unachievable or suboptimal under current conditions
  • Previously unseen problems cannot be handled

We propose a solution to this problem based on model-based reinforcement learning (RL). Techniques such as RL have been exploited to route traffic over networks, but the automated configuration of internal port queues within routers and switches is a relatively unexplored area.

Consider the following example.

5G Slice Bottleneck in Router Port

Figure 1 illustrates a scenario where an end-to-end network slice failed to meet its service-level QoS objectives. Using a diagnostic tool, we identified the cause as a statically configured edge router.

The objective of our work is to automatically reconfigure the port queue to overcome this bottleneck.

Router Queue Modeling

A router typically has two types of network element components, organized on separate processing planes: a control plane and a forwarding plane. Switch and router interfaces have ingress (inbound) queues and egress (outbound) queues. An ingress queue stores packets until the switch or router CPU can pass the data to the appropriate interface. An egress queue stores packets until the switch or router can serialize the data onto the outgoing link.
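The ingress/egress behavior described above can be sketched as a pair of bounded FIFO buffers with tail drop on overflow. This is a minimal illustrative model, not the router's actual implementation; the `PortQueue` class and capacity value are assumptions for illustration.

```python
from collections import deque

class PortQueue:
    """Minimal FIFO model of a router port queue (illustrative sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity   # max packets buffered before tail drop
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1      # tail drop on buffer overflow
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None

# An interface pairs an ingress queue (packets awaiting the forwarding
# decision) with an egress queue (packets awaiting serialization).
ingress, egress = PortQueue(capacity=64), PortQueue(capacity=64)
ingress.enqueue("pkt-1")
egress.enqueue(ingress.dequeue())  # packet forwarded from ingress to egress
```

The tail-drop policy here is the simplest possible choice; real routers typically support richer disciplines (e.g. weighted fair queuing or RED) on the same buffers.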

Figure 2: Regulation and shaping of router queues.

Router Queue Regulation and Shaping

Congestion occurs when the rate of inbound traffic is greater than what can be successfully processed and serialized on an egress interface.

To study the effects of changes in router configurations, an elaborate queueing network model is used, which evaluates the I/O queues within the router. The model captures the effect of various configuration and traffic changes on observed outputs such as packet drop rates, latency, and throughput. This is crucial because configuration changes must be evaluated against live network data in the simulation environment before deployment.
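As a rough stand-in for such a queueing model, the sketch below simulates a single egress queue as an M/M/1/K system and reports the three observed outputs mentioned above: drop rate, mean latency, and throughput. The Poisson-arrival/exponential-service assumption and all parameter values are illustrative, not taken from the paper's model.

```python
import random

def simulate_port(arrival_rate, service_rate, capacity, n=10000, seed=0):
    """Toy M/M/1/K simulation of one egress queue.

    Returns drop rate, mean latency of served packets, and throughput.
    """
    rng = random.Random(seed)
    t = 0.0
    departures = []        # scheduled departure times of packets in system
    drops, latencies = 0, []
    for _ in range(n):
        t += rng.expovariate(arrival_rate)            # next arrival time
        departures = [d for d in departures if d > t]  # purge served packets
        if len(departures) >= capacity:
            drops += 1                                 # buffer full: tail drop
            continue
        start = departures[-1] if departures else t    # FIFO service start
        dep = start + rng.expovariate(service_rate)
        departures.append(dep)
        latencies.append(dep - t)                      # sojourn time
    served = n - drops
    return {"drop_rate": drops / n,
            "mean_latency": sum(latencies) / served,
            "throughput": served / t}

# Underloaded port: low drops, latency near 1/(mu - lambda).
print(simulate_port(arrival_rate=0.5, service_rate=1.0, capacity=50))
# Overloaded port: heavy drops, throughput capped near the service rate.
print(simulate_port(arrival_rate=2.0, service_rate=1.0, capacity=20))
```

Sweeping the configuration parameters (here, `capacity`; in a fuller model, also weights and queuing disciplines) against such a simulator is what allows configuration changes to be evaluated safely offline.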

Solution Brief: Reinforcement Learning Modeling for Router Port Configuration

Figure 3: Model-Based Reinforcement Learning for Port Queue Configuration


Figure 3 highlights the different aspects of our framework:

  1. Queue model simulator: To study the effects of changes in router configurations, an elaborate queueing network model that evaluates the queues within the router is used.
  2. Port configuration RL agent training: A partially observable Markov decision process (POMDP) is derived using the conditional probabilities of an action affecting the state, observations, and rewards of the queue model. The reinforcement learning agent is trained to take actions that produce optimal port queue configurations.
  3. Optimal configuration deployment: The learned policy is deployed on the observed router traffic and is shown to provide fair, priority-aware allocation of resources to the queues, preventing bottlenecks.
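To make step 2 concrete, the sketch below trains a tabular Q-learning agent that maps a coarse (hence partial) observation of queue occupancy to a queue-weight action. The paper derives and solves a full POMDP; this simplified stand-in, including the observation coding, action names, and hyperparameters, is an assumption for illustration only.

```python
import random

# Hypothetical action set: coarse weight profiles for the port queues.
ACTIONS = ["boost_q7", "balance", "boost_q1"]

def observe(occupancy):
    """Partial observation: only a coarse summary of true queue state."""
    return "q7_hot" if occupancy.get(7, 0.0) > 0.8 else "even"

class QLearningAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
        self.q = {}                          # (observation, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def act(self, obs):
        if self.rng.random() < self.epsilon:  # epsilon-greedy exploration
            return self.rng.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((obs, a), 0.0))

    def update(self, obs, action, reward, next_obs):
        best_next = max(self.q.get((next_obs, a), 0.0) for a in ACTIONS)
        old = self.q.get((obs, action), 0.0)
        # Standard Q-learning temporal-difference update.
        self.q[(obs, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In use, each training step would run the queue model simulator under the chosen action, compute a reward from the observed drop rates and latencies, and call `update`; after training, `act` yields the deployed configuration policy.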

For more details on the solution, please refer to our document "Automated Configuration of Router Port Queues Using Model-Based Reinforcement Learning".

Example output: ingress queue control strategy

To demonstrate the effect of applying the reinforcement learning framework to a sub-optimally configured port, a queueing network simulation model was designed to replicate the 5G slice bottleneck scenario. Figure 4 shows that the utilization and measured queue length for queue 7 are high, while queue 1 has lower-than-normal utilization. An uneven distribution of traffic was also observed in the other queues.

The aim was to generate a policy that would lighten this load, while maintaining the priorities of individual flows within the system.

Figure 4: Input Queue Simulation Outputs


A POMDP model was trained to generate a policy that reconfigures the system appropriately. The reward function was configured to favor improvements in observed throughput, residence times, and packet drop rates. The policy was deployed on the router model of Figure 5 and was shown to significantly improve fair utilization across all queues, despite varying traffic patterns.
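A reward of this shape can be sketched as a weighted difference of the three observed metrics between consecutive steps: positive when throughput rises and when residence time and drop rate fall. The weight values and metric names here are illustrative assumptions, not the paper's actual reward design.

```python
def reward(prev, curr, w_tput=1.0, w_lat=1.0, w_drop=1.0):
    """Illustrative reward: rewards throughput gains and penalizes
    increases in residence time and packet drop rate between steps."""
    return (w_tput * (curr["throughput"] - prev["throughput"])
            - w_lat * (curr["residence_time"] - prev["residence_time"])
            - w_drop * (curr["drop_rate"] - prev["drop_rate"]))

before = {"throughput": 0.80, "residence_time": 2.0, "drop_rate": 0.10}
after = {"throughput": 0.90, "residence_time": 1.5, "drop_rate": 0.05}
print(reward(before, after))  # positive: the reconfiguration helped
```

In practice, the relative weights encode operator priorities, e.g. a URLLC slice would weight residence time and drop rate far more heavily than throughput.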

Figure 5: Improvement in queue length observed after policy deployment.



Static configurations of edge and aggregation routers used in 5G networks rely on human experts and cannot dynamically adjust or learn from deployment errors. Our work has presented an automated technique for router port configuration that adapts to changes in traffic patterns and user requirements.

Through precise modeling of input and output queues, policies are generated using partially observable Markov decision processes. The research described in this article has shown this to be effective in regulating and shaping traffic across a range of router configurations, including changing priorities, weights, queuing disciplines, drop rates, and bandwidth limits. Our results were tested on router configurations from real Ericsson deployment use cases.

We believe that such AI-based network element configuration frameworks will become common in the future, enabling 5G and 6G routers and switches to learn, act, and evolve dynamically.

References

Java Modelling Tools (JMT) queueing network simulator

POMDP solver

Learn more

Read the full article by Ajay Kattepur, Sushanth David and Swarup Mohalik: “Automated Router Port Queue Configuration Via Model-Based Reinforcement Learning,” Data Driven Smart Grids Workshop, International Conference on Communications (ICC), 2021.

Learn more about the Ericsson 6000 series.

Read our report Artificial Intelligence and Machine Learning in Next Generation Systems.

Read our blog post, Can AI-powered services help ensure service continuity?

Here’s what you need to know about implementing automation in 5G transport networks.

Read our introduction to data-driven network architecture.
