

Oct 9
Quality of Service: Queuing and Policing
Posted by Trevor Butler

In this post I am going to focus on the queuing and policing aspects of QoS. To reiterate the scenario from the first part, we will focus on a very common QoS deployment: supporting end-to-end voice and video. Additionally, I will focus on Cisco's newer MQC (Modular QoS CLI) method of deploying QoS rather than the older MLS QoS.

This post is the third in a three-part series on QoS. If you haven't read the first post, I highly recommend starting there: https://www.lookingpoint.com/blog/quality-of-service-when-to-use-it

At this point in our QoS journey we have mapped out the path traffic will take, identified the network devices that need to be configured along that path, marked traffic at the access switchports, and trusted those markings between network equipment along the path. Our network is now classifying traffic! But so what?

Classified traffic is great and all, but if we aren't using those classifications to manipulate traffic behavior then we went through all this work for nothing. At the end of the day we are trying to prioritize traffic, so now that we have identified it, let's start prioritizing.

Shaping vs. Policing

Whenever you look at the throughput on an interface, you rarely see a constant stream of data; the vast majority of the time traffic arrives in bursts. When I first started looking into QoS I was confused by the difference between shaping and policing. Simply put, the difference is what to do with the data that exceeds the bandwidth of the outgoing interface during those traffic spikes.

  • Traffic Shaping: Shaping is a store-and-forward type of algorithm. It buffers the data that exceeds a threshold and transmits it later, after the traffic spike calms down. This smooths out the transmission of traffic, at the cost of added delay on the network.

  • Traffic Policing: Policing is a bit more aggressive. It is a collection of algorithms that drop traffic once it exceeds a certain threshold. For TCP traffic this has the effect of slowing the transmission rate via TCP's built-in windowing algorithm, but it also causes more TCP retransmits on the network.

NOTE: For this post I will focus on Shaping as this is the most common method for limiting traffic in an enterprise network.
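The contrast can be sketched in a few lines of Python. This is an illustrative toy model only: real shapers and policers use token buckets with byte-granular timing, and the tick-based accounting here is an assumption for clarity.

```python
def police(arrivals, rate_per_tick):
    """Policer: traffic over the allowed rate in each tick is dropped."""
    sent, dropped = [], 0
    for burst in arrivals:
        ok = min(burst, rate_per_tick)
        sent.append(ok)
        dropped += burst - ok
    return sent, dropped

def shape(arrivals, rate_per_tick):
    """Shaper: excess traffic is buffered and sent in later, quieter ticks."""
    sent, queued = [], 0
    for burst in arrivals:
        queued += burst
        ok = min(queued, rate_per_tick)
        sent.append(ok)
        queued -= ok
    return sent, queued  # any leftover 'queued' data is delayed, not dropped

spiky = [10, 0, 0, 10, 0, 0]   # units of data arriving per tick
print(police(spiky, 5))        # ([5, 0, 0, 5, 0, 0], 10) -> 10 units lost
print(shape(spiky, 5))         # ([5, 5, 0, 5, 5, 0], 0)  -> smoothed, nothing lost
```

Same spiky input, same rate limit: the policer throws away half the burst, while the shaper delivers everything a tick or two later.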

Let's Shape Some Traffic over MPLS

The most common use for traffic shaping is at the boundary between your LAN and WAN. Often our LAN bandwidth is orders of magnitude higher than our WAN circuits can handle, and with some of the older WAN technologies, such as MPLS, the discrepancy is even greater. For this reason it is best to create a QoS policy that tells the routers at each end of the MPLS circuit what to do when our high-bandwidth LAN traffic hits the lower bandwidth of our WAN.

Most providers, when they see more traffic on their ingress interface than they are contractually obligated to carry under the Committed Information Rate (CIR), will drop the excess using a policing policy. Because we don't want more TCP retransmits on our network, it is in our best interest to shape traffic on our egress side of the MPLS circuit, buffering the excess so our traffic never exceeds the CIR in the first place.

policy-map SHAPE
 class class-default
  shape average 40000000
  service-policy QOS-OUT

Shaping traffic is super easy: create a policy-map, then under the default class add the shape command with a rate, in bits per second, that matches the CIR of the circuit. Then apply this policy-map to the MPLS interface on the router in the outbound direction using the command "service-policy output SHAPE".
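Under the hood, a shaper releases a committed burst (Bc) of bits every interval (Tc) so that the short bursts average out to the CIR. The arithmetic is simple enough to sketch; note that the 4 ms Tc below is an assumed illustrative value, since IOS derives Bc and Tc automatically unless you override them.

```python
def shaper_intervals(cir_bps, tc_ms):
    """Return (Bc in bits per interval, intervals per second) for a shaper
    that releases Bc bits every Tc milliseconds to average out to the CIR."""
    bc = cir_bps * tc_ms // 1000   # Bc = CIR * Tc
    return bc, 1000 // tc_ms

# 40 Mbps CIR with an assumed 4 ms interval:
# 160,000 bits released 250 times per second = 40,000,000 bps on average
print(shaper_intervals(40_000_000, 4))  # (160000, 250)
```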

Queuing Traffic for MPLS

Now this SHAPE policy will buffer any traffic that exceeds the 40 Mbps in this example and send it during a traffic lull. But notice we are shaping under the default class. Weren't we setting up the QoS policy to provide end-to-end QoS? After all, that's why we chose to send traffic over MPLS rather than over the internet: our MPLS provider honors our QoS tags. The eagle-eyed among you will have noticed that last line; it's a call to another service policy named QOS-OUT. Without this line all traffic would be buffered and processed in a single First-In-First-Out (FIFO) queue.

class-map match-any QOS_PREMIUM_QUEUE
 match dscp cs4 af41 af42 af43 cs5 ef
class-map match-any QOS_ENHANCEDPLUS_QUEUE
 match dscp cs3 af31 af32 af33 cs6 cs7
class-map match-any QOS_ENHANCED_QUEUE
 match dscp cs2 af21 af22 af23
class-map match-any QOS_BASICPLUS_QUEUE
 match dscp cs1 af11 af12 af13
class-map match-any QOS_BASIC_QUEUE
 match dscp default
!
policy-map QOS-OUT
 class QOS_PREMIUM_QUEUE
  bandwidth percent 30
  random-detect dscp-based
 class QOS_ENHANCEDPLUS_QUEUE
  bandwidth percent 30
  random-detect dscp-based
 class QOS_ENHANCED_QUEUE
  bandwidth percent 20
  random-detect dscp-based
 class QOS_BASICPLUS_QUEUE
  bandwidth percent 10
  random-detect dscp-based
 class QOS_BASIC_QUEUE
  bandwidth percent 10
  fair-queue

Anyone who has worked with MPLS providers may know that they honor far fewer queues than we can use on the LAN side. With Cisco's MQC QoS we have up to 12 queues to play with, but this MPLS provider only supports four queues in addition to their default queue. So along with shaping the traffic, we need to reclassify it to fit into one of these five queues.

The way we accomplish this is with class-maps and the "match dscp" command. As we can see above, if traffic matches any of the DSCP values in the list it will be classified into that class. Then, in the policy-map, we can act on these new classifications.
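As a mental model, the class-maps act like an ordered lookup with a fallback to the default class. A small Python sketch, where the match lists mirror the class-maps above and the top-down evaluation order is an assumption for illustration:

```python
# Each provider queue matches a set of DSCP names, checked in order,
# much like MQC evaluates classes in a policy.
QUEUE_MAP = [
    ("QOS_PREMIUM_QUEUE",      {"cs4", "af41", "af42", "af43", "cs5", "ef"}),
    ("QOS_ENHANCEDPLUS_QUEUE", {"cs3", "af31", "af32", "af33", "cs6", "cs7"}),
    ("QOS_ENHANCED_QUEUE",     {"cs2", "af21", "af22", "af23"}),
    ("QOS_BASICPLUS_QUEUE",    {"cs1", "af11", "af12", "af13"}),
]

def classify(dscp):
    """Return the first class whose match list contains the packet's DSCP,
    falling back to the basic (default) queue for anything unmatched."""
    for queue, dscps in QUEUE_MAP:
        if dscp in dscps:
            return queue
    return "QOS_BASIC_QUEUE"

print(classify("ef"))       # QOS_PREMIUM_QUEUE -> voice lands in the top queue
print(classify("default"))  # QOS_BASIC_QUEUE   -> unmarked traffic falls through
```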

NOTE: The MPLS provider should have documentation on which DSCP values map to which of their QoS queues; these mappings change from provider to provider. The number of queues and the names in this example were based on one specific provider's information, so yours may vary.

Now let's look at the QOS-OUT policy. Each class defined under the policy creates a queue that is given a certain percentage of the bandwidth, five queues in all. These percentages ultimately become ratios: for every one packet egressing from the BASIC_QUEUE, you can expect two packets to egress from the ENHANCED_QUEUE and three from the PREMIUM_QUEUE. This effectively means that, on average, the router sends more traffic from the higher queues, which map to the DSCP values that time-sensitive packets are marked with.
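You can watch that ratio fall out of a toy weighted round-robin model. This is a simplification: real schedulers such as CBWFQ account for packet sizes and only enforce these shares when queues are congested.

```python
from collections import Counter

# 'bandwidth percent' weights from the QOS-OUT policy
WEIGHTS = {"PREMIUM": 30, "ENHANCEDPLUS": 30, "ENHANCED": 20,
           "BASICPLUS": 10, "BASIC": 10}

def weighted_round_robin(weights, rounds):
    """Serve each always-backlogged queue 'weight' packets per round."""
    sent = Counter()
    for _ in range(rounds):
        for queue, weight in weights.items():
            sent[queue] += weight
    return sent

sent = weighted_round_robin(WEIGHTS, 100)
print(sent["PREMIUM"] // sent["BASIC"])   # 3 -> the 3:1 ratio from the text
print(sent["ENHANCED"] // sent["BASIC"])  # 2 -> the 2:1 ratio
```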

Dealing with Queue Overflow

At the end of the day these policies are run on hardware, and hardware has a physical limit to the amount of data it can hold. Because of this limitation, we need to tell the router how to handle a queue that is about to fill up. This is what the random-detect dscp-based and fair-queue commands are for. random-detect dscp-based tells the policy to enable Weighted Random Early Detection (WRED) for that queue.

WRED utilizes the priority of the different DSCP values in the queue to determine the drop ratio. If you remember back to the first post, we talked about how AF11 has a higher priority than AF13; this priority is the Weighted element in WRED. The router will drop more of the AF13 packets than AF11 packets to make room in the queue.
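WRED's per-DSCP behavior can be sketched as a piecewise function of average queue depth. The thresholds below are assumed illustrative values, not IOS defaults; the point is that AF13's drop zone starts earlier than AF11's.

```python
import random

def wred_drop(avg_depth, min_th, max_th, mark_denom=10):
    """WRED-style drop decision for one DSCP profile: no drops below min_th,
    drop probability rising linearly to 1/mark_denom at max_th, and
    forced (tail-style) drops at or above max_th."""
    if avg_depth < min_th:
        return False
    if avg_depth >= max_th:
        return True
    p = (avg_depth - min_th) / (max_th - min_th) / mark_denom
    return random.random() < p

# Assumed illustrative profiles: AF11 (low drop precedence) starts
# dropping later than AF13 (high drop precedence) in the same queue.
AF11 = dict(min_th=32, max_th=40)
AF13 = dict(min_th=24, max_th=40)

print(wred_drop(28, **AF11))  # False: 28 is still below AF11's min threshold
print(wred_drop(40, **AF13))  # True: at the max threshold every packet drops
```

At an average depth of 28, AF13 packets are already candidates for random drops while AF11 packets are untouched, which is exactly the "weighted" part of WRED.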

Fair-queue enables flow-based fair queuing: the class is split into per-flow sub-queues so that no single flow can monopolize the bandwidth. If you notice, fair-queuing is only applied to the BASIC_QUEUE, and that class matches only default traffic, or traffic without a DSCP value. Because there are no DSCP values to prioritize, every flow in this queue simply gets an even share, and tail drop is utilized when the queue is full.

OK, so now that we have the full picture across all of the QoS topics, let's put it all together!

We identified the path each application's traffic takes. We determined which access switchports see these applications and need marking, which infrastructure trunks need to trust the markings, and where the bottlenecks are that need shaping and queuing policy.

At the access switches, we have a policy-map that classifies traffic with DSCP values. By defining class-maps that match based on ACLs or protocols that identify each application, we can mark traffic with the correct DSCP value. Then at each trunk we trust and preserve these markings as traffic routes toward the MPLS circuit.

At the MPLS circuit, the SHAPE policy-map has a single class in it that shapes all of the traffic that is egressing the port going to MPLS. This sets a maximum bandwidth for all traffic so we don't exceed the CIR of the circuit. Then inside the SHAPE policy-map a second policy-map is called, QOS-OUT.

The QOS-OUT policy assigns each of the DSCP values to a queue, and these queues are then given a percentage of that shaped bandwidth. Additionally, a weighted drop probability based on DSCP priority is defined in the event a queue fills. For any unmarked traffic, a default queue is maintained with flow-based fair queuing.

Hopefully you now understand each of the components of this popular QoS policy. What I've covered over the past three blog posts is by no means the full extent of what you can do with QoS; I simply wanted to show the basics of an end-to-end QoS policy!


As always if you have any questions on your network configuration and would like to schedule a free consultation with us, please reach out to us at sales@lookingpoint.com and we’ll be happy to help!


Written By:

Trevor Butler, Network Architect
