Sep
20

In this final part of our blog series on QoS with the PIX/ASA, we examine the remaining two tools that we find on some devices - traffic shaping and traffic policing.

Traffic Shaping

Traffic shaping on the security appliance allows the device to limit the flow of traffic. The mechanism buffers traffic that exceeds the "speed limit" and attempts to send it later. On a 7.x security device, traffic shaping must be applied to all outgoing traffic on a physical interface; shaping cannot be configured for only certain types of traffic. The shaped traffic includes traffic passing through the device, as well as traffic that is sourced from the device itself.

In order to configure traffic shaping, use the class-default class and apply the shape command in Policy Map Class Configuration mode. The class-default class is created automatically for you by the system. It is a simple match-any class map that allows you to quickly match all traffic. Here is a sample configuration:

pixfirewall(config)# policy-map PM-SHAPER
pixfirewall(config-pmap)# class class-default
pixfirewall(config-pmap-c)# shape average 2000000 16000
pixfirewall(config)# service-policy PM-SHAPER interface outside

Verification is simple. You can run the following to confirm your configuration:

pixfirewall(config)# show run policy-map
!
policy-map PM-SHAPER
 class class-default
  shape average 2000000 16000
!

Another excellent command that confirms the effectiveness of the policy is:

pixfirewall(config)# show service-policy shape
Interface outside:
  Service-policy: PM-SHAPER
    Class-map: class-default
      shape (average) cir 2000000, bc 16000, be 16000
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0

Traffic Policing

With a policing configuration, traffic that exceeds the "speed limit" on the interface is dropped. Unlike traffic shaping configurations on the appliance, with policing you can specify a class of traffic that you want the policing to affect. Let's examine a traffic policing configuration. In this configuration, we will limit the amount of Web traffic that is permitted inbound on an interface.

pixfirewall(config)# access-list AL-WEB-TRAFFIC permit tcp host 192.168.1.110 eq www any
pixfirewall(config)# class-map CM-POLICE-WEB
pixfirewall(config-cmap)# match access-list AL-WEB-TRAFFIC
pixfirewall(config-cmap)# policy-map PM-POLICE-WEB
pixfirewall(config-pmap)# class CM-POLICE-WEB
pixfirewall(config-pmap-c)# police input 1000000 conform-action transmit exceed-action drop
pixfirewall(config)# service-policy PM-POLICE-WEB interface outside

Notice that we can verify with commands similar to those we used for shaping!

pixfirewall(config)# show run policy-map
!
policy-map PM-POLICE-WEB
 class CM-POLICE-WEB
  police input 1000000
!
pixfirewall(config)# show service-policy police
Interface outside:
  Service-policy: PM-POLICE-WEB
    Class-map: CM-POLICE-WEB
      Input police Interface outside:
        cir 1000000 bps, bc 31250 bytes
        conformed 0 packets, 0 bytes; actions:  transmit
        exceeded 0 packets, 0 bytes; actions:  drop
        conformed 0 bps, exceed 0 bps

I hope that you enjoyed this four-part series on QoS on the PIX/ASA! Please look for other posts about complex configurations on the security appliances very soon. I have already been flooded with recommendations!

Happy Studies!

Sep
12

This blog post focuses on QoS on the PIX/ASA and is based on 7.2 code, to be consistent with the CCIE Security Lab Exam as of the date of this post. I will write a later post covering the new features in 8.x code for all of you non-exam-biased readers :-)

NOTE: We have already seen thanks to our readers that some of these features are very model/license dependent! For example, we have yet to find an ASA that allows traffic shaping. 

One of the first things you discover about QoS on the PIX/ASA when you check the documentation is that none of the QoS tools these devices support are available in multiple context mode. This jumped out at me as a bit strange, and I just had to see for myself. So I went to a PIX device, switched to multiple mode, and then searched for the priority-queue global configuration command. Notice that, sure enough, the command was not available in the CUSTA context or the system context.

pixfirewall# configure terminal
pixfirewall(config)# mode multiple
WARNING: This command will change the behavior of the device
WARNING: This command will initiate a Reboot
Proceed with change mode? [confirm]
Convert the system configuration? [confirm]
pixfirewall> enable
pixfirewall# show mode
Security context mode: multiple
pixfirewall# configure terminal        
pixfirewall(config)# context CUSTA
Creating context 'CUSTA'... Done. (2)
pixfirewall(config-ctx)# context CUSTA
pixfirewall(config-ctx)# config-url flash:/custa.cfg
pixfirewall(config-ctx)# allocate-interface e2 
pixfirewall(config-ctx)# changeto context CUSTA
pixfirewall/CUSTA(config)# pri?     
configure mode commands/options:
privilege
pixfirewall/CUSTA# changeto context system
pixfirewall# conf t
pixfirewall(config)# pr?
configure mode commands/options:
privilege 

OK, so we have no QoS capabilities in multiple context mode. :-| What QoS capabilities do we possess on the PIX/ASA when running in single context mode? Here they are:

  • Policing – you will be able to set a “speed limit” for traffic on the PIX/ASA. The policer will discard any packets trying to exceed this rate. I always like to think of the Soup Guy on Seinfeld with this one - "NO BANDWIDTH FOR YOU!" 
  • Shaping – again, this tool allows you to set a speed limit, but it is “kinder and gentler”. This tool will attempt to buffer traffic and send it later should the traffic exceed the shaped rate.
  • Priority Queuing – for traffic (like VoIP) that really hates delay and variable delay (jitter), the PIX/ASA does support priority queuing of that traffic. The documentation refers to this as Low Latency Queuing (LLQ).

Now before we get too excited about these tools, we must understand that we are going to face some pretty big limitations in their usage compared to shaping, policing, and LLQ on a Cisco router. We will detail these limitations in future blogs on the specific tools, but here is an example. We might get very excited when we see LLQ in relation to the PIX/ASA, but it is certainly not the LLQ that we are accustomed to on a router. On a router, LLQ is really Class-Based Weighted Fair Queuing (CBWFQ) with the addition of a strict Priority Queue (PQ). On the PIX/ASA, we are just not going to have that type of granular control over many traffic forms. In fact, with the standard priority queuing approach on the PIX/ASA, there is a single LLQ for your priority traffic, and all other traffic falls into a best-effort queue.

If you have been around QoS for a while, you are going to be very excited about how we set these mechanisms up on the security appliance. We are going to use the Modular Quality of Service Command Line Interface (MQC) approach! The MQC was invented for CBWFQ on the routers, but now we are seeing it everywhere. In fact, on the security appliance it is termed the Modular Policy Framework. This is because it not only handles QoS configurations, but also traffic inspections (including deep packet inspections), and can be used to configure the Intrusion Prevention and Content Management Security Service Modules. Boy, the ole’ MQC sure has come a long way.
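To make the MPF flavor of this concrete, here is a minimal sketch of the standard priority queuing described above. The names CM-VOICE and PM-VOICE are placeholders of my own, and you should verify the exact syntax against your code version:

pixfirewall(config)# priority-queue outside
pixfirewall(config)# class-map CM-VOICE
pixfirewall(config-cmap)# match dscp ef
pixfirewall(config)# policy-map PM-VOICE
pixfirewall(config-pmap)# class CM-VOICE
pixfirewall(config-pmap-c)# priority
pixfirewall(config)# service-policy PM-VOICE interface outside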

While you might be frustrated with some of the limitations in the individual tools, at least there are a couple of combinations that feature the tools working together. Specifically, you can:

  • Use standard priority queueing (for example, for voice) and then police all of the other traffic.
  • Use traffic shaping for all traffic in conjunction with hierarchical priority queuing for a subset of traffic; a sketch of this combination follows below. Again, in later blogs we will educate you more fully on each tool.
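As promised in the second bullet, here is a rough sketch of the shaping-plus-hierarchical-priority combination. The policy names (CM-VOICE, PM-PRIORITY, PM-HIER-SHAPER) are again my own placeholders, and hierarchical shaping support is model and version dependent, so treat this as an outline rather than a drop-in configuration:

pixfirewall(config)# policy-map PM-PRIORITY
pixfirewall(config-pmap)# class CM-VOICE
pixfirewall(config-pmap-c)# priority
pixfirewall(config)# policy-map PM-HIER-SHAPER
pixfirewall(config-pmap)# class class-default
pixfirewall(config-pmap-c)# shape average 2000000 16000
pixfirewall(config-pmap-c)# service-policy PM-PRIORITY
pixfirewall(config)# service-policy PM-HIER-SHAPER interface outside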

Thanks for reading and I hope you are looking forward to future blog entries on QoS with the ASA/PIX.

Aug
26

Note: The following post is an excerpt from the full QoS section of IEWB-RS VOL1 version 5.

Peak shaping may look confusing at first sight; however, its function becomes clear once you think of oversubscription. As we discussed before, oversubscription means selling customers more bandwidth than a network can supply, hoping that not all connections will use their maximum sending rate at the same time. With oversubscription, the traffic contract usually specifies three parameters: PIR, CIR and Tc – peak rate, committed rate and the averaging time interval for rate measurements. The SP allows customers to send traffic at rates up to PIR, but only guarantees the CIR rate in case of network congestion. Inside the network, the SP uses one of the max-min scheduling procedures to implement bandwidth sharing in such a manner that oversubscribed traffic has lower preference than conforming traffic. Additionally, the SP generally assumes that customers respond to notifications of traffic congestion in the network (either explicit, such as FECN/BECN/TCP ECN, or implicit, such as packet drops in TCP) by slowing down their sending rate.

Commonly, customers implement traffic shaping to conform to the traffic contract, and the provider uses traffic policing to enforce the contract. If a contract specifies PIR, then it makes sense for the customer to shape traffic at the PIR rate. However, this makes it difficult to deduce the CIR value just by looking at the router configuration. In some circumstances, as with Frame-Relay networks, a secondary parameter known as minCIR may help in understanding the configuration quickly. In general, it would be beneficial to see CIR and PIR in the shaping configuration at the same time. This is exactly the idea behind shape peak. When you configure

shape peak <CIR> <Bc> <Be>

the actual maximum sending rate is limited to:

PIR = CIR*(1+Be/Bc).

That is, every time interval Tc=Bc/CIR the shaper allows sending up to Bc+Be bits of data. By default, if you omit the value for Be, it equals Bc, and thus PIR=2*CIR. However, due to an IOS show output discrepancy, this is NOT reflected in the “show” command output unless you explicitly specify the Be value on the command line. With shape peak configured this way, you can see both CIR as the “average rate” and PIR as the “target rate” when issuing the “show policy-map” command.
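A quick worked example, using the same numbers as the scenario below: with shape peak 64000 6400 6400,

Tc  = Bc/CIR = 6400/64000 = 100ms
PIR = CIR*(1+Be/Bc) = 64000*(1+6400/6400) = 128000bps

which is exactly the 128000/64000 Target/Average pair in the output: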

Rack1R6#show policy-map interface fastEthernet 0/0.146
FastEthernet0/0.146

Service-policy output: POLICY_VLAN146_OUT

Class-map: HTTP (match-all)
6846 packets, 4065413 bytes
5 minute offered rate 63000 bps, drop rate 0 bps
Match: access-group 180
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    128000/64000      1600   6400      6400      100       1600
...

All other shaping functions remain the same as with classic GTS - shape peak is just better suited to oversubscription scenarios. Also, in Frame-Relay networks you may want to use a configuration similar to the following to respond to congestion notifications:

shape peak <CIR> <Bc> <Be>
shape adaptive <CIR>
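For instance, a hypothetical policy-map combining the two (the 32000bps adaptive rate is an arbitrary minCIR picked purely for illustration) might look like:

policy-map SHAPE_PEAK_ADAPTIVE
class class-default
shape peak 64000 6400 6400
shape adaptive 32000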

To illustrate the use of shape peak, let's look at the following scenario. Here, R4 serves two customers (R1 and R6) sending their traffic across a single 128Kbps serial link between R4 and R5. The fictitious ISP sells 128Kbps (PIR) to each of the customers, guaranteeing only 64Kbps (CIR). Let's assume a measurement interval of 100ms for this configuration. The serial link, which is the oversubscribed resource, uses WFQ for fair bandwidth sharing between the two flows.

Oversubscription scenario

R1:
access-list 180 permit tcp any eq 80 any
!
class-map HTTP
match access-group 180
!
policy-map POLICY_VLAN146_OUT
class HTTP
shape peak 64000 6400 6400
!
interface FastEthernet 0/0
service-policy output POLICY_VLAN146_OUT

R6:
access-list 180 permit tcp any eq 80 any
!
class-map HTTP
match access-group 180
!
policy-map POLICY_VLAN146_OUT
class HTTP
shape peak 64000 6400 6400
!
interface FastEthernet 0/0.146
service-policy output POLICY_VLAN146_OUT

R4:
!
! All HTTP traffic
!
ip access-list extended HTTP
permit tcp any eq 80 any
!
class-map HTTP
match access-group name HTTP

!
! Traffic from R1 and R6 respectively
!
ip access-list extended FROM_R1
permit ip host 155.1.146.1 any
!
ip access-list extended FROM_R6
permit ip host 155.1.146.6 any
!
!
!
class-map FROM_R1
match access-group name FROM_R1
!
class-map FROM_R6
match access-group name FROM_R6

!
! Subrate policers
!
policy-map SUBRATE_POLICER
class FROM_R1
police cir 64000 bc 3200 pir 128000 be 6400
conform-action set-prec-transmit 1
exceed-action set-prec-transmit 0
violate-action drop
class FROM_R6
police cir 64000 bc 3200 pir 128000 be 6400
conform-action set-prec-transmit 1
exceed-action set-prec-transmit 0
violate-action drop

!
! Policer configuration using MQC syntax.
!
policy-map POLICE_VLAN146
class HTTP
service-policy SUBRATE_POLICER
!
interface FastEthernet 0/1
service-policy input POLICE_VLAN146

The idea is to allow R1 and R6 to send at up to 128Kbps if there is enough bandwidth on the serial link. However, if both sources start streaming at the same time, the SP can only guarantee up to 64Kbps to each sending router. The implementation meters each flow against the 64Kbps and 128Kbps rates, and marks all conforming traffic with IP precedence 1. All exceeding traffic is marked with IP precedence 0. Since the serial link uses WFQ, traffic marked with IP precedence 0 receives a larger WFQ weight and thus a smaller share of the link. Thus, if IP precedence 1 traffic exists on the link, it is given preference over the low-priority traffic (precedence 0).

To verify our configuration in action, start downloading a large file from R1 across R4 and see the statistics on R1 and R4:

Rack1R4#show policy-map interface fastEthernet 0/1
FastEthernet0/1

Service-policy input: POLICE_VLAN146

Class-map: HTTP (match-all)
20451 packets, 12066090 bytes
30 second offered rate 126000 bps, drop rate 0 bps
Match: access-group name HTTP

Service-policy : SUBRATE_POLICER

Class-map: FROM_R1 (match-all)
20451 packets, 12066090 bytes
30 second offered rate 126000 bps, drop rate 0 bps
Match: access-group name FROM_R1
police:
cir 64000 bps, bc 3200 bytes
pir 128000 bps, be 6400 bytes
conformed 11113 packets, 6556670 bytes; actions:
set-prec-transmit 1
exceeded 9338 packets, 5509420 bytes; actions:
set-prec-transmit 0
violated 0 packets, 0 bytes; actions:
drop
conformed 64000 bps, exceed 62000 bps, violate 0 bps

Class-map: FROM_R6 (match-all)
0 packets, 0 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: access-group name FROM_R6
police:
cir 64000 bps, bc 3200 bytes
pir 128000 bps, be 6400 bytes
conformed 0 packets, 0 bytes; actions:
set-prec-transmit 1
exceeded 0 packets, 0 bytes; actions:
set-prec-transmit 0
violated 0 packets, 0 bytes; actions:
drop
conformed 0 bps, exceed 0 bps, violate 0 bps

Class-map: class-default (match-any)
0 packets, 0 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: any

!
! The above statistics demonstrate that R1 uses almost all available bandwidth.
! From the output below we can see that R1 is set to CIR 64Kbps and PIR 128Kbps.
! We may also notice that the shaper was active for some time, delaying hundreds of
! exceeding packets. This usually happens at the beginning of a TCP session, when
! the sender aggressively increases its sending rate.
!

Rack1R1#show policy-map interface fastEthernet 0/0
FastEthernet0/0

Service-policy output: POLICY_VLAN146_OUT

Class-map: HTTP (match-all)
3225 packets, 1897929 bytes
30 second offered rate 124000 bps, drop rate 0 bps
Match: access-group 180
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    128000/64000      1600   6400      6400      100       1600

     Adapt  Queue    Packets  Bytes     Packets  Bytes    Shaping
     Active Depth                       Delayed  Delayed  Active
     -      0        3225     1897929   348      205320   no

Class-map: class-default (match-any)
29 packets, 4378 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: any

Now start another file transfer, this time from R6 down to a host behind R5, across the serial link. This will make both flows compete for the link bandwidth, resulting in a fair sharing of it. Now verify the policer statistics once again:

Rack1R4#show policy-map interface fastEthernet 0/1
FastEthernet0/1

Service-policy input: POLICE_VLAN146

Class-map: HTTP (match-all)
35113 packets, 20715559 bytes
30 second offered rate 126000 bps, drop rate 0 bps
Match: access-group name HTTP

Service-policy : SUBRATE_POLICER

Class-map: FROM_R1 (match-all)
29986 packets, 17691740 bytes
30 second offered rate 63000 bps, drop rate 0 bps
Match: access-group name FROM_R1
police:
cir 64000 bps, bc 3200 bytes
pir 128000 bps, be 6400 bytes
conformed 18466 packets, 10894940 bytes; actions:
set-prec-transmit 1
exceeded 11520 packets, 6796800 bytes; actions:
set-prec-transmit 0
violated 0 packets, 0 bytes; actions:
drop
conformed 63000 bps, exceed 0 bps, violate 0 bps

Class-map: FROM_R6 (match-all)
5127 packets, 3023819 bytes
30 second offered rate 63000 bps, drop rate 0 bps
Match: access-group name FROM_R6
police:
cir 64000 bps, bc 3200 bytes
pir 128000 bps, be 6400 bytes
conformed 5124 packets, 3022049 bytes; actions:
set-prec-transmit 1
exceeded 3 packets, 1770 bytes; actions:
set-prec-transmit 0
violated 0 packets, 0 bytes; actions:
drop
conformed 63000 bps, exceed 0 bps, violate 0 bps

Class-map: class-default (match-any)
0 packets, 0 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: any

!
! Verify statistics for both traffic shapers on R1 and R6. Both are set for PIR=128Kbps.
! However, the metered rate is close to CIR, and the shaping is inactive. The sending
! rate went down thanks to TCP's implicit congestion management, which makes the
! protocol's sending rate adapt to congestion in the network.
!

Rack1R6#show policy-map interface fastEthernet 0/0.146
FastEthernet0/0.146

Service-policy output: POLICY_VLAN146_OUT

Class-map: HTTP (match-all)
6846 packets, 4065413 bytes
5 minute offered rate 63000 bps, drop rate 0 bps
Match: access-group 180
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    128000/64000      1600   6400      6400      100       1600

     Adapt  Queue    Packets  Bytes     Packets  Bytes    Shaping
     Active Depth                       Delayed  Delayed  Active
     -      0        6846     4065413   3        1782     no

Class-map: class-default (match-any)
191 packets, 43930 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

Rack1R1#show policy-map interface fastEthernet 0/0
FastEthernet0/0

Service-policy output: POLICY_VLAN146_OUT

Class-map: HTTP (match-all)
33062 packets, 19505469 bytes
30 second offered rate 63000 bps, drop rate 0 bps
Match: access-group 180
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    128000/64000      1600   6400      6400      100       1600

     Adapt  Queue    Packets  Bytes     Packets  Bytes    Shaping
     Active Depth                       Delayed  Delayed  Active
     -      0        33062    19505469  2632     1552858  no

Class-map: class-default (match-any)
7641 packets, 7385752 bytes
30 second offered rate 0 bps, drop rate 0 bps
Match: any

Now let's confirm that WFQ is actually working on the serial interface between R4 and R5 and provides truly fair division of the bandwidth:

Rack1R4#show queueing interface serial 0/1
Interface Serial0/1 queueing strategy: fair
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: weighted fair
Output queue: 12/1000/64/0 (size/max total/threshold/drops)
Conversations 2/3/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
Available Bandwidth 96 kilobits/sec

(depth/weight/total drops/no-buffer drops/interleaves) 6/16192/0/0/0
Conversation 134, linktype: ip, length: 580
source: 155.1.146.1, destination: 155.1.58.8, id: 0xEB41, ttl: 254,
TOS: 32 prot: 6, source port 80, destination port 11001

(depth/weight/total drops/no-buffer drops/interleaves) 6/16192/0/0/0
Conversation 192, linktype: ip, length: 580
source: 155.1.146.6, destination: 155.1.108.10, id: 0x70CA, ttl: 254,
TOS: 32 prot: 6, source port 80, destination port 11002

To summarize, shape peak is a special form of shaping specifically adapted to oversubscription scenarios. All other properties of GTS remain the same.

Jul
03

This may seem to be a basic topic, but it looks like many people are still confused by the difference between those two concepts. Let us clear this confusion at once!

Shaping vs Policing

Look at the diagram above. Both router links are clocked at 128Kbps, and the test flow consists of 1000-byte packets sent at a sustained rate of 16 packets per second, effectively saturating the 128Kbps link. Consider what happens when we shape the flow down to 64Kbps. Egress packets are still serialized at 128Kbps - therefore the shaper needs to buffer and delay packets to obtain the target average rate of 64Kbps. The shaper does that by releasing one Bc-sized burst every Tc interval. For this example, the Bc value (the shaper burst) equals the packet size, so effectively for one 1/16s interval the egress link is busy and for the next 1/16s interval it is idle. The average bps rate is the total volume (4*1000 bytes) divided by 1/2s (the time to send it) and multiplied by 8 (to convert to bps), yielding 64000bps.

The summary points about shaping are as follows:

I) The shaper sends Bc amount of data every Tc interval at the physical port speed
II) Since the shaper delays packets, it uses a queue to store them
III) The shaper queue may use different scheduling algorithms, e.g. WFQ, CBWFQ, FIFO
IV) The shaper smooths the traffic flow and introduces delay, which may affect end-to-end QoS characteristics
V) Shaping is generally a WAN technology, used to share multipoint interface bandwidth and/or compensate for speed differences between sites

On the other hand, a policer behaves in a much simpler manner. It achieves the same average traffic rate by dropping packets that would exceed the policed rate. The policer algorithm is simple: remember the last packet arrival time (“T0”), the current credit (“Cr0”) and the “PolicerRate” constant (64Kbps in our example). (There is also “Bc” - the burst size - but we will ignore it for a moment.) When a new packet of size “S” arrives at a moment of time “T”, the policer performs the following:

a) Calculate the accumulated credit: Cr = Cr0 + (T-T0)*PolicerRate (note: Bc is ignored here).
b) If (S <= Cr) then Cr0 = Cr - S and the packet is admitted, since we have enough credit.
c) Else the packet is denied and the credit remains the same: Cr0 = Cr.
d) Store the last packet arrival time: T0=T

This simple admission procedure allows for very efficient hardware implementations. Look at the above diagram again. For a sustained packet flow, the policer drops every other packet, for it can’t accumulate 1000 bytes of credit during 1/16s (the packet arrival interval), since the “PolicerRate” is just 64000bps. Every 1/16s the policer is only able to accumulate 500 bytes of credit, and so it takes 1/8 of a second to gather enough credit to admit a packet.

Now, for the policer burst size. As we remember, with shaping Bc effectively defines the amount of data sent every Tc. With policing, its purpose is different. Look at the diagram below:

Policing Burst

The flow is no longer sustained. At one moment the source pauses, and then it resumes. Policers were designed to take advantage of such irregular behavior. A long pause allows the policer to accumulate more credit and then use it to accept a “large” packet train at once. However, the credit can’t be allowed to grow in an unbounded manner (e.g. a 1-hour pause between packets yielding a very large credit). Therefore, a committed burst size is used by policers as follows:

a) Calculate the new credit: Cr = Cr0 + (T-T0)*PolicerRate
b) If (Cr > Bc) then Cr = Bc
c) If (S <= Cr) then Cr0 = Cr - S and the packet is admitted.
d) Else the packet is denied and Cr0 = Cr.
e) Store the last packet arrival time: T0=T

The Bc constant limits the amount of credit a policer is allowed to accumulate during idle periods. Obviously, you never want to set Bc lower than the network MTU, as this would prohibit any packet from passing admission. Note the following interesting relation: Tc=Bc/PolicerRate. This is sometimes called the “averaging interval”. By the policer design, if you observe the policed traffic flow for “Tc” amount of time, you will never see the average bitrate go above “PolicerRate”. Note that the policer “Tc” has nothing to do with the shaper’s Tc; they have very different purposes and meanings.
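If it helps to see the whole procedure in one place, here is a small Python sketch of the credit-based admission logic described above (my own illustration, not any particular vendor's implementation; credit is tracked in bytes):

# Token-bucket policer sketch: steps a)-e) from the text above.
class Policer:
    def __init__(self, rate_bps, bc_bytes):
        self.rate = rate_bps / 8.0    # PolicerRate, converted to bytes/sec
        self.bc = bc_bytes            # Bc: cap on accumulated credit
        self.credit = bc_bytes        # Cr0: start with a full bucket
        self.last = 0.0               # T0: last packet arrival time

    def admit(self, t, size):
        cr = self.credit + (t - self.last) * self.rate  # a) accumulate credit
        cr = min(cr, self.bc)                           # b) cap credit at Bc
        self.last = t                                   # e) remember arrival time
        if size <= cr:                                  # c) enough credit: admit
            self.credit = cr - size
            return True
        self.credit = cr                                # d) deny, keep the credit
        return False

# 64Kbps policer, Bc = 1000 bytes; 1000-byte packets arriving every 1/16s.
p = Policer(64000, 1000)
print([p.admit(i / 16.0, 1000) for i in range(8)])
# -> alternating True/False: every other packet is dropped, matching the
#    sustained-flow example earlier in the post.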

In summary, the key points about policing:

I) A policer uses a simple, credit-based admission model
II) A policer never delays packets and never “absorbs” or smooths packet bursts
III) Policers are usually used at the edge of a network to control packet admission
IV) Policers can be used in either the ingress or egress direction

The last question - how would one calculate the Bc value for a policer? As you’ve seen, for a sustained traffic flow it does not matter which Bc value you pick - it does not affect the average packet rate. However, in real life, traffic flows are very irregular. If you pick a Bc value that is too small, you may end up dropping too many packets. Obviously, this is bad for protocols like TCP, which consider a dropped packet to be a signal of congestion. Therefore, there exist general rules of thumb for picking the Bc value based on the policer rate. Generally, Bc should be no less than 1.5s*PolicerRate, but you should determine the optimal value empirically, by running application tests.
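As a worked example, at the 64Kbps rate used throughout this post, that rule of thumb gives:

Bc >= 1.5s * 64000bps = 96000 bits = 12000 bytes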

Further Reading:
Comparing Traffic Policing and Traffic Shaping for Bandwidth Limiting

Jun
26

The goal of this article is to discuss how the following configuration works on the 3560 series switches:

interface FastEthernet0/13
switchport mode access
load-interval 30
speed 10
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth share 33 33 33 1
srr-queue bandwidth limit 20

Before we begin, let’s recap what we know so far about the 3560 egress queuing:

1) When the SRR scheduler is configured in shared mode, the bandwidth allocated to each queue is based on its relative weight. E.g. when configuring "srr-queue bandwidth share 30 20 25 25" we obtain a weight sum of 30+20+25+25 = 100 (it could be a different number, but it’s nice to have “100” as a representation of 100%). The relative weights are therefore “30/100”, “20/100”, “25/100” and “25/100”, and you can calculate the effective bandwidth *guaranteed* to a queue by multiplying its weight by the interface bandwidth: e.g. 30/100*100Mbps = 30Mbps for a 100Mbps interface and 30/100*10Mbps = 3Mbps for a 10Mbps interface. Of course, the weights are only taken into consideration when the interface is oversubscribed, i.e. experiencing congestion.

2) When configured in shaped mode, the bandwidth restriction (policing) for each queue is based on an inverse absolute weight. E.g. for “srr-queue bandwidth shape 30 0 0 0” we effectively restrict the first queue to “1/30” of the interface bandwidth (which is approximately 3.3Mbps for a 100Mbps interface and approximately 330Kbps for a 10Mbps interface). Setting an SRR shape weight to zero effectively means no shaping is applied. When shaping is enabled for a queue, the SRR scheduler does not use the shared weight corresponding to this queue when calculating the relative bandwidth for the shared queues.

3) You can mix shaped and shared settings on the same interface. For example two queues may be configured for shaping and others for sharing:

interface FastEthernet0/13
srr-queue bandwidth share 100 100 40 20
srr-queue bandwidth shape 50 50 0 0

Suppose the interface rate is 100Mbps; then queues 1 and 2 will each get 2Mbps (1/50 of the interface rate), and queues 3 and 4 will share the remaining bandwidth (100-2-2=96Mbps) in the proportion 40:20 = “2:1”, i.e. 64Mbps and 32Mbps respectively. Note that queues 1 and 2 are guaranteed and limited to 2Mbps at the same time.

4) The default “shape” and “share” weight settings are “25 0 0 0” and “25 25 25 25” respectively. This means that by default queue 1 is policed down to 4Mbps on a 100Mbps interface (400Kbps on a 10Mbps interface) and the remaining bandwidth is shared equally among the other queues (2-4). So take care when you enable “mls qos” on a switch.

5) When you enable “priority-queue out” on an interface, queue 1 becomes a priority queue, and the scheduler effectively stops accounting for that queue’s weight in its calculations. Note that the PQ will ignore shaped mode settings as well, and this may starve the other queues.

6) You can apply “aggregate” egress rate-limiting to a port by using the command “srr-queue bandwidth limit xx” at the interface level. Effectively, this command limits the interface sending rate down to xx% of the interface capacity. Note that the range starts at 10%, so if you need speeds lower than 10Mbps, consider changing the port speed down to 10Mbps.

How will this setting affect SRR scheduling? Remember that SRR shared weights are relative, so the shared queues will simply divide the new (limited) bandwidth among themselves. However, shaped queue rates are based on absolute weights calculated from the interface bandwidth (e.g. 10Mbps or 100Mbps) and are subtracted from the interface’s “available” bandwidth. Consider the example below:

interface FastEthernet0/13
switchport mode access
speed 10
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth share 20 20 20 20
srr-queue bandwidth limit 20

The interface sending rate is limited to 2Mbps. Queue 1 is shaped to 1/50 of 10Mbps, which is 200Kbps of bandwidth. The remaining bandwidth, 2000-200=1800Kbps, is divided equally among the other queues in the proportion 20:20:20=1:1:1. That is, in case of congestion with all queues filled up, queue 1 will get 200Kbps, and queues 2-4 will get 600Kbps each.

Quick Questions and Answers

Q: How do I determine which queue a packet will go to? What if my packet has CoS and DSCP values set at the same time?
A: That depends on what you are trusting at the classification stage. If you trust the CoS value, then the CoS to Output Queue map will be used. Likewise, if you trust the DSCP value, then the DSCP to Output Queue map will determine the outgoing queue. Use the “show mls qos maps” commands to find out the current mappings.

Q: What if I’ve configured “shared” and “shaped” srr-queue settings for the same queue?
A: The shaped queue settings will override the shared weight, and the shared weight will effectively be exempted from the SRR calculations. Note that in shaped mode a queue is still guaranteed its bandwidth, but at the same time it is not allowed to send above the limit.

Q: What if priority-queue is enabled on the interface? Can I restrict the PQ sending rate using a “shaped” weight?
A: No, you can’t. The priority queue will take all the bandwidth if needed, so take care with traffic admission.

Q: How will a shaped queue compete with shared queues on the same interface?
A: Shared queues share the bandwidth remaining from the shaped queues. At the same time, shaped queues are guaranteed the amount of bandwidth allowed by their absolute weight.

Q: How is SRR shared mode different from the WRR found in the Catalyst 3550?
A: SRR in shared mode behaves essentially like WRR, but is designed to be more efficient. Where WRR would empty a queue up to its credit in a single run, SRR takes a series of quick runs among all the queues, providing a “fairer” share and smoother traffic behavior.

Verification scenario diagram and configs

3560 Egress Queuing

For the lab scenario, we configure R1, R3 and R5 to send traffic down to R2 across two 3560 switches, saturating the link between them. All routers share one subnet, 172.16.0.X/24, where X is the router number. SW1 will assign CoS/IP Precedence values of 1, 3 and 5 respectively to traffic originated by R1, R3 and R5. At the same time, SW1 will apply egress scheduling on its connection to SW2. R2’s function is to meter the incoming traffic by matching the IP precedence values in packets. Note that SW2 has mls qos disabled by default.

We will use the default CoS to Output Queue mappings, with CoS 1 mapped to Queue 2, CoS 3 mapped to Queue 3 and CoS 5 mapped to Queue 1. Note that by virtue of the default mapping tables, CoS 0-7 map to IP Precedence 0-7 (the original values are overwritten), so we can match IP precedences on R2.

SW1#show mls qos maps cos-output-q
Cos-outputq-threshold map:
cos: 0 1 2 3 4 5 6 7
------------------------------------
queue-threshold: 2-1 2-1 3-1 3-1 4-1 1-1 4-1 4-1

SW1’s connection to SW2 is set to a 10Mbps port rate, and further limited down to 2Mbps by the use of the “srr-queue bandwidth limit” command. We will apply different scenarios and see how SRR behaves. Here are the configurations for SW1 and R2:

SW1:
interface FastEthernet0/1
switchport mode access
load-interval 30
mls qos cos 1
mls qos trust cos
spanning-tree portfast
!
interface FastEthernet0/3
switchport mode access
load-interval 30
mls qos cos 3
mls qos trust cos
spanning-tree portfast
!
interface FastEthernet0/5
load-interval 30
mls qos cos 5
mls qos trust cos
spanning-tree portfast

R2:
class-map match-all PREC5
match ip precedence 5
class-map match-all PREC1
match ip precedence 1
class-map match-all PREC3
match ip precedence 3
!
!
policy-map TEST
class PREC5
class PREC3
class PREC1
!
access-list 100 deny icmp any any
access-list 100 permit ip any any
!
interface FastEthernet0/0
ip address 172.16.0.2 255.255.255.0
ip access-group 100 in
load-interval 30
duplex auto
speed auto
service-policy input TEST

To simulate traffic flow we execute the following command on R1, R3 and R5:

R1#ping 172.16.0.2 repeat 100000000 size 1500 timeout 0

In the following scenarios the port speed is locked at 10Mbps, and the port is additionally limited to 20% of that bandwidth, for an effective sending rate of 2Mbps.

First scenario: Queue 1 (prec 5) is limited to 200Kbps while Queue 2 (prec 1) and Queue 3 (prec 3) share the remaining bandwidth in equal proportions:

SW1:
interface FastEthernet0/13
switchport mode access
load-interval 30
speed 10
srr-queue bandwidth share 33 33 33 1
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth limit 20

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
Class-map: PREC5 (match-all)
30 second offered rate 199000 bps
Class-map: PREC3 (match-all)
30 second offered rate 886000 bps
Class-map: PREC1 (match-all)
30 second offered rate 887000 bps
Class-map: class-default (match-any)
30 second offered rate 0 bps, drop rate 0 bps

Second Scenario: Queue 1 (prec 5) is configured as priority and we see it leaves other queues starving for bandwidth:

SW1:
interface FastEthernet0/13
switchport mode access
load-interval 30
speed 10
srr-queue bandwidth share 33 33 33 1
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth limit 20
priority-queue out

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
Class-map: PREC5 (match-all)
30 second offered rate 1943000 bps
Class-map: PREC3 (match-all)
30 second offered rate 11000 bps
Class-map: PREC1 (match-all)
30 second offered rate 15000 bps
Class-map: class-default (match-any)
30 second offered rate 0 bps, drop rate 0 bps

Third Scenario: Queues 1 (prec 5) and 2 (prec 1) are shaped to 200Kbps, while Queue 3 (prec 3) takes all the remaining bandwidth:

SW1:
interface FastEthernet0/13
switchport mode access
load-interval 30
speed 10
srr-queue bandwidth share 33 33 33 1
srr-queue bandwidth shape 50 50 0 0
srr-queue bandwidth limit 20

R2#sh policy-map interface fastEthernet 0/0 | inc bps|Class
Class-map: PREC5 (match-all)
30 second offered rate 203000 bps
Class-map: PREC3 (match-all)
30 second offered rate 1569000 bps
Class-map: PREC1 (match-all)
30 second offered rate 199000 bps
Class-map: class-default (match-any)
30 second offered rate 0 bps, drop rate 0 bps

Jan
26

To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with Interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, to transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking; however, it was specific to ATM and Frame-Relay.) Let's imagine a situation where we have slow ATM and Frame-Relay links, used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 tech) over the Frame-Relay and ATM PVCs and using the PPP multilink and interleave features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM - the cell mode technology - how ironic!)

Before coming up with a configuration example, let's discuss briefly how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with a larger "effective" bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm where one large frame is split at Layer 2 and replaced with a bunch of sequenced (by the use of an additional MLPPP header) smaller frames, which are then sent over the multiple physical links in parallel. The receiving side accepts the fragments, reorders some of them if needed, and reassembles the pieces into the complete frame using the sequence numbers.

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number are added) and are simply inserted (intermixed) among the fragments of large data packets. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.

To summarize:

1) MLPPP uses fragmentation scheme where large packets are sliced in pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with fragments of large packets using a special priority queue

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that the voice (small) packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to differing physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets might arrive at their final destination out of order, degrading voice quality.

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different "fragment streams" or classes are supported on the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP voice packets may be sent with an MLPPP header and a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at a time.

Now back to our MLPPPoFR example. Let's imagine a situation where we have two routers (R1 and R2) connected via a FR cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.

[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2]

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.

R1 & R2:

!
! Voice bearer
!
class-map VOICE
match ip dscp ef

!
! Voice signaling
!
class-map SIGNALING
match ip dscp cs3

!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
class VOICE
priority 48
class SIGNALING
bandwidth 8
class class-default
fair-queue

Next, create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps, and the required serialization delay should not exceed 10ms (remember, fragment size is based on the physical port speed!), the fragment size must be set to 512000/8*0.01=640 bytes. How is the fragment size configured with MLPPP? By using the command ppp multilink fragment delay - however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could actually change the virtual-template bandwidth to match the physical interface speed, but this would affect the CBWFQ weights! Therefore, we take the virtual-template bandwidth (384Kbps) and adjust the delay to make sure the fragment size matches the physical interface rate of 512Kbps. This way, the "effective" delay value would be set to "640*8/384 = 13ms" (Fragment_Size/CIR*8) to accommodate the physical and logical bandwidth discrepancy. (This may be unimportant if your physical port speed does not differ much from the PVC CIR. However, if you have, say, PVC CIR=384Kbps and port speed 768Kbps, you may want to pay attention to this issue.)
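To recap the arithmetic in one place:

Fragment size    = 512000bps / 8 * 0.01s = 640 bytes
Configured delay = 640 bytes * 8 / 384000bps = 13ms (rounded down)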

R1:
interface Loopback0
ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
encapsulation ppp
ip unnumbered Loopback 0
bandwidth 384
ppp multilink
ppp multilink interleave
ppp multilink fragment delay 13
service-policy output CBWFQ

R2:
interface Loopback0
ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
encapsulation ppp
ip unnumbered Loopback 0
bandwidth 384
ppp multilink
ppp multilink interleave
ppp multilink fragment delay 13
service-policy output CBWFQ

Next we configure the PVC shaping settings using a legacy FRTS configuration. Note that Bc is set to CIR*10ms (384000 * 0.01 = 3840 bits).

R1 & R2:
map-class frame-relay SHAPE_384K
frame-relay cir 384000
frame-relay mincir 384000
frame-relay bc 3840
frame-relay be 0

Finally we apply all the settings to the Frame-Relay interfaces:

R1:
interface Serial 0/0/0:0
encapsulation frame-relay
frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
no ip address
frame-relay interface-dlci 112 ppp virtual-template 1
class SHAPE_384K

R2:
interface Serial 0/0/1:0
encapsulation frame-relay
frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1 point-to-point
no ip address
no frame-relay interface-dlci 221
frame-relay interface-dlci 211 ppp virtual-template 1
class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. The first is for the member link:

R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
Hardware is Virtual Access interface
Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Link is a member of Multilink bundle Virtual-Access3 <---- MLP bundle member
PPPoFR vaccess, cloned from Virtual-Template1
Vaccess status 0x44
Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:00:52, output never, output hang never
Last clearing of "show interface" counters 00:04:17
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo <---------- FIFO is the member link queue
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
75 packets input, 16472 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
86 packets output, 16601 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

The second is for the MLPPP bundle itself:

R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
Hardware is Virtual Access interface
Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Open: IPCP
MLP Bundle vaccess, cloned from Virtual-Template1 <---------- MLP Bundle
Vaccess status 0x40, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:01:29, output never, output hang never
Last clearing of "show interface" counters 00:03:40
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: Class-based queueing <--------- CBWFQ is the bundle queue
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/128 (active/max active/max total)
Reserved Conversations 1/1 (allocated/max allocated)
Available Bandwidth 232 kilobits/sec
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
17 packets input, 15588 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
17 packets output, 15924 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

Verify the CBWFQ policy-map:

R1#show policy-map interface
Virtual-Template1

Service-policy output: CBWFQ

Service policy content is displayed for cloned interfaces only such as vaccess and sessions
Virtual-Access3

Service-policy output: CBWFQ

Class-map: VOICE (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp ef (46)
Queueing
Strict Priority
Output Queue: Conversation 136
Bandwidth 48 (kbps) Burst 1200 (Bytes)
(pkts matched/bytes matched) 0/0
(total drops/bytes drops) 0/0

Class-map: SIGNALING (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp cs3 (24)
Queueing
Output Queue: Conversation 137
Bandwidth 8 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 0/0
(depth/total drops/no-buffer drops) 0/0/0

Class-map: class-default (match-any)
17 packets, 15554 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
Flow Based Fair Queueing
Maximum Number of Hashed Queues 128
(total queued/total drops/no-buffer drops) 0/0/0

Check PPP multilink status:

R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
Endpoint discriminator is R2
Bundle up for 00:07:49, total bandwidth 384, load 1/255
Receive buffer limit 12192 bytes, frag timeout 1000 ms
Interleaving enabled <------- Interleaving enabled
0/0 fragments/bytes in reassembly list
0 lost fragments, 0 reordered
0/0 discarded fragments/bytes, 0 lost received
0x34 received sequence, 0x34 sent sequence <---- MLP sequence numbers for fragmented packets
Member links: 1 (max not set, min not set)
Vi2, since 00:07:49, 624 weight, 614 frag size <------- Fragment Size
No inactive multilink interfaces

Verify the interleaving queue:

R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
Hardware is GT96K Serial
MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation FRAME-RELAY, loopback not set
Keepalive set (10 sec)
LMI enq sent 10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
LMI DLCI 1023 LMI type is CISCO frame relay DTE
FR SVC disabled, LAPF state down
Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
Last input 00:00:05, output 00:00:02, output hang never
Last clearing of "show interface" counters 00:01:53
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: dual fifo <--------- Dual FIFO
Output queue: high size/max/dropped 0/256/0 <--------- High Queue
Output queue: 0/128 (size/max) <--------- Low (fragments) queue
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
47 packets input, 3914 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
47 packets output, 2149 bytes, 0 underruns
0 output errors, 0 collisions, 4 interface resets
0 output buffer failures, 0 output buffers swapped out
1 carrier transitions
Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay

Jan
24

This is a "modern" way to configure FRTS, using MQC commands only to accomplish the task. With MQC approach, an unified interface has been introduced to configure all QoS settings, irrelevant of underlying technology.

In summary:

- The legacy command frame-relay traffic-shaping is incompatible with MQC-based FRTS (you can't mix them)
- Fancy queueing cannot be used as the PVC queueing strategy: CBWFQ is the only option available
- Per-VC CBWFQ is configured via a hierarchical policy-map configuration: the parent policy sets the shaping values, while the child policy implements CBWFQ
- You may apply the policy-map per-interface (subinterface) or per-VC, using match fr-dlci under class-map submode

Example: Shape the PVC to 384Kbps and provide LLQ treatment for voice bearer packets in the PVC queue

class-map VOICE
match ip dscp ef
!
class-map DATA
match ip dscp cs1

!
! "Child" policy-map, used to implement CBWFQ
!

policy-map CBWFQ
class VOICE
priority 64
class DATA
bandwidth 128
class class-default
fair-queue

!
! "Parent" policy map, used for PVC shaping
!

policy-map SHAPE_384K
class class-default
shape average 384000
shape adaptive 192000
service-policy CBWFQ

interface Serial 0/0/0:0.1
ip address 177.0.112.1 255.255.255.0
service-policy output SHAPE_384K
frame-relay interface-dlci 112

Verification: check the policy-map settings

Rack1R1#show policy-map interface serial 0/0/0:0.1

Serial0/0/0:0.1

Service-policy output: SHAPE_384K

Class-map: class-default (match-any)
1942 packets, 1590741 bytes
5 minute offered rate 48000 bps, drop rate 0 bps
Match: any
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    384000/384000     2400   9600      9600      25        1200

     Adapt  Queue    Packets  Bytes     Packets  Bytes    Shaping
     Active Depth                       Delayed  Delayed  Active
     -      0        1936     1581717   0        0        no

Service-policy : CBWFQ

Class-map: VOICE (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: protocol rtp
Match: ip dscp ef (46)
Queueing
Strict Priority
Output Queue: Conversation 40
Bandwidth 64 (kbps) Burst 1600 (Bytes)
(pkts matched/bytes matched) 0/0
(total drops/bytes drops) 0/0

Class-map: DATA (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp cs1 (8)
Queueing
Output Queue: Conversation 41
Bandwidth 128 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 0/0
(depth/total drops/no-buffer drops) 0/0/0

Class-map: class-default (match-any)
1942 packets, 1590741 bytes
5 minute offered rate 48000 bps, drop rate 0 bps
Match: any
Queueing
Flow Based Fair Queueing
Maximum Number of Hashed Queues 32
(total queued/total drops/no-buffer drops) 0/0/0

The amount of bandwidth available for allocation to CBWFQ classes is taken from the shape adaptive value. If the latter is not configured, the shape average value is used instead. Note that as you configure bandwidth settings for the classes, their values are not subtracted from the remaining bandwidth. This is in contrast to "classic" CBWFQ, applied to a physical interface (not a subinterface or PVC).

Verification (with the example above):

Rack1R1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Rack1R1(config)#policy-map CBWFQ
Rack1R1(config-pmap)#class class-default
Rack1R1(config-pmap-c)#no fair-queue
Rack1R1(config-pmap-c)#bandwidth 256
I/f shape class class-default requested bandwidth 256 (kbps), available only 192 (kbps)

Note that the available bandwidth is set to the shape adaptive value, even though we have priority configured under class VOICE and bandwidth settings under class DATA.

- You can't apply FRF.12 fragmentation with MQC commands - it should be applied at the physical interface level. By doing so, FRF.12 is effectively enabled for all PVCs
- The physical interface queue could be set to any of WFQ/CQ/PQ or CBWFQ (not restricted to FIFO as with legacy FRTS) - though this is rarely needed

Example: Shape PVC DLCI 112 to 384Kbps and enable FRF.12 fragmentation for all PVCs

class-map VOICE
match ip dscp ef
!
class-map DATA
match ip dscp cs1

!
! Match the specific DLCI
!
class-map DLCI_112
match fr-dlci 112

!
! "Child" policy-map, used to implement CBWFQ
!

policy-map CBWFQ
class VOICE
priority 64
class DATA
bandwidth 128
class class-default
fair-queue

!
! "Parent" policy map, used for PVC shaping
! With multiple classes, we can match different DLCIs
! all on the same physical interface (where they belong)
!

policy-map INTERFACE_POLICY
class DLCI_112
shape average 384000
shape adaptive 192000
service-policy CBWFQ

!
! Apply the parent policy map at physical interface level
! Also, configure FRF.12 "global" settings here
!

interface Serial 0/0/0:0
service-policy output INTERFACE_POLICY
frame-relay fragment 640 end-to-end

Verification:

Rack1R1#show policy-map interface serial 0/0/0:0

Serial0/0/0:0

Service-policy output: INTERFACE_POLICY

Class-map: DLCI_112 (match-all)
1040 packets, 95287 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: fr-dlci 112
Traffic Shaping
     Target/Average   Byte   Sustain   Excess    Interval  Increment
       Rate           Limit  bits/int  bits/int  (ms)      (bytes)
    384000/384000     2400   9600      9600      25        1200

     Adapt  Queue    Packets  Bytes     Packets  Bytes    Shaping
     Active Depth                       Delayed  Delayed  Active
     -      0        1040     95287     4        1373     no

Service-policy : CBWFQ

Class-map: VOICE (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: protocol rtp
Match: ip dscp ef (46)
Queueing
Strict Priority
Output Queue: Conversation 40
Bandwidth 64 (kbps) Burst 1600 (Bytes)
(pkts matched/bytes matched) 0/0
(total drops/bytes drops) 0/0

Class-map: DATA (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp cs1 (8)
Match: fr-dlci 112
Queueing
Output Queue: Conversation 41
Bandwidth 128 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 0/0
(depth/total drops/no-buffer drops) 0/0/0

Class-map: class-default (match-any)
1040 packets, 95287 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
Flow Based Fair Queueing
Maximum Number of Hashed Queues 32
(total queued/total drops/no-buffer drops) 0/0/0

Class-map: class-default (match-any)
2594 packets, 153695 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

Verify fragmentation settings:

Rack1R1#show interface serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
Hardware is GT96K Serial
MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation FRAME-RELAY, loopback not set
Keepalive set (10 sec)
LMI enq sent 21224, LMI stat recvd 21224, LMI upd recvd 0, DTE LMI up
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
LMI DLCI 1023 LMI type is CISCO frame relay DTE
FR SVC disabled, LAPF state down
Fragmentation type: end-to-end, size 640, PQ interleaves 0 <--------- Fragment Size
Broadcast queue 0/64, broadcasts sent/dropped 63160/0, interface broadcasts 56080
Last input 00:00:03, output 00:00:02, output hang never
Last clearing of "show interface" counters 2d10h
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 6
Queueing strategy: weighted fair
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
Available Bandwidth 1152 kilobits/sec
5 minute input rate 0 bits/sec, 1 packets/sec
5 minute output rate 0 bits/sec, 1 packets/sec
272202 packets input, 27557680 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
15 input errors, 15 CRC, 8 frame, 0 overrun, 0 ignored, 5 abort
333676 packets output, 42152431 bytes, 0 underruns
0 output errors, 0 collisions, 16 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions
Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags
