Sep 20

In this final part of our blog series on QoS with the PIX/ASA, we examine the remaining two tools available on these devices: traffic shaping and traffic policing.

Sep 12

This blog post focuses on QoS on the PIX/ASA and is based on 7.2 code, to be consistent with the CCIE Security Lab Exam as of the date of this post. I will write a later post covering the new features in 8.x code for all of you non-exam-biased readers :-)

Aug 26

Note: The following post is an excerpt from the full QoS section of IEWB-RS VOL1 version 5.

Peak shaping may look confusing at first sight; however, its function becomes clear once you think of oversubscription. As we discussed before, oversubscription means selling customers more bandwidth than the network can supply, in the hope that not all connections will use their maximum sending rate at the same time. With oversubscription, the traffic contract usually specifies three parameters: PIR, CIR and Tc – the peak rate, the committed rate and the averaging time interval for rate measurements. The SP allows customers to send traffic at rates up to the PIR, but only guarantees the CIR in case of network congestion. Inside the network, the SP uses one of the max-min scheduling procedures to implement bandwidth sharing in such a manner that oversubscribed traffic has lower preference than conforming traffic. Additionally, the SP generally assumes that customers respond to notifications of congestion in the network (either explicit, such as FECN/BECN/TCP ECN, or implicit, such as packet drops in TCP) by slowing down their sending rate.
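
To make the contract parameters concrete, here is a minimal Python sketch (my own illustration, not anything from Cisco or the workbook) of how the traffic sent during one Tc interval could be classified against the committed and peak rates:

def classify_interval(bytes_sent, cir_bps, pir_bps, tc_sec):
    # conform: within CIR (guaranteed); exceed: above CIR but within PIR
    # (delivered only if spare capacity exists); violate: above PIR (typically dropped)
    committed_bytes = cir_bps * tc_sec / 8
    peak_bytes = pir_bps * tc_sec / 8
    if bytes_sent <= committed_bytes:
        return "conform"
    if bytes_sent <= peak_bytes:
        return "exceed"
    return "violate"

# Example: CIR = 256Kbps, PIR = 512Kbps, Tc = 125ms
print(classify_interval(3000, 256000, 512000, 0.125))  # conform (up to 4000 bytes per Tc)
print(classify_interval(6000, 256000, 512000, 0.125))  # exceed (up to 8000 bytes per Tc)
print(classify_interval(9000, 256000, 512000, 0.125))  # violate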

Jul 03

This may seem to be a basic topic, but it looks like many people are still confused by the difference between those two concepts. Let us clear this confusion at once!

Jun 26

The goal of this article is to discuss how the following configuration would work on the 3560 series switches:

interface FastEthernet0/13
switchport mode access
load-interval 30
speed 10
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth share 33 33 33 1
srr-queue bandwidth limit 20

Before we begin, let’s recap what we know so far about the 3560 egress queuing:

1) When the SRR scheduler is configured in shared mode, the bandwidth allocated to each queue is based on its relative weight. E.g. when configuring "srr-queue bandwidth share 30 20 25 25" we obtain the weight sum 30+20+25+25 = 100 (it could be different, but it's convenient to reference "100" as a representation of 100%). The relative weights are therefore "30/100", "20/100", "25/100", "25/100", and you can calculate the effective bandwidth *guaranteed* to a queue by multiplying this weight by the interface bandwidth: e.g. 30/100*100Mbps = 30Mbps for a 100Mbps interface and 30/100*10Mbps = 3Mbps for a 10Mbps interface. Of course, the weights are only taken into consideration when the interface is oversubscribed, i.e. experiences congestion.

2) When configured in shaped mode, the bandwidth restriction (policing) for each queue is based on the inverse of an absolute weight. E.g. for "srr-queue bandwidth shape 30 0 0 0" we effectively restrict the first queue to "1/30" of the interface bandwidth (which is approximately 3.3Mbps for a 100Mbps interface and approximately 330Kbps for a 10Mbps interface). Setting the SRR shape weight to zero effectively means no shaping is applied. When shaping is enabled for a queue, the SRR scheduler does not use the shared weight corresponding to this queue when calculating relative bandwidth for the shared queues.

3) You can mix shaped and shared settings on the same interface. For example, two queues may be configured for shaping and the others for sharing:

interface FastEthernet0/13
srr-queue bandwidth share 100 100 40 20
srr-queue bandwidth shape 50 50 0 0

Suppose the interface rate is 100Mbps; then queues 1 and 2 will each get 2Mbps, and queues 3 and 4 will share the remaining bandwidth (100-2-2=96Mbps) in the proportion "2:1". Note that queues 1 and 2 are both guaranteed and limited to 2Mbps at the same time. (The sketch after this list reproduces this arithmetic.)

4) The default "shape" and "share" weight settings are "25 0 0 0" and "25 25 25 25" respectively. This means queue 1 is policed down to 4Mbps on a 100Mbps interface by default (400Kbps on a 10Mbps interface), and the remaining bandwidth is shared equally among the other queues (2-4). So take care when you enable "mls qos" on a switch.

5) When you enable "priority-queue out" on an interface, it turns queue 1 into a priority queue, and the scheduler effectively does not account for this queue's weight in its calculations. Note that the PQ will ignore shaped-mode settings as well, and this may starve the other queues.

6) You can apply an "aggregate" egress rate limit to a port by using the command "srr-queue bandwidth limit xx" at the interface level. Effectively, this command limits the interface sending rate to xx% of the interface capacity. Note that the range starts from 10%, so if you need rates lower than 10Mbps, consider changing the port speed down to 10Mbps.
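
To tie points 1 through 3 together, below is a small Python sketch that simply reproduces the arithmetic described above for a mix of shaped and shared queues. It is a model of the manual calculation only, assuming full congestion on all queues, and not of how the switch actually implements SRR:

def srr_bandwidth(interface_bps, shape_weights, share_weights):
    # Returns the approximate per-queue bandwidth (bps) for queues 1-4.
    result = [0.0] * 4
    shared_queues = []
    remaining = interface_bps
    # Shaped queues: rate = 1/weight of the interface bandwidth (weight 0 = no shaping)
    for q, w in enumerate(shape_weights):
        if w > 0:
            result[q] = interface_bps / w
            remaining -= result[q]
        else:
            shared_queues.append(q)
    # Shared queues: split the leftover bandwidth by relative weight
    total = sum(share_weights[q] for q in shared_queues)
    for q in shared_queues:
        result[q] = remaining * share_weights[q] / total
    return result

# The example from point 3: 100Mbps port, shape 50 50 0 0, share 100 100 40 20
print(srr_bandwidth(100000000, [50, 50, 0, 0], [100, 100, 40, 20]))
# [2000000.0, 2000000.0, 64000000.0, 32000000.0] i.e. 2, 2, 64 and 32 Mbps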

How will this setting affect SRR scheduling? Remember that SRR shared weights are relative, and therefore the shared queues will split the new, reduced bandwidth among themselves. However, shaped queue rates are based on absolute weights calculated off the physical interface bandwidth (e.g. 10Mbps or 100Mbps) and are subtracted from the interface's "available" bandwidth. Consider the example below:

interface FastEthernet0/13
switchport mode access
speed 10
srr-queue bandwidth shape 50 0 0 0
srr-queue bandwidth share 20 20 20 20
srr-queue bandwidth limit 20

The interface sending rate is limited to 2Mbps. Queue 1 is shaped to 1/50 of 10Mbps, which is 200Kbps of bandwidth. The remaining bandwidth, 2000-200=1800Kbps, is divided equally among the other queues in the proportion 20:20:20=1:1:1. That is, in case of congestion with all queues filled up, queue 1 will get 200Kbps, and queues 2-4 will get 600Kbps each.
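
Following the same model as the earlier sketch (again just reproducing the manual arithmetic, not the switch internals), the numbers for this example can be checked like this:

interface_bps = 10000000
limited_bps = interface_bps * 20 / 100        # srr-queue bandwidth limit 20 -> 2Mbps
queue1_bps = interface_bps / 50               # shaped to 1/50 of 10Mbps = 200Kbps
remaining = limited_bps - queue1_bps          # 2000 - 200 = 1800Kbps
share = [20, 20, 20]                          # queues 2-4 share equally (1:1:1)
per_queue = [remaining * w / sum(share) for w in share]
print(queue1_bps, per_queue)                  # 200000.0 [600000.0, 600000.0, 600000.0]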

Jan 26

To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with Interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking; however, it was specific to ATM and Frame-Relay.) Let's imagine a situation where we have slow ATM and Frame-Relay links used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 technology) over the Frame-Relay and ATM PVCs and using the PPP Multilink and Interleaving features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM - the cell mode technology - how ironic!)

Jan 24

This is a "modern" way to configure FRTS, using MQC commands only to accomplish the task. With the MQC approach, a unified interface is used to configure all QoS settings, regardless of the underlying technology.
