Posts Tagged ‘catalyst-qos’

Nov 29

In this first of a series of blog posts on Catalyst QoS, we will examine the AutoQoS capabilities of the Catalyst 3560. AutoQoS automates the QoS settings of the switch with an absolute minimum of configuration required from the engineer. In particular, the 3560 AutoQoS feature automates the classification and congestion management configurations required in VoIP environments. You should note that 3560 AutoQoS has much “catching up” to do when you compare it to the AutoQoS for VoIP and AutoQoS for Enterprise features now available on the pure router class of Cisco devices.

First, the easy part. The interface-level configuration command required to enable AutoQoS is simply:

auto qos voip [cisco-phone | cisco-softphone | trust]

Notice the auto qos voip command is used in conjunction with keywords that specify which devices to “trust” when it comes to these important VoIP packets. The cisco-phone keyword instructs the AutoQoS feature to trust and act upon incoming voice packets only if they are truly sent from a Cisco IP Phone; the phone’s presence is detected thanks to CDP. Similarly, the cisco-softphone keyword instructs the device to trust and act upon the voice packets only if they are sent from a Cisco softphone running on a PC. Finally, the trust keyword instructs the device to trust the VoIP packet markings arriving on the port from another switch or router.
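
For illustration, here is a minimal sketch of enabling the feature on an access port with a directly attached Cisco IP Phone (the switch name and interface number are arbitrary). Entering the command also generates the required global settings, including mls qos; the results can be inspected with show auto qos interface and in the running configuration:

SW1:
interface FastEthernet0/10
 auto qos voip cisco-phone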

Continue Reading


Sep 11

People are often confused by the per-VLAN classification, policing and marking features of the Catalyst 3550 and 3560 models. The biggest problem is the lack of comprehensive examples in the Documentation CD. Let’s quickly review and compare the traffic policing features available on both platforms, starting with the sketch below. The material here is a condensed excerpt of several Catalyst QoS topics covered in the “QoS” section of our IEWB VOL1 V5. You will find more in-depth explanations and a large number of simulation-based verifications in the upcoming section of the workbook.
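
To give a taste of the 3550 side (a rough sketch with made-up names and numbers, not an excerpt from the workbook), per-port per-VLAN classification nests a regular class-map inside a class-map that first matches the VLAN, and the resulting policy is applied to the physical port:

SW1:
class-map match-all VOICE
 match ip dscp 46
class-map match-all VLAN100_VOICE
 match vlan 100
 match class-map VOICE
!
policy-map PER_PORT_PER_VLAN
 class VLAN100_VOICE
  police 1000000 8000 exceed-action drop
!
interface FastEthernet0/3
 service-policy input PER_PORT_PER_VLAN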

Continue Reading


Jul 20

I found it important to make a small post in reply to the following question:

Hi Petr
i’m still confused between
mls qos min-reserve and wrr-queue bandwidth
what is the difference between the two

thx

The 3550 WRR (weighted round robin) scheduler algorithm utilizes four configurable queues on each interface of the switch. Let’s consider just the FastEthernet ports for simplicity in this post. For each queue, the following important parameters can be configured:

1) WRR scheduling weight. Defines how much attention the queue is given in case of congestion. The weight essentially defines the number of packets taken from the queue each time the WRR scheduler runs through the queues in sequence. WRR weights are defined per interface using the command wrr-queue bandwidth w1 w2 w3 w4. Theoretically, if each queue holds packets of approximately the same size, the share of bandwidth guaranteed to queue number “k” (k=1..4) is: share_k = wk/(w1+w2+w3+w4) [this formula does not hold strictly if packet sizes differ significantly]. If queue 4 is turned into a priority queue using priority-queue out, then its weight is ignored in the computation (w4 is set to 0 in the above formula). The currently assigned weights can be verified as follows:

SW4:
interface FastEthernet 0/1
 wrr-queue bandwidth 10 20 30 40

Rack1SW4#show mls qos interface queueing
FastEthernet0/1
Egress expedite queue: dis
wrr bandwidth weights:
qid-weights
 1 - 10
 2 - 20
 3 - 30
 4 - 40
...
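
For instance, with the weights 10, 20, 30 and 40 above, queue 4 is guaranteed roughly 40/(10+20+30+40) = 40% of the interface bandwidth under congestion. If queue 4 were turned into the priority queue with priority-queue out, the remaining queues would share bandwidth in the proportion 10:20:30, e.g. queue 3 would then get 30/(10+20+30) = 50%.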

2) Queue size. The number of buffers allocated to hold packets when the queue is congested. When a queue grows to this limit, further packets are dropped. The queue size value is not explicitly configurable per FastEthernet interface. Rather, each queue is mapped to one of eight globally configurable “levels”. Each level, in turn, is assigned the number of buffers available to the queues mapped to it. The mapping is therefore: queue-id -> global-level -> number-of-buffers. By default, each of the eight levels is assigned the value of “100”, which means that every queue mapped to that level has 100 buffers allocated. Note that the levels are global, so resizing a level affects every queue, on every interface, mapped to it. The interface-level command to assign a level to a queue is wrr-queue min-reserve Queue-id Global-level. By default, queues 1 through 4 are mapped to levels 1 through 4. Look at the following example and verification:

SW4:
mls qos min-reserve 1 10
mls qos min-reserve 2 20
mls qos min-reserve 3 30
mls qos min-reserve 4 40
!
! Assign 40 buffers to queue 1
! Assign 30 buffers to queue 2
! Assign 20 buffers to queue 3
! Assign 10 buffers to queue 4
!
interface FastEthernet0/1
 wrr-queue min-reserve 1 4
 wrr-queue min-reserve 2 3
 wrr-queue min-reserve 3 2
 wrr-queue min-reserve 4 1

Rack1SW4#show mls qos interface fastEthernet 0/1 buffers
FastEthernet0/1
Minimum reserve buffer size:
 10 20 30 40 100 100 100 100
Minimum reserve buffer level select:
 4 3 2 1

3) CoS value to Queue-ID (1,2,3,4) mapping table (per-port). Defines, per interface, which outgoing packets are mapped to each queue, based on the calculated CoS value. The interface-level command to define the mappings is wrr-queue cos-map Queue-ID Cos1 [Cos2] [Cos3] … [Cos8]. For example:

SW4:
interface FastEthernet0/1
 wrr-queue cos-map 1 0 1 2
 wrr-queue cos-map 2 3 4
 wrr-queue cos-map 3 6 7
 wrr-queue cos-map 4 5

Rack1SW4#show mls qos interface fastEthernet 0/1 queueing
FastEthernet0/1
Egress expedite queue: dis
wrr bandwidth weights:
qid-weights
 1 - 10
 2 - 20
 3 - 30
 4 - 40
Cos-queue map:
cos-qid
 0 - 1
 1 - 1
 2 - 1
 3 - 2
 4 - 2
 5 - 4
 6 - 3
 7 - 3

Note that the CoS value is either taken from the original CoS field of the incoming frame (if CoS was trusted) or calculated using the global DSCP-to-CoS mapping table (for IP packets).
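
The global DSCP-to-CoS table is itself adjustable. As an illustrative tweak (not a recommendation), the following maps the CS3 range of DSCP values to CoS 4; the resulting table can be checked with show mls qos maps dscp-cos:

SW4:
mls qos map dscp-cos 24 25 26 27 to 4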

Note that for GigabitEthernet ports on the 3550 platform, the configuration options are more flexible – you can specify queue depths per interface, configure drop thresholds, map DSCP values to thresholds and define the drop strategy. However, that topic deserves a separate post :)


Mar 03

The 3560 QoS processing model is tightly coupled with its hardware architecture, borrowed from the 3750 series switches. The most notable feature is the internal switch ring, which is used for switch stacking. Packets entering a 3560/3750 switch are queued and serviced twice: first on ingress, before they are put on the internal ring, and second on the egress port, after the internal ring has delivered them there. In short, the process looks as follows:

[Classify/Police/Mark] -> [Ingress Queues] -> [Internal Ring] -> [Egress Queues]
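
To give a flavor of this two-stage model, here is a minimal sketch with arbitrary values (not a complete or recommended configuration). Note the asymmetry: the two ingress queues are tuned globally, while the four egress queues are tuned per interface:

SW1:
mls qos
! Ingress: queue 2 is the priority queue with 10% of the
! internal ring bandwidth; the remainder is shared 90/10
mls qos srr-queue input priority-queue 2 bandwidth 10
mls qos srr-queue input bandwidth 90 10
!
interface GigabitEthernet0/1
 ! Egress: queue 1 becomes the expedite queue; the other
 ! queues share the remaining bandwidth 10:60:20
 priority-queue out
 srr-queue bandwidth share 10 10 60 20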

For more insight and a detailed overview of the StackWise technology used by the 3750 models, see the following link:

Cisco StackWise Technology White Paper

Continue Reading


Feb 26

Catalyst QoS configuration for IP Telephony endpoints is one of the CCIE Voice lab topics. Many people have issues with it, because of the need to memorize a lot of SRND recommendations to do it right. The good news is that during the lab exam you have full access to the QoS SRND documents and UniverCD content. The bad news is that you probably won’t have enough time to navigate UniverCD comfortably, plus the reference configurations often have a lot of typos and mistakes in them.
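
To show the kind of configuration involved, here is a bare-bones sketch along SRND lines (with arbitrary VLAN and interface numbers, and not the SRND reference verbatim): the access port extends trust to an attached Cisco IP Phone and tells the phone to re-mark PC traffic down to CoS 0:

SW1:
interface FastEthernet0/5
 switchport access vlan 10
 switchport voice vlan 100
 mls qos trust device cisco-phone
 mls qos trust cos
 switchport priority extend cos 0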

Continue Reading


Feb 23

QoS features available on Catalyst switch platforms have specific limitations, dictated by the hardware design of modern L3 switches, which is heavily optimized to handle packets at very high rates. Catalyst switch QoS is implemented using TCAM (Ternary Content Addressable Memory) – fast hardware lookup tables – to store all QoS configurations and settings. We start our Catalyst QoS overview with the Catalyst 3550, an old model long available in the CCIE lab.

Continue Reading

