
The 3560 QoS processing model is tightly coupled with its hardware architecture, which is borrowed from the 3750 series switches. The most notable feature is the internal switch ring, used for switch stacking. Packets entering a 3560/3750 switch are queued and serviced twice: first on ingress, before they are put on the internal ring, and second on the egress port, after they have been delivered by the internal ring. In short, the process looks as follows:

[Classify/Police/Mark] -> [Ingress Queues] -> [Internal Ring] -> [Egress Queues]

For more insight and a detailed overview of the StackWise technology used by the 3750 models, see the following link:

Cisco StackWise Technology White Paper

Next, it should be noted that the 3560 model is capable of recognizing and processing IPv6 packets natively – this feature affects some classification options. Another big difference is the absence of an internal DSCP value and the use of a QoS label for internal packet marking. This allows the 3560 switches to provide different classes of service to CoS- or DSCP-marked packets, by mapping them to different queues, thresholds, etc. Many other concepts and commands differ as well, as does some nomenclature (e.g. the number of the priority queue). We will try to summarize and analyze the differences in the following paragraphs. The biggest obstacle is the absence of any good information source on 3750/3560 switch QoS besides the Cisco Documentation site, which has really poor documents regarding both models.

1. Queue Scheduler: SRR

One significant change is the replacement of the WRR scheduler with SRR, where the latter stands for either Shared Round Robin or Shaped Round Robin – the two different modes of operation for the scheduler. As we remember, WRR simply de-queues a number of packets proportional to the weight assigned to a queue, walking through the queues in round-robin fashion. SRR behaves similarly in shared mode: each output queue has a weight value assigned, and is serviced in proportion to that weight when the interface is congested.

The implementation details are not publicly documented by Cisco; however, so far we know that Shared Round Robin tries to behave more “fairly” than WRR does. While on every scheduler round WRR empties each queue up to the maximum number of allowed packets before switching to the next queue, SRR performs a series of quick runs each round, deciding whether to de-queue a packet from a particular queue (based on the queue weights). In effect, SRR achieves more “fairness” on a per-round basis, because it does not take the whole allowed amount each time it visits a queue. In the long run, SRR and WRR behave similarly.
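To make the shared-mode arithmetic concrete, here is a small illustrative snippet (the interface and weight values are my own examples, not taken from the original text). Shared weights are relative, so each queue's guaranteed share under congestion is its weight divided by the sum of all the weights.

!
!  Shared weights are relative: 10+10+60+20 = 100, so under congestion the four
!  queues are serviced in roughly 10%/10%/60%/20% proportions of the port bandwidth
!
interface gigabitethernet0/1
 srr-queue bandwidth share 10 10 60 20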

The shaped mode of SRR is something not available with WRR at all. In this mode, each queue has a weight that determines the maximum allowed transmission rate for the queue. That is, the interface bandwidth is divided in proportion to the queue weights, and a queue is not allowed to send packets above its “slice” of the common bandwidth. The details of the implementation are not provided, so we can only assume it is some kind of effective policing strategy. Pay special attention to the fact that SRR shaped weight values are *absolute*, not relative. That is, the proportion of interface bandwidth given to a particular queue is 1/weight*{interface speed}. So for a weight of “20” the limit is 1/20*Speed, which equals 5Mbps on a 100Mbps interface. Also, by default, queue 1 has an SRR shaped weight of 1/25, so take care when you turn MLS QoS on.
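Here is a worked illustration of the formula above (a sketch with arbitrary values, assuming a 100Mbps port):

!
!  Shaped weights are absolute: rate = 1/weight * interface speed
!  Weight 20 on a 100Mbps port -> 1/20 * 100Mbps = 5Mbps cap for queue 1
!  A weight of 0 disables shaping for a queue (queues 2-4 here)
!
interface fastethernet0/1
 srr-queue bandwidth shape 20 0 0 0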

An interface can be configured for SRR shared and shaped mode at the same time. However, SRR shaped mode always takes precedence over the SRR shared weights. Finally, SRR supports a priority queue, but this queue is not subject to the SRR shaping limits either. The weight assigned to the priority queue is simply ignored in the SRR calculations. Note that unlike the 3550, on the 3560 the egress priority queue has queue-id 1, not 4. Here is an example:

!
!  By setting the shape weight to zero, we effectively disable shaping for that particular queue
!
interface gigabitethernet0/1
 srr-queue bandwidth shape 10 0 0 0
 srr-queue bandwidth share 10 10 60 20
 !
 !  As the expedite queue (queue-id 1) is enabled, its weight is no longer honored by the SRR scheduler
 !
 priority-queue out

Another interesting egress feature is the port rate-limit. SRR can be configured to limit the total port bandwidth to a percentage of the physical interface speed – from 10% to 90%. Don’t configure this feature if there is no need to limit the overall port bandwidth.

!
! Limit the available egress bandwidth to 80% of interface speed
!
interface gigabitethernet0/1
 srr-queue bandwidth limit 80

Note that the resulting port rate depends on the physical speed of the port – 10/100/1000Mbps.
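A quick arithmetic sketch of the limit command (my own example numbers; the interface is arbitrary). The configured percentage is taken from whatever speed the port is currently running at, not from a fixed reference:

!
!  With "srr-queue bandwidth limit 80":
!   - port running at 100Mbps -> roughly 0.8 * 100 = 80Mbps of egress bandwidth
!   - same port negotiated down to 10Mbps -> roughly 0.8 * 10 = 8Mbps
!
interface fastethernet0/2
 srr-queue bandwidth limit 80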

2. Egress Queues

The 3550 has four egress queues, identified by numbers 1-4, available on every interface. Queue number 4 can be configured as expedite. On the 3560 side, we have the same four egress queues, but this time queue-id 1 can be configured for expedite services.

With the 3550, only a per-port CoS to queue-id table is available for “class-to-queue” mapping. A globally configurable DSCP to CoS mapping table is used to map an internal DSCP value to the equivalent CoS. On the 3560 model, DSCP and CoS values are mapped to queue-ids separately. That means IP and non-IP traffic can be mapped and serviced separately. What if an IP packet arrives with both DSCP and CoS values set? Then the switch uses the marking used for classification (e.g. CoS if trust cos or set cos was used) to assign the packet to a queue/threshold.
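To make this concrete, here is a hedged sketch of the per-port trust settings (the interface numbers are arbitrary examples). The trust state selected on a port determines which of the two global mapping tables is consulted for queue assignment:

!
!  Port trusting CoS: the CoS-to-output-queue map assigns packets to queues/thresholds
!
interface fastethernet0/3
 mls qos trust cos
!
!  Port trusting DSCP: the DSCP-to-output-queue map is used instead
!
interface fastethernet0/4
 mls qos trust dscp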

The 3550 supports WTD and WRED as queue drop strategies (the latter available on Gigabit ports only). The 3560 model supports WTD as the only drop strategy, allowing three per-queue drop thresholds. Only two of the thresholds are configurable – called explicit drop thresholds – while the third one is fixed to mark the full-queue state (the implicit threshold).

Finally, the mentioned mappings are configured for queue-id and drop threshold simultaneously, in global configuration mode – unlike the 3550, where you configured CoS to queue-id and DSCP to drop-threshold mappings separately (and on a per-interface basis).

!
! CoS values are mapped to the 4 queues. Remember that queue-id 1 can be set as expedite
!

!
! The next entry maps CoS value 5 to queue 1 and threshold 3 (100%)
!
mls qos srr-queue output cos-map queue 1 threshold 3 5
!
!  VoIP signaling and network management traffic go to queue 2
!
mls qos srr-queue output cos-map queue 2 threshold 3 3 6 7
mls qos srr-queue output cos-map queue 3 threshold 3 2 4
mls qos srr-queue output cos-map queue 4 threshold 2 1
mls qos srr-queue output cos-map queue 4 threshold 3 0

!
! DSCP to queue/threshold mappings
!
mls qos srr-queue output dscp-map queue 1 threshold 3 40 41 42 43 44 45 46 47

mls qos srr-queue output dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue output dscp-map queue 2 threshold 3 48 49 50 51 52 53 54 55
mls qos srr-queue output dscp-map queue 2 threshold 3 56 57 58 59 60 61 62 63

mls qos srr-queue output dscp-map queue 3 threshold 3 16 17 18 19 20 21 22 23
mls qos srr-queue output dscp-map queue 3 threshold 3 32 33 34 35 36 37 38 39

!
!  DSCP 8 is CS1 – Scavenger class, mapped to the first threshold of the last queue
!
mls qos srr-queue output dscp-map queue 4 threshold 1 8
mls qos srr-queue output dscp-map queue 4 threshold 2 9 10 11 12 13 14 15
mls qos srr-queue output dscp-map queue 4 threshold 3 0 1 2 3 4 5 6 7
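To check the resulting output-queue mappings, the following show commands should display the CoS and DSCP tables (a verification sketch; the exact output format depends on the IOS version):

show mls qos maps cos-output-q
show mls qos maps dscp-output-q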

Next, we need to know how to configure the egress queue buffer space along with the threshold settings. With the 3550 we had two options: first, globally configurable buffer levels for FastEthernet ports, assigned per interface; second, one shared buffer pool for the gigabit ports, with proportions (weights) also configured at the per-interface level. With the 3560 things have changed. Firstly, all ports are now symmetrical in their configuration. Secondly, the concept of a queue-set has been introduced. A queue-set defines a buffer space partitioning scheme as well as threshold levels for each of the four queues. Only two queue-sets are available in the system, and they are configured globally. After a queue-set has been defined (or redefined), it can be applied to an interface. Of course, default queue-set configurations exist as well.

As usual, queue thresholds are defined as a percentage of the allocated queue memory (buffer space). However, the 3560 switch introduces another buffer pooling model. There are two buffer pools – reserved and common. Each interface has some buffer space allocated from the reserved pool. This reserved buffer space, allocated to an interface, can be partitioned between the egress interface queues by assigning weight values (this resembles the gigabit interface buffer partitioning on the 3550):

!
! Queue-sets have different reserved pool partitioning schemes
!  Each of the four queues is given a weight value to allocate some of the reserved buffer space
!
mls qos queue-set output 1 buffers 10 10 26 54
mls qos queue-set output 2 buffers 16 6 17 61
!
interface gigabitEthernet 0/1
 queue-set 1

What about the common buffer pool? Let’s take a look at the threshold setting command first:

mls qos queue-set output qset-id threshold queue-id drop-threshold1 drop-threshold2 reserved-threshold maximum-threshold

We see two explicitly configured thresholds, 1 and 2 – just as with the 3550. However, there is one special threshold called the reserved-threshold. What it does is specify how much of the reserved buffer space is actually allocated to a queue – i.e. how much of the reserved buffers are truly “reserved”. As we know, on the 3560 model every queue on every port has some memory allocated from the reserved buffer pool. The reserved-threshold tells how much of this memory to allocate to the queue – from 1 to 100%. The unused portion of the reserved buffer space becomes available to other queues through the common buffer pool mentioned above. The common buffer pool can be used by any queue to borrow buffers above the queue’s reserved space. This allows drop thresholds to be set to values greater than 100%, meaning the queue is permitted to take credit from the common pool to satisfy its needs. The maximum-threshold specifies the “borrowing limit” – how far a queue is allowed to grow into the common pool.

Look at the command “mls qos queue-set output 1 threshold 1 138 138 92 138”. It says we shrink the reserved buffer space for queue 1 to 92%, returning the exceeding space to the common pool. All three drop thresholds are set to 138% of the queue buffer space (allocated by the buffers command), meaning we allow the queue to borrow from the common pool up to the levels specified. Drop thresholds may be set as large as 400% of the configured queue size.

Now we see that this model is a bit more complicated than with the 3550. We don’t know the actual size of the reserved buffer pool, but we are allowed to specify the relative importance of each queue. Additionally, we may give up some reserved buffer space to the common buffer pool, to be shared with the other queues. Here is a detailed example from the AutoQoS settings:

!
! Set thresholds for all four queues in queue-set 1 
!
mls qos queue-set output 1 threshold 1 138 138 92 138
mls qos queue-set output 1 threshold 2 138 138 92 400
mls qos queue-set output 1 threshold 3 36 77 100 318
mls qos queue-set output 1 threshold 4 20 50 67 400

!
! Set thresholds for all four queues in queue-set 2 
!
mls qos queue-set output 2 threshold 1 149 149 100 149
mls qos queue-set output 2 threshold 2 118 118 100 235
mls qos queue-set output 2 threshold 3 41 68 100 272
mls qos queue-set output 2 threshold 4 42 72 100 242

interface gigabitEthernet 0/1
 queue-set 1
!
interface gigabitEthernet 0/2
 queue-set 2
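To verify how the queue-set buffers and thresholds were applied, the following show commands should display the queue-set parameters and the per-interface buffer allocation (a hedged sketch; output format varies by IOS version):

show mls qos queue-set
show mls qos interface gigabitEthernet 0/1 buffers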

And finally, if you are not sure what you’re doing, don’t play with the thresholds and buffer space partitioning – leave them at the default values, or use the AutoQoS recommendations. So says the DocCD!

3. Ingress Queues

A unique feature of the 3560 switches is the ability to configure two ingress queues on every port. Either of these queues may be configured as the expedite queue, with queue 2 being the default. The priority queue is guaranteed access to the internal ring when the ring is congested. SRR serves the ingress queues in shared mode only. All the weights are configured globally, not per interface as with the egress queues:

!
! Disable the default priority queue and share the ring bandwidth between the two ingress queues in a 25/75 proportion
!
mls qos srr-queue input priority-queue 2 bandwidth 0
mls qos srr-queue input bandwidth 25 75

The UniverCD has a very shallow description of how the SRR algorithm actually serves the two queues when one of them is configured as expedite. So far, it looks like you should configure one queue as expedite, assign the priority bandwidth value to it (from 1 to 40% of the internal ring bandwidth), and also assign bandwidth values to both ingress queues as usual:

mls qos srr-queue input priority-queue 1 bandwidth 10
mls qos srr-queue input bandwidth 50 50

What is that supposed to mean? It seems that queue 1 is partially serviced as an expedite queue, up to the limit set by the priority-queue bandwidth. Once this counter is exhausted, the queue is served on par with the non-expedite queue, using the bandwidth weights assigned. With the example above, that means we have 10% of the bandwidth dedicated to priority queue servicing. As soon as this counter is exhausted (say, on a per-round basis), SRR continues to service both ingress queues using the remaining bandwidth counter (90%), shared in proportion to the weights assigned (50 and 50) – that means 45% of the ring bandwidth for each queue. Overall, it looks like SRR simulates a policer for the priority queue, but rather than dropping the traffic it simply changes the scheduling mode until enough credits are accumulated to start expedite servicing again. Now go figure how to use that in real life! Too bad Cisco does not release any internal details of their SRR implementation to the public.

Now, two things remain to consider for the ingress queues: class mappings to the queues, and the buffer/threshold settings. The class mapping uses the same syntax as for the egress queues, allowing global mappings of CoS and DSCP to queue-ids and thresholds.

!
! CoS to queue-id/threshold
!
mls qos srr-queue input cos-map queue 1 threshold 3 0
mls qos srr-queue input cos-map queue 1 threshold 2 1
mls qos srr-queue input cos-map queue 2 threshold 1 2
mls qos srr-queue input cos-map queue 2 threshold 2 4 6 7
mls qos srr-queue input cos-map queue 2 threshold 3 3 5

!
! DSCP to queue-id/threshold
!
mls qos srr-queue input dscp-map queue 1 threshold 2 9 10 11 12 13 14 15
mls qos srr-queue input dscp-map queue 1 threshold 3 0 1 2 3 4 5 6 7
mls qos srr-queue input dscp-map queue 1 threshold 3 32
mls qos srr-queue input dscp-map queue 2 threshold 1 16 17 18 19 20 21 22 23
mls qos srr-queue input dscp-map queue 2 threshold 2 33 34 35 36 37 38 39 48
mls qos srr-queue input dscp-map queue 2 threshold 2 49 50 51 52 53 54 55 56
mls qos srr-queue input dscp-map queue 2 threshold 2 57 58 59 60 61 62 63
mls qos srr-queue input dscp-map queue 2 threshold 3 24 25 26 27 28 29 30 31
mls qos srr-queue input dscp-map queue 2 threshold 3 40 41 42 43 44 45 46 47

As one would expect, the classification option used (CoS or DSCP) determines which mapping table is consulted for the ingress queues.

The buffer partitioning is pretty simple – there is no common pool, and you only specify the relative weight of each queue. The thresholds are also simplified – you configure just two of them, as a percentage of the queue size. The third threshold is implicit, as usual.

!
!  Thresholds are set to 8% and 16% for queue 1; 34% and 66% for queue-2 
!  Buffers are partitioned in 67/33 proportion
!
mls qos srr-queue input threshold 1 8 16
mls qos srr-queue input threshold 2 34 66
mls qos srr-queue input buffers 67 33
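The ingress queue settings and mappings can be verified in a similar way (again a sketch; exact output depends on the IOS version):

show mls qos input-queue
show mls qos maps cos-input-q
show mls qos maps dscp-input-q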

We have covered enough for a single post. In the next post we will discuss how the classification, policing and marking techniques differ between the 3550 and 3560 models.

About Petr Lapukhov, 4xCCIE/CCDE:

Petr Lapukhov's career in IT began in 1988 with a focus on computer programming, and progressed into networking with his first exposure to Novell NetWare in 1991. Initially involved with Kazan State University's campus network support and UNIX system administration, he went through the path of becoming a networking consultant, taking part in many network deployment projects. Petr currently has over 12 years of experience working in the Cisco networking field, and is the only person in the world to have obtained four CCIEs in under two years, passing each on his first attempt. Petr is an exceptional case in that he has been working with all of the technologies covered in his four CCIE tracks (R&S, Security, SP, and Voice) on a daily basis for many years. When not actively teaching classes or developing self-paced products, he is studying for the CCDE Practical and the CCIE Storage Lab Exam, and completing his PhD in Applied Mathematics.




23 Responses to “Bridging the gap between 3550 and 3560 QoS: Part I”

 
  1. Dan K says:

    Excellent! Many thanks. I’m looking forward to part 2!

  2. darra says:

    Outstanding article!!
    Can’t wait to read part 2.

    Thanks


  4. Steve says:

    Thank you Petr,
    The information contained in your articles answers many questions I usually have when researching these topics myself.

    Which begs a question; as quoted by yourself in this article “The biggest obstacle is absence of any good information source on the 3750/3560 switches QoS, besides the Cisco Documentation site, which has really poor documents regarding both models.”

    The question I have is, how do you go about finding the true facts of these things? I can spend days reading through Cisco docs, all contradicting themselves and at the end, be none the wiser – what’s the trick?

    Big thanks again, look forward to the next article.

    Regards,
    Steve


  6. Atif says:

    Thanks Petr!! Really great job!

  7. Nicholas Davitashvili says:

    Hi Petr,

    Thanks for the great articles!
    I’ve got a couple of questions which follow, it would be very kind of you to help me out with this.

    Question1:
    “The really fun part is that an interface could be configured for SRR shared and shaped mode at the same time. This way, each queue is guaranteed a bandwidth share, and additionally has some upper bound on transmission rate set.”
    Does this imply that the shape and share statements work at the same time on the same queue? Or would shaping override and cancel out the sharing configuration for a particular queue?

    Question2:

    interface gigabitethernet0/1
    srr-queue bandwidth shape 10 0 0 0
    srr-queue bandwidth share 10 10 60 20
    !
    ! As expedite queue (queue-id 1) is enabled, it’s weight is no longer honored by the shared mode scheduler
    ! Still, this queue is subject to shaped mode policing, in order to prevent over queues starvation
    !
    priority-queue out

    The following quote is from another article:

    5) When you enable “priority-queue out” on an interface, it turns queue 1 into priority queue, and scheduler effectively does not account for the queue’s weight in calculations. Note that PQ will also ignore shaped mode settings as well, and this may make other queues starve.
    Is it me or do the starvation parts contradict each other?
    Which one is valid?

    Thanks for your time.
    All the best!

  8. Petr Lapukhov (in reply to Nicholas Davitashvili) says:

    Honestly, I was first confused by the DocCD :) But the latter statement is actually correct: PQ on the 3560 is not subject to policing by SRR shaped mode. So I just fixed my older article with respect to that matter. Thanks for pointing this out!

  9. Nicholas Davitashvili says:

    Oh the ever-confusing DocCD :)

    Thanks for the clarification.

    What about Question 1?
    Do shaping (upper bound) and sharing (minimum guarantee) work at the same time for a single queue? Or would applying a non-zero “shape” value invalidate the “share” ratio for that queue?

  10. Petr Lapukhov (in reply to Nicholas Davitashvili) says:

    The trick with SRR shape is that it “guarantees” and “limits” the bandwidth at the same time. That is, if you set the shape weight to 20 on a 100Mbps interface, you will restrict the queue to 1/20*100 = 5Mbps, but this will also be the guaranteed rate (i.e. if the interface is congested, you won’t get less than 5Mbps). The “share” weight is not taken into account when “shape” is specified.

  11. Nicholas Davitashvili says:

    Thank you so much for the quick response and all the information!

  12. Steve says:

    Hi Petr,

    This article is great. You mention that the 3560 loses the internal DSCP method that the 3550 used. However, the DOCcd is still cluttered with references to it in various sections of the 3560 configuration guide. Is this an error on their part (copy and paste?)

    I’m referring to:
    http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_44_se/configuration/guide/swqos.html

    Just search for ‘internal dscp’, as they mention it several times.

    Is this just the usual doc cd errors?

  13. Maarten says:

    Hi,

    We’re in the middle of defining new QoS standards with SRR queueing. We have mapped all 4 queues to several DSCP values. In some queues we want to differentiate priority using buffer allocation and thresholds. For example, in queue 3 we have DSCP values 10, 18 and 26. To give 26 priority over 18, and 18 over 10, we decided to divide them using thresholds. By default every threshold in queue-set 1 gets 100% of the buffers; queue 2 gets 200%.

    Can anyone give his view of how the threshold drop setting should be with some explanation?

    Below are some more details
    DSCP 10: management traffic
    DSCP 18: business critical
    DSCP 26: interactive

    mls qos srr-queue output dscp-map queue 3 threshold 1 10
    mls qos srr-queue output dscp-map queue 3 threshold 2 18
    mls qos srr-queue output dscp-map queue 3 threshold 3 26
    mls qos queue-set output 1 buffers 10 50 20 20

    Thanks in advance,

    Maarten

  14. gnijs says:

    First, very good article. I have been trying to figure out the details myself for over 5 days now :-)
    I do have some additional questions that I am unable to solve myself:
    1) with default auto-qos, when is it recommended to use Queue-set 1 or queue-set 2 ?? (note: we just had big trouble in our production network with using queue-set 2)

    2) Using the default settings of mls qos:
    srr-queue bandwidth shape 25 0 0 0
    srr-queue bandwidth share 25 25 25 25,
    in a fully congested situation, how much bandwidth is assigned to Q2, 3 and 4?
    Q1 gets 1/25th, so 4%. I guess Q2, 3, 4 get the remaining bits equally divided, but this is not 25% but 96% / 3 = 32%.

    3) The maximum-threshold value specifies how much a queue is allowed to grow and can be up to 400% (although my switch allows me to set it up to 3200 (??)). Anyway, does this 400% apply to the reserved buffer size?
    mls qos queue-set output 1 buffers 10 10 70 10
    Q3 is 7 times larger than the other queues, with a 50% reservation for each queue. Suppose I use 400% growth for every buffer. Does that mean Q1 can grow to a maximum of 40% of the available buffer space in the absence of other traffic? If I want to allow each queue to use 100% of the available buffer space in the absence of other traffic, do I need to specify 1000% 1000% 142% 1000% as the growth rates?

  15. Andy M says:

    Your question is not that simple to answer, because we do not know which queues your traffic is destined for. However, I think Q2, 3, 4 are not shaped, and Petr’s article says “1/weight*{interface speed}” gives you the bandwidth. You would have 25/25 * {interface speed}, or basically the full interface, as the shape limit for Q1. You would however have 25% of the interface available for each queue, so the answer would depend on how much traffic was destined for which queue and whether priority queuing had been enabled. If exactly 25% were destined for each queue, then all traffic would get an even distribution. If there was no traffic for Q1, then you would have 33% of the bandwidth for Q2, 3, 4.

    Enabling QOS “mls qos” and leaving the default settings can cause degradation in performance. Cisco assigned a bug to this at some point hence using auto QOS as a starting point is your best bet. Queue-sets 1 and 2 are aimed at different interface speeds but Cisco contradicts themselves as to what these are. There is no one size fits all for a QOS policy but autoqos is a great start for most networks.

    I agree about Cisco’s contradiction in the priority-queue stakes, in that you cannot shape Q1 if PQ is enabled. Indeed, I tested this by throwing 30MB of traffic at a 10MB interface, of which 95% was EF traffic and the remaining 5% was DSCP 0 – no DSCP 0 traffic got through even though I configured shaping; if you removed PQ then the shaping did as it was told.

    The other big issue is with configuring these buffer space partitioning parameters: as detailed by Petr (and Cisco agrees), don’t touch them unless you need to. Unfortunately, the environment where you need to configure these will be slow-speed interfaces, such as a 10MB MAN link. Here you need to buffer traffic as multiple 100MB and even Gig links get pushed up a 10MB link. The 3560/3750 have port ASICs that each control a group of ports; this ASIC is basically the QOS buffering and scheduling ASIC, and how many Ethernet ports share an ASIC depends on the model – some are 4 ports/ASIC, some are 26. The buffer on the ASIC is not particularly large, but the exact amount is “Cisco confidential”. Unfortunately this also means that while one port on the ASIC is buffering for a slow link, that buffer is not available to an adjacent port, which can under certain circumstances degrade the performance of that adjacent port.

    You also mention that your switch allows a maximum-threshold of up to 3200%; this value was increased from 400% to 3200% in later versions of code. I needed to deploy a config beyond the 400% limit and upgraded to one of these later releases, which allowed me to deploy it and greatly improved the performance of a converged network.

  16. Andy M says:

    And of course I speaketh rubbish: the bandwidth for shaped queue 1 would be 1/25th of the bandwidth and not 25/25. The remaining 3 queues can have a ratio of 25:25:25, or a third of what is left. So 1/25 = 4% of the bandwidth for Q1, and the remaining queues share the remaining 96 percent, which would be 32% each.

    So, until I change my mind again?
    I love it when you submit something and then think what rubbish you spoke.

  17. jeremy says:

    Hi Petr,

    What do drop-threshold1 and drop-threshold2 mean? I haven’t found an explanation anywhere.

    The max-threshold, as you explained, is understandable, but I am still confused about what drop-threshold 1 and drop-threshold 2 mean.

  18. Mohammed says:

    Hi Petr

    If we enable the priority-queue on the interface, it will starve the other queues. Is there any way to enable the priority-queue and limit its bandwidth at the same time?

    Let's say I want to have a priority queue and guarantee it 20% of the bandwidth.

    Also, is it possible to combine the ingress and output queues to guarantee (and limit) bandwidth for the priority queue?

    Regards

  19. CiscoSri says:

    Hi Petr,

    Your article is really fantastic. I have a basic doubt: I am following the QoS SRND recommendations and have created the policies below, applied to access ports 0/5 & 0/9 respectively. I tested the policy (which has a class ftpdata) and in fact it does affect the transfer rate; however, what is really bothering me is why I am unable to see any values in the output below on a Cisco 3560 switch. Could you kindly explain why?
    class-map match-all ipcommunicator
    match access-group name ipcommunicator
    class-map match-all http
    match access-group name http
    class-map match-all sccp-signaling
    match access-group name sccp-signaling
    class-map match-all ftpdata
    match access-group name ftpdata
    !
    !
    policy-map testpolicy
    class ipcommunicator
    trust dscp
    class ftpdata
    police 8000 8000 exceed-action drop
    class http
    police 64000 8000 exceed-action drop
    class class-default
    policy-map sccp-signaling
    class sccp-signaling
    police 32000 8000 exceed-action policed-dscp-transmit
    trust cos

    !
    PHONESW#show policy-map interface fastEthernet 0/9
    FastEthernet0/9

    Service-policy input: sccp-signaling

    Class-map: sccp-signaling (match-all)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name sccp-signaling

    Class-map: class-default (match-any)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: any
    0 packets, 0 bytes
    5 minute rate 0 bps
    !
    PHONESW#show policy-map interface fastEthernet 0/5
    FastEthernet0/5

    Service-policy input: testpolicy

    Class-map: ipcommunicator (match-all)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name ipcommunicator

    Class-map: ftpdata (match-all)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name ftpdata

    Class-map: http (match-all)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: access-group name http

    Class-map: class-default (match-any)
    0 packets, 0 bytes
    5 minute offered rate 0 bps, drop rate 0 bps
    Match: any
    0 packets, 0 bytes
    5 minute rate 0 bps
    !
    Regards,
    CiscoSri

  20. Matt says:

    Great article, goes pretty deep without wasting my time on unnecessary stuff…

    I do have one doubt though. If the srr shaping is ignored when the egress PQ is defined, how do I prevent the traffic starvation?

    Thanks

  21. abbas says:

    thanks both Petr and Mark (for pointing to this article in his class)

    Just came across our live switch config.
    every access port has the priority-queue out command, even the basic mgmt interfaces, e.g.:

    interface GigabitEthernet1/0/9
    description voice IVR_1 admin
    switchport access vlan 999
    switchport mode access
    srr-queue bandwidth share 1 70 25 5
    srr-queue bandwidth shape 3 0 0 0
    priority-queue out
    mls qos trust dscp
    end

    Does that mean all the traffic from this port will be treated in the priority queue?!
    I suspect that since the access port is not going to tag its packets, the traffic will be treated normally and sent to an ordinary queue instead.

    thanks in advance.

    • Hi Abbas,

      It means that only traffic that ends up in that hardware egress queue will receive strict priority. Which Q it is depends on each hardware platform (almost every switch platform is different in where it places the priority Q). How you get the traffic into that Q is completely at your control, and performed globally with mapping commands.

 
