In this post we will quickly discuss the most commonly needed IGMP timers. First, as we know, multicast routers periodically query hosts on a segment. If two or more routers share the same segment, the one with the lowest IP address becomes the IGMP querier (per the IGMPv2 election procedure – recall that IGMPv1 left querier selection to the multicast routing protocol).
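The election rule can be sketched in a few lines of Python. This is a toy model only (the function name and input format are mine, not anything a router runs): it simply picks the numerically lowest IPv4 address among the routers on the segment.

```python
import ipaddress

def elect_querier(router_ips):
    # IGMPv2 querier election sketch: the router with the numerically
    # lowest IP address on the shared segment becomes the querier.
    return min(router_ips, key=lambda ip: int(ipaddress.IPv4Address(ip)))

# Example: three routers sharing a segment
querier = elect_querier(["10.0.0.2", "10.0.0.1", "10.0.0.3"])
```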
I found it important to make a small post in reply to the following question:
i’m still confused between
mls qos min-reserve and wrr-queue bandwidth
what is the difference between the two
The 3550 WRR (weighted round-robin) scheduler uses four configurable queues on each interface of the switch. For simplicity, this post considers only FastEthernet ports. For each queue, the following important parameters can be configured:
1) WRR scheduling weight. Defines how much attention a queue receives during congestion. The weight essentially sets the number of packets taken from the queue each time the WRR scheduler cycles through the queues in sequence. WRR weights are configured per-interface with the command wrr-queue bandwidth w1 w2 w3 w4. If each queue holds packets of approximately the same size, the proportion of bandwidth guaranteed to queue number “k” (k=1..4) is share_k = wk/(w1+w2+w3+w4) [this formula does not hold strictly when packet sizes differ significantly]. If queue 4 is turned into the priority (expedite) queue with priority-queue out, then its weight is ignored in the computation (w4 is set to 0 in the above formula). The currently assigned weights can be verified as follows:
SW4:
interface FastEthernet 0/1
 wrr-queue bandwidth 10 20 30 40

Rack1SW4#show mls qos interface FastEthernet0/1 queueing
FastEthernet0/1
Egress expedite queue: dis
wrr bandwidth weights:
qid-weights
1 - 10
2 - 20
3 - 30
4 - 40
...
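To make the weight-to-bandwidth relationship concrete, here is a small Python sketch of the share formula above. The function name and flag are mine, not IOS terminology; it assumes roughly equal packet sizes, as noted earlier.

```python
def wrr_shares(weights, priority_queue4=False):
    # Approximate per-queue bandwidth share under WRR scheduling,
    # assuming all queues carry packets of roughly the same size.
    # If queue 4 is made the priority (expedite) queue, its weight
    # is excluded from the computation (set to 0).
    w = list(weights)
    if priority_queue4:
        w[3] = 0
    total = sum(w)
    return [wi / total for wi in w]

# Weights from the example above: wrr-queue bandwidth 10 20 30 40
shares = wrr_shares([10, 20, 30, 40])   # -> [0.1, 0.2, 0.3, 0.4]
```

With priority-queue out enabled, the remaining weights 10/20/30 split the leftover bandwidth among queues 1-3.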
2) Queue size. The number of buffers allocated to hold packets when the queue is congested. Once a queue grows to this limit, further packets are dropped. The queue size is not explicitly configurable per FastEthernet interface. Instead, each queue is mapped to one of eight globally configurable “levels”, and each level, in turn, is assigned the number of buffers available to the queues mapped to it. The mapping is therefore: queue-id -> global-level -> number-of-buffers. By default, each of the eight levels is assigned the value 100, meaning every queue mapped to that level gets 100 buffers. The interface-level command to assign a level to a queue is wrr-queue min-reserve Queue-id Global-level. By default, queues 1 through 4 are mapped to levels 1 through 4. Look at the following example and verification:
SW4:
mls qos min-reserve 1 10
mls qos min-reserve 2 20
mls qos min-reserve 3 30
mls qos min-reserve 4 40
!
interface FastEthernet0/1
 wrr-queue min-reserve 1 4
 ! Assign 40 buffers (level 4) to queue 1
 wrr-queue min-reserve 2 3
 ! Assign 30 buffers (level 3) to queue 2
 wrr-queue min-reserve 3 2
 ! Assign 20 buffers (level 2) to queue 3
 wrr-queue min-reserve 4 1
 ! Assign 10 buffers (level 1) to queue 4

Rack1SW4#show mls qos interface fastEthernet 0/1 buffers
FastEthernet0/1
Minimum reserve buffer size:
 10 20 30 40 100 100 100 100
Minimum reserve buffer level select:
 4 3 2 1
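The two-step indirection (queue-id -> level -> buffer count) can be modeled with two lookup tables. This Python sketch mirrors the configuration above; the dictionary names are mine, chosen only for illustration.

```python
# Global table: mls qos min-reserve <level> <buffers>.
# All eight levels default to 100 buffers; levels 1-4 are overridden here.
level_buffers = {level: 100 for level in range(1, 9)}
level_buffers.update({1: 10, 2: 20, 3: 30, 4: 40})

# Per-interface table: wrr-queue min-reserve <queue-id> <level>.
# Queue 1 -> level 4, queue 2 -> level 3, etc., as configured above.
queue_to_level = {1: 4, 2: 3, 3: 2, 4: 1}

def queue_depth(queue_id):
    # Resolve queue-id -> global level -> number of buffers.
    return level_buffers[queue_to_level[queue_id]]
```

This makes the “reversed” mapping in the example easy to see: queue 1 ends up with 40 buffers even though level 1 holds only 10.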
3) CoS-to-Queue-ID (1,2,3,4) mapping table (per-port). Defines, per interface, which outgoing packets are mapped to each queue based on the calculated CoS value. The interface-level command to define the mappings is wrr-queue cos-map Queue-ID Cos1 [Cos2] [Cos3] … [Cos8]. For example:
SW4:
interface FastEthernet0/1
 wrr-queue cos-map 1 0 1 2
 wrr-queue cos-map 2 3 4
 wrr-queue cos-map 3 6 7
 wrr-queue cos-map 4 5

Rack1SW4#show mls qos interface fastEthernet 0/1 queueing
FastEthernet0/1
Egress expedite queue: dis
wrr bandwidth weights:
qid-weights
1 - 10
2 - 20
3 - 30
4 - 40
Cos-queue map:
cos-qid
0 - 1
1 - 1
2 - 1
3 - 2
4 - 2
5 - 4
6 - 3
7 - 3
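The resulting Cos-queue map is just a lookup table applied to each outgoing frame. A Python sketch of the classification step, using the mappings from the example above (function name is illustrative):

```python
# Cos-queue map built by the wrr-queue cos-map commands above:
# CoS 0-2 -> queue 1, CoS 3-4 -> queue 2, CoS 6-7 -> queue 3, CoS 5 -> queue 4.
cos_to_queue = {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 4, 6: 3, 7: 3}

def egress_queue(cos):
    # Select the egress queue for a frame from its calculated CoS value.
    return cos_to_queue[cos]
```

Note that CoS 5 (typically voice) lands in queue 4, which the earlier example turned into the priority queue.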
Note that the CoS value is either based on the original CoS field from the incoming frame (if CoS was trusted) or is calculated using the global DSCP to CoS mapping table (for IP packets).
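Assuming the default (unmodified) DSCP-to-CoS table, each DSCP value maps to its class selector, which is simply the DSCP divided by 8. A quick Python sketch of that default mapping:

```python
def default_dscp_to_cos(dscp):
    # Default DSCP-to-CoS mapping (assumption: table left at its
    # defaults, where DSCP 0-7 -> CoS 0, 8-15 -> CoS 1, ..., 56-63 -> CoS 7).
    return dscp // 8
```

For example, EF traffic (DSCP 46) maps to CoS 5 by default, and thus to queue 4 in the cos-map example above.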
Note that GigabitEthernet ports on the 3550 platform offer more flexible configuration options – you can specify queue depths per interface, configure drop thresholds, map DSCP values to thresholds, and define the drop strategy. However, that topic deserves a separate post.