Nov
14

While delivering what turned out to be a very successful UCS and Nexus 1000V class a week and a half back, a number of you on the east coast of the US were tragically knocked out of class by Hurricane/TS Sandy. While the class had to go on, I received a number of emails the following week expressing disappointment at not being able to ask questions during the live class. So we've decided to run the class again, live, the week after next. For anyone who was not able to attend the first time for this or any other reason, who would like to attend again, or who simply wants the opportunity to purchase it for the first time and still attend a live version of this class, we will offer it again beginning on Nov 26 and running through Nov 30.

For anyone who may not know INE's teaching style very well - let me just tell you that you will not be bored to death-by-powerpoint. We'll present a very few slides with key points to remember, but 90% of the content you will see will be live, hands-on configuration and troubleshooting. We test everything the box(es) have to offer. We'll do static pinning, dynamic pinning, and port channels for both LAN and SAN, and verify and fail(over) everything to show exactly what's going on. We don't just do GUI (that'd be too boring). We verify everything in the NX-OS CLI, as well as on the upstream Nexus 5548UPs and MDS 9222Is. We'll boot from local disk, boot from SAN, and build ESXi 5 live on the blades and pizza-box C200 as well, while talking through a number of the recommendations from both Cisco and VMware on the box. Things like the number of vNICs and vHBAs, when to enable failover or not and on which vNICs - and when you should do these things on ESXi with standard vSwitches versus when you should do them on ESXi running on top of the N1Kv VEM module.

Also, watch this blog as we will soon announce dates for a few new classes I will be holding relating to some very real-world production network training. Things such as UC on UCS (actually building it - not just talking about it), BYOD with Cisco ISE across Wireless, Nexus and Catalyst platforms, and an across-the-board QoS class that shows relevant, side-by-side comparison configurations spanning Catalyst 3550, 3560/3750, 6500 with many various model blades, and Nexus 5500, 7000 and even 1000 platforms. The QoS class is something students have been asking me to do for a while now, and I am quite excited to bring all the hardware together in a single class.

See you in about a week.

Oct
18

One of our most anticipated products of the year - INE's CCIE Service Provider v3.0 Advanced Technologies Class - is now complete!  The videos from class are in the final stages of post production and will be available for streaming and download access later this week.  Download access can be purchased here for $299.  Streaming access is available for All Access Pass subscribers for as low as $65/month!  AAP members can additionally upgrade to the download version for $149.

At roughly 40 hours, the CCIE SPv3 ATC covers the newly released CCIE Service Provider version 3 blueprint, which includes the addition of IOS XR hardware. This class includes both technology lectures and hands on configuration, verification, and troubleshooting on both regular IOS and IOS XR. Class topics include Catalyst ME3400 switching, IS-IS, OSPF, BGP, MPLS Layer 3 VPNs (L3VPN), Inter-AS MPLS L3VPNs, IPv6 over MPLS with 6PE and 6VPE, AToM and VPLS based MPLS Layer 2 VPNs (L2VPN), MPLS Traffic Engineering, Service Provider Multicast, and Service Provider QoS.

Below you can see a sample video from the class, which covers IS-IS Route Leaking and its implementation on IOS XR with the Routing Policy Language (RPL).

May
22

This blog post reviews and compares the two most common types of traffic contracts - single-rate and dual-rate agreements - and their respective implementations using single-rate and dual-rate (two-rate) policing. We will also briefly discuss the effects of packet remarking on end-to-end throughput and finally look at some examples of IOS configuration.

What is a Traffic Contract

A service provider's network topology typically follows the core/aggregation model, where the network core has a meshed topology and the aggregation layers use some variation of a tree topology. This design results in bandwidth aggregation as flows converge toward the core. Therefore, to avoid network resource oversubscription, accurate admission control is necessary at the network edge. The admission operation was trivial with circuit-switched TDM-based networks, but became significantly more complicated in packet-switched networks. In a packet network, there is no such thing as a constant traffic flow rate, as flows only exist "temporarily" while packets are transmitted. In packet networks, it is common for service providers to connect customers using a sub-rate connection. A sub-rate connection provides only a fraction of the maximum possible link bandwidth, e.g. 1Mbps on a 100Mbps connection.

Implementing sub-rate access requires a special agreement between the service provider and the customer - a specification known as a "traffic contract". Traffic contracts are enforced at the customer and SP sides using traffic shaping and policing, respectively. Traffic contracts may vary and include multiple QoS parameters, but there are two most common types that we are going to look at today: single-rate and dual-rate traffic contracts.

Single Rate Traffic Contract

A single-rate traffic contract is normally defined for a sub-rate connection over a physical link with a maximum transmission rate of AR (Access Rate). There are three main parameters associated with this type of contract:

  • Committed Information Rate (CIR). Defines the average rate at which a customer is allowed to send traffic into the network. Notice the term "average": packets are still sent at the line rate (AR), and the information rate is defined by averaging the measurement over some time interval. Per the definition, CIR is assumed to be less than AR.
  • Committed Burst Size (Bc). This value defines the maximum amount of contiguous data that a customer is allowed to send in a single "batch". Packet traffic is commonly bursty; e.g. a TCP connection is normally "clocked" by the rate of incoming TCP ACKs, and the burst size is typically two segments, though the sender may send up to a full TCP window when possible. After a customer has sent a contiguous block of Bc bytes, it must pause for some time before sending the next batch. This is normally implemented using traffic shaping on the customer side.
  • Excessive Burst Size (Be). This is a non-mandatory parameter that could be used to improve admission control "fairness". Let's say a customer connection has been idle for some time, and then the customer gets new traffic to send. Regardless of the previous idle interval, only Bc bytes could be sent in a single burst, after which the customer needs to wait and accumulate more sending credit. By allowing the customer to accumulate up to Be extra bytes during longer-than-normal wait intervals, it is possible to compensate for idle times and momentarily send Bc+Be bytes. Notice that the longer-term average rate still remains equal to CIR; the excessive bursting mechanism only allows for occasional bursting.

Pay attention to the implicit value Tc=Bc/CIR - this time interval is known as the metering "averaging interval". It defines the time window over which bytes in the traffic flow are counted for the purpose of finding the average rate. For more information about the use of Tc with traffic policing, see the following publication: The meaning of Bc with Traffic Policing. Normally, either the Tc or the Bc value is explicitly defined in the contract, and for IP networks this value should be large enough to allow TCP to work efficiently end-to-end between customer locations. Therefore, the lowest value of Tc (or Bc) is effectively based on the SP SLAs and the RTT times from one customer site to another. At the very least, the equation Bc >= CIR*RTT should hold, where RTT is the maximum round-trip time site-to-site, per the SLA. At this point, it is worth remembering that QoS tools are used to control connection quality end-to-end between two different sites connected to the same SP, or to different SPs that share some sort of QoS agreement. It is important to point out that in order to guarantee any QoS, the complete network should be under the control of a service provider.
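
To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up example numbers) of the Tc=Bc/CIR relationship and the Bc >= CIR*RTT lower bound discussed above:

```python
# Illustrative arithmetic only; the CIR, Bc and RTT values are hypothetical examples.

def averaging_interval(bc_bits: float, cir_bps: float) -> float:
    """Tc = Bc / CIR: the metering averaging interval, in seconds."""
    return bc_bits / cir_bps

def min_bc(cir_bps: float, rtt_s: float) -> float:
    """Lower bound Bc >= CIR * RTT for efficient end-to-end TCP."""
    return cir_bps * rtt_s

cir = 256_000                         # 256 Kbps contracted rate
bc = 25_600                           # 25600-bit committed burst
tc = averaging_interval(bc, cir)      # 0.1 s, i.e. a 100 ms metering window

rtt = 0.08                            # assume an 80 ms site-to-site RTT per the SLA
ok = bc >= min_bc(cir, rtt)           # 25600 >= 20480: the burst size is large enough
```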

What about the Be value? If it is used in the contract, it defines the amount of "unused time intervals" that could be "reclaimed" by the sender. If the customer is allowed to reclaim N*Tc intervals, then Be could be found simply as N*Bc. Most commonly, if Be is used, N equals 1, which means the customer may reclaim a single "wasted" Tc interval. The more you grow Be, the fairer the bandwidth utilization will look to a customer, but sporadic peak rates will tend to exceed CIR more than normal. There is no "best" value for Be; it depends on the type of traffic.

Implementation-wise, single-rate traffic policing is implemented by tracking the current burst size using token-bucket mechanics and discarding packets that exceed CIR. The so-called Single-Rate Three-Color Marker (srTCM, defined in RFC 2697) is the name for the ingress tool used to implement admission control at the network edge. The "three color" term means that any incoming burst could be classified as either conforming (green, under Bc), exceeding (yellow, over Bc but under Be) or violating (red, over Be). Depending on the implementation, exceeding packets could be admitted but have their QoS marking changed to show higher drop precedence in the network core. Here is how the srTCM implementation looks on a diagram:

Pay attention to the fact that there is a single flow of tokens that fills the Bc bucket (CBS, committed burst size) first and then continues filling the Be bucket (EBS, excess burst size). The second bucket is only filled if there was enough idle time to let the first bucket fill up completely. Every arriving packet is first compared to the CBS and then to the EBS to determine the next action. The "Ti" interval is the special periodic timer used to add tokens to the token buckets. Ideally, Ti should be 1/CIR (one token added every 1/CIR seconds), but this is normally not possible due to the limited resolution of the hardware clock, so a small enough value of Ti is chosen.
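
The single-token-flow behavior described above can be sketched in a few lines of Python. This is a simplified model in the spirit of the srTCM (RFC 2697), not an exact reproduction of any vendor implementation; the class name and numbers are illustrative:

```python
class SrTCM:
    """Simplified single-rate three-color marker (after RFC 2697).

    Tokens arrive at the CIR rate and fill the committed bucket (CBS)
    first; only the overflow spills into the excess bucket (EBS).
    """

    def __init__(self, cir_bps, cbs_bytes, ebs_bytes):
        self.cir = cir_bps / 8.0                  # token rate in bytes/sec
        self.cbs, self.ebs = cbs_bytes, ebs_bytes
        self.tc, self.te = cbs_bytes, ebs_bytes   # both buckets start full
        self.last = 0.0

    def color(self, now, pkt_bytes):
        # Refill: committed bucket first, overflow into the excess bucket.
        tokens = (now - self.last) * self.cir
        self.last = now
        room = self.cbs - self.tc
        self.tc = min(self.cbs, self.tc + tokens)
        self.te = min(self.ebs, self.te + max(0.0, tokens - room))
        if pkt_bytes <= self.tc:
            self.tc -= pkt_bytes
            return "green"        # conforming: under Bc
        if pkt_bytes <= self.te:
            self.te -= pkt_bytes
            return "yellow"       # exceeding: over Bc but under Be
        return "red"              # violating: over Be
```

Feeding three back-to-back 3200-byte bursts into a meter built with the later example's values (CIR=256000, Bc=3200 bytes, Be=6400 bytes) yields green, then yellow, then red, matching the three colors described above.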

Dual Rate Traffic Contract

The drawback of single-rate traffic contracts is that the SP should be cautious assigning CIR bandwidth, and may effectively "undersell" itself by offering less bandwidth than it can actually service at any given moment of time. The reason for this is that not all customers send traffic simultaneously, so network links may effectively become underutilized even at the "weak spots". This brings the idea of a dual-rate traffic contract: supply the customer with two sending rates, but only guarantee the smaller one. In case of congestion in the network, discard traffic that exceeds the committed rate more aggressively and signal the customer to slow down to the committed rate. This principle was first widely implemented in Frame-Relay networks, but could easily be replicated using any packet-switching technology. There are four main parameters in a dual-rate traffic contract.

  • Committed Information Rate (CIR). Same meaning as with a single-rate contract.
  • Committed Burst Size (Bc). Same meaning as with a single-rate contract, and once again, Tc - the averaging interval - is implicitly defined as Tc=Bc/CIR.
  • Peak Information Rate (PIR). An additional parameter that defines the maximum average sending rate for the customer. Traffic bursts that exceed CIR but remain under PIR are allowed into the network, but marked for more aggressive discarding. The marking depends on the transport technology, e.g. it could be DSCP bits, the ATM CLP bit or the Frame-Relay DE bit.
  • Excessive Burst Size (Be). This value has a different meaning compared to a single-rate contract. Be is the maximum size of the packet burst that could be accepted to sustain the PIR rate. Effectively, Be defines the second averaging interval, Te=Be/PIR, used for PIR metering. Keep in mind that, just like in any packet network, packets are sent at the AR, the actual physical rate - CIR and PIR are just average values.

Compared to a single-rate traffic contract, a dual-rate contract has two major differences. Firstly, incoming traffic bursts are metered and compared to the CIR and PIR rates simultaneously, using the corresponding Bc and Be burst sizes. Depending on the comparison results, different actions could be taken on the packets. Normally, if a burst is under CIR, it is admitted into the network without any modifications. If the burst exceeds CIR but remains under PIR (i.e. the current burst is still under Be), the burst has its marking changed but is still admitted into the network. Finally, if the burst exceeds PIR, it is typically discarded. Dual-rate contracts are normally implemented using some sort of Two-Rate Three-Color Marker (trTCM), which compares incoming bursts to Bc and Be and decides on the color to be assigned: conforming (green, under Bc), exceeding (yellow, under Be) or violating (red, over Be). Traffic bursts that exceed Bc will have their marking changed to signal higher drop precedence. The values for Bc and Be should be selected to be no less than RTT*CIR and RTT*PIR respectively, to allow for efficient TCP performance end-to-end. Here is how a two-rate three-color marker implementation would look using a token-bucket model:

As compared to the single-rate model, this one uses two separate flows of tokens filling the CBS (Bc) and EBS (Be) buckets. Overflowing tokens are simply spilled and not stored anywhere; there is no "fairness" mechanism in trTCM. Every incoming packet is compared to the amount of tokens in the CBS and EBS buckets, but this time the result is a comparison of the traffic flow against two separate pre-defined rates.
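
The two independent token flows can be modeled the same way. Again, this is a simplified Python sketch in the spirit of the trTCM (RFC 2698); the class name and numbers are illustrative:

```python
class TrTCM:
    """Simplified two-rate three-color marker (after RFC 2698).

    Two independent token flows: CIR fills the committed bucket (Bc)
    and PIR fills the peak bucket (Be). Overflowing tokens are spilled.
    """

    def __init__(self, cir_bps, bc_bytes, pir_bps, be_bytes):
        self.cir, self.pir = cir_bps / 8.0, pir_bps / 8.0  # bytes/sec
        self.bc, self.be = bc_bytes, be_bytes
        self.tc, self.tp = bc_bytes, be_bytes              # buckets start full
        self.last = 0.0

    def color(self, now, pkt_bytes):
        # Refill both buckets independently; excess tokens are discarded.
        dt = now - self.last
        self.last = now
        self.tc = min(self.bc, self.tc + dt * self.cir)
        self.tp = min(self.be, self.tp + dt * self.pir)
        if pkt_bytes > self.tp:
            return "red"          # violating: over PIR
        if pkt_bytes > self.tc:
            self.tp -= pkt_bytes
            return "yellow"       # exceeding: over CIR but under PIR
        self.tc -= pkt_bytes
        self.tp -= pkt_bytes
        return "green"            # conforming: under CIR
```

Note the structural difference from the single-rate sketch: the packet is checked against the peak bucket first, and a green packet consumes tokens from both buckets.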

Handling Exceeding Packets in SP Network

Dual-rate contracts result in two interesting problems. Firstly, exceeding packets should be handled differently in the SP network. Secondly, congestion needs to be signaled to the customer reaction point. Let's start with the packet bursts that were marked as exceeding on reception from the customer. Under congestion, those packets should have a higher chance of being dropped compared to "conforming" packets. This behavior could be implemented in two different ways:

  • Assigning the packets to a separate queue, e.g. a best-effort queue. While this looks logical, it may result in packet reordering. Imagine a TCP flow going between customer sites, consuming the maximum allowed bandwidth (PIR). Some of the packet bursts in this connection may actually conform to CIR, while others may be marked as exceeding. As a result, flow packets may be reordered even under moderate congestion in the network. This will affect TCP performance, as packet reordering may trigger the TCP Congestion Avoidance process, resulting in a lower-than-possible TCP sending rate. The problem is that TCP cannot reliably tell whether out-of-order delivery is the result of packet drops or network queueing.
  • Assigning the packets to a lower drop threshold in the same queue as the conforming traffic. This could be implemented in many different ways, e.g. using different WRED thresholds for different DSCP values or having a different drop limit for DE-marked Frame-Relay packets. Using this method reduces the chance of packet reordering but may increase end-to-end delay for all traffic. Due to the serious impact of packet reordering on the TCP congestion avoidance mechanism, this method is the recommended treatment.

Signaling network congestion might be difficult in some networks. Not every packet-switching technology supports this feature, and many of them support different signaling. In the simplest case, there is no explicit congestion signaling, as in IP networks (ignoring the obsolete ICMP source quench message). In this case, an upper-level protocol is supposed to recognize quality degradation and respond by slowing its sending rate. This is done automatically by the commonly used TCP protocol. As another example, consider the RTP and RTCP protocols, where RTCP is used to control call quality and may change codecs in response to network condition degradation. Consider Frame-Relay next - congestion may be signaled using the BECN bit to tell the traffic source to slow down its sending rate. Notice that in Frame-Relay there are no further hints about congestion, e.g. no indication of how far the sender should slow down, so the reaction point may only implement a pre-programmed response. There are numerous other implementations of explicit congestion notification, such as the ones used in ATM or Data-Center Bridging, or the better-known TCP ECN. Those, however, are out of the scope of this blog post. We will mainly consider Frame-Relay BECNs and the built-in TCP congestion response.

What is the difference between Tc in traffic shaping and policing

The value of Tc is often used in traffic shaping calculations. However, the meaning and use of the shaping time interval is different from the policing Tc. When traffic is metered, Tc defines the length of the sliding window over the time axis that is effectively used to find the average traffic rate. With traffic shaping, Tc defines the periodic scheduling interval used when emitting traffic bursts by the shaper's leaky-bucket algorithm. When you match shaping settings against the ingress policing, you need to make sure that the shaping Tc does not exceed the configured policing Tc - otherwise, the shaper may produce bursts that are always rejected by the policer. The same logic applies to the Be burst values used in traffic shaping and policing. There is a substantial difference in the Cisco IOS implementation. For traffic shaping, sBc+sBe is the maximum amount of bits you can send during a single interval. This cumulative burst will be compared independently to either the policing pBc value or the policing pBe value when using ingress MQC policing. Therefore, you need to make sure that pBe>=sBc+sBe, or the excessive burst may be rejected by the policer. Finally, notice that if the shaping Be is set to a value above (AR-CIR)*Tc, it will take more than a single Tc interval to schedule sending of the excessive traffic. Effectively, during a single Tc interval a shaper cannot send more than AR*Tc bits, which means the maximum excessive burst value is (AR-CIR)*Tc during a single Tc. Setting Be above this value will result in excessive bursting split across multiple scheduling intervals.
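
Using the T1/256Kbps numbers from the example later in this post, the two constraints above (pBe >= sBc+sBe, and shaping Be no larger than (AR-CIR)*Tc) can be sanity-checked with simple arithmetic. A sketch, not a configuration tool:

```python
# Sanity-check shaper vs. policer parameters; all numbers are example values.
ar = 1_544_000            # T1 access rate, bps
cir = 256_000             # contracted CIR, bps
s_bc = s_be = 25_600      # shaper Bc and Be, in bits
p_be_bytes = 6_400        # policer Be, in bytes

tc = s_bc / cir                          # shaper scheduling interval: 0.1 s
assert p_be_bytes * 8 >= s_bc + s_be     # policer admits a full sBc+sBe burst
assert s_be <= (ar - cir) * tc           # excess burst fits in one Tc interval
```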

Implementing Single Rate Traffic Contracts in Cisco IOS

We'll be using Frame-Relay as the SP access technology for our examples. Let's assume that we have a traffic contract with CIR=256Kbps and a normal burst size of Bc=25600 bits. The contract should be implemented on a T1 connection with a bit rate of 1544000 bps. These values translate into a policing averaging interval of Tc=25600/256000=100ms (1/10 of a second). The contract needs to be enforced on the customer side using traffic shaping with a Tc value <= 100ms to be admitted by the SP. Let's also assume that the SP agrees to allow excessive bursting to compensate for a single idle Tc interval. The final values will look as follows (notice that policing values are given in bytes, to match Cisco IOS syntax).

Shaping:

CIR=256000 bps
Bc=25600 bits
Be=Bc=25600 bits.

Policing:

CIR=256000 bps
Bc=3200 bytes
Be=6400 bytes (to allow admission of shaping Bc+Be)

Look at how this policy could be implemented on customer side using legacy Frame-Relay Traffic Shaping:

map-class frame-relay SHAPE_256K
frame-relay cir 256000
frame-relay bc 25600
frame-relay be 25600
frame-relay mincir 256000
!
interface Serial 0/0/0
frame-relay traffic-shaping
!
interface Serial 0/0/0.1
frame-relay interface-dlci 101
class SHAPE_256K

Notice that we set the MinCIR value in the FRTS map-class to the same value as CIR to ensure that a potential QoS policy would use the proper absolute bandwidth values. Here is how an ingress SP policy would look if MQC traffic policing is used:

class-map DLCI_101
match fr-dlci 101
!
policy-map POLICE_INTERFACE
class DLCI_101
police cir 256000 bc 3200 be 6400
conform-action transmit
exceed-action set-frde-transmit
violate-action drop
!
interface Serial 0/0/0
service-policy input POLICE_INTERFACE

Notice the use of MQC syntax and a class-map matching the FR DLCI. Cisco IOS supports a feature known as Frame-Relay Traffic Policing (FRTP) that could be used to implement the same functions using the "legacy" map-class syntax, but the use of MQC is more common nowadays. How would the shaping implementation look if we were using MQC for traffic shaping?

policy-map SHAPE_256K
class class-default
shape average 256000 25600 25600
!
map-class frame-relay SHAPE_256K
service-policy output SHAPE_256K
!
interface Serial 0/0/0.1
frame-relay interface-dlci 101
class SHAPE_256K

If you are looking for more information on FRTS flavors, take a look at the following blog post: The four flavors of Frame-Relay Traffic Shaping.

Implementing Dual-Rate Traffic Contracts in Cisco IOS

Let's take the traffic contract from the previous example with CIR=256Kbps, AR=1544Kbps and a normal burst size of 25600 bits. Next, add PIR=512Kbps to these values along with Be=51200 bits. Here is a quick list of the shaping/policing values:

Shaping:

minCIR=256000 bps
CIR=512000 bps
Bc=51200 bits
Be=0 bits.

Policing:

CIR=256000 bps
PIR=512000 bps
Bc=3200 bytes
Be=6400 bytes
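
The bits-versus-bytes correspondence between these shaping and policing values can be double-checked with a line or two of arithmetic (a sketch using the example numbers above):

```python
# Cross-check the dual-rate example values; shaping is in bits, policing in bytes.
shape_bc_bits = 51_200       # shaper Bc
police_be_bytes = 6_400      # policer Be
pir = 512_000                # peak rate, bps

assert shape_bc_bits // 8 == police_be_bytes   # shaping Bc matches policing Be
te = police_be_bytes * 8 / pir                 # Te = Be/PIR: the PIR averaging interval
```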

Pay special attention to the shaping parameters. First of all, Be=0, which means sporadic sending of excessive traffic bursts is disabled. Secondly, the CIR is set to 512Kbps - in other words, to the SP's peak rate. This means the customer is allowed to send at the rate of 512Kbps at any time. The minCIR is set to 256Kbps, meaning the customer will throttle down to the contracted CIR upon reception of BECNs (if configured). The Bc size corresponds to the policing Be size - in case of congestion, the shaper will automatically shrink the burst size down to the value matching the CIR. Here is how a legacy FRTS configuration would look on the customer side:

map-class frame-relay SHAPE_256K
frame-relay cir 512000
frame-relay bc 51200
frame-relay be 0
frame-relay mincir 256000
frame-relay adaptive-shaping becn
!
interface Serial 0/0/0
frame-relay traffic-shaping
!
interface Serial 0/0/0.1
frame-relay interface-dlci 101
class SHAPE_256K

Once again, notice that the shaping CIR equals the SP's PIR and the shaping minCIR corresponds to the actual SP CIR. Also notice that adaptive shaping is now enabled under the map-class to allow a dynamic response to BECN messages. The SP-side configuration would look as follows, using MQC syntax:

class-map DLCI_101
match fr-dlci 101
!
policy-map POLICE_INTERFACE
class DLCI_101
police cir 256000 bc 3200 pir 512000 be 6400
conform-action transmit
exceed-action set-frde-transmit
violate-action drop
!
interface Serial 0/0/0
service-policy input POLICE_INTERFACE

This looks very similar to the single-rate example, only now the PIR is explicitly configured. Finally, let's see how the shaping configuration would look when using MQC syntax:

policy-map SHAPE_256K
class class-default
shape average 512000 51200 0
shape adaptive 256000
!
map-class frame-relay SHAPE_256K
service-policy output SHAPE_256K
!
interface Serial 0/0/0.1
frame-relay interface-dlci 101
class SHAPE_256K

Notice the use of adaptive shaping in MQC syntax. This command will only work if you apply shaping using the map-class model, and won't work if you simply apply the policy to the interface. It is also possible to rewrite the shaping configuration using the "shape peak" command (see more about this command in the blog post titled Understanding the "shape peak" command):

policy-map SHAPE_256K
class class-default
shape peak 256000 25600 25600
shape adaptive 256000
!
map-class frame-relay SHAPE_256K
service-policy output SHAPE_256K
!
interface Serial 0/0/0.1
frame-relay interface-dlci 101
class SHAPE_256K

Using this syntax allows for clearly showing the Bc and Be portions of the traffic contract, even though the result will be the same as if using the "shape average" command.

Summary

This blog post illustrated the two most commonly used types of traffic contracts in their basic forms and explained the concepts of the various bursts used in the specification. It also provided examples of enforcing the traffic contracts on both the customer and SP sides.

Nov
29

In this first of a series of blog posts regarding Catalyst QoS, we will examine the AutoQoS capabilities of the 3560 Catalyst devices. AutoQoS allows for the automation of QoS settings on the switch with an absolute minimum of configuration required from the engineer. In particular, the 3560 AutoQoS feature automates the classification and congestion management configurations required in VoIP environments. You should note that 3560 AutoQoS has much “catching up” to do when you compare this feature to AutoQoS for VoIP and AutoQoS for Enterprise, which are both now possible in the pure router class of Cisco devices.

First, the easy part. The interface configuration command required for QoS is simply:

auto qos voip [cisco-phone | cisco-softphone | trust]

Notice the auto qos voip command is used in conjunction with keywords that specify what devices to “trust” when it comes to these important VoIP packets. The cisco-phone keyword instructs the AutoQoS feature to only trust and act upon the incoming voice packets if they are truly sent from a Cisco IP Phone. The phone’s presence is detected thanks to CDP. Similarly, the cisco-softphone keyword instructs the device to only trust and act upon the voice packets if they are sent from a Cisco phone running in software on a PC. Finally, the trust keyword instructs the device to trust markings for VoIP packets that are coming from another switch or router over the port.

In order to view the global configuration commands and interface configuration commands your auto qos voip feature will actually create, you can enable debug auto qos on the device before you use the interface configuration command.

Should you need to reverse the effect that AutoQoS has on your device, use no auto qos voip under the interface to remove the interface-level commands. For the global configuration commands, use no mls qos to render them inert. Notice that you will have to manually remove the actual commands, however.

As a CCIE (or someone real close), we should be very aware of the guidelines for usage of this Cat QoS feature:

  • The feature works on switch or router ports.
  • Be sure to use AutoQoS for VoIP before making changes to the QoS configuration of the device. Should you need to make modifications, it is preferred to then modify the AutoQoS-generated configurations as appropriate.
  • To modify an AutoQoS generated policy-map or aggregate policer, copy the configuration to notepad, make the necessary changes, then remove the previous configuration from your device. Finally, paste in your new configuration.
  • CDP is required on the port for the Cisco IP Phone and Cisco SoftPhone keywords to work properly.

Well, if all of this sounds a bit “too good to be true”, let us make sure it works by turning to the command line of a 3560 Catalyst switch. As you can tell from the hostname of this device, it has been reset to factory defaults and possesses no user-based configuration whatsoever.

Switch#debug auto qos
Auto QoS debugging is on
Switch#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#interface fa0/10
Switch(config-if)#auto qos voip trust
Switch(config-if)#
*Mar  1 11:30:48.203: mls qos map cos-dscp 0 8 16 24 32 46 48 56
*Mar  1 11:30:48.220: mls qos
*Mar  1 11:30:48.220: no mls qos srr-queue input cos-map
*Mar  1 11:30:48.236: no mls qos srr-queue output cos-map
*Mar  1 11:30:48.245: mls qos srr-queue input cos-map queue 1 threshold 3  0
*Mar  1 11:30:48.253: mls qos srr-queue input cos-map queue 1 threshold 2  1
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 1  2
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 2  4 6 7
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 3  3 5
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 1 threshold 3  5
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 2 threshold 3  3 6 7
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 3 threshold 3  2 4
*Mar  1 11:30:48.278: mls qos srr-queue output cos-map queue 4 threshold 2  1
*Mar  1 11:30:48.278: mls qos srr-queue output cos-map queue 4 threshold 3  0
*Mar  1 11:30:48.278: no mls qos srr-queue input dscp-map
*Mar  1 11:30:48.295: no mls qos srr-queue output dscp-map
*Mar  1 11:30:48.312: mls qos srr-queue input dscp-map queue 1 threshold 2  9 10 11 12 13 14 15
*Mar  1 11:30:48.320: mls qos srr-queue input dscp-map queue 1 threshold 3  0 1 2 3 4 5 6 7
*Mar  1 11:30:48.320: mls qos srr-queue input dscp-map queue 1 threshold 3  32
*Mar  1 11:30:48.329: mls qos srr-queue input dscp-map queue 2 threshold 1  16 17 18 19 20 21 22 23
*Mar  1 11:30:48.329: mls qos srr-queue input dscp-map queue 2 threshold 2  33 34 35 36 37 38 39
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 2  48 49 50 51 52 53 54 55
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 2  56 57 58 59 60 61 62 63
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 3  24 25 26 27 28 29 30 31
*Mar  1 11:30:48.345: mls qos srr-queue input dscp-map queue 2 threshold 3  40 41 42 43 44 45 46 47
*Mar  1 11:30:48.345: mls qos srr-queue output dscp-map queue 1 threshold 3  40 41 42 43 44 45 46 47
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  24 25 26 27 28 29 30 31
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  48 49 50 51 52 53 54 55
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  56 57 58 59 60 61 62 63
*Mar  1 11:30:48.362: mls qos srr-queue output dscp-map queue 3 threshold 3  16 17 18 19 20 21 22 23
*Mar  1 11:30:48.362: mls qos srr-queue output dscp-map queue 3 threshold 3  32 33 34 35 36 37 38 39
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 1  8
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 2  9 10 11 12 13 14 15
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 3  0 1 2 3 4 5 6 7
*Mar  1 11:30:48.379: no mls qos srr-queue input priority-queue 1
*Mar  1 11:30:48.396: no mls qos srr-queue input priority-queue 2
*Mar  1 11:30:48.396: mls qos srr-queue input bandwidth 90 10
*Mar  1 11:30:48.404: mls qos srr-queue input threshold 1 8 16
*Mar  1 11:30:48.404: mls qos srr-queue input threshold 2 34 66
*Mar  1 11:30:48.404: mls qos srr-queue input buffers 67 33
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 1 138 138 92 138
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 2 138 138 92 400
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 3 36 77 100 318
*Mar  1 11:30:48.421: mls qos queue-set output 1 threshold 4 20 50 67 400
*Mar  1 11:30:48.421: mls qos queue-set output 1 buffers 10 10 26 54
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 1 149 149 100 149
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 2 118 118 100 235
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 3 41 68 100 272
*Mar  1 11:30:48.438: mls qos queue-set output 2 threshold 4 42 72 100 242
*Mar  1 11:30:48.438: mls qos queue-set output 2 buffers 16 6 17 61
*Mar  1 11:30:48.480:   mls qos trust cos
*Mar  1 11:30:48.496:  no queue-set 1
*Mar  1 11:30:48.505:  priority-queue out
*Mar  1 11:30:48.505:  srr-queue bandwidth share 10 10 60 20
Switch(config-if)#

"Holy automation Batman!" Yes, AutoQoS on the Catalyst just got real busy for us ensuring that your fragile voice packets receive the proper classification and queuing treatment. In fact, this article certainly serves as a nice preview of blog posts to come...

Switch#debug auto qos
Auto QoS debugging is on
Switch#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#interface fa0/10
Switch(config-if)#auto qos voip trust
Switch(config-if)#
*Mar  1 11:30:48.203: mls qos map cos-dscp 0 8 16 24 32 46 48 56
*Mar  1 11:30:48.220: mls qos
*Mar  1 11:30:48.220: no mls qos srr-queue input cos-map
*Mar  1 11:30:48.236: no mls qos srr-queue output cos-map
*Mar  1 11:30:48.245: mls qos srr-queue input cos-map queue 1 threshold 3  0
*Mar  1 11:30:48.253: mls qos srr-queue input cos-map queue 1 threshold 2  1
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 1  2
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 2  4 6 7
*Mar  1 11:30:48.261: mls qos srr-queue input cos-map queue 2 threshold 3  3 5
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 1 threshold 3  5
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 2 threshold 3  3 6 7
*Mar  1 11:30:48.270: mls qos srr-queue output cos-map queue 3 threshold 3  2 4
*Mar  1 11:30:48.278: mls qos srr-queue output cos-map queue 4 threshold 2  1
*Mar  1 11:30:48.278: mls qos srr-queue output cos-map queue 4 threshold 3  0
*Mar  1 11:30:48.278: no mls qos srr-queue input dscp-map
*Mar  1 11:30:48.295: no mls qos srr-queue output dscp-map
*Mar  1 11:30:48.312: mls qos srr-queue input dscp-map queue 1 threshold 2  9 10 11 12 13 14 15
*Mar  1 11:30:48.320: mls qos srr-queue input dscp-map queue 1 threshold 3  0 1 2 3 4 5 6 7
*Mar  1 11:30:48.320: mls qos srr-queue input dscp-map queue 1 threshold 3  32
*Mar  1 11:30:48.329: mls qos srr-queue input dscp-map queue 2 threshold 1  16 17 18 19 20 21 22 23
*Mar  1 11:30:48.329: mls qos srr-queue input dscp-map queue 2 threshold 2  33 34 35 36 37 38 39
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 2  48 49 50 51 52 53 54 55
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 2  56 57 58 59 60 61 62 63
*Mar  1 11:30:48.337: mls qos srr-queue input dscp-map queue 2 threshold 3  24 25 26 27 28 29 30 31
*Mar  1 11:30:48.345: mls qos srr-queue input dscp-map queue 2 threshold 3  40 41 42 43 44 45 46 47
*Mar  1 11:30:48.345: mls qos srr-queue output dscp-map queue 1 threshold 3  40 41 42 43 44 45 46 47
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  24 25 26 27 28 29 30 31
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  48 49 50 51 52 53 54 55
*Mar  1 11:30:48.354: mls qos srr-queue output dscp-map queue 2 threshold 3  56 57 58 59 60 61 62 63
*Mar  1 11:30:48.362: mls qos srr-queue output dscp-map queue 3 threshold 3  16 17 18 19 20 21 22 23
*Mar  1 11:30:48.362: mls qos srr-queue output dscp-map queue 3 threshold 3  32 33 34 35 36 37 38 39
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 1  8
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 2  9 10 11 12 13 14 15
*Mar  1 11:30:48.371: mls qos srr-queue output dscp-map queue 4 threshold 3  0 1 2 3 4 5 6 7
*Mar  1 11:30:48.379: no mls qos srr-queue input priority-queue 1
*Mar  1 11:30:48.396: no mls qos srr-queue input priority-queue 2
*Mar  1 11:30:48.396: mls qos srr-queue input bandwidth 90 10
*Mar  1 11:30:48.404: mls qos srr-queue input threshold 1 8 16
*Mar  1 11:30:48.404: mls qos srr-queue input threshold 2 34 66
*Mar  1 11:30:48.404: mls qos srr-queue input buffers 67 33
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 1 138 138 92 138
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 2 138 138 92 400
*Mar  1 11:30:48.412: mls qos queue-set output 1 threshold 3 36 77 100 318
*Mar  1 11:30:48.421: mls qos queue-set output 1 threshold 4 20 50 67 400
*Mar  1 11:30:48.421: mls qos queue-set output 1 buffers 10 10 26 54
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 1 149 149 100 149
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 2 118 118 100 235
*Mar  1 11:30:48.429: mls qos queue-set output 2 threshold 3 41 68 100 272
*Mar  1 11:30:48.438: mls qos queue-set output 2 threshold 4 42 72 100 242
*Mar  1 11:30:48.438: mls qos queue-set output 2 buffers 16 6 17 61
*Mar  1 11:30:48.480:   mls qos trust cos
*Mar  1 11:30:48.496:  no queue-set 1
*Mar  1 11:30:48.505:  priority-queue out
*Mar  1 11:30:48.505:  srr-queue bandwidth share 10 10 60 20
Switch(config-if)#
Nov
08

Abstract

This publication discusses the spectrum of problems associated with transporting Constant Bit Rate (CBR) circuits over packet networks, focusing specifically on VoIP services. It provides guidance on practical calculations for voice bandwidth allocation in IP networks, including the maximum bandwidth proportion to allocate and LLQ queue settings. Lastly, the publication discusses the benefits and drawbacks of transporting CBR flows over packet-switched networks and demonstrates some effectiveness criteria.

Introduction

Historically, the main design goal of Packet Switched Networks (PSNs) was optimum bandwidth utilization for low-speed links. Compared to their counterparts, circuit-switched networks (CSNs, such as SONET/SDH networks), PSNs use statistical as opposed to deterministic (synchronous) multiplexing. This makes PSNs very effective for bursty traffic sources, i.e. those that send traffic sporadically. Indeed, with many such sources the transmission channel can be optimally utilized by sending traffic only when necessary. Statistical multiplexing is only possible if every node in the network implements packet queueing, because PSNs introduce link contention. One good historical example is ARPANET: the network's theoretical foundation was developed in Kleinrock's work on distributed queueing systems (see [1]).

In PSNs, it is common for traffic from multiple sources to be scheduled for sending out the same link at the same moment. In such cases of contention for the shared resource, excess packets are buffered, delayed and possibly dropped. In addition, packets could be re-ordered, i.e. packets sent earlier may arrive behind packets sent after them. The latter is normally a result of packets taking different paths through the PSN due to routing decisions. Such behavior is acceptable for bursty, delay-insensitive data traffic, but completely inconsistent with the behavior of constant bit rate (CBR), delay/jitter-sensitive traffic sources, such as emulated TDM traffic. Indeed, transporting CBR flows over PSNs poses significant challenges. Firstly, emulating a circuit service requires that every node not buffer the CBR packets (i.e. not introduce delay or packet drops) and be "flow-aware" to avoid re-ordering. The other challenge is the "packet overhead" tax imposed on emulated CBR circuits. By definition, CBR sources produce relatively small bursts of data at regular periodic intervals; the more frequent the intervals, the smaller the bursts typically are. In turn, PSNs apply a header to every transmitted burst of information to implement network addressing and routing, with the header size often comparable to the CBR payload. This significantly decreases link utilization efficiency when transporting CBR traffic.

Emulating CBR services over PSN

At first, it may seem that changing the queuing discipline in every node will resolve the buffering problem. Obviously, if we distinguish CBR flow packets and service them ahead of all other packets using a priority queue, then they would never get buffered. This assumes that the link speed is fast enough that serialization delay is negligible in the context of the given CBR flow. Such delay may vary depending on the CBR source: for example, voice flows typically produce one codec sample every 10ms, and based on this, serialization delay at every node should not exceed 10ms, or preferably be less than that (otherwise, the next produced packet will "catch up" with the previous one). The serialization problem on slow links could be solved using fragmentation and interleaving mechanics, e.g. as demonstrated in [6]. Despite priority queueing and fragmentation, the situation becomes more complicated with multiple CBR flows transported over the same PSN. The reason is that there is now contention among the CBR flows themselves, since all of them should be serviced on a priority basis. This creates queueing issues and intolerable delays. There is only one answer to reduce resource contention in PSNs - over-provisioning.
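To illustrate the 10ms serialization budget (the link speed and packet sizes below are our own illustrative numbers, not from the article), a quick sketch shows why slow links need fragmentation and interleaving:

```python
# Illustrative sketch: serialization delay on a slow link, and the largest
# fragment that still fits within a 10 ms voice delay budget.

def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

def max_fragment_bytes(link_bps, budget_ms):
    """Largest fragment that still serializes within the delay budget."""
    return int(link_bps * budget_ms / 1000 / 8)

# A 1500-byte data packet on a 128 kbps link occupies the wire for ~94 ms,
# far beyond a 10 ms budget - hence fragmentation and interleaving:
print(serialization_delay_ms(1500, 128_000))  # 93.75
print(max_fragment_bytes(128_000, 10))        # 160
```

So on this hypothetical 128 kbps link, data packets would have to be fragmented to roughly 160 bytes for voice packets to interleave within the budget.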

Following the work in [2], let's review how the minimum over-provisioning rate could be calculated. We first define CBR flows as those that cannot tolerate a single delay of their packets in the link queue. Assume there are r equally behaving traffic flows contending for the same link, and pick a designated flow out of the set. When we randomly "look" at the link, the probability that we see a packet from the designated flow is 1/r, since we assume that all flows are serviced equally by the connected PSN node. The probability that a selected packet does NOT belong to our flow is then 1-1/r. If the link can accept at most t packets per millisecond, then during an interval of k milliseconds the probability that our designated flow may send a packet over the link without blocking is P=1-(1-1/r)^tk, where (1-1/r)^tk is the probability of NOT seeing our flow's designated packet among the tk packets. The value P is the probability of any given packet NOT being delayed due to contention. It is important to understand that every delayed packet causes the flow's behavior to deviate from CBR. Following [2], we define u=tk as the "over-provisioning ratio", where u=1 when the channel can send only one flow packet during the time unit it takes the flow to generate that packet, i.e. when channel rate = flow rate. When u=2 the link is capable of sending twice as many packets during a unit of time as a single flow sends during the same interval. With the new variable, the formula becomes P=1-(1-1/r)^u. Fixing the value of P in this equation we obtain:

u=ln(1-P)/ln(1-1/r). (*)

which is the minimum over-provisioning ratio needed to achieve the desired probability P of successfully transmitting the designated flow's packet when r equal flows are contending for the link. For example, with P=99.9% and r=10 we end up with u=ln(0.001)/ln(0.9)=65.5. That is, in order to guarantee not delaying 99.9% of packets for 10 CBR flows, we need at least 66 times more bandwidth than a single flow requires. Lowering P to 99% results in an over-provisioning coefficient of u=44. It is also interesting to look at the r/u ratio, which demonstrates what portion of a minimally over-provisioned link's bandwidth would be occupied by the "sensitive" flows when they are all transmitted in parallel. If we take the ratio r/u=ln((1-1/r)^r)/ln(1-P), then for large r we can replace (1-1/r)^r with 1/e and the link utilization ratio becomes approximated by:

r/u=-1/ln(1-P). (**)

For P=99% we get a ratio of 21%, for P=99.9% the ratio is 14%, and for P=90% the ratio becomes 43%. In practice, this means that for a moderately large number of concurrent CBR flows, e.g. over 30, you may allocate no more than the specified percentage of the link's bandwidth to CBR traffic, based on the target QoS requirement.
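The numbers quoted for formulas (*) and (**) can be checked with a short script (a sketch of the math above; the article's figures are rounded down slightly):

```python
# Formulas (*) and (**): minimum over-provisioning ratio u for r contending
# CBR flows at target no-delay probability P, and the asymptotic r/u bound.
import math

def overprovision_ratio(P, r):
    """Formula (*): u = ln(1-P) / ln(1-1/r)."""
    return math.log(1 - P) / math.log(1 - 1 / r)

def utilization_bound(P):
    """Formula (**): r/u -> -1/ln(1-P) as r grows large."""
    return -1 / math.log(1 - P)

print(round(overprovision_ratio(0.999, 10), 2))  # 65.56 (the text cites 65.5)
print(round(overprovision_ratio(0.99, 10)))      # 44
print(round(utilization_bound(0.99), 3))         # 0.217 -> ~21%
print(round(utilization_bound(0.999), 3))        # 0.145 -> ~14%
print(round(utilization_bound(0.90), 3))         # 0.434 -> ~43%
```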

Now that we are done with buffering delays, what about packet re-ordering and the packet overhead tax? The re-ordering problem could be solved at the routing level in PSNs: if every routing node is aware of the flow state, it may ensure that all packets belonging to the same flow are sent across the same path. This is typically implemented by deep packet inspection (which, by the way, violates the end-to-end principle as stated in RFC 1958) and classifying packets based on higher-level information. Such implementations are, however, rather effective, as inspection and flow classification are typically performed in the forwarding path using hardware acceleration. The overhead tax problem has two solutions. The first is, again, over-provisioning: by using a high-capacity, lightly utilized link we may ignore the bandwidth wastage due to the overhead. The second solution requires adding some state to network nodes: by performing flow inspection at both ends of a single link, we may strip the header information and replace it with a small flow ID. The other end of the link reconstructs the original headers by matching the flow ID to locally stored state information. This solution violates the end-to-end principle and scales poorly as the number of flows grows, so it is typically used on low-speed links. For example, VoIP services utilize IP/RTP/UDP and possibly TCP header compression to reduce the packet overhead tax.

Practical Example: VoIP Bandwidth Consumption

Let's say we have a 2Mbps link and we want to know how to provision priority-queue settings for G.729 calls. Firstly, we need to know the per-flow bandwidth consumption; you may find enough information on this topic in [3]. Assuming we are using header compression over Frame-Relay, the per-flow bandwidth is 11.6Kbps, and with a maximum link capacity of roughly 2000Kbps the theoretical maximum over-provisioning ratio is 2000/11.6=172. We can find the maximum number of flows allowed under the condition of P as

r=1/(1-exp(ln(1-p)/u)) = 1/(1-(1-P)^(1/u)) (***)

setting u=170 and P=0.99. This yields a theoretical limit of 37 concurrent flows. The total bandwidth for that many flows is 37*11.6=429Kbps, or about 21% of the link capacity, as predicted by the asymptotic formula (**) above. The remaining bandwidth could be used by other, non-CBR applications, as should be expected from a PSN exhibiting high link utilization efficiency.

Knowing the aggregate bandwidth and maximum number of flows provides us the parameters for admission control tools (e.g. policer rate, RSVP bandwidth and so forth). What is left to define are the burst settings and the queue depth for LLQ. The maximum theoretical burst size equals the maximum number of flows multiplied by the voice frame size. From [3] we readily obtain that a compressed G.729 frame for a Frame-Relay connection is 29 bytes. This gives us a burst of 1073 bytes, which we round up to 1100 bytes for safety. The maximum queue depth could be defined as the number of flows minus one, since in the worst case at least one priority packet would be scheduled for serialization while the others are held in the priority queue for processing. This means the queue depth would be at most 36 packets. The resulting IOS configuration would look like:

policy-map TEST
 class TEST
  priority 430 1100
  queue-limit 36 packets
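For reference, a short script (ours, not part of the original post) reproduces the numbers behind this policy from formula (***):

```python
# Formula (***): maximum flow count, then the LLQ parameters derived from it.
import math

def max_flows(P, u):
    """Formula (***): r = 1 / (1 - (1-P)**(1/u)), rounded down."""
    return math.floor(1 / (1 - (1 - P) ** (1 / u)))

per_call_kbps, frame_bytes = 11.6, 29   # compressed G.729 over Frame-Relay
r = max_flows(P=0.99, u=170)            # text uses u=170 (2000/11.6 ~ 172)
print(r)                                # 37 concurrent calls
print(round(r * per_call_kbps))         # 429 kbps aggregate -> "priority 430"
print(r * frame_bytes)                  # 1073-byte burst -> rounded to 1100
print(r - 1)                            # 36 -> "queue-limit 36 packets"
```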


Circuits vs Packets

It is interesting to compare the voice call capacity of a digital TDM circuit vs the same circuit used for packet-mode transport. Taking an E1 circuit, we can transport as many as 30 calls, if one channel is used for associated signaling (e.g. ISDN). Compare this to the 37 G.729 VoIP calls we may obtain if the same circuit is channelized and runs IP - about a 20% increase in call capacity. However, it is important to point out that the quality of G.729 calls is degraded compared to digital 64Kbps bearer channels, not to mention that other services could not be delivered over a compressed emulated voice channel. It might be fairer to compare the digital E1 to a packetized E1 circuit carrying G.711 VoIP calls. In this case, the bit rate for a single call running over Frame-Relay encapsulation with IP/RTP/UDP header compression would be (160+2+7)*50*8=67600bps, or 67.6Kbps. The maximum over-provisioning ratio is then 29, which ends up allowing only six (6) VoIP calls on the packetized link with P=99% in-time delivery! Therefore, if you try to provide digital call quality over a packet network, you end up with an extremely inefficient implementation. Finally, consider an intermediate case - G.729 calls without IP/RTP/UDP header compression. This case assumes that complexity belongs at the network edge, as transit links are not required to implement the header compression procedure. We end up with the following: an uncompressed G.729 call over Frame-Relay generates (20+40+7)*50*8=26.8Kbps, which results in an over-provisioning coefficient of u=74 and r=16 flows - slightly over half the number that an E1 could carry.
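The three scenarios above can be reproduced in one place (a sketch using the article's per-call figures on a 2000Kbps link at P=99%):

```python
# Calls supported per 2000 kbps packetized link at P=99%, for the three
# encodings compared above - versus 30 calls on a plain TDM E1.
import math

def max_flows(P, u):
    """Formula (***): r = 1 / (1 - (1-P)**(1/u)), rounded down."""
    return math.floor(1 / (1 - (1 - P) ** (1 / u)))

def calls(per_call_bps, link_bps=2_000_000, P=0.99):
    return max_flows(P, link_bps / per_call_bps)

# (payload + header) bytes * 50 packets/s * 8 bits, over Frame-Relay:
g729_crtp  = (20 + 2 + 7) * 50 * 8    # 11.6 kbps, compressed headers
g711_crtp  = (160 + 2 + 7) * 50 * 8   # 67.6 kbps, compressed headers
g729_plain = (20 + 40 + 7) * 50 * 8   # 26.8 kbps, full IP/UDP/RTP headers

print(calls(g729_crtp), calls(g711_crtp), calls(g729_plain))  # 37 6 16
```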

Using the asymptotic formula (**), we see that for P=99% no more than 21% of the packetized link could be used for CBR services. This implies that a packet compression scheme should reduce the bandwidth of a pure CBR flow by more than 5 times to compare effectively with circuit-switched transport. Based on this, we conclude that PSNs can be more efficient than "native" circuit-switched networks for CBR transportation only if they utilize advanced processing features such as payload/header compression yielding a compression coefficient over 5. However, we should keep in mind that such compression is not possible for all CBR services: e.g. relaying T1/E1 over IP has to maintain the full bandwidth of the original TDM channels, which is extremely inefficient in terms of resource utilization. Furthermore, advanced CODEC features require complex equipment at the network edge and possibly additional complexity in other parts of the network, e.g. to implement link header compression.

It could be argued that the remaining 79% of the packetized link could be used for data transmission, but the same is possible with circuit-switched networks, provided that packet routers are attached at the edges. All the packet-switching routers need to do is dynamically request transmission circuits from the CSN based on traffic demands and use them for packet transmission. This approach has been implemented, among others, in GMPLS ([5]).

Conclusions

The above logic demonstrates that PSNs were not really designed to be good at emulating true CBR services - naturally, as the original intent of PSNs was maximizing the use of scarce link bandwidth. Transporting CBR services not only requires complex queueing disciplines but also, ultimately, over-provisioning the link bandwidth, thus somewhat defeating the main purpose of PSNs. Indeed, if all a PSN is used for is CBR service emulation, the under-utilization remains very significant. Some cases, like VoIP, allow for effective payload transformation and significant bandwidth reduction, which permits more efficient use of network resources. On the other hand, such payload transformation requires introducing extra complexity into networking equipment. All this is in addition to the fact that packet-switching equipment is inherently more complex and expensive than circuit-switching equipment, especially for very high-speed links. Indeed, packet-switching logic requires complex dynamic lookups, large buffer memory and an internal interconnection fabric. Memory requirements and dynamic state grow proportionally to the link speed, making high-speed packet-switching routers extremely expensive not only in hardware but also in software, due to advanced control-plane requirements and proliferating services. More on this subject could be found in [4]. It is worth mentioning that packet-switching inefficiency in the network core was recognized a long time ago, and there have been attempts at integrating circuit-switching core networks with packet-switching networks, the most notable being GMPLS ([5]). However, so far, industry inertia has not made any of the proposed integration solutions viable.

Despite all these arguments, VoIP implementations have been highly successful so far, most likely thanks to the effectiveness of VoIP codecs. Of course, no one can yet say that VoIP over the Internet provides quality comparable to digital phone lines, but at least it is cheap, and that's what the market is looking for. VoIP has also been highly successful in enterprises, mainly because enterprise campus networks are mostly high-speed, based on switched Ethernet technology, and demonstrate very low overall link utilization, within 1-3% of available bandwidth. In such over-provisioned conditions, deploying VoIP should not pose major QoS challenges.

Further Reading

[1] Information Flow in Large Communication Nets, L. Kleinrock
[2] The Case for Service Overlays, Brassil, J.; McGeer, R.; Sharma, P.; Yalagandula, P.; Mark, B.L.; Zhang, S.; Schwab, S.
[3] Voice Bandwidth Consumption, INE Blog
[4] Circuit Switching in the Internet, Pablo Molinero Fernandez
[5] RFC 3945, Generalized Multi-Protocol Label Switching (GMPLS) Architecture
[6] PPP Multilink Interleaving over Frame-Relay

Oct
05

INE is happy to announce that we now have all 21 modules of our new CCIE Voice Deep Dive completed - 115 hours of recorded Class-on-Demand style video (no breaks or dead air in the recordings - that's 115 hours of actual learning!) - ready for your consumption!

As we mentioned in a previous post, the author and poet Maya Angelou said, “Words mean more than what is set down on paper. It takes the human voice to infuse them with deeper meaning.” Well, that is certainly what we have attempted to do with the CCIE Voice Deep Dive self-paced Class-on-Demand series - to bring the human instructional voice to infuse deeper meaning into what is already fantastic Cisco documentation. Anyone who has set out and determined to undertake the task of studying for and ultimately passing any CCIE lab exam knows that at some point during your studies, the words on paper (Cisco docs, RFCs, books) - while an absolutely phenomenal source of information - can at times seem to lose their impact. Perhaps you have been studying too long, read one too many docs, have the time pressure of family and friends waiting for you to return and be a part of their lives, or perhaps you are just starting out on your adventure and don’t know where to begin. Whatever stage you are at, it is certainly helpful to have a tutor and mentor beside you at times, assisting you in understanding what each complex technology’s documentation is trying to teach you, in a deeper and more insightful way than you might manage on your own.

For each complex topic we have held (or will soon hold) an online class where we dive deep and explore all the concepts, practical applications, and troubleshooting associated with that technology. Each Class-on-Demand Deep Dive module spends between 4 and 7 hours on its given topic, following this training methodology:

  • Collectively discuss and teach all concepts involved in the technology
  • Whiteboard concepts to further deepen every participant’s understanding
  • Define a specific set of tasks to be accomplished
  • Demonstrate how the tasks and concepts are implemented and properly configured
  • Test the configuration thoroughly
  • Vary the configuration to understand how different permutations affect the outcome
  • Debug and trace the working configuration to understand what should be seen
  • Break the configuration and troubleshoot with debugs and traces to contrast from the working set

Before we go on with the 21 module outline, here are a few demos of this Deep Dive series:

Demo 1: Module 10 :: Dial Plan :: Globalization Prezi - Theory and Reasons :: Runtime 1 hr

Demo 2: Module 10 :: Dial Plan :: Inbound Calling Party Localization :: Runtime 30 mins

Demo 3: Module 12 :: CUBE :: Conforming to ITSP Reqs: SIP Header Conversions :: Runtime 51 mins

Demo 4: Module 13 :: Unified Mobility :: Mobile Connect Access Lists and Exclusivity :: Runtime 20 mins

Here is the outline for the complete Deep Dive series:

Network Infrastructure

Module 1 :: Network Infrastructure, RSVP CAC, and LAN & WAN Quality of Service :: Runtime 10.5hrs

  • NTP
  • VLANs
  • TFTP
  • DHCP
  • Multicast (Infrastructure)
  • LAN QoS - including:
  • Catalyst 3560/3750 Classification and Marking
  • Catalyst 3560/3750 Conditional Trust
  • Catalyst 3560/3750 Ingress Interface Mapping
  • Catalyst 3560/3750 Ingress Interface Queuing
  • Catalyst 3560/3750 Ingress Interface Expedite Queue
  • Catalyst 3560/3750 L2 CoS to L3 DSCP Mapping
  • Catalyst 3560/3750 Egress Interface Mapping
  • Catalyst 3560/3750 Egress Interface Queuing
  • Catalyst 3560/3750 Interface Queue Memory Allocation
  • Catalyst 3560/3750 Egress Queue-Set Templates
  • Catalyst 3560/3750 Weighted Tail Drop (WTD) Buffer Allocation
  • Catalyst 3560/3750 Egress Interface Expedite Queue
  • Catalyst 3560/3750 Egress Interface Sharing
  • Catalyst 3560/3750 Egress Interface Shaping
  • Catalyst 3560/3750 Scavenger Traffic Policing
  • CUCM RSVP-Based Locations for Call Admission Control
  • WAN QoS Classification
  • WAN QoS Low Latency Queuing (CBWFQ-PQ)
  • WAN QoS Traffic Shaping
  • WAN QoS Frame-Relay Fragmentation

Unified Communications Manager

Module 02 :: CUOS GUI and CLI Admin :: Runtime 3.6 hrs

  • CUCM WebUI: Service Activation and Stop/Start/Reset
  • CUCM WebUI: Bulk Administration Tool (Import/Export, Phone Reports, etc)
  • CUCM WebUI: DB Replication Status
  • CUCM WebUI: Trace Files
  • CUOS CLI: TFTP Files Management
  • CUOS CLI: Status and Hostname
  • CUOS CLI: DB Replication Assurance
  • CUOS CLI: DB Replication Repair and Cluster Reset
  • CUOS CLI: Trace Files
  • CUOS CLI: RIS DB Search
  • CUOS CLI: Performance Monitor (PerfMon)
  • RTMT: Trace Files
  • RTMT: Performance Monitor (PerfMon)

Module 03 :: CUCM System and Phone - SCCP and SIP Fundamentals :: Runtime 4.4 hrs

  • CUCM Services
  • UC Servers and Groups
  • Date/Time with NTP Reference
  • Regions and Codecs
  • Location-Based Call Admission Control
  • SRST References
  • Device Pools
  • System Parameters
  • Enterprise Parameters
  • Phone Button Templates
  • Softkey Templates
  • SCCP Phone Basics
  • SIP Phone Basics

Module 04 :: Users, Credentials, Multi-Level Roles and LDAP Internetworking :: Runtime 3.6 hrs

  • CUCM User Credentials and Policies
  • LDAP Synchronization for CUCM and Unity Connection
  • LDAP Authentication for CUCM and Unity Connection
  • CUCM End Users
  • CUCM User Roles
  • CUCM Multi-Level Administration
  • CUCM Device/Phone/Line User Association
  • UCCX and CUP Basic Users

Module 05 :: Call Features - In-Depth :: Runtime 5.3 hrs

  • SCCP and SIP Phone Display
  • Phone Firmware
  • Phone Logging
  • Ring Settings
  • Basic and Advanced Call Forwarding Display
  • Auto-Answer Options
  • CallBack (Camp-On)
  • Intercom
  • Advanced Call Hold Options
  • Call Park
  • Directed Call Park
  • Advanced Call Park Settings
  • Call Pickup
  • Group Call Pickup
  • Other Call Pickup
  • Directed Call Pickup
  • Call Pickup Attributes
  • Shared Line
  • Barge and cBarge (Conference Barge)
  • Privacy
  • Built-In IP Phone Bridge

Module 06 :: Media Resources - MTPs, Conf Bridges, Annunciator and Music on Hold :: Runtime 5.6 hrs

  • IOS Software MTP
  • IOS Conference Bridge
  • IOS Transcoding
  • Media Preference and Redundancy
  • Meet-Me Conferencing
  • Ad-Hoc Conferencing
  • Annunciator
  • Unicast Music on Hold
  • Traditional Multicast Music on Hold
  • Alternate Multicast Music on Hold

Module 07 :: Expert Gateways & Trunks :: Runtime 5.9 hrs

  • ISDN Switch Types and Advanced CNAM options
  • ISDN Information Elements
  • SIP Trunks - Fundamental and Advanced Options
  • H.323 Gateways - Fundamental and Advanced Options
  • MGCP Gateways - Fundamental and Advanced Options

Module 08 :: Expert H.323 Gatekeeper :: Runtime 7.1 hrs

  • Provisioning IOS H.323 Gatekeeper
  • Registering CUCM with H.323 Gatekeeper
  • Registering CUCME with H.323 Gatekeeper
  • Routing Calls from CUCME to CUCM via Gatekeeper in Multiple Zones with Dynamic E.164 Aliases
  • Routing Calls from CUCM to CUCME via Gatekeeper in Multiple Zones with Multiple Tech Prefixes
  • Routing Calls from CUCME to CUCM via Gatekeeper in Multiple Zones with Multiple Tech Prefixes
  • Routing Calls from CUCME to CUCM via Gatekeeper in Multiple Zones with Static E.164 Aliases
  • Routing Calls from CUCM to CUCME and Back via Gatekeeper in One Zone with One Tech Prefix
  • Gatekeeper Call Admission Control
  • Routing Calls from CUCM to CUCME and Back via Alternate Gatekeeper Clustering in Multiple Zones with Multiple Tech Prefixes using GUP

Module 09 :: Dial Plan - Line Device Approach and the Not-So-Basic Fundamentals :: Runtime 7 hrs

  • Class of Service: Calling Search Spaces and Partitions
  • Gateways, Route Groups, Local Route Groups/Device Pools
  • Route Lists and Standard Local Route Groups
  • Route Patterns and Translation Patterns
  • Digit Manipulation: Calling & Called Party Transformations and IOS Dial Peers
  • Private Line Automatic Ringdown (PLAR)

Module 10 :: Dial Plan - Globalization & Localization of both the Calling and the Called Numbers, and with Mapping the Global Number to the Local Variant :: Runtime 6.3 hrs

  • Inbound PSTN Calls (Ingress from PSTN, Egress to Phones): Calling Party Globalization :: GW Incoming Calling Party Settings
  • Inbound PSTN Calls (Ingress from PSTN, Egress to Phones): Calling Party Localization :: Phone Calling Party Transformations
  • Outbound PSTN Calls (Ingress from Phones, Egress to PSTN): Called Party Globalization :: PSTN Patterns - a.k.a. "Translation Patterns are the *New* Route Patterns"
  • Outbound PSTN Calls (Ingress from Phones, Egress to PSTN): Called Party Localization :: Digit Manipulation: Calling & Called Party Transformations and IOS Voice Translation Rules & Dial Peers
  • Mapping the Global Number to the Local Variant :: + Dialing and One-Button Missed Call DialBack

Module 11 :: Dial Plan - Unlocking the Full Potential of Globalization & Localization :: Runtime 4.2 hrs

  • System & User Speed Dials and Corporate Directory
  • Call Forward on Unregister
  • Automated Alternate Routing Made So Simple
  • Multiple Backup Gateways for Every Site using only Standard Local Route Group
  • National and International Tail End Hop Off (TEHO) Made Easy
  • Globalized Call Routing using H.323 ICTs

Module 12 :: Dial Plan - Cisco Unified Border Element (CUBE) with SIP Normalization :: Runtime 6.5 hrs

  • SIP Trunk to SIP ITSP for PSTN Call Routing
  • Conforming to ITSP Reqs: Various SIP-Attributes
  • Conforming to ITSP Reqs: SIP Header Conversions
  • Advanced Call Admission Control Mechanisms with CUBE
  • Skype SIP Trunk for Branch2 Site
  • Testing Supplementary Features

Module 13 :: Unified Mobility - Getting the Most out of Single Number Reach and Direct Inward System Access :: Runtime 6.8 hrs

  • Mobile Connect Basics
  • Mobile Connect Ring Schedule
  • Mobile Connect Localization
  • Mobile Connect Access Lists and Exclusivity
  • Mobile Connect Interaction with Local Route Group
  • Mobile Voice Access - Inbound Call Recognition and Display
  • Mobile Voice Access and Direct Inward System Access (DISA)
  • Mobile Connect Mid-Call Features - Supplementary Services

Module 14 :: Device Mobility & Extension Mobility - What They Have in Common and When To Use Each :: Runtime 4 hrs

  • Device Mobility - Between Sites but Within a Country
  • Device Mobility - Between Sites and Between Countries
  • Extension Mobility
  • Device and Extension Mobility and TEHO Interactions

Unified Communications Manager Express

Module 15 :: CUCME System and Phone Basics - SCCP and SIP :: Runtime 5 hrs

  • IOS DHCP
  • IOS Clock and Network Time
  • IOS TFTP Server
  • SIP CME Server Setup
  • SIP CME Phone and DN Setup
  • SCCP CME Server Setup
  • SCCP CME Phone and DN Setup
  • CME Directory Services
  • SCCP CME Server Redundancy
  • Endpoint Registration with External SIP Proxy Server
  • CME Templates for SCCP and SIP Phones and DNs
  • CME Phone Customization

Module 16 :: CUCME Dialplans, Class of Restrictions (COR) & Media Resources :: Runtime 5.6 hrs

  • PSTN Dialing from CME
  • IOS Voice Translation Rules in CME
  • Load Balancing Calls in CME
  • Class of Restrictions for CME
  • Multicast Music on Hold for CME
  • IOS Transcoding for CME
  • IOS Hardware Conference Bridge for CME
  • Multicast Broadcast-Paging in CME
  • MTP and DSP Resources in CME
  • Speed Dials in CME

Module 17 :: CUCME Advanced Call Features :: Runtime 4.2 hrs

  • Shared Lines and Feature Ring with SCCP Phones
  • Shared Lines with Barge and Privacy for SCCP Phones
  • Intercom for SIP and SCCP Phones
  • Night Service Bell for SCCP Phones
  • Call Park for SIP and SCCP Phones
  • Call Blocking for SIP and SCCP Phones
  • CallerID Blocking for SIP and SCCP Phones
  • Call Transfer and Forwarding for SIP and SCCP Phones

Module 18 :: CUCME Call Coverage and Survivable Remote Site Telephony :: Runtime 3.5 hrs

  • CME as SRST Fallback Mode (SIP, SCCP and MGCP Fallback)
  • Traditional SRST Fallback Mode (SIP, SCCP and MGCP Fallback)
  • 4-Digit Reachability from CUCM & SRST While in Fallback Mode
  • Call-Coverage - Call Pickup Groups
  • Call-Coverage - Basic-Automatic Call Distribution (B-ACD)
  • Call-Coverage - SIP and SCCP Hunt Groups and IOS Hunting

Unified Contact Center Express

Module 19 :: Unified Contact Center Express – Integration, CSQ Provisioning and Custom Scripting :: Runtime 5.1 hrs

  • UCCX Setup and Integration and Troubleshooting Being "Locked-Out"
  • CSQ Setup with Preferential Agent Choice
  • Basic Custom Scripting – Examination of UCCX Editor and “Simple Queuing.aef” Default Script
  • Basic Custom Scripting – Day of Week & Time of Day
  • Basic Custom Scripting – Reroute to Voicemail or Proceed to Queue
  • Basic Custom Scripting – MoH in Queue
  • Basic Custom Scripting – How Many Times Through Queue with Option to Go to Voicemail
  • Basic Custom Scripting – Nested Queues for More Available Agents
  • Basic Custom Scripting – Agent-Based Routing
  • Basic Custom Scripting – Testing and Debugging

Unified Messaging

Module 20 :: Messaging - Unity Connection & Unity Express :: Runtime 2.4 hrs

  • Unity Connection
  • Unity Express

Unified Presence

Module 21 :: CUCM Presence and Cisco Unified Presence Server - Integration and Client Usage :: Runtime 4 hrs

  • CUCM-Only Presence with Subscribe CSS
  • CUCM-Only Presence with Presence Groups
  • CUPS & CUCM Integration and CUPC Personal Communicator Provisioning
  • CUPS and IP Phone Messenger (IPPM)


Jul
01

Try these questions on for size! Learn all this and much more in the new QoS class - woohoo!

1. Based on the following configuration, what traffic will be policed?

class-map C_MUSIC
match protocol kazaa2
match protocol napster
!
class-map match-any C_WEB
match protocol http
match class-map C_MUSIC
!
policy-map P_WEB
class C_WEB
police 64000
!
interface serial 0/0
service-policy output P_WEB

A. All Kazaa version 2 traffic is policed

B. All Napster traffic is policed

C. All web traffic is policed

D. All Kazaa version 2, Napster, and web traffic is policed

E. No traffic is policed

Answer:

C

Explanation:

The C_MUSIC class-map does not specify the match-any or match-all option. The default is match-all. Therefore, for traffic to be classified in the C_MUSIC class-map, a packet would simultaneously have to be a Kazaa version 2 packet and a Napster packet, which isn’t possible.

The C_WEB class-map uses the match-any option, meaning that traffic will be classified in this class-map if it is HTTP traffic or if it is traffic that was classified in the C_MUSIC class-map. Since no traffic will be classified in the C_MUSIC class-map, as described above, the only traffic that will be classified by the C_WEB class-map is HTTP traffic.

The policy-map P_WEB is configured to police (i.e. rate limit) traffic classified by the C_WEB class-map to a bandwidth of 64 kbps. (NOTE: The default conform-action is transmit, and the default exceed-action is drop.) Since only HTTP (i.e. web) traffic is matched by the C_WEB class-map, web traffic is the only traffic that is policed.
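If the intent was to police both music protocols in addition to web traffic, the C_MUSIC class-map would need the match-any keyword. A sketch of a configuration that would accomplish that, reusing the same class and policy names:

```
class-map match-any C_MUSIC
 match protocol kazaa2
 match protocol napster
!
class-map match-any C_WEB
 match protocol http
 match class-map C_MUSIC
!
policy-map P_WEB
 class C_WEB
  police 64000
!
interface serial 0/0
 service-policy output P_WEB
```

With match-any, a packet that is either Kazaa version 2 or Napster is classified into C_MUSIC, so answer D would then describe the behavior.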

2. You are configuring a Cisco Catalyst 3560 switch port to trust CoS markings if, and only if, the marking originated from a Cisco IP Phone. In an attempt to perform this configuration, you enter the mls qos trust device cisco-phone command. However, your configuration does not seem to be working properly. Why is the switch not trusting CoS markings coming from an attached Cisco IP Phone?

A. A Cisco Catalyst 2950 switch supports the mls qos trust device cisco-phone command, but the Cisco Catalyst 3560 does not support this command

B. The mls qos trust cos command is missing

C. The mls qos trust extend command is missing

D. The mls qos cos 5 command is missing

E. The PC attached to the phone is overriding the CoS markings

Answer:

B

Explanation:

A Cisco Catalyst 3560 switch port can be configured to trust Class of Service (CoS) markings, Differentiated Services Code Point (DSCP) markings, or markings originating from a Cisco IP Phone. The switch port can detect that a marking is coming from a Cisco IP Phone via the Cisco Discovery Protocol (CDP). The mls qos trust device cisco-phone command does indeed tell the switch to trust a marking if, and only if, the marking comes from a Cisco IP Phone. However, the mls qos trust device cisco-phone command by itself does not tell the switch port which marking (i.e. CoS or DSCP) coming from the Cisco IP Phone to trust. Therefore, the mls qos trust cos command is also required.
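A sketch of a working configuration on a Catalyst 3560, combining both commands (the interface number is illustrative; note that QoS must also be enabled globally with mls qos):

```
mls qos
!
interface FastEthernet0/5
 switchport mode access
 mls qos trust device cisco-phone
 mls qos trust cos
```

With this configuration, the port trusts incoming CoS markings only while CDP confirms a Cisco IP Phone is attached; otherwise traffic is treated as untrusted.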

3. You administer a network that transports both voice and interactive video traffic. Since these traffic types are both latency-sensitive, you decide to implement the following configuration. Which statement is true regarding the configuration?

class-map C_VOICE
match protocol rtp audio
!
class-map C_VIDEO
match protocol rtp video
!
policy-map P_HIGH_PRIORITY
class C_VOICE
priority percent 15
class C_VIDEO
priority percent 35
class class-default
fair-queue
!
interface serial 0/0
service-policy output P_HIGH_PRIORITY

A. The configuration results in three queues, one for the C_VOICE class, one for the C_VIDEO class, and one queue for the class-default class

B. The configuration results in two queues, one priority queue and one queue for the class-default class

C. The class-default class uses FIFO as its queuing mechanism for traffic flows within its queue

D. The two priority queues use WFQ for queuing traffic within those queues

Answer:

B

Explanation:

While priority treatment (i.e. LLQ treatment) can be assigned to more than one class-map, an interface only has one priority queue. Therefore, in the above configuration, traffic classified in the C_VOICE and C_VIDEO class-maps shares the same priority queue. A second queue contains traffic classified in the class-default class-map. Therefore, the configuration only results in two queues, one shared priority queue and one queue for the class-default class. On most models of routers, only the class-default queue can be configured to use WFQ queuing for flows within the queue, while other queues use FIFO queuing for traffic within those queues.

4. CB-WRED is configured using the random-detect command. Which two of the following statements are true concerning the random-detect command? (Choose 2)

A. The random-detect command cannot be issued for the class-default class.

B. The random-detect command cannot be issued for the priority class(es).

C. The random-detect command must be issued in conjunction with the bandwidth command (with the exception of the class-default class).

D. The random-detect command should be issued in conjunction with the priority command.

Answer:

B, C

Explanation:

Weighted Random Early Detection (WRED) is effective for TCP flows, because WRED can cause some TCP flows to enter TCP slow start. When configuring class-based WRED (i.e. CB-WRED), the random-detect command is issued in policy-map-class configuration mode. While the random-detect command can be used with the class-default class, random-detect cannot be issued in policy-map-class configuration mode for a class configured with the priority keyword. Also, with the exception of the class-default class, the random-detect command must be issued along with the bandwidth command.
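A sketch showing these rules in a policy-map (class and policy names are hypothetical): random-detect rides on top of bandwidth in a user-defined class, while class-default may use random-detect without a bandwidth statement:

```
policy-map P_DATA
 class C_BULK
  bandwidth 256
  random-detect
 class class-default
  random-detect
```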

5. Consider the following configuration:

class-map TRANSACTIONAL
match protocol http
!
policy-map CBPOLICING
class TRANSACTIONAL
police 128000 conform-action set-dscp-transmit af11 exceed-action set-dscp-transmit af13 violate-action drop
!
interface serial 0/1
service-policy input CBPOLICING

What type of class-based policing configuration is represented by this configuration?

A. Single rate, single bucket

B. Single rate, dual bucket

C. Dual rate, single bucket

D. Dual rate, dual bucket

Answer:

B

Explanation:

Cisco IOS supports single rate, single bucket; single rate, dual bucket; and dual rate, dual bucket policers. With a single rate policer, only a committed information rate (CIR) is specified, as in this question. With a dual rate policer, both a CIR and a peak information rate (PIR) are specified. Also, a single rate policer is a single bucket policer, unless the violate action is specified. If the violate action is specified, as it is in this question, the single rate policer uses two buckets, a Bc bucket and a Be bucket. However, a dual rate policer always uses two buckets, one bucket to transmit traffic at the CIR and one bucket to transmit traffic at the PIR.
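For contrast, a dual rate, dual bucket policer explicitly configures both a CIR and a PIR. A sketch using the same class (the policy name and rates are illustrative):

```
policy-map CBPOLICING_DUAL
 class TRANSACTIONAL
  police cir 128000 pir 256000 conform-action set-dscp-transmit af11 exceed-action set-dscp-transmit af13 violate-action drop
```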

6. You configure CB-Shaping by issuing the command shape peak 8000 2000 2000. This configuration shapes to what peak rate?

A. 4000 bps

B. 8000 bps

C. 16000 bps

D. 32000 bps

Answer:

C

Explanation:

In the syntax, the 8000 represents the Committed Information Rate (CIR). The first 2000 is the Committed Burst (Bc), and the second 2000 is the Excess Burst (Be). When configuring CB-Shaping, you can either shape to “average” or shape to “peak.” When shaping to average, traffic rates don’t exceed the CIR. However, when shaping to peak, traffic rates can burst above the CIR, while some of that excess traffic could be dropped by the service provider. When shaping to peak, the peak shaping rate is calculated by the formula:

peak_rate = CIR * (1 + Be/Bc)

In this example: peak_rate = 8000 * (1 + 2000/2000) = 16,000 bps. Note that if the Bc and Be values are calculated by IOS rather than being statically configured, Bc will always equal Be, which means that the peak rate will be twice the CIR.
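As a sketch, the two shaping modes with identical parameters would look like this in MQC syntax; the average version sends at no more than the 8000 bps CIR, while the peak version can burst up to the 16,000 bps calculated above:

```
policy-map SHAPE_AVERAGE
 class class-default
  shape average 8000 2000 2000
!
policy-map SHAPE_PEAK
 class class-default
  shape peak 8000 2000 2000
```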

7. You are configuring Multilink PPP (MLP) as your Link Fragmentation and Interleaving (LFI) mechanism for a WAN link. Identify the correct statements regarding the configuration of MLP. (Choose 2)

A. The configuration of Multilink PPP requires at least two physical links (e.g. two serial interfaces)

B. The IP address is removed from any serial interface that makes up the MLP bundle

C. Any policy-map that was previously assigned to a physical interface should be reassigned to the multilink interface, that the physical interface is associated with, in order for the policy to take effect

D. The virtual multilink interface does not use an IP address. Rather, it uses the IP unnumbered feature which allows the multilink interface to share an IP address with the multilink bundle member that has the highest IP address

Answer:

B, C

Explanation:

Multilink PPP (MLP) is a Link Fragmentation and Interleaving (LFI) mechanism for PPP links. Interestingly, even though the term “multilink” is in the title of this mechanism, MLP can be configured on a single link. Specifically, a virtual multilink interface is created. Then, one or more physical interfaces are added as members of a multilink bundle, all of which act as the single multilink interface. As a result, the virtual multilink interface is assigned an IP address, while the one or more physical interface member(s) do not have an IP address. Additionally, since the packets are logically transmitted over the virtual multilink interface, in order to apply a policy-map to the traffic using the virtual interface, the service-policy command should be applied to the virtual multilink interface, as opposed to the member interfaces.
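A sketch of MLP with LFI built on a single serial link, with the service policy applied to the virtual multilink interface rather than the member link (the IP address and policy name are illustrative):

```
interface Multilink1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
 ppp multilink fragment delay 10
 ppp multilink interleave
 service-policy output P_WAN_EDGE
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```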

Jun
29

Try these questions on for size! Learn all this and much more in the new QoS class - woohoo!

1. Based on the following configuration, what traffic will be policed?

class-map C_MUSIC
match protocol kazaa2
match protocol napster
!
class-map match-any C_WEB
match protocol http
match class-map C_MUSIC
!
policy-map P_WEB
class C_WEB
police 64000
!
interface serial 0/0
service-policy output P_WEB

A. All Kazaa version 2 traffic is policed

B. All Napster traffic is policed

C. All web traffic is policed

D. All Kazaa version 2, Napster, and web traffic is policed

E. No traffic is policed

2. You are configuring a Cisco Catalyst 3560 switch port to trust CoS markings if, and only if, the marking originated from a Cisco IP Phone. In an attempt to perform this configuration, you enter the mls qos trust device cisco-phone command. However, your configuration does not seem to be working properly. Why is the switch not trusting CoS markings coming from an attached Cisco IP Phone?

A. A Cisco Catalyst 2950 switch supports the mls qos trust device cisco-phone command, but the Cisco Catalyst 3560 does not support this command

B. The mls qos trust cos command is missing

C. The mls qos trust extend command is missing

D. The mls qos cos 5 command is missing

E. The PC attached to the phone is overriding the CoS markings

3. You administer a network that transports both voice and interactive video traffic. Since these traffic types are both latency-sensitive, you decide to implement the following configuration. Which statement is true regarding the configuration?

class-map C_VOICE
match protocol rtp audio
!
class-map C_VIDEO
match protocol rtp video
!
policy-map P_HIGH_PRIORITY
class C_VOICE
priority percent 15
class C_VIDEO
priority percent 35
class class-default
fair-queue
!
interface serial 0/0
service-policy output P_HIGH_PRIORITY

A. The configuration results in three queues, one for the C_VOICE class, one for the C_VIDEO class, and one queue for the class-default class

B. The configuration results in two queues, one priority queue and one queue for the class-default class

C. The class-default class uses FIFO as its queuing mechanism for traffic flows within its queue

D. The two priority queues use WFQ for queuing traffic within those queues

4. CB-WRED is configured using the random-detect command. Which two of the following statements are true concerning the random-detect command? (Choose 2)

A. The random-detect command cannot be issued for the class-default class.

B. The random-detect command cannot be issued for the priority class(es).

C. The random-detect command must be issued in conjunction with the bandwidth command (with the exception of the class-default class).

D. The random-detect command should be issued in conjunction with the priority command.

5. Consider the following configuration:

class-map TRANSACTIONAL
match protocol http
!
policy-map CBPOLICING
class TRANSACTIONAL
police 128000 conform-action set-dscp-transmit af11 exceed-action set-dscp-transmit af13 violate-action drop
!
interface serial 0/1
service-policy input CBPOLICING

What type of class-based policing configuration is represented by this configuration?

A. Single rate, single bucket

B. Single rate, dual bucket

C. Dual rate, single bucket

D. Dual rate, dual bucket

6. You configure CB-Shaping by issuing the command shape peak 8000 2000 2000. This configuration shapes to what peak rate?

A. 4000 bps

B. 8000 bps

C. 16000 bps

D. 32000 bps

7. You are configuring Multilink PPP (MLP) as your Link Fragmentation and Interleaving (LFI) mechanism for a WAN link. Identify the correct statements regarding the configuration of MLP. (Choose 2)

A. The configuration of Multilink PPP requires at least two physical links (e.g. two serial interfaces)

B. The IP address is removed from any serial interface that makes up the MLP bundle

C. Any policy-map that was previously assigned to a physical interface should be reassigned to the multilink interface, that the physical interface is associated with, in order for the policy to take effect

D. The virtual multilink interface does not use an IP address. Rather, it uses the IP unnumbered feature which allows the multilink interface to share an IP address with the multilink bundle member that has the highest IP address

Jun
17

We know from the 5-Day QoS bootcamp that Differentiated Services is one of the three major overall approaches to providing Quality of Service in an enterprise. The other options are Integrated Services and Best Effort.

When we studied Differentiated Services, we saw that the primary marking technology approach was the Differentiated Services Code Point (DSCP) concept. These are the high order 6 bits in the IP packet ToS Byte. But how can MPLS use these markings in order to provide QoS treatment (Per Hop Behaviors (PHBs)) to various traffic forms?

The first major issue to solve is the fact that Label Switch Routers (LSRs) rely solely on the MPLS header when making forwarding decisions. These devices no longer analyze the IP header information, thus negating the use of the ToS Byte. This was solved through the creation of the Experimental Bits (EXP) field in the MPLS header. The IETF has since renamed this field to the Traffic Class field; see RFC 5462.

But now there is another issue. There are 6 bits used for DSCP (providing 64 classifications), while there are only 3 Traffic Class bits (providing a mere 8 classifications).

It turns out there are two approaches to dealing with this issue. If you require no more than 8 Per Hop Behaviors, just use the EXP bits (Traffic Class). In fact, these bits are mapped from IP Precedence by default in Cisco's implementation, so they are populated appropriately for QoS classification out of the box. This approach is called E-LSP in official MPLS terminology; the E stands for EXP-inferred in this case.
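The EXP bits can also be set explicitly at the edge LSR with MQC, rather than relying on the default IP Precedence mapping. A sketch (class, policy, and interface names are hypothetical):

```
class-map match-all C_VOICE
 match dscp ef
!
policy-map P_MPLS_EDGE
 class C_VOICE
  set mpls experimental topmost 5
!
interface GigabitEthernet0/0
 service-policy output P_MPLS_EDGE
```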

The second option is when we need more than 8 classifications in our network. Obviously, the three EXP bits fall far short of providing the necessary markings. In this case, the label itself is used to help mark traffic! In this approach, both the EXP bits and the label are used for the PHB. Typically the marking in the label will assign the congestion management treatment, while the EXP bits will control drop priority. This approach is called L-LSP. Here the L stands for label-inferred.

Thanks for reading this blog supplement to the QoS course, and you can expect many more over the coming months. Happy studies!

Jun
14

In this short blog post, we are going to give a condensed overview of the four main flavors of Frame-Relay Traffic Shaping (FRTS). Historically, as IOS evolved over time, different methods were introduced, each with a different level of feature support. The two main features specific to Frame-Relay Traffic Shaping are per-VC shaping and queueing, and adaptive shaping in response to Frame-Relay congestion notifications (e.g. BECNs). You'll see that not every flavor supports these two features. We begin with the «fossil» known as Generic Traffic Shaping.

Generic Traffic Shaping

This feature was initially designed to shape packet traffic sent over any media, be it Ethernet, Frame-Relay, PPP, etc. The command syntax is traffic-shape {rate|group}, and it allows specifying the traffic scope using an access-list (notice that different ACL types are supported). You may tune the Bc/Be values as well as the shaping queue depth (the amount of buffers). If the shaper delays traffic, the queue service strategy is fixed to WFQ, with the queue size equal to the buffer space allocated. Additional WFQ parameters, such as the number of flows and the congestive discard threshold, cannot be tuned; they are set automatically based on the shaper rate.

A unique feature of GTS is the ability to apply multiple shapers to a single interface. However, shapers are not cascaded; rather, a packet is assigned to the first matching shaper rule. In the example below, there are three rules, with the last one being a "fallback," matching all packets that didn't match access-lists 100 and 101. Unlike the legacy CAR feature (the rate-limit command), you cannot «cascade» multiple traffic-shape statements on the same interface, i.e. there is no "continue" action.

traffic-shape group 100 128000
traffic-shape group 101 64000
traffic-shape group 199 256000
!
access-list 199 permit ip any any


You cannot apply GTS per-VC unless you have created a subinterface for that particular PVC. You may, however, enable shaping that adapts to FR BECNs, using the syntax traffic-shape adaptive {rate} along with traffic-shape rate. Notice that if multiple PVCs map to the interface, reception of a BECN on any of the VCs will trigger speed throttling.
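A sketch of adaptive GTS on a Frame-Relay interface, shaping to 128Kbps and throttling down toward 64Kbps when BECNs arrive (the rates are illustrative):

```
interface Serial0/0
 encapsulation frame-relay
 traffic-shape rate 128000
 traffic-shape adaptive 64000
```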

Legacy Frame-Relay Traffic-Shaping

This feature uses the map-class frame-relay syntax and was initially designed specifically to implement Frame-Relay Traffic Shaping and Policing (FRTS and FRTP) in Cisco IOS routers. It is still probably the most widely used form of FRTS. You specify all parameters under a map-class and then apply this map-class to a specific PVC or interface using syntax similar to the following:

map-class frame-relay DLCI_101
frame-relay cir 256000
frame-relay bc 2560
frame-relay be 0
frame-relay mincir 192000
frame-relay adaptive-shaping becn
!
interface Serial 0/0
frame-relay traffic-shaping
!
interface Serial 0/0.1
frame-relay interface-dlci 101
class DLCI_101

A mandatory command that goes with Legacy FRTS is frame-relay traffic-shaping, applied at the interface level (there is also the frame-relay policing command to implement traffic policing). With this command applied, all PVC CIRs are set to the default of 56Kbps unless you change that value in a map-class that you apply. Additionally, the interface software queue is turned into FIFO, and every PVC gets its own second-level logical queue, realized as the shaper's queue. Legacy FRTS therefore provides a true two-level queueing hierarchy.

Legacy FRTS allows for various queueing methods at the per-VC level. You may use any of the legacy techniques such as Custom Queueing, Priority Queueing, FIFO, and WFQ/IP RTP Priority. More importantly, you may enable CBWFQ at the per-VC level by using the map-class command service-policy output along with a policy-map implementing the CBWFQ logic. Note that if you apply CBWFQ per-VC, the maximum available bandwidth is based on the minCIR setting for the VC, not the CIR.

All of the above QoS mechanisms can be combined with per-VC fragmentation, enabled solely by using the map-class command frame-relay fragment. Normally, you need to enable fragmentation on every PVC terminated on the interface, so make sure you configure all map-classes properly. As soon as fragmentation is enabled, the interface-level FIFO queue is turned into a special «truncated» Priority Queue used for interleaving fragments and voice packets. Only the High and Normal queues of the PQ are used to implement interleaving, and you can inspect queue utilization using the command show queueing.
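As a sketch, fragmentation would be added to the earlier map-class like this; the 320-byte fragment size is an illustrative value, equating to roughly 10 ms of serialization delay at the 256Kbps CIR:

```
map-class frame-relay DLCI_101
 frame-relay cir 256000
 frame-relay bc 2560
 frame-relay fragment 320
```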

Obviously enough, you cannot use GTS on the same interface where FRTS has been enabled, and vice versa - IOS will reject the command. Compared to GTS, Legacy FRTS provides a lot of syntactical consistency - you apply all features using map-class commands - plus you have a rich selection of QoS mechanisms to go along with it.

(By the way, if you're wondering about the purpose of the frame-relay tc command under a map-class, it is used in zero-CIR scenarios for traffic policing. With zero-CIR policing all traffic is considered exceeding, but you may want to specify a peak rate by setting Be and Tc together.)
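As a sketch of the zero-CIR case (the map-class name and values are illustrative), the effective peak rate works out to Be/Tc; here, 8000 bits over a 100ms Tc gives roughly 80Kbps:

```
interface Serial 0/0
frame-relay policing
!
map-class frame-relay ZERO_CIR
frame-relay cir 0
frame-relay be 8000
frame-relay tc 100
```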

MQC Frame-Relay Traffic-Shaping

This was Cisco's attempt to leverage MQC syntax for the purpose of traffic-shaping. The problem is that this syntax is still combined with the "legacy"-style map-class syntax. Here's how it works:

  • You create a "first-level" policy-map implementing shaping for a VC. You can only use class-default at this level, and you apply the shaping parameters using the commands shape average, shape peak, and shape adaptive. For example:
    policy-map SHAPE
    class class-default
    shape average 512000 5120 0
    shape adaptive 256000
  • You create a "nested" or "second-level" policy that implements CBWFQ. Note that CBWFQ/LLQ is the only queueing method supported with MQC-based FRTS (no CQ or PQ is allowed, though you can emulate WFQ using MQC syntax).
    policy-map CBWFQ
    class VOICE
    priority 128
    class class-default
    fair-queue
  • Combine both policy-maps and attach the parent policy to a map-class. You should not enable the interface-level command frame-relay traffic-shaping with MQC FRTS.
    policy-map SHAPE
    class class-default
    service-policy CBWFQ
    !
    map-class frame-relay DLCI_101
    service-policy output SHAPE
    !
    interface Serial 0/0.1
    frame-relay interface-dlci 101
    class DLCI_101

From the above configuration example it is apparent that PVC shaping settings are now defined using the MQC shape average and shape adaptive commands. Similar to the use of CBWFQ with legacy FRTS, available CBWFQ bandwidth is based on the shape adaptive setting. Also, as mentioned before, you should not use the command frame-relay traffic-shaping with MQC FRTS. In fact, legacy FRTS and MQC FRTS are incompatible.

Frame-Relay Fragmentation (FRF.12) is supported with MQC FRTS; however, you have to enable it at the interface level using the command frame-relay fragment X end-to-end. The interleaving queue is created automatically and cannot be seen using IOS show commands. Fragmentation is enabled on all PVCs terminated at the interface.

Compared to Legacy FRTS, the MQC equivalent has some unique features, known as "Voice Adaptive Shaping" and "Voice Adaptive Fragmentation". The first feature activates adaptive shaping when packets are detected in the LLQ (if one is configured), and the second activates fragmentation under the same condition of traffic being present in the LLQ. The first feature is useful in oversubscription scenarios, where you want to slow down from the peak rate to the committed rate when sending VoIP traffic to ensure better voice quality. The second feature is even more useful, as it allows you to enable fragmentation only when it's really needed, i.e. when a voice call is active.
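As a sketch, both voice-adaptive features could be enabled roughly as follows (rates, fragment size, and the 30-second deactivation timers are illustrative, and command availability varies by IOS release, so verify on your platform):

```
policy-map SHAPE
class class-default
shape average 512000
shape adaptive 256000
shape fr-voice-adapt deactivation 30
service-policy CBWFQ
!
interface Serial 0/0
frame-relay fragment 640 end-to-end
frame-relay fragmentation voice-adaptive deactivation 30
```

With this in place, the shaper drops from 512Kbps to 256Kbps while packets sit in the LLQ, and fragmentation stays active until the LLQ has been empty for the deactivation period.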

Class Based Generic Traffic Shaping (CB-GTS)

The last method is based purely on MQC syntax, using the generic command shape average. In many ways it's similar to Legacy GTS, but it uses the newer syntax and supports more granular application. Have a look at the following example:

policy-map SHAPE_DLCI_101
class class-default
shape average 256000
!
! You can match DLCI's in class-maps
!
class-map DLCI_202
match fr-dlci 202
!
policy-map SHAPE_DLCI_202
class class-default
shape average 256000
!
policy-map INTERFACE_POLICY
class DLCI_202
service-policy SHAPE_DLCI_202
!
interface Serial 0/0
service-policy output INTERFACE_POLICY
!
interface Serial 0/0.1
frame-relay interface-dlci 101
service-policy output SHAPE_DLCI_101

The above example shows two approaches to implementing VC-specific shaping: the first applies the policy to a specific subinterface, while the second uses a class-map matching a specific DLCI. The first method is more reminiscent of Legacy GTS, while the second allows you to group all VC policies under a single policy-map. Of course, you can always nest another MQC policy under the shaped class to implement CBWFQ and/or traffic marking/policing, just as you would with any normal MQC configuration. FRF.12 fragmentation is also supported, by means of the same interface-level frame-relay fragment command used with MQC FRTS.
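For instance, nesting a CBWFQ policy under the shaped class of the subinterface-based variant could look like this sketch (class names and rates are illustrative):

```
policy-map CBWFQ
class VOICE
priority 128
class class-default
fair-queue
!
policy-map SHAPE_DLCI_101
class class-default
shape average 256000
service-policy CBWFQ
!
interface Serial 0/0.1
frame-relay interface-dlci 101
service-policy output SHAPE_DLCI_101
```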

Now for the CB-GTS limitations. There is a bunch of them, unfortunately. First of all, adaptive shaping does not work with CB-GTS, i.e. the command shape adaptive does not have any effect. Secondly, you may enable fragmentation at the interface level, but you cannot use any of the MQC FRTS features, such as voice-adaptive fragmentation and shaping. Therefore, CB-GTS is not exactly a Frame-Relay Traffic Shaping solution, though it allows for generic shaping on a per-VC basis.

Summary

As IOS software evolved, multiple approaches to FRTS were developed. Possibly the most commonly used one nowadays is Legacy FRTS, which supports practically all features except the voice-adaptive shaping/fragmentation available with MQC FRTS only. Even though from today's perspective CB-GTS seems to be the most reasonable method, it still lacks support for an important feature: adaptive shaping. It's up to you to select the best method, but be aware of their limitations.
