Mar
19

I just finished up 2 weeks of a really great CCNP Voice bootcamp, covering all 5 of the latest 8.0 version exams from Cisco (CVOICE, CIPT1, CIPT2, CAPPS, & TVOICE). All in all we ended up with 62 completely brand-new hours of informative video that we are sure you will be excited to watch when they are posted to our streaming and download sites here in probably just about a week. We went fairly in-depth on most every topic, one of them being MGCP during our TVOICE section of class.

BTW, with this new 62 hours of CCNP Voice video, this brings INE to 320 hours of total CCNA-to-CCIE Voice video-on-demand training. Far, far more than any other vendor. And it is all up-to-date and taught by me, not by subcontracted instructors.

You may recall that in my last post related to MGCP Troubleshooting, we took a basic look at the MGCP commands that a Call-Agent (server) instructs a Gateway (client) to perform - something the RFC refers to as "verbs".

In this post, we are going to take a look at the output of the "debug mgcp packets" command for a single call, and then break down each section of the output into "transactions" (i.e. Command and Response).
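
For reference, here is a minimal sketch of the debugs most likely used on the gateway to capture a trace like the one below (the ISDN debug is assumed, since Q.931 messages appear in the output as well):

debug mgcp packets
debug isdn q931
terminal monitor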


So to begin with, here is the complete output of a single call from "debug mgcp packets":


ISDN Se0/0/0:23 Q931: RX <- SETUP pd = 8 callref = 0x00FF
Bearer Capability i = 0x8090A2
Standard = CCITT
Transfer Capability = Speech
Transfer Mode = Circuit
Transfer Rate = 64 kbit/s
Channel ID i = 0xA18381
Preferred, Channel 1
Progress Ind i = 0x8583 - Origination address is non-ISDN
Display i = 'Seattle US Phone'
Calling Party Number i = 0x4180, '2065015111'
Plan:ISDN, Type:Subscriber(local)
Called Party Number i = 0xC1, '2065011002'
Plan:ISDN, Type:Subscriber(local)

MGCP Packet received from 177.1.10.12:2427--->
CRCX 256 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
X: 3
L: p:20, a:G.729, s:off, t:b8, fxr/fx:t38
M: recvonly
R: D/[0-9ABCD*#]
Q: process,loop
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 256 OK
I: 31

v=0
o=- 49 0 IN IP4 177.1.254.1
s=Cisco SDP 0
c=IN IP4 177.1.254.1
t=0 0
m=audio 18714 RTP/AVP 18 100
a=rtpmap:18 G.729/8000
a=fmtp:18 annexb=no
a=rtpmap:100 X-NSE/8000
a=fmtp:100 200-202
a=X-sqn:0
a=X-cap: 1 audio RTP/AVP 100
a=X-cpar: a=rtpmap:100 X-NSE/8000
a=X-cpar: a=fmtp:100 200-202
a=X-cap: 2 image udptl t38
<---

ISDN Se0/0/0:23 Q931: TX -> CALL_PROC pd = 8 callref = 0x80FF
Channel ID i = 0xA98381
Exclusive, Channel 1

ISDN Se0/0/0:23 Q931: TX -> ALERTING pd = 8 callref = 0x80FF
Progress Ind i = 0x8088 - In-band info or appropriate now available

MGCP Packet received from 177.1.10.10:2427--->
RQNT 257 S0/SU0/DS1-0/1@Branch2 MGCP 0.1
X: 1
R: D/[0-9ABCD*#]
S: G/rt
Q: process,loop
<---

MGCP Packet sent to 177.1.10.10:2427--->
200 257 OK
<---

ISDN Se0/0/0:23 Q931: TX -> CONNECT pd = 8 callref = 0x80FF
ISDN Se0/0/0:23 Q931: RX <- CONNECT_ACK pd = 8 callref = 0x00FF

MGCP Packet received from 177.1.10.12:2427--->
MDCX 258 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
L: p:20, a:PCMU, s:off, t:b8, fxr/fx:t38
M: sendrecv
R: D/[0-9ABCD*#], FXR/t38
S:
Q: process,loop

v=0
o=- 49 0 IN EPN S0/SU0/DS1-0/3@corphq.voice.ine.com
s=Cisco SDP 0
t=0 0
m=audio 18214 RTP/AVP 0
c=IN IP4 177.1.11.26
a=X-sqn:0
a=X-cap:1 image udptl t38
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 258 OK
<---

ISDN Se0/0/0:23 Q931: RX <- DISCONNECT pd = 8 callref = 0x00FF
Cause i = 0x8290 - Normal call clearing

MGCP Packet received from 177.1.10.12:2427--->
MDCX 259 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
M: recvonly
R: D/[0-9ABCD*#]
Q: process,loop
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 259 OK
<---

MGCP Packet received from 177.1.10.12:2427--->
DLCX 260 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
S:
<---

MGCP Packet sent to 177.1.10.12:2427--->
250 260 OK
P: PS=974, OS=155840, PR=981, OR=156960, PL=1, JI=3, LA=0
<---

 
 
So now, looking at just the first transaction, we see an inbound call (thanks to the inclusion of the 'debug isdn q931' output), and that the call-agent (UCM) instructs the gateway to "CRCX" or "CreateConnection". The gateway responds with a very simple "200 OK", while also including the SDP (IETF Session Description Protocol) describing such things as the audio codec and DTMF handling. Notice how each call-agent verb (command) and each gateway response carries a matching 3-digit number. This is the transaction ID, and it ensures that a given response is tied to the specific command that triggered it (as there may be many commands and responses in flight on a heavily populated MGCP gateway).

We can clearly see in this initial command that the call-agent is requesting the "a:G.729" audio codec on the "L" (LocalConnectionOptions) line, and the gateway obliges with an SDP answer using RTP/AVP (Audio Video Profile) payload type 18 - G.729 without Annex B. The rest of the "L" line reveals other things about the call: "p" is the packetization period of 20ms, "s" is VAD (silence suppression), "t" is the RFC 2474 ToS byte (0xb8 is 10111000 in binary, i.e. DSCP EF), and "fxr/fx:t38" advertises fax capabilities. You might also note that the gateway was instructed with "M: recvonly" - as far as the connectionMode is concerned, the gateway should only receive packets, not send them. The "C" line is the global CallId, the "X" line is the RequestIdentifier (the local handle used to correlate notifications), and the "R" line lists the requested events - here, the DTMF digits the gateway should detect and report.


MGCP Packet received from 177.1.10.12:2427--->
CRCX 256 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
X: 3
L: p:20, a:G.729, s:off, t:b8, fxr/fx:t38
M: recvonly
R: D/[0-9ABCD*#]
Q: process,loop
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 256 OK
I: 31

v=0
o=- 49 0 IN IP4 177.1.254.1
s=Cisco SDP 0
c=IN IP4 177.1.254.1
t=0 0
m=audio 18714 RTP/AVP 18 100
a=rtpmap:18 G.729/8000
a=fmtp:18 annexb=no
a=rtpmap:100 X-NSE/8000
a=fmtp:100 200-202
a=X-sqn:0
a=X-cap: 1 audio RTP/AVP 100
a=X-cpar: a=rtpmap:100 X-NSE/8000
a=X-cpar: a=fmtp:100 200-202
a=X-cap: 2 image udptl t38
<---

 
 
The next transaction shows the ISDN side telling us that the phone out on the PSTN is now alerting (ringing), and the MGCP RQNT (NotificationRequest) tells the gateway, via the "S" line, to play "rt" - ringback tone.


ISDN Se0/0/0:23 Q931: TX -> ALERTING pd = 8 callref = 0x80FF
Progress Ind i = 0x8088 - In-band info or appropriate now available

MGCP Packet received from 177.1.10.10:2427--->
RQNT 257 S0/SU0/DS1-0/1@Branch2 MGCP 0.1
X: 1
R: D/[0-9ABCD*#]
S: G/rt
Q: process,loop
<---

MGCP Packet sent to 177.1.10.10:2427--->
200 257 OK
<---

 
 
In this next transaction we see the ISDN output showing that the call is now connecting, and the MGCP output showing that the call-agent is instructing the gateway to "MDCX" or "ModifyConnection" - this time to change its RTP connectionMode to send and receive packets ("M: sendrecv"). This is where the call was answered, and audio now commences.


ISDN Se0/0/0:23 Q931: TX -> CONNECT pd = 8 callref = 0x80FF
ISDN Se0/0/0:23 Q931: RX <- CONNECT_ACK pd = 8 callref = 0x00FF

MGCP Packet received from 177.1.10.12:2427--->
MDCX 258 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
L: p:20, a:PCMU, s:off, t:b8, fxr/fx:t38
M: sendrecv
R: D/[0-9ABCD*#], FXR/t38
S:
Q: process,loop

v=0
o=- 49 0 IN EPN S0/SU0/DS1-0/3@corphq.voice.ine.com
s=Cisco SDP 0
t=0 0
m=audio 18214 RTP/AVP 0
c=IN IP4 177.1.11.26
a=X-sqn:0
a=X-cap:1 image udptl t38
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 258 OK

 
 
In this next transaction we see from the ISDN output that the call is being disconnected, and the call-agent instructs the gateway with another "MDCX" or "ModifyConnection" - this time to change its RTP connectionMode back to receive only. This is the call-agent prepping the gateway for the call being torn down.


ISDN Se0/0/0:23 Q931: RX <- DISCONNECT pd = 8 callref = 0x00FF
Cause i = 0x8290 - Normal call clearing

MGCP Packet received from 177.1.10.12:2427--->
MDCX 259 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
M: recvonly
R: D/[0-9ABCD*#]
Q: process,loop
<---

MGCP Packet sent to 177.1.10.12:2427--->
200 259 OK
<---

 
 
In this final section (immediately following the previous one), we see the call-agent instructing the gateway to "DLCX" or "DeleteConnection". The gateway obliges, and also provides some useful statistics about the call (ConnectionParameters, to be specific), namely "P: PS=974, OS=155840, PR=981, OR=156960, PL=1, JI=3, LA=0" (PS=PacketsSent, OS=OctetsSent, PR=PacketsReceived, OR=OctetsReceived, PL=PacketsLost, JI=Jitter, LA=Latency). BTW, if we had a no-audio or one-way audio issue, we could clearly see it here: with no audio at all, both PS and PR would be 0, and with one-way audio, only PS or only PR would be 0 (no packets sent and/or received).


MGCP Packet received from 177.1.10.12:2427--->
DLCX 260 S0/SU0/DS1-0/3@corphq.voice.ine.com MGCP 0.1
C: D0000000028182cf000000F500000015
I: 31
X: 3
S:
<---

MGCP Packet sent to 177.1.10.12:2427--->
250 260 OK
P: PS=974, OS=155840, PR=981, OR=156960, PL=1, JI=3, LA=0
<---

 
 
Seeing as we have just covered SDP, it only makes sense to move on to looking at SIP, as it uses SDP for audio information as well.

 


Feb
11

This isn't exactly the latest news, and it doesn't affect the CCIE Voice Lab exam (although it very well may affect the new CCNP Voice exams). However, I am hearing more and more about people upgrading their voice routers to newer 15.x IOS code and not realizing that existing (working) VoIP calls are being broken by new, intelligent feature defaults.

Last July, Cisco decided (wisely, IMHO) to create a new style of toll-fraud prevention to keep would-be dishonest people from defrauding a company by placing calls through its misconfigured voice gateway(s), at the company's expense. This new mechanism works by preventing unintended TDM (FXO/CAS/PRI) and VoIP (H.323 & SIP) calls from being placed through a given company's voice gateway(s) - simply by blocking all unknown traffic. Beginning in IOS 15.1(2)T, Cisco added a new application to the default IOS stack of apps that compares every source IP address against an explicitly configured list in the IOS running config; if the IP address(es) or subnets do not match, all VoIP traffic from that source is denied. Also, the new default for all POTS voice-ports is to not allow secondary dial-tone, making direct-inward-dial the default for CAS/PRI and PLAR necessary for FXO.

We can trust our VoIP sources with a few very easy commands.
If we wanted to trust only our CUCM Publisher and Subscriber servers on our GradedLabs Voice Racks, we would add:

voice service voip
ip address trusted list
ipv4 177.1.10.10 255.255.255.255
ipv4 177.1.10.20 255.255.255.255





Or possibly if we wanted to trust the entire subnet that our servers were on, we would add:

voice service voip
ip address trusted list
ipv4 177.1.10.0 255.255.255.0



We also have the ability to go back to pre-15.1(2)T behavior by simply doing either this:

voice service voip
ip address trusted list
ipv4 0.0.0.0 0.0.0.0




Or this:

voice service voip
no ip address trusted authenticate





Also, we have the ability to configure the router for pre-15.1(2)T behavior as it relates to inbound POTS calls.
For inbound ISDN calls we would add:

voice service pots
no direct-inward-dial isdn




And for inbound FXO calls we would add:

voice-port 0/0
secondary dialtone



One nice thing is that when booting an IOS router with this toll-fraud functionality, a message is displayed on boot-up, letting us know about it - essentially warning us that we need to configure something if we wish VoIP calls to work.

A link to Cisco's tech note describing this new functionality can be found here.

In summary, when upgrading a previously working H.323 or SIP VoIP gateway to IOS 15.1(2)T or later, all VoIP calls will cease to function until the proper configuration changes have been added to allow the intended VoIP source traffic into your voice gateway. In general, this shouldn't break FXO/CAS/PRI for most configurations out there, as most folks are likely to have their routers configured properly to handle inbound POTS traffic (i.e. PLAR on their FXO ports and DID on their CAS/PRI ports - or so we should hope) - I suppose YMMV depending on each unique configuration.

Let me know if you think this is a good thing that Cisco has done.

 


Feb
09

Labs 1 & 2 from INE's current CCIE Voice Volume II Workbook have been completely re-written from the ground up, and have now been pushed to all subscribers in their INE Members area.

These two new labs both contain video-based solutions, which walk you -- task-by-task -- through every step of the necessary configuration, along with plenty of live troubleshooting, just as we should expect when sitting for the actual CCIE Voice Lab exam.

For the first two labs alone, we have recorded over forty hours of video-based solutions.

We are in the process of completely re-writing every lab in the Voice Volume II Workbook, and will be releasing them and posting announcements here as each new lab is completed. We will also be releasing new CCIE Voice Deep Dives shortly. So stay tuned for much, much more from INE's Voice program.

Happy Labbing!

Mark

Nov
08

Abstract

This publication discusses the spectrum of problems associated with transporting Constant Bit Rate (CBR) circuits over packet networks, focusing specifically on VoIP services. It provides guidance on practical calculations for voice bandwidth allocation in IP networks, including the maximum bandwidth proportion allocation and LLQ queue settings. Lastly, the publication discusses the benefits and drawbacks of transporting CBR flows over packet-switched networks and demonstrates some effectiveness criteria.

Introduction

Historically, the main design goal of Packet Switched Networks (PSNs) was optimum bandwidth utilization on low-speed links. Compared to their counterpart, circuit-switched networks (CSNs, such as SONET/SDH networks), PSNs use statistical as opposed to deterministic (synchronous) multiplexing. This feature makes PSNs very effective for bursty traffic sources, i.e. those that send traffic sporadically. Indeed, with many such sources the transmission channel can be optimally utilized by sending traffic only when necessary. Statistical multiplexing is only possible if every node in the network implements packet queueing, because PSNs introduce link contention. One good historical example is ARPANET: the network's theoretical foundation was developed in Kleinrock's work on distributed queueing systems (see [1]).

In PSNs, it is common for traffic from multiple sources to be scheduled for sending out the same link at the same moment. In such cases of contention for the shared resource, the excess packets are buffered, delayed and possibly dropped. In addition, packets can be re-ordered, i.e. packets sent earlier may arrive behind packets that were sent after them. The latter is normally a result of packets taking different paths through the PSN due to routing decisions. Such behavior is acceptable for bursty, delay-insensitive data traffic, but completely inconsistent with the behavior of constant bit rate (CBR), delay/jitter-sensitive traffic sources, such as emulated TDM traffic. Indeed, transporting CBR flows over PSNs poses significant challenges. Firstly, emulating a circuit service requires that every node avoid buffering the CBR packets (i.e. not introduce delay or packet drops) and be "flow-aware" to avoid re-ordering. The other challenge is the "packet overhead" tax imposed on emulated CBR circuits. By definition, CBR sources produce relatively small bursts of data at regular periodic intervals; the more frequent the intervals, the smaller the bursts typically are. In turn, PSNs apply a header to every transmitted burst of information to implement network addressing and routing, with the header size often being comparable to the CBR payload. This significantly decreases link utilization efficiency when transporting CBR traffic.

Emulating CBR services over PSN

At first, it may seem that changing the queuing discipline in every node will resolve the buffering problem. Obviously, if we distinguish CBR flow packets and service them ahead of all other packets using a priority queue, they would never get buffered. This assumes that the link speed is fast enough that serialization delay is negligible in the context of the given CBR flow. Such delay may vary depending on the CBR source: for example, voice flows typically produce one codec sample every 10ms, and based on this, serialization delay at every node should not exceed 10ms, or preferably be less than that (otherwise, the next packet produced will "catch up" with the previous one). The serialization problem on slow links can be solved using fragmentation and interleaving mechanics, e.g. as demonstrated in [6]. Despite priority queueing and fragmentation, the situation becomes more complicated with multiple CBR flows transported over the same PSN. The reason is that there is now contention among the CBR flows themselves, since all of them should be serviced on a priority basis. This creates queueing issues and intolerable delays. There is only one answer to reducing resource contention in PSNs - over-provisioning.

Following the work in [2], let's review how the minimum over-provisioning rate can be calculated. We first define CBR flows as those that cannot tolerate a single delay of their packets in the link queue. Assume there are r equally behaving traffic flows contending for the same link, and pick a designated flow out of the set. When we randomly "look" at the link, the probability that we see a packet from the designated flow is 1/r, since we assume that all flows are serviced equally by the connected PSN node. The probability that the selected packet does NOT belong to our flow is then 1-1/r. If the link can accept at maximum t packets per millisecond, then during an interval of k milliseconds the probability that our designated flow may send a packet over the link without blocking is P=1-(1-1/r)^tk, where (1-1/r)^tk is the probability of NOT seeing our flow's designated packet among the tk packets. The value P is the probability of any given packet NOT being delayed due to contention. It is important to understand that every delayed packet causes the flow's behavior to deviate from CBR. Following [2], we define u=tk as the "over-provisioning ratio", where u=1 means the channel can send only one flow packet in the time it takes the flow to generate that packet, i.e. channel rate = flow rate. When u=2 the link is capable of sending twice as many packets during a unit of time as a single flow generates in the same interval. With the new variable, the formula becomes P=1-(1-1/r)^u. Fixing the value of P in this equation we obtain:

u=ln(1-P)/ln(1-1/r). (*)

which is the minimum over-provisioning ratio needed to achieve the desired probability P of successfully transmitting the designated flow's packets when r equal flows are contending for the link. For example, with P=99.9% and r=10 we end up with u=ln(0.001)/ln(0.9)=65.5. That is, in order to guarantee that 99.9% of packets are not delayed for 10 CBR flows, we need at least 66 times more bandwidth than a single flow requires. Lowering P to 99% results in an over-provisioning coefficient of u=44. It is also interesting to look at the r/u ratio, which shows what portion of the minimally over-provisioned link's bandwidth would be occupied by the "sensitive" flows when they are all transmitted in parallel. If we take the ratio r/u=ln((1-1/r)^r)/ln(1-P), then for large r we can replace (1-1/r)^r with 1/e, and the link utilization ratio is approximated by:

r/u=-1/ln(1-P). (**)

For P=99% we get a ratio of 21%, for P=99.9% the ratio is 14%, and for P=90% the ratio becomes 43%. In practice, this means that for a moderately large number of concurrent CBR flows, e.g. over 30, you may allocate no more than the specified percentage of the link's bandwidth to CBR traffic, based on the target QoS requirement.
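
For readers who want to check the arithmetic, here is the worked computation behind the numbers above, simply restating formulas (*) and (**) with the values from the text (LaTeX notation):

\begin{align*}
u &= \frac{\ln(1-P)}{\ln(1-1/r)} = \frac{\ln(0.001)}{\ln(0.9)} \approx \frac{-6.908}{-0.105} \approx 65.5 \quad (P=99.9\%,\; r=10)\\
\frac{r}{u} &\approx -\frac{1}{\ln(1-P)} = \frac{1}{\ln(100)} \approx 0.217 \approx 21\% \quad (P=99\%)
\end{align*}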

Now that we are done with buffering delays, what about packet reordering and the packet overhead tax? The reordering problem can be solved at the routing level in PSNs: if every routing node is aware of the flow state, it may ensure that all packets belonging to the same flow are sent across the same path. This is typically implemented by deep packet inspection (which, by the way, violates the end-to-end principle as stated in RFC 1958) and classifying the packets based on higher-level information. Such implementations are, however, rather effective, as inspection and flow classification are typically performed in the forwarding path using hardware acceleration. The overhead tax problem has two solutions. The first one is, again, over-provisioning: by using a high-capacity, lightly utilized link we may ignore the bandwidth wasted on overhead. The second solution requires adding some state to network nodes: by performing flow inspection at both ends of a single link we may strip the header information and replace it with a small flow ID. The other end of the link reconstructs the original headers by matching the flow ID to locally stored state information. This solution violates the end-to-end principle and scales poorly as the number of flows grows, so it is typically used on low-speed links. For example, VoIP services utilize IP/RTP/UDP and possibly TCP header compression to reduce the packet overhead tax.
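
As a concrete illustration of that last point, here is a minimal IOS sketch of enabling RTP header compression on a Frame-Relay interface (the interface name is just an example; this is only meant to show the cRTP feature the text refers to, not a prescribed design):

interface Serial0/0
encapsulation frame-relay
frame-relay ip rtp header-compression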

Practical Example: VoIP Bandwidth Consumption

Let's say we have a 2Mbps link and we want to know how to provision the priority-queue settings for G.729 calls. Firstly, we need to know the per-flow bandwidth consumption; you may find enough information on this topic in [3]. Assuming we are using header compression over Frame-Relay, the per-flow bandwidth is 11.6Kbps. With a maximum link capacity of roughly 2000Kbps, the theoretical maximum over-subscription rate is 2000/11.6=172. We can find the maximum number of flows allowed under the condition P as

r=1/(1-exp(ln(1-P)/u)) = 1/(1-(1-P)^(1/u)) (***)

setting u=170 and P=0.99. This yields a theoretical limit of 37 concurrent flows. The total bandwidth for that many flows is 37*11.6=429Kbps, or about 21% of the link capacity, as predicted by the asymptotic formula (**) above. The remaining bandwidth can be used by other, non-CBR applications, as should be expected from a PSN exhibiting high link utilization efficiency.
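
As a sanity check, here is how formula (***) follows from (*) and how the figure of 37 flows comes out, restating the text's own numbers with u=170 (LaTeX notation):

\begin{gather*}
u = \frac{\ln(1-P)}{\ln(1-1/r)} \;\Rightarrow\; 1-\frac{1}{r} = (1-P)^{1/u} \;\Rightarrow\; r = \frac{1}{1-(1-P)^{1/u}}\\
r = \frac{1}{1-(0.01)^{1/170}} \approx \frac{1}{1-0.973} \approx 37, \qquad 37 \cdot 11.6\ \text{Kbps} \approx 429\ \text{Kbps} \approx 21\%\ \text{of}\ 2\ \text{Mbps}
\end{gather*}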

Knowing the aggregate bandwidth and the maximum number of flows gives us the parameters for admission control tools (e.g. policer rate, RSVP bandwidth and so forth). However, what is still left to define are the burst settings and the queue depth for LLQ. The maximum theoretical burst size equals the maximum number of flows multiplied by the voice frame size. From [3] we readily obtain that a compressed G.729 frame for a Frame-Relay connection is 29 bytes. This gives us a burst of 37*29=1073 bytes, which we can round up to 1100 bytes for safety. The maximum queue depth can be defined as the number of flows minus one, since in the worst case one priority packet is being serialized while the others are held in the priority queue for processing. This means the queue depth would be at most 36 packets. The resulting IOS configuration would look like:

policy-map TEST
class TEST
priority 430 1100
queue-limit 36 packets


Circuits vs Packets

It is interesting to compare the voice call capacity of a digital TDM circuit versus the same circuit being used for packet-mode transport. Taking an E1 circuit, we can transport as many as 30 calls if one channel is used for associated signaling (e.g. ISDN). Compare this to the 37 G.729 VoIP calls we may obtain if the same circuit is channelized and runs IP - about a 20% increase in call capacity. However, it is important to point out that the quality of G.729 calls is degraded compared to digital 64Kbps bearer channels, not to mention that other services cannot be delivered over a compressed, emulated voice channel. It might be fairer to compare the digital E1 to a packetized E1 circuit carrying G.711 VoIP calls. In this case, the bit rate for a single call running over Frame-Relay encapsulation with IP/RTP/UDP header compression would be (160+2+7)*50*8=67600bps, or 67.6Kbps. The maximum over-provisioning rate is 29 in this case, which ends up allowing only six (6) VoIP calls on the packetized link with P=99% in-time delivery! Therefore, if you try to provide digital call quality over a packet network you end up with an extremely inefficient implementation. Finally, consider an intermediate case - G.729 calls without IP/RTP/UDP header compression. This case assumes that complexity belongs at the network edge, as transit links are not required to implement the header compression procedure. We end up with the following: an uncompressed G.729 call over Frame-Relay generates (20+40+7)*50*8=26.8Kbps, which results in an over-provisioning coefficient of u=74 and r=16 flows - slightly over half the number that an E1 could carry natively.
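
To summarize the comparison, the figures below are simply collected from the computations above; the per-call rates assume Frame-Relay encapsulation, and the flow counts assume P=99% on the same roughly 2Mbps link (LaTeX notation):

\begin{align*}
\text{G.711 + cRTP:} &\quad (160+2+7)\cdot 50\cdot 8 = 67600\ \text{bps}, \quad u \approx 29, \quad r \approx 6\ \text{calls}\\
\text{G.729, no cRTP:} &\quad (20+40+7)\cdot 50\cdot 8 = 26800\ \text{bps}, \quad u \approx 74, \quad r \approx 16\ \text{calls}\\
\text{G.729 + cRTP:} &\quad (20+2+7)\cdot 50\cdot 8 = 11600\ \text{bps}, \quad u \approx 172, \quad r \approx 37\ \text{calls}\\
\text{Native TDM E1:} &\quad 30\ \text{calls on 64 Kbps bearer channels}
\end{align*}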

Using the asymptotic formula (**) we see that for P=99% no more than 21% of the packetized link can be used for CBR services. This implies that the packet compression scheme should reduce the bandwidth of a pure CBR flow by more than 5 times in order to compete effectively with circuit-switched transport. Based on this, we conclude that PSNs can be more efficient than "native" circuit networks for CBR transport only if they utilize advanced processing features such as payload/header compression yielding a compression coefficient of over 5x. However, we should keep in mind that such compression is not possible for all CBR services; e.g. relaying T1/E1 over IP has to maintain the full bandwidth of the original TDM channels, which is extremely inefficient in terms of resource utilization. Furthermore, the advanced CODEC features require complex equipment at the network edge and possibly additional complexity in other parts of the network, e.g. in order to implement link header compression.

It could be argued that the remaining 79% of the packetized link could be used for data transmission, but the same is possible with circuit-switched networks, provided that packet routers are attached at the edges. All the data packet-switching routers need to do is dynamically request transmission circuits from the CSN based on traffic demands and use them for packet transmission. This approach has been implemented, among others, in GMPLS ([5]).

Conclusions

The above logic demonstrates that PSNs were not really designed to be good at emulating true CBR services - naturally, as the original intent of PSNs was maximizing the use of scarce link bandwidth. Transporting CBR services not only requires complex queueing disciplines but also, ultimately, over-provisioning the link bandwidth, thus somewhat defeating the main purpose of PSNs. Indeed, if all that a PSN is used for is CBR service emulation, the under-utilization remains very significant. Some cases, like VoIP, allow for effective payload transformation and significant bandwidth reduction, which permits more efficient use of network resources. On the other hand, such payload transformation requires introducing extra complexity into networking equipment. All this is in addition to the fact that packet-switching equipment is inherently more complex and expensive than circuit-switching equipment, especially for very high-speed links. Indeed, packet-switching logic requires complex dynamic lookups, large buffer memory and an internal interconnection fabric. Memory requirements and dynamic state grow proportionally to the link speed, making high-speed packet-switching routers extremely expensive not only in hardware but also in software, due to advanced control-plane requirements and proliferating services. More on this subject can be found in [4]. It is worth mentioning that packet-switching inefficiency in the network core was recognized a long time ago, and there have been attempts to integrate circuit-switching core networks with packet-switching networks, the most notable being GMPLS ([5]). However, so far, industry inertia has kept any of the proposed integration solutions from becoming viable.

Despite all these arguments, VoIP implementations have been highly successful so far, most likely thanks to the effectiveness of VoIP codecs. Of course, no one can yet say that VoIP over the Internet provides quality comparable to digital phone lines, but at least it is cheap, and that's what the market is looking for. VoIP has also been highly successful in enterprises, mainly because enterprise campus networks are mostly high-speed, switched Ethernet environments that show very low general link utilization, within 1-3% of available bandwidth. In such over-provisioned conditions, deploying VoIP should not pose major QoS challenges.

Further Reading

[1] Information Flow in Large Communication Nets, L. Kleinrock
[2] The Case for Service Overlays, Brassil, J.; McGeer, R.; Sharma, P.; Yalagandula, P.; Mark, B.L.; Zhang, S.; Schwab, S.
[3] Voice Bandwidth Consumption, INE Blog
[4] Circuit Switching in the Internet, Pablo Molinero Fernandez
[5] RFC 3945
[6] PPP Multilink Interleaving over Frame-Relay

Oct
17

Computing voice bandwidth is usually required for scenarios where you provision an LLQ queue based on the number of calls and the VoIP codec used. You need to account for the codec rate, Layer 3 overhead (IP, RTP and UDP headers) and Layer 2 overhead (Frame-Relay, Ethernet, HDLC etc. headers). Accounting for Layer 2 overhead is important, since the LLQ policer takes this overhead into account when enforcing the maximum rate.

We are going to consider two codecs for bandwidth computation: G.729 and G.711. By default, both codecs generate 50 VoIP packets per second. However, the codec framing interval is 10ms (100 frames per second); therefore, each VoIP packet carries two frames of voice samples. The frame sizes are 10 bytes and 80 bytes for the G.729 and G.711 codecs respectively.

Based on this, G.729 generates [10*2]*50*8=8000bps and G.711 generates [80*2]*50*8=64000bps of “payload” rate – no Layer 3 or Layer 2 overheads.

The RTP header size is 12 bytes and the UDP header size is 8 bytes. A typical IP header (no options) is 20 bytes in length. Therefore, the Layer 3 overhead is 40 bytes if we don't use header compression.

The following are the formats of WAN frames commonly used to transport voice. Note that these formats remain the same with or without FRF.12/MLP fragmentation schemes, since voice packets are never fragmented in a good design.

Highlighted in green are the portions of the Layer 2 frames that the Cisco IOS queue scheduler accounts for when computing the actual frame size. Note that the scheduler does not account for the full Layer 2 overhead, so you need to provision slightly more bandwidth for LLQ to keep other classes from being allotted bandwidth that the voice traffic actually needs. As we can see, both Cisco and IETF Frame-Relay encapsulations add 7 bytes of Layer 2 overhead to VoIP packets. The same holds true for HDLC encapsulation (which is not very common, but is included here for the sake of completeness). PPP over Frame-Relay adds 9 bytes of overhead – the largest overhead of all the presented encapsulation types.

Using the information above, you can compute the bandwidth usage of an uncompressed voice traffic flow across any WAN connection. For example, let's compute the bandwidth consumption of a G.729 call across a Frame-Relay link with FRF.12 fragmentation. First, FRF.12 does not fragment voice packets if configured properly. Next, the size of the payload plus Layer 3 overhead is 2x10 bytes + 40 bytes = 60 bytes. Based on the 50 pps rate and adding the 7 bytes of Layer 2 overhead, we end up with a bandwidth value of:

(20+40+7)*50*8=26800bps.

If you want to use the G.711 codec, replace the 20-byte payload with 160 bytes. The result is:

(160+40+7)*50*8=82800bps.

Another thing to consider is IP/RTP/UDP header compression. Cisco's implementation reduces the total Layer 3 overhead of 40 bytes (12+8+20) down to just 2 bytes (no UDP checksum). Let's compute the bandwidth usage for a G.729 call over MLPoFR with header compression (9 bytes of Layer 2 overhead):

(20+2+9)*50*8=12400bps.

The same computation for compressed G.729 over Frame-Relay, with or without FRF.12, brings the following result:

(20+2+7)*50*8=11600bps.

Now a few words about running VoIP traffic across Ethernet. Usually you don't use CBWFQ/LLQ on fast connections on small to mid-range routers to guarantee bandwidth to VoIP traffic, since most of these routers are not capable of sending traffic at rates that would oversubscribe a 100Mbps interface. However, you may occasionally use Ethernet as a "WAN" connection with Class-Based Shaping for sub-rate access. So, just in case: the Layer 2 overhead of a typical Ethernet frame is 18 bytes – 14 bytes for the Ethernet header and 4 bytes for the FCS (32 bits). If the frame carries a VLAN tag, add another 4 bytes, for 22 bytes of total overhead. Note that you will typically see the G.711 codec used over LAN links.
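
For completeness, here is the same style of computation for an uncompressed G.711 call over Ethernet, using the overhead figures above (a worked example in LaTeX notation, not a measured value):

\begin{align*}
\text{Untagged:} &\quad (160+40+18)\cdot 50\cdot 8 = 87200\ \text{bps} \approx 87.2\ \text{Kbps}\\
\text{802.1Q tagged:} &\quad (160+40+22)\cdot 50\cdot 8 = 88800\ \text{bps} \approx 88.8\ \text{Kbps}
\end{align*}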

May
02

A voice lab rack usually utilizes a dedicated piece of hardware to simulate the PSTN switch. Commonly, you will find a Cisco router in this role, with a number of E1/T1 cards set to emulate the ISDN network side. It perfectly suits the function, switching ISDN connections between the endpoints. Additionally, it is often required to have an "independent" PSTN phone connected to the PSTN switch, in order to represent "outside" dialing patterns - such as 911, 999, 411 and 1-800/900 numbers. The most obvious way to do this is to enable CallManager Express on the PSTN router and register either a hardware IP Phone or an IP soft-phone (such as IP Blue or CIPC) with the CME system.

However, there is another way to accomplish the same goal using IOS functionality alone. It relies on an IP-to-IP gateway feature called the "RTP loopback" session target. It is intended to be used for VoIP call testing, but can easily be utilized to loop incoming PSTN calls back to themselves. Let's say we want the PSTN router to respond to incoming calls to the emergency number 911. Here is how a configuration would look:

PSTN:
voice service voip
allow-connections h323 to h323
!
interface Loopback0
ip address 177.254.254.254 255.255.255.255
!
dial-peer voice 911 voip
destination-pattern 911
session target ipv4:177.254.254.254
incoming called-number 999
tech-prefix 1#
!
dial-peer voice 1911 voip
destination-pattern 1#911
session target loopback:rtp
incoming called-number 1#911

The trick is that only IP-to-IP calls can be looped back. Because of that, we need to redirect the incoming PSTN call to the router itself first, in order to establish an incoming VoIP call leg.

While this approach permits VoIP call testing, it lacks one important feature available with a "real" PSTN phone: placing calls from the PSTN phone to the in-rack phones. However, you can always use the "csim start" command on the PSTN router to overcome this obstacle. Have fun!
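
For example, a quick sketch of simulating a call from the PSTN router itself (csim is a hidden, unsupported test command, and the extension 2001 is just a placeholder for one of your in-rack directory numbers):

PSTN#csim start 2001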

Feb
26

Catalyst QoS configuration for IP Telephony endpoints is one of the CCIE Voice lab topics. Many people have issues with it, because of the need to memorize a lot of SRND recommendations to do it right. The good news is that during the lab exam you have full access to the QoS SRND documents and UniverCD content. The bad news is that you probably won't have enough time to navigate UniverCD comfortably, plus the reference configurations often have a lot of typos and mistakes in them.

Here are the four main goals you need to accomplish with Catalyst QoS:

1) Remark voice signaling and bearer traffic on the server ports (CCMs & Unity) to ensure compliance with QoS SRND.

2) Classify & mark voice/signaling traffic on Cisco IP Phones switch-ports. Apply scavenger markdown if required.

3) If required, ensure proper class to interface queue mappings and WRR weight assignments. Provision expedite queue if needed.

4) Trust marking on uplinks to the routers (to retain the marking for traffic entering from the WAN). Apply DSCP mutations if needed.

The first thing you should always keep in mind – don’t do the things you are not asked to do. For example, if they require you to enforce traffic marking in the Catalyst switches, but don’t ask for PQ/WRR weights tuning – don’t even bother with the latter task.

The second point – never type your configs directly into the switch CLI. Copy-paste them from the DocCD and edit them in notepad. Save your switch running config and then paste. Practice this long enough to develop good speed and typing accuracy.

OK, to begin with, all the configuration examples you need (for every major switch model) can be found here:

UniverCD > Voice/Telephony > Cisco CallManager > 4.1 > SRND > IP Telephony Endpoints

We start with 6500 & IP Phones. Copy-paste the stuff they have on the documentation page and then remove all the leftovers (Press Ctrl-H to search & replace in notepad). This is what they have on the DocCD for CCM 4.x:

#
# CoS->DSCP map according to 4.x model
# (note that CoS 3 maps to CS3 not AF31 for signaling)
#
set qos cos-dscp-map 0 8 16 24 32 46 48 56

#
# DSCP markdown settings.
#
# Note that on DocCD they put spaces between the
# DSCP values and commas - remove those
#
set qos policed-dscp-map 0,24,26,46:8

#
# They have policers set up for everything.
# Depending on your task you may not need all of them
#
set qos policer aggregate VVLAN-VOICE rate 128 burst 8000 drop

set qos policer aggregate VVLAN-CALL-SIGNALING rate 32 burst 8000 policed-dscp

set qos policer aggregate VVLAN-ANY rate 5000 burst 8000 policed-dscp

set qos policer aggregate PC-DATA rate 5000 burst 8000 policed-dscp

#
# Policers are applied using QoS ACLs on 6500.
#
# Don’t forget to replace
# "Voice_IP_Subnet/Subnet_Mask"
# with your actual voice VLAN subnet e.g. 177.1.101.0/24
#
set qos acl ip IPPHONE-PC dscp 46 aggregate VVLAN-VOICE udp 177.1.101.0 255.255.255.0 any range 16384 32767

set qos acl ip IPPHONE-PC dscp 24 aggregate VVLAN-CALL-SIGNALING tcp 177.1.101.0 255.255.255.0 any range 2000 2002

set qos acl ip IPPHONE-PC dscp 0 aggregate VVLAN-ANY 177.1.101.0 255.255.255.0 any

set qos acl ip IPPHONE-PC dscp 0 aggregate PC-DATA any

#
# Commit the ACL and apply it to respective voice-ports
#
commit qos acl IPPHONE-PC

set port qos mod/port trust-device ciscoipphone
set qos acl map IPPHONE-PC mod/port

Next, configure the 3550 for policing and re-marking on Cisco IP Phone ports. Use the same copy-paste trick. Watch for typos - there are tons of them in the Cisco example (e.g. missing dashes, two DSCP values on separate lines in the voice-signaling class-map, etc.).

!
! Replace vvlan_id and dvlan_id in
! text with your values e.g. 101 & 201
!

!
! CoS->DSCP map per CS3 usage for signaling
!
mls qos map cos-dscp 0 8 16 24 34 46 48 56

!
! Markdown everything to CS1 (scavenger)
!
mls qos map policed-dscp 0 24 26 46 to 8

!
! ACL to match any IP traffic - the DocCD version is missing
! the dash in the keyword "access-list"
!
ip access-list standard ACL_ANY
permit any

!
! Voice bearer
!
class-map match-all VOICE
match ip dscp 46

!
! VoIP signaling
!
class-map match-any CALL-SIGNALING
match ip dscp 24 26

!
! Per-VLAN: Voice Bearer & Signaling
!
class-map match-all VVLAN-VOICE
match vlan 101
match class-map VOICE

class-map match-all VVLAN-CALL-SIGNALING
match vlan 101
match class-map CALL-SIGNALING

!
! DocCD has incorrect acl name "ACL_Name" here,
! replace with ACL_ANY
!
class-map match-all ANY
match access-group name ACL_ANY

!
! Anything else on Voice and Data VLAN
!
class-map match-all VVLAN-ANY
match vlan 101
match class-map ANY

!
! Anything on Data VLAN
!
class-map match-all DVLAN-ANY
match vlan 201
match class-map ANY

!
! The actual Per-Port Per-VLAN policy map
!

!
! Voice traffic is policed hard to 128Kbps
!
policy-map IPPHONE-PC
class VVLAN-VOICE
set ip dscp 46
police 128000 8000 exceed-action drop

!
! Signaling traffic is remarked on exceed
!
class VVLAN-CALL-SIGNALING
set ip dscp 24
police 32000 8000 exceed-action policed-dscp-transmit

!
! Anything else on Voice VLAN
!
class VVLAN-ANY
set ip dscp 0
police 32000 8000 exceed-action policed-dscp-transmit

!
! The DocCD uses the name DVLAN-VOICE here; it should be
! DVLAN-ANY
!

!
! Data traffic is remarked to CS1 when it exceeds 5Mbps
!
class DVLAN-ANY
set ip dscp 0
police 5000000 8000 exceed-action policed-dscp-transmit

!
! Apply the policy
!
interface FastEthernet 0/1
switchport voice vlan 101
switchport access vlan 201
mls qos trust device cisco-phone
service-policy input IPPHONE-PC

Next we need to enforce marking on server traffic. For this one, you'd better memorize all the voice signaling ports. Use the following link as your reference:

TCP and UDP Ports Used by Cisco CallManager 3.3

However, if you suddenly find you've forgotten some of the ports, don't panic. Use the command show ip nbar port-map to find the port numbers assigned to the protocol in question (e.g. MGCP or H.323).

Most likely you will have the servers connected to the 6500. In addition to that, CatOS ACL syntax is a bit unfamiliar to most of us, so here is an example of a QoS ACL for CatOS.

clear qos acl SERVERS
commit qos acl SERVERS

#
# SCCP/Skinny
#
set qos acl ip SERVERS dscp 24 tcp any any range 2000 2002
set qos acl ip SERVERS dscp 24 tcp any range 2000 2002 any

#
# SIP
#
set qos acl ip SERVERS dscp 24 tcp any any eq 5060
set qos acl ip SERVERS dscp 24 udp any any eq 5060

#
# H.323 RAS (discovery & response/reply)
#
set qos acl ip SERVERS dscp 24 udp any any range 1718 1719

#
# H.323 Signaling
#
set qos acl ip SERVERS dscp 24 tcp any any eq 1720

#
# H.245 Media Negotiation
#
set qos acl ip SERVERS dscp 24 tcp any any range 11000 65535

#
# MGCP PRI backhaul/signaling
#
set qos acl ip SERVERS dscp 24 tcp any any eq 2428
set qos acl ip SERVERS dscp 24 tcp any eq 2428 any
set qos acl ip SERVERS dscp 24 udp any any eq 2427
set qos acl ip SERVERS dscp 24 udp any eq 2427 any
#
# Voice bearer
#
set qos acl ip SERVERS dscp 46 udp any any range 16384 32767

#
# Apply the ACL to all server ports
#
commit qos acl SERVERS
set port qos 2/1 port-based
set qos acl map SERVERS 2/1

Note that in the above configuration we match application ports for flows both to and from the servers. This is not needed in all cases, but it's usually safe to leave the configuration like this, just to save some time thinking about the optimal access-list structure :)

The last thing that needs to be done is trusting DSCP on the uplinks to the routers. On the 3550 this is just a one-line configuration (see the sketch after the 6500 example below). However, not all 6500 linecards support the DSCP trust feature on a switch port, so you may need to use the QoS ACL trick for that:

clear qos acl TRUNK
commit qos acl TRUNK

#
set qos acl ip TRUNK trust-dscp any
#
commit qos acl TRUNK

set port qos 2/5 port-based
set qos acl map TRUNK 2/5
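
For comparison, on the 3550 the uplink trust really is a one-liner; a minimal sketch, assuming Fa0/24 is the uplink to the router (the port number is just an example, and mls qos must already be enabled globally):

!
! Trust DSCP on the router uplink
!
interface FastEthernet 0/24
mls qos trust dscp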

This is an example of applying a fairly complicated configuration without having to memorize a lot of crazy stuff. Just keep in mind that you still need to practice this enough not to get lost in the lab. Note that we did not discuss the CoS-to-Queue-Id mappings, WRR weights and things like that - because you can quickly get a working example by applying the auto-qos macro to any switchport.

Jan
26

To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, to transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking; however, it was specific to ATM and Frame-Relay.) Let's imagine a situation where we have slow ATM and Frame-Relay links used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 tech) over the Frame-Relay and ATM PVCs and using the PPP multilink and interleave features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM - the cell-mode technology - how ironic!)

Before coming up with a configuration example, let's discuss briefly how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with a larger "effective" bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm, where one large frame is split at Layer 2 and replaced with a series of sequenced (by use of an additional MLPPP header) smaller frames, which are then sent over the multiple physical links in parallel. The receiving side accepts the fragments, reorders some of them if needed, and reassembles the pieces into the complete frame using the sequence numbers.

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number are added) and are simply inserted (intermixed) among the fragments of large data packets. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.

To summarize:

1) MLPPP uses fragmentation scheme where large packets are sliced in pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with fragments of large packets using a special priority queue

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that voice (small) packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to different physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets may arrive at their final destination out of order, degrading voice quality.

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different "fragment streams" or classes are supported at the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP, voice packets may be sent with an MLPPP header using a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at a time.

Now back to our MLPPPoFR example. Let's imagine a situation where we have two routers (R1 and R2) connected via a FR cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.

[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2]

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.

R1 & R2:

!
! Voice bearer
!
class-map VOICE
match ip dscp ef

!
! Voice signaling
!
class-map SIGNALING
match ip dscp cs3

!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
class VOICE
priority 48
class SIGNALING
bandwidth 8
class class-default
fair-queue

Next, create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps, and the required serialization delay should not exceed 10ms (remember, fragment size is based on the physical port speed!), the fragment size must be set to 512000/8*0.01=640 bytes. How is the fragment size configured with MLPPP? By using the command ppp multilink fragment delay - however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could actually change the virtual-template bandwidth to match the physical interface speed, but this would affect the CBWFQ weights! Therefore, we take the virtual-template bandwidth (384Kbps) and adjust the delay to make sure the fragment size matches the physical interface rate of 512Kbps. This way, the "effective" delay value should be set to 640*8/384 = 13ms (Fragment_Size*8/CIR) to accommodate the discrepancy between the physical and logical bandwidth. (This may be unimportant if your physical port speed does not differ much from the PVC CIR. However, if you have, say, PVC CIR=384Kbps and port speed 768Kbps, you may want to pay attention to this issue.)

R1:
interface Loopback0
ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
encapsulation ppp
ip unnumbered Loopback 0
bandwidth 384
ppp multilink
ppp multilink interleave
ppp multilink fragment delay 13
service-policy output CBWFQ

R2:
interface Loopback0
ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
encapsulation ppp
ip unnumbered Loopback 0
bandwidth 384
ppp multilink
ppp multilink interleave
ppp multilink fragment delay 13
service-policy output CBWFQ

Next we configure PVC shaping settings by using legacy FRTS configuration. Note that Bc is set to CIR*10ms.

R1 & R2:
map-class frame-relay SHAPE_384K
frame-relay cir 384000
frame-relay mincir 384000
frame-relay bc 3840
frame-relay be 0

Finally we apply all the settings to the Frame-Relay interfaces:

R1:
interface Serial 0/0/0:0
encapsulation frame-relay
frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
no ip address
frame-relay interface-dlci 112 ppp virtual-template 1
class SHAPE_384K

R2:
interface Serial 0/0/1:0
encapsulation frame-relay
frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1 point-to-point
no ip address
no frame-relay interface-dlci 221
frame-relay interface-dlci 211 ppp virtual-Template 1
class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. First for the member link:

R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
Hardware is Virtual Access interface
Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Link is a member of Multilink bundle Virtual-Access3 <---- MLP bundle member
PPPoFR vaccess, cloned from Virtual-Template1
Vaccess status 0x44
Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:00:52, output never, output hang never
Last clearing of "show interface" counters 00:04:17
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo <---------- FIFO is the member link queue
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
75 packets input, 16472 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
86 packets output, 16601 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

Second for the MLPPP bundle itself:

R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
Hardware is Virtual Access interface
Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Open: IPCP
MLP Bundle vaccess, cloned from Virtual-Template1 <---------- MLP Bundle
Vaccess status 0x40, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:01:29, output never, output hang never
Last clearing of "show interface" counters 00:03:40
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: Class-based queueing <--------- CBWFQ is the bundle queue
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/128 (active/max active/max total)
Reserved Conversations 1/1 (allocated/max allocated)
Available Bandwidth 232 kilobits/sec
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
17 packets input, 15588 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
17 packets output, 15924 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

Verify the CBWFQ policy-map:

R1#show policy-map interface
Virtual-Template1

Service-policy output: CBWFQ

Service policy content is displayed for cloned interfaces only such as vaccess and sessions
Virtual-Access3

Service-policy output: CBWFQ

Class-map: VOICE (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp ef (46)
Queueing
Strict Priority
Output Queue: Conversation 136
Bandwidth 48 (kbps) Burst 1200 (Bytes)
(pkts matched/bytes matched) 0/0
(total drops/bytes drops) 0/0

Class-map: SIGNALING (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: ip dscp cs3 (24)
Queueing
Output Queue: Conversation 137
Bandwidth 8 (kbps) Max Threshold 64 (packets)
(pkts matched/bytes matched) 0/0
(depth/total drops/no-buffer drops) 0/0/0

Class-map: class-default (match-any)
17 packets, 15554 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
Queueing
Flow Based Fair Queueing
Maximum Number of Hashed Queues 128
(total queued/total drops/no-buffer drops) 0/0/0

Check PPP multilink status:

R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
Endpoint discriminator is R2
Bundle up for 00:07:49, total bandwidth 384, load 1/255
Receive buffer limit 12192 bytes, frag timeout 1000 ms
Interleaving enabled <------- Interleaving enabled
0/0 fragments/bytes in reassembly list
0 lost fragments, 0 reordered
0/0 discarded fragments/bytes, 0 lost received
0x34 received sequence, 0x34 sent sequence <---- MLP sequence numbers for fragmented packets
Member links: 1 (max not set, min not set)
Vi2, since 00:07:49, 624 weight, 614 frag size <------- Fragment Size
No inactive multilink interfaces

Verify the interleaving queue:

R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
Hardware is GT96K Serial
MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation FRAME-RELAY, loopback not set
Keepalive set (10 sec)
LMI enq sent 10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0
LMI DLCI 1023 LMI type is CISCO frame relay DTE
FR SVC disabled, LAPF state down
Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
Last input 00:00:05, output 00:00:02, output hang never
Last clearing of "show interface" counters 00:01:53
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: dual fifo <--------- Dual FIFO
Output queue: high size/max/dropped 0/256/0 <--------- High Queue
Output queue: 0/128 (size/max) <--------- Low (fragments) queue
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
47 packets input, 3914 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
47 packets output, 2149 bytes, 0 underruns
0 output errors, 0 collisions, 4 interface resets
0 output buffer failures, 0 output buffers swapped out
1 carrier transitions
Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay
