Computing voice bandwidth is usually required when you provision an LLQ queue based on the number of calls and the VoIP codec in use. You need to account for the codec rate, Layer 3 overhead (IP, UDP and RTP headers) and Layer 2 overhead (Frame-Relay, Ethernet, HDLC, etc. headers). Accounting for Layer 2 overhead is important, since the LLQ policer takes this overhead into account when enforcing the maximum rate.

We are going to consider two codecs for bandwidth computation: G.729 and G.711. By default, both codecs generate 50 VoIP packets per second. However, the codec framing rate is 10ms (100 frames per second), so each VoIP packet carries two frames of voice samples. The frame sizes are 10 bytes and 80 bytes for the G.729 and G.711 codecs respectively.

Based on this, G.729 generates [10*2]*50*8=8000bps and G.711 generates [80*2]*50*8=64000bps of “payload” rate – no Layer 3 or Layer 2 overheads.

The RTP header is 12 bytes and the UDP header is 8 bytes. A typical IP header (no options) is 20 bytes in length. Therefore, the Layer 3 overhead is 40 bytes if we don't use header compression.

The following are the formats for WAN frames commonly used to transport voice. Note that these formats remain the same with or without FRF.12/MLP fragmentation schemes, since voice packets are never fragmented with good design.

Highlighted in green are the portions of the Layer 2 frames that the Cisco IOS queue scheduler accounts for when computing the actual frame size. Note that the scheduler does not account for the full Layer 2 overhead, so you should provision slightly more bandwidth for the LLQ class to compensate. As we can see, both Cisco and IETF Frame-Relay encapsulations add 7 bytes of Layer 2 overhead to VoIP packets. The same holds true for HDLC encapsulation (which is not very common, but added here for the sake of completeness). PPP over Frame-Relay adds 9 bytes of overhead – the maximum of all the presented encapsulation types.

Using the information above, you can compute the bandwidth usage for an uncompressed voice flow across any WAN connection. For example, let's compute the bandwidth consumption for a G.729 call across a Frame-Relay link with FRF.12 fragmentation. First, FRF.12 does not fragment voice packets if configured properly. Next, the size of the payload plus Layer 3 overhead is 2x10 bytes + 40 bytes = 60 bytes. Based on the 50 pps rate and adding the 7 bytes of Layer 2 overhead, we end up with the bandwidth value of:

(60 + 7) * 50 * 8 = 26800 bps
If you want to use the G.711 codec, then replace the 20 bytes of payload with 160 bytes. The result is:

(160 + 40 + 7) * 50 * 8 = 82800 bps
Another thing to consider is IP/RTP/UDP header compression. Cisco's implementation reduces the total overhead of 40 bytes (12+8+20) down to 2 bytes (no UDP checksum). This limits the Layer 3 overhead to just 2 bytes. Let's compute the bandwidth usage for a G.729 call over MLPoFR with header compression (9 bytes of Layer 2 overhead):

(20 + 2 + 9) * 50 * 8 = 12400 bps
The same computation for compressed G.729 over Frame-Relay, with or without FRF.12, brings the following result:

(20 + 2 + 7) * 50 * 8 = 11600 bps
Now a few words about running VoIP traffic across Ethernet. Usually you don't use CBWFQ/LLQ on fast connections on small to mid-range routers to guarantee bandwidth to VoIP traffic, since most of these routers are not capable of sending traffic at a rate that oversubscribes a 100Mbps interface. However, you may occasionally use Ethernet as a "WAN" connection with Class-Based Shaping for sub-rate access. So just in case: the Layer 2 overhead for a typical Ethernet frame is 18 bytes – 14 bytes for the Ethernet header and 4 bytes for the FCS (32 bits). If the frame carries a VLAN tag, add another 4 bytes for 22 bytes of total overhead. Note that you will typically see the G.711 codec used over LAN links.
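All the computations above follow one simple formula: (payload + Layer 3 overhead + Layer 2 overhead) * packets-per-second * 8. As a sanity check, here is a small Python sketch of that formula (the constants come straight from this article; the function name is my own):

```python
# Per-call VoIP bandwidth estimator based on the formula used above:
# (payload + L3 overhead + L2 overhead) * pps * 8 bits.

L3_OVERHEAD = 40     # IP (20) + UDP (8) + RTP (12) bytes, no compression
CRTP_OVERHEAD = 2    # compressed IP/UDP/RTP header, no UDP checksum

def call_bandwidth(payload_bytes, l2_overhead, l3_overhead=L3_OVERHEAD, pps=50):
    """Return the bandwidth of one voice call in bps."""
    return (payload_bytes + l3_overhead + l2_overhead) * pps * 8

# G.729 carries 2 x 10 = 20 bytes of payload, G.711 carries 2 x 80 = 160 bytes
print(call_bandwidth(20, 7))                 # G.729 over Frame-Relay: 26800
print(call_bandwidth(160, 7))                # G.711 over Frame-Relay: 82800
print(call_bandwidth(20, 9, CRTP_OVERHEAD))  # G.729 over MLPoFR + cRTP: 12400
print(call_bandwidth(20, 7, CRTP_OVERHEAD))  # G.729 over FR + cRTP: 11600
```

Setting both overheads to zero reproduces the raw codec payload rates of 8000 and 64000 bps.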


As I am sure you have already seen from the blog on setting up the security device as a Layer 2 device, there are many interesting changes that occur on a PIX or ASA when configured for transparent operations. This blog highlights the major changes and guidelines that you should keep in mind when you opt for this special mode of operation.

  • Number of interfaces - perhaps one of the biggest things you will want to keep in mind is the fact that you are going to be limited in the number of traffic-forwarding interfaces you can use when in Layer 2 mode. When you switch to transparent mode, you are limited to the use of two traffic-forwarding interfaces. On some ASA models, you may also use your dedicated management interface, but of course, the use of this port is limited to management traffic. Remember also, when in multiple context mode, you cannot share interfaces between contexts like you can when in routed mode.
  • IP addressing - here is another major difference, of course. In Layer 2 mode, you will assign a single IP address to the device in Global Configuration mode. This address is for remote management purposes and is required before the device will forward traffic. Once the address is assigned, all interfaces start "listening" on this address to ensure the device is responsive to its administrator. This global IP address assigned to the device must be in the same subnet as the forwarding interfaces. Remember, the transparent firewall is not adding a new network (subnet) to your topology.
  • Default gateway - for traffic sourced from the security device itself, you can configure a default gateway on the transparent device. You can do this with the route 0 0 command.
  • IPv6 support - the transparent firewall does not support IPv6.
  • Non-IP traffic - you can pass non-IP traffic through the Layer 2 Mode device. Note that this is not possible on a security appliance in its default Layer 3 mode.
  • More unsupported features - the Layer 2 mode device does not support - Quality of Service (QoS) or Network Address Translation (NAT).
  • Multicast - the transparent mode device does not offer multicast support, but you can configure Access Control Lists (ACLs) in order to pass multicast traffic through the device.
  • Inspection - with the Layer 2 mode device you can inspect traffic at Layer 2 and above. With the classic routed mode configuration, you can only inspect at Layer 3 and above.
  • VPN support - the transparent mode device does support a site to site VPN configuration, but only for its management traffic.
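Tying these guidelines together, a minimal transparent-mode configuration might look like the sketch below (the interface names, management address and gateway are illustrative assumptions, not values from any particular deployment):

firewall transparent
!
interface Ethernet0/0
 nameif outside
 security-level 0
!
interface Ethernet0/1
 nameif inside
 security-level 100
!
ip address 192.168.1.5 255.255.255.0
route outside 0 0 192.168.1.1

Note the single global ip address command and the route 0 0 style default gateway, matching the guidelines in the bullets above.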

Can you please help me understand the use of the Frame-Relay interface-dlci command? It's getting more mysterious for me day by day as I study FR. The reason is that I earlier thought I should only use this command on a FR point-to-point subinterface, as point-to-point subinterfaces don't allow us to put Frame Relay map statements, and in such a case Inverse ARP should be turned off. But while I was going through Cisco's FR documentation on the website, I saw that in almost all examples they used the interface-dlci command on the interface, not on a subinterface, and also without turning off Inverse ARP. So the question now is: if Inverse ARP is turned on, then as per my understanding we need not put this command, as the router will discover the DLCI settings through LMI signaling automatically.

Kindly explain Interface Dlci command to me…

When I saw this post, I got to thinking a little bit.  Mostly about the fact that the interface-dlci appears to be a much more misunderstood command than I ever gave it credit for!  (poor thing...)

The quick answer is that the "frame-relay interface-dlci" command simply says "This DLCI goes here" to the router.

On a physical interface, this command is largely irrelevant (more in a minute) because ALL DLCIs are assigned to the physical interface by default.  If you are ever interested, concerned or otherwise bored, just check out "show frame-relay pvc" and you will see where they are assigned.

In the case of sub-interfaces, however, there is no automagical assignment of DLCI numbers – even if your subinterface number and DLCI number are the same. That's just a sign of being anal-retentive (or as we consultants call it, "good at documentation") or a little OCD. But you can technically have DLCI 100 on subinterface Serial 0/0.223. Kinda strange, but perfectly workable!

So whenever you have a subinterface, you need to do SOMETHING to tell the router "this DLCI goes here".

So now let's look at the next portion:  Mapping.  Layer3 to Layer2 mapping in particular.   We can learn about L3-L2 mapping via Inverse ARP.  This is on by default, but frowned upon in the CCIE Realm!  "Show frame-relay map" will let you know if you have learned any addresses dynamically or not.

So if we DID allow Inverse ARP, whether our subinterfaces were point-to-point or multipoint ones, we COULD just use the "frame-relay interface-dlci" command and nothing else.  (Yes, I know inverse ARP requests are not sent by default on subinterfaces, but responses still are.  Watch your debugs!)  :)

So the interface-dlci command assigns the PVC to a subinterface, and Inverse ARP then takes care of the mapping. What if we aren't allowed to use Inverse ARP? Like for the CCIE lab? Ok, what are our options? Well, the "frame-relay map" command is the most obvious and well known. That works very well. The "frame-relay map" command both assigns the L3-L2 mapping AND says "this DLCI goes here" all in one command!

Unless of course you are on a point-to-point subinterface. As you pointed out, you can't use the map command there! But that's ok, it's not needed anyway! Point-to-point links have a different way of thinking. They view the world as "If it's not my address, it must be yours" and send things out.

So that covers our two primary issues with PVC operations in Frame Relay.  #1 is assigning the DLCI to an interface (no magic).  #2 is the L3-L2 mapping to make IP actually work!

The last part I want to add (re: my "more in a minute" above) is that the "frame-relay interface-dlci" command also serves another purpose, which can get confusing in light of what we just covered!

So where we left things with "frame-relay interface-dlci" commands:

1.  Definitely used on point-to-point subinterfaces
2.  Can be used on multipoint subinterfaces if Inverse ARP works
3.  Not used on physical interfaces because all DLCIs belong there by default.

Now, just to mess with that logic a bit. When studying frame-relay, and particularly by the time you get into QoS configurations, you will become familiar with frame-relay map-classes. Map-classes can be assigned to an interface or subinterface without any problem. When this occurs, the information in the map-class gets applied to EVERY DLCI on that interface or subinterface.

So what happens if you have different QoS parameters for different PVCs that just happen to be on the same interface/subinterface? Hmmmm... Well, in comes the "frame-relay interface-dlci" command again! See, it really IS a cool command!

The "frame-relay map" command does not have any parameter for adding a map-class.  After you hit enter on your "frame-relay interface-dlci" command though, you'll get a new sub-command prompt.  Try using "?" here.  You'll see that you have the opportunity to specify a separate map-class for each and every DLCI that you have.

So if you see "frame-relay interface-dlci" commands on a physical interface, or if you see them AND a "frame-relay map" command under a multipoint subinterface, this is the reason why. If you use the "frame-relay interface-dlci" command AND the "frame-relay map" command for the same PVC, you will need to make sure the "frame-relay map" command comes first. Otherwise the router will express its displeasure!
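To make this concrete, here is a sketch of the per-PVC map-class assignment (the DLCI, IP address, CIR value and class name are made-up illustration values):

map-class frame-relay PVC-102-QOS
 frame-relay cir 64000
!
interface Serial0/0.1 multipoint
 frame-relay map ip 10.0.0.2 102 broadcast
 frame-relay interface-dlci 102
  class PVC-102-QOS

Here the "frame-relay map" statement takes care of the L3-L2 mapping, while the "frame-relay interface-dlci" command exists purely to hang the map-class off that one PVC.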

So there are some very simple, but also some very powerful things the little "frame-relay interface-dlci" command does.  Hopefully that will help you take some of the mystery out of things!


For the sake of simplicity and to enable a wider audience, we decided to post our regular CCIE brainteasers to the blog.  The winner will get a coupon worth 10% off the price of any of our training packages for R&S, Security, Voice or Service Provider, or a $250 Amazon.com gift card! Note that the 10% off discount cannot be used with any other discount code you may already have. Please post your solution under the comments for this blog entry - the first person to post the correct solution is the winner. Make sure you provide a correct email address in your response so we can contact you in the event you win.  On Tuesday (August 12th) we will post the solution and announce the winner.

For today the task is an easy one or at least appears to be ;-) Imagine a simple topology made of 3 switches:

STP topology

All switches are running STP for VLAN123 with SW3 being the root.  Your task is to configure the network in such a way so that SW1 port fa0/13 is the root port and SW1 port fa0/16 is the alternate port for VLAN 123.  Sound easy?  Here are the requirements:

1) Do not change any STP link cost

2) SW3 must remain the root for VLAN 123

3) The port types must be access

4) Do not use the switchport backup interface command

5) Do not try to use SPAN or RSPAN

6) Do not disable STP

Good luck!

The correct solution is:

1) Configure SW2 to tunnel STP BPDUs between SW1 and SW3. This will make SW1 think that SW3 is directly connected with a cost of 19. STP is still active on SW2, but SW2 considers itself the root.

interface FastEthernet 0/13
l2protocol-tunnel stp
interface FastEthernet 0/16
l2protocol-tunnel stp

2) Configure SW3 port Fa0/16 with a lower STP port priority than SW3 Fa0/13. This will make SW1 select its connection to SW2 as the root port, while the other uplink becomes the alternate: both uplinks have equal cost, so the upstream port priority is the tiebreaker.

interface FastEthernet 0/16
spanning-tree port-priority 64

Below is a summary of some of the close-but-not-quite-correct approaches people submitted:

1) Change interface bandwidth/speeds. This is not allowed, since the requirement was not to change spanning-tree costs.

2) Use dot1q tunneling on SW2 – this was prohibited by the requirement to set the port modes to access.

3) Filter spanning-tree BPDUs coming to SW1 from SW3. This would break the requirement for the Fa0/16 port to be the alternate path to the root. Aside from that, it would result in an STP loop, since this is a circular topology.

4) Disabling STP on SW2 explicitly, which is prohibited by the requirements.

5) Incorrectly assuming that port-priority on SW1 may influence root port selection

6) One complicated MSTP solution submitted by two people actually works but was submitted after the above solution was posted.  The solution is based on differentiation between regional root and CIST root.  Not the simplest solution but it works.  The two people that posted this solution also deserve credit for their MSTP knowledge.  We'll do a post on MSTP inter-region operations here on the blog in the next few days.

The winner is: "Roman"


Within the scope of Metro Ethernet services, it is often beneficial to provide customers a "point-to-point" VLAN service, where a VLAN (a multipoint service in essence) is effectively set up to emulate an Ethernet "pseudowire" by disabling MAC-address learning. The benefit comes from saving the metro switches' CAM table space, thus improving overall scalability (which is far from perfect with Ethernet). There is a special command, mac address-table learning, available on Cisco Metro switches (e.g. the ME 3400), which allows disabling MAC-address learning for a specific VLAN. However, many commonly used switches do not have this feature implemented. Still, there is a way to disable MAC-address learning on a group of ports by using the RSPAN VLAN feature. By its functional design, an RSPAN VLAN does not learn MAC addresses. However, we are not allowed to assign this type of VLAN directly to switch access ports. Still, we may overcome this issue by configuring the switchports as trunks with a single allowed VLAN (the RSPAN VLAN), which is also configured as native:

vtp mode transparent
vlan 555
 remote-span
interface range Fa 0/1 - 3
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk allowed vlan 555
switchport trunk native vlan 555

This configuration is applicable to any switch that supports RSPAN functionality. Specifically, it was verified on Catalyst 3550 series.


The need for fragmentation

We are going to briefly discuss Layer2 fragmentation schemes, their purpose and configuration examples. Let's start with a general discussion. Usually, Layer2 fragmentation is used to accomplish one of two goals:

a) Link aggregation, e.g. making a number of physical channels look like one logical link from a Layer2 standpoint. A good example is PPP Multilink, which breaks large packets into smaller pieces and sends them over multiple physical links simultaneously. Another example is FRF.16 (Multilink Frame-Relay).

b) Decreasing the serialization delay of large packets on slow links. By "slow link", we mean a link with a "physical" speed (e.g. clock rate) of less than 1 Mbps. The issue usually arises when you have a mix of bulk data and delay-sensitive traffic (e.g. voice) on the same link: large bulky packets (say 1500 bytes in size) may block the interface transmission queue for a long time on a slow link, forcing small voice packets (e.g. 60 bytes) to wait for more than the maximum tolerable threshold (say 10ms).

For example, if a physical interface has a clock rate of 384000bps, a large 1500-byte packet takes 1500*8/384000 ≈ 31ms to serialize. So here comes the solution: break large packets into small pieces at Layer2 to decrease the serialization delay. Say we break one 1500-byte packet into 3x500-byte frames on a 384Kbps link: we get roughly 10ms of transmission delay per fragment. Look at the following picture ([V] is a voice packet, and [D] is a data packet).

Before fragmentation:

[V][DDDDDDDDDD][V][DDDDDDDDDD]

After fragmentation:

[V][DD][DD][DD][DD][DD][V][DD][DD][DD][DD][DD]
There is still something wrong here: the small pieces of a large packet are sent back-to-back, effectively blocking the transmission queue the same way as before. So fragmenting alone is not enough - we need a way to make sure the fragments of large packets are "mixed" with voice packets. This technique is called "interleaving", and it always accompanies fragmentation. With interleaving we get a picture like this:

[V][DD][V][DD][V][DD]
That is, voice packets are not separated by large "islands" of data packets.
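The serialization arithmetic above is easy to double-check; here is a quick Python sketch (the function name is mine, the numbers come from the example):

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock a packet of the given size onto the wire, in ms."""
    return packet_bytes * 8 * 1000 / link_bps

print(serialization_delay_ms(1500, 384000))  # 31.25 ms for the whole packet
print(serialization_delay_ms(500, 384000))   # ~10.4 ms per 500-byte fragment
```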

So how does interleaving work? Usually, it is accomplished by inserting a special "interleaving" queue before the interface transmission (FIFO) queue. The interleaving queue usually has two parts: "high" and "low" FIFO queues. Small packets (packets smaller than the configured fragment size) go to the "high" queue, while large packets are first fragmented and then assigned to the "low" queue. With this strategy, the "high" queue is a priority queue - it always gets emptied first, and only then is the "low" queue served.

[Interface Software Queue, e.g. WFQ] -->

If (Packet.Size < FRAGMENT_SIZE)
    { put to High_Queue }
Else
    { Fragment and put fragments to Low_Queue }

--> { Service(High_Queue) then Service(Low_Queue) } --> [Interface Xmit Queue]

We are not done yet! You've probably noticed the "Interface Software Queue" in the diagram above. It plays an important role too. Say this is a simple FIFO queue, and a bunch of large data packets sit there ahead of small voice packets. The data packets will get dequeued first and fragmented, and since the "high" interleaving queue is empty, the fragments will be sent in a row on their own. Therefore, the last component needed to make fragmentation and interleaving work properly is a software interface queue that gives voice packets priority treatment. This could be legacy WFQ or modern CBWFQ/LLQ - just remember that voice packets should be taken from the software queue first!

So here are the important things to remember about fragmentation:

1) Fragmentation is not effective without interleaving
2) Interleaving is accomplished by the use of an additional priority queue
3) The decision to put a packet into the "high" interleaving queue is based solely on packet size
4) Interleaving is ineffective without a software queue that gives small (e.g. voice) packets priority treatment
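The classification logic in points 2 and 3 can be modeled in a few lines of Python. This is only a toy model of the high/low queue split described above (the fragment size is an assumed tuning value, and a real scheduler services the queues continuously rather than in one pass):

```python
from collections import deque

def interleave(packet_sizes, fragment_size=500):
    """Toy model of the fragmentation/interleaving stage.

    Packets smaller than fragment_size go to the 'high' queue;
    larger ones are fragmented onto the 'low' queue. The high
    queue is always drained before the low queue is served.
    """
    high, low = deque(), deque()
    for size in packet_sizes:
        if size < fragment_size:
            high.append(size)
        else:
            full, rest = divmod(size, fragment_size)
            low.extend([fragment_size] * full + ([rest] if rest else []))
    return list(high) + list(low)

# A 60-byte voice packet stuck behind a 1500-byte data packet
# is now transmitted first:
print(interleave([1500, 60]))  # [60, 500, 500, 500]
```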

The situation becomes more complicated when we have multiple logical channels (e.g. PVCs) multiplexed over the same physical channel. For example, with a number of Frame-Relay PVCs assigned to the same physical interface, we get multiple software queues - one per PVC. They all share the same interleaving queue at the physical interface level. Because large packets on one PVC may affect the serialization delay of small packets on another PVC, fragmentation should be turned on for all PVCs simultaneously.


Hello Brian,

Can you explain how PPP over Frame Relay works? Also, what are the advantages and disadvantages of using it over a normal Frame Relay configuration?

Thanks and regards,


Hi Yaser,

Frame Relay does not natively support features such as authentication, link quality monitoring, and reliable transmission. Based on this it is sometimes advantageous to encapsulate an additional PPP header between the normal layer 2 Frame Relay encapsulation and the layer 3 protocol. By running PPP over Frame Relay (PPPoFR) we can then implement authentication of Frame Relay PVCs, or even bind multiple PVCs together using PPP Multilink.

PPPoFR is configured in Cisco IOS through the use of a Virtual-Template interface. A Virtual-Template is a PPP-encapsulated interface that is designed to spawn a “template” of configuration down to multiple member interfaces. The traditional usage of this interface has been on dial-in access servers, such as the AS5200, to support multiple PPP dial-in clients terminating their connections on a single interface running IP.

The first step in configuring PPPoFR is to create the Virtual-Template interface. This interface is where all logical options, such as IP address and PPP authentication will be configured. The syntax is as follows:

interface Virtual-Template1
ip address
ppp chap hostname ROUTER6
ppp chap password 0 CISCO

Note the lack of the “encapsulation ppp” command on the Virtual-Template. This command is not needed, as a Virtual-Template always runs PPP. This can be seen by looking at the “show interface virtual-template1” output in IOS. Additionally, in this particular case, the remote end of the connection will be challenging the router to authenticate via PPP CHAP. The “ppp chap” subcommands instruct the router to reply with the username ROUTER6 and an MD5 hash computed from the received CHAP challenge and the password CISCO.

Our next step is to configure the physical Frame Relay interface, and to bind the Virtual-Template to the Frame Relay PVC. This is accomplished as follows:

interface Serial0/0
encapsulation frame-relay
frame-relay interface-dlci 201 ppp Virtual-Template1

Note that the “no frame-relay inverse-arp” command is not used on this interface. Since our IP address is located on the Virtual-Template interface, the Frame Relay process doesn’t actually see IP running over the link. Instead, it simply sees a PPP header being encapsulated on the link, while the IPCP protocol of PPP takes care of all the IP negotiation for us. Note that the order in which these steps are performed is significant: if a Virtual-Template interface is applied to a Frame Relay PVC before it is actually created, you may have difficulty getting the link to become active.

Also, when using a Virtual-Template interface it’s important to understand that a Virtual-Access “member” interface is cloned from the Virtual-Template when the PPP connection comes up. Therefore the Virtual-Template interface itself will always be in the down/down state. This can affect certain network designs, such as using the backup interface command on a Virtual-Template. In our particular case we can see this effect from the output below:

R6#show ip interface brief | include
Virtual-Access1 YES TFTP up up
Virtual-Template1 YES manual down down

Aside from this there is no other configuration that directly relates to Frame Relay for PPP. Other options such as authentication, reliability, and multilink would be configured under the Virtual-Template interface.
