The need for fragmentation

We are going to briefly discuss Layer2 fragmentation schemes, their purpose and configuration examples. Let’s start with a general discussion. Usually, Layer2 fragmentation is used to accomplish one of two goals:

a) Link aggregation, i.e. making a number of physical channels look like one logical link from a Layer2 standpoint. A good example is PPP Multilink, which breaks large packets into smaller pieces and sends them over multiple physical links simultaneously. Another example is FRF.16 (Multilink Frame-Relay).
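For illustration, a PPP Multilink bundle aggregating two serial links might be configured along these lines (a minimal sketch; the interface numbers and addressing are assumptions, and exact syntax varies by IOS version):

```
interface Multilink1
 ip address 10.0.0.1 255.255.255.252
 ppp multilink
 ppp multilink group 1
!
interface Serial0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

Both physical links join multilink group 1, and the bundle appears to Layer3 as the single Multilink1 interface.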

b) Decreasing the serialization delay of large packets on slow links. By “slow link”, we mean a link with a “physical” speed (e.g. clock rate) of less than 1 Mbps. The issue usually arises when a mix of bulk data and delay-sensitive traffic (e.g. voice) shares the same link. Large bulky packets (say 1500 bytes in size) may block the interface transmission queue for a long time on slow links, forcing small voice packets (e.g. 60 bytes) to wait longer than the maximum tolerable threshold (say 10ms).

For example, if a physical interface has a clock rate of 384000bps, a large 1500-byte packet takes 1500*8/384000 = 31.25ms to serialize. So here comes the solution: break large packets into small pieces at Layer2 to decrease the serialization delay. If we break one 1500-byte packet into 3×500-byte frames on a 384Kbps link, we get roughly 10ms of transmission delay per fragment. Look at the following picture ([V] is a voice packet, and [D] is a data packet):

Before fragmentation:

[V][DDDDDDDDDDDD][V][DDDDDDDDDDDD][V]

After fragmentation:

[V][D][D][D][V][D][D][D][V]

There is still something wrong here: the small pieces of a large packet are being sent in a row, effectively blocking the transmission queue the same way as before. So fragmentation alone is not enough – we need a way to make sure the fragments of large packets are “mixed” with voice packets. This technique is called “interleaving”, and it always accompanies fragmentation. With interleaving we get a picture like this:

[D][V][D][V][D][V][D]

That is, voice packets are not separated by large “islands” of data packets.
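The arithmetic behind fragment sizing follows directly from the link clock rate; as a quick sketch:

```
serialization_delay = packet_size * 8 / clock_rate
fragment_size       = clock_rate * target_delay / 8

e.g. on a 384000 bps link with a 10 ms delay target:
  384000 * 0.010 / 8 = 480 bytes per fragment
```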

So how does interleaving work? Usually, it is accomplished by inserting a special “interleaving” queue before the interface transmission (FIFO) queue. The interleaving queue usually has two parts: “high” and “low” FIFO queues. Small packets (packets smaller than the configured fragment size) go to the “high” queue, while large packets are first fragmented and then assigned to the “low” queue. With this strategy, the “high” queue is a priority queue – it always gets emptied first, and only then does the “low” queue get served.

[Interface Software Queue, e.g. WFQ] -->

    if (Packet.Size < FRAGMENT_SIZE)
        { put packet in High_Queue }
    else
        { fragment packet and put fragments in Low_Queue }

--> { Service(High_Queue), then Service(Low_Queue) } --> [Interface Xmit Queue]

We are not done yet! You’ve probably noticed the “Interface Software Queue” on the diagram above. It plays an important role too. Say this is a simple FIFO queue, and a bunch of large data packets sit there ahead of small voice packets. The data packets will get dequeued first and fragmented, and since the “high” interleaving queue is empty, the fragments will be sent in a row on their own. Therefore, the last component needed to make fragmentation and interleaving work properly is a software interface queue that gives voice packets priority treatment. This could be legacy WFQ or modern CBWFQ/LLQ – just remember that voice packets should be taken from the software queue first!
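For instance, a CBWFQ/LLQ policy giving voice priority in the software queue might look like this sketch (the class name, the DSCP EF match, and the 64 Kbps priority bandwidth are illustrative assumptions, not from the article):

```
class-map match-all VOICE
 match ip dscp ef
!
policy-map PRIORITIZE_VOICE
 class VOICE
  priority 64
 class class-default
  fair-queue
```

The policy would then be attached outbound on the interface or PVC with “service-policy output PRIORITIZE_VOICE”.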

So here are the important things to remember about fragmentation:

1) Fragmentation is not effective without interleaving
2) Interleaving is accomplished through the use of an additional priority (“high”) queue
3) The decision to put a packet into the “high” interleaving queue is based solely on packet size
4) Interleaving is ineffective without a software queue that gives small (e.g. voice) packets priority treatment
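Putting the pieces together for PPP Multilink, fragmentation and interleaving are typically enabled like this (a minimal sketch; the 10ms fragment delay and bandwidth values are illustrative, and exact syntax varies by IOS version):

```
interface Multilink1
 bandwidth 384
 fair-queue
 ppp multilink
 ppp multilink fragment delay 10
 ppp multilink interleave
```

Here “fair-queue” provides the software queue that favors small voice flows, “fragment delay 10” has IOS size fragments for roughly 10ms of serialization delay at the configured bandwidth, and “interleave” enables the high/low interleaving queues.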

The situation becomes more complicated when we have multiple logical channels (e.g. PVCs) multiplexed over the same physical channel. For example, with a number of Frame-Relay PVCs assigned to the same physical interface, we get multiple software queues – one per PVC. They all share the same interleaving queue at the physical interface level. Because large packets of one PVC may affect the serialization delay of another PVC’s small packets, fragmentation should be turned on for all PVCs simultaneously.
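For Frame-Relay, FRF.12 fragmentation is enabled per-PVC through a map-class; a minimal sketch follows (the DLCI, CIR, and 480-byte fragment size are illustrative assumptions):

```
map-class frame-relay FRAG_SHAPE
 frame-relay cir 384000
 frame-relay fragment 480
 frame-relay fair-queue
!
interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
interface Serial0/0.1 point-to-point
 frame-relay interface-dlci 100
  class FRAG_SHAPE
```

Per the note above, the same fragment size should be applied to every PVC sharing the physical interface, since they all feed the one interleaving queue.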

About Petr Lapukhov, 4xCCIE/CCDE:

Petr Lapukhov's career in IT began in 1988 with a focus on computer programming, and progressed into networking with his first exposure to Novell NetWare in 1991. Initially involved with Kazan State University's campus network support and UNIX system administration, he went on to become a networking consultant, taking part in many network deployment projects. Petr currently has over 12 years of experience working in the Cisco networking field, and is the only person in the world to have obtained four CCIEs in under two years, passing each on his first attempt. Petr is an exceptional case in that he has been working with all of the technologies covered in his four CCIE tracks (R&S, Security, SP, and Voice) on a daily basis for many years. When not actively teaching classes or developing self-paced products, he is studying for the CCDE Practical and the CCIE Storage Lab Exam and completing his PhD in Applied Mathematics.



9 Responses to “Link Efficiency: Fragmentation”

  1. Truly outstanding post – keep up the good work

  2. Alfonso López says:

    An excellent way of explaining interleaving and fragmentation. You make it simple and understandable. Thanks.

  3. burs says:

Wonderful article. You used simple language that everybody can understand.

  4. Jesse says:


Are you saying in this article that the dual FIFO queue created by FRF.12 (or that used to be created by FRF.12 prior to the adoption of the HQF code) is not the actual software interface-level queue, but is yet another layer of queues that drains into the software interface-level queue?

    This topic has always been hard for me to grasp. Before I read your article, my understanding was that FRF.12 created dual FIFO queues IN PLACE of the software interface-level queue. In other words, by enabling FRF.12 on an interface, we transformed the software interface-level queue, which is either WFQ (on serial interfaces slower than 2.0 Mbps) or FIFO (on serial interfaces faster than 2.0 Mbps), into a Dual FIFO queue. In fact, when you enable FRF.12 on a serial interface and issue the “show interfaces serial x/y” command, you could see that the software interface-level queue is Dual FIFO, whereas prior to this command being configured, the “show interface serial x/y” command showed that the software interface level queue is either WFQ or FIFO.

On the other hand, there is yet another queue, called the hardware interface-level queue (or Tx-ring), which is always a FIFO queue. This queue is where the problem of too many data packets in front of a voice packet could cause serialization delays for the voice packet, which leads to jitter. I thought that the way to solve this problem was to reduce the size of the hardware interface-level queue (Tx-ring) to about 3 packets, so that no more than 3 fragmented data packets would be waiting to be serialized in front of a voice packet at any time. If there are more than three data packets that need to be sent out of the interface, the rest of the data packets would be buffered in the non-priority software interface-level queue created by FRF.12. In the meantime, if there are packets buffered in both the priority and the non-priority queues created by FRF.12, waiting to get into the Tx-ring, the next packet dequeued into the Tx-ring would be a packet from the priority queue, and therefore, when both voice and data are present, voice would be getting priority in the Tx-ring placement.

    Could you comment on this?

    • @Jesse

      Fragmentation & Interleaving always require two components:

      1) PVC queue that provides priority treatment for VoIP packets
      2) Interface Software queue that provides priority treatment for “small” packets

You may see that both FRF.12 and MLPPP implement that. With FRF.12, the interface software queue is dual-FIFO, which is the “interleaving” queue (derived from 4-queue PQ). MLPPP “hides” the interleaving queue, but it is still there. As for the TX-ring, its sole purpose is to ensure the physical line always gets a smooth flow of packets to sustain the maximum sending rate.

      HQF implements similar mechanics, but it is all hidden now, so you can’t use any show commands to discover the underlying queuing solution.

  5. will ham says:


Excellent post, but I’ve got a question. You said that “to make fragmentation and interleaving work properly, is a software interface queue that give voice packets priority treatment. This could be legacy WFQ or modern CBWFQ/LLQ.” So are you saying that WFQ has priority queueing integrated into it? Basically, legacy WFQ has a separate priority queue? And the other queue is just a FIFO queue? But normally, WFQ doesn’t have a priority queue, right? Am I understanding this correctly?

  6. [...] #1: http://blog.ine.com/2008/01/25/link-efficiency-fragmentation/ By my interpretation, the above link indicates that when this feature is enabled, there are [...]

