To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, to transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking; however, it was specific to ATM and Frame-Relay.) Let’s imagine a situation where we have slow ATM and Frame-Relay links used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 technology) over the Frame-Relay and ATM PVCs and using the PPP Multilink and Interleave features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM – the cell-mode technology – how ironic!)

Before coming up with a configuration example, let’s briefly discuss how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with larger “effective” bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm, where one large frame is split at Layer 2 and replaced with a number of smaller frames, sequenced by means of an additional MLPPP header, which are then sent over multiple physical links in parallel. The receiving side accepts the fragments, reorders them if needed, and reassembles the pieces into the complete frame using the sequence numbers.

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number are added) and are simply inserted (intermixed) among the fragments of large data packets. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.
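To make the mechanics concrete, here is a minimal Python sketch of the idea (purely conceptual; this is not how IOS implements it): the sender slices a large data frame into sequence-numbered fragments and slips unfragmented voice packets in between, while the receiver reassembles the data frame from the sequence numbers and lets voice packets pass straight through.

FRAG_SIZE = 640  # bytes per fragment (from the serialization delay calculation later in this post)

def fragment(frame, seq_start=0):
    # Slice a large frame; each piece gets an MLPPP-style header with a
    # sequence number and (B)eginning/(E)nding fragment flags.
    pieces = [frame[i:i + FRAG_SIZE] for i in range(0, len(frame), FRAG_SIZE)]
    return [({"seq": seq_start + n,
              "begin": n == 0,
              "end": n == len(pieces) - 1}, piece)
            for n, piece in enumerate(pieces)]

def interleave(data_frame, voice_packets):
    # Emit the data fragments with small voice packets intermixed between them;
    # voice bypasses fragmentation, so it carries no header and no sequence number.
    wire = []
    for frag in fragment(data_frame):
        wire.append(("fragment", frag))
        if voice_packets:
            wire.append(("voice", voice_packets.pop(0)))
    return wire

def reassemble(wire):
    # Receiver: order the fragments by sequence number and glue them back together.
    frags = sorted((f for kind, f in wire if kind == "fragment"),
                   key=lambda f: f[0]["seq"])
    return b"".join(piece for _, piece in frags)

data = bytes(1500)               # one large data frame
voice = [bytes(60), bytes(60)]   # two small voice packets
assert reassemble(interleave(data, voice)) == data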

To summarize:

1) MLPPP uses a fragmentation scheme where large packets are sliced into pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with fragments of large packets using a special priority queue

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that voice (small) packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to differing physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets may arrive at their final destination out of order, degrading voice quality.
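A toy illustration of that reordering problem (hypothetical Python with made-up per-link delays): voice packets sprayed round-robin across two member links with different latencies arrive out of order, and without sequence numbers there is nothing to restore the order with.

link_delay_ms = [5, 20]      # one-way delay of each member link (made-up values)
voice = list(range(6))       # voice packets 0..5, sent in this order

arrivals = []
for i, pkt in enumerate(voice):
    send_time = i * 2                          # one packet every 2 ms
    link = i % 2                               # round-robin across the two links
    arrivals.append((send_time + link_delay_ms[link], pkt))

received = [pkt for _, pkt in sorted(arrivals)]
print(received)              # [0, 2, 4, 1, 3, 5], no longer the original order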

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different “fragment streams” or classes are supported at the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP voice packets may be sent with an MLPPP header using a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at the same time.

Now back to our MLPPPoFR example. Let’s imagine a situation where we have two routers (R1 and R2) connected via a FR cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.


[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2]

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.


R1 & R2:

!
! Voice bearer
!
class-map VOICE
 match ip dscp ef

!
! Voice signaling
!
class-map SIGNALING
 match ip dscp cs3

!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
 class VOICE
  priority 48
 class SIGNALING
  bandwidth 8
 class class-default
  fair-queue

Next, create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps and the required serialization delay should not exceed 10ms (remember, fragment size is based on the physical port speed!), the fragment size must be set to 512000/8*0.01 = 640 bytes. How is the fragment size configured with MLPPP? By using the command ppp multilink fragment delay – however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could actually change the virtual-template bandwidth to match the physical interface speed, but this would affect the CBWFQ weights! Therefore, we take the virtual-template bandwidth (384Kbps) and adjust the delay to make sure the fragment size matches the physical interface rate of 512Kbps. This way, the “effective” delay value should be set to 640*8/384 = 13ms (Fragment_Size*8/CIR) to accommodate the physical and logical bandwidth discrepancy. (This may be unimportant if the physical port speed does not differ much from the PVC CIR. However, if you have, say, a PVC CIR of 384Kbps and a port speed of 768Kbps, you may want to pay attention to this issue.)
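A quick sanity check of this arithmetic (throwaway Python, using the numbers from this example):

port_speed = 512000      # bps, physical clock rate
pvc_cir    = 384000      # bps, virtual-template bandwidth
target_delay = 0.010     # 10 ms serialization delay target

frag_size = port_speed / 8 * target_delay             # = 640.0 bytes
# IOS multiplies the configured delay by the interface bandwidth, so the delay
# must be scaled up to yield 640-byte fragments at the 384Kbps logical rate:
delay_ms = frag_size * 8 / (pvc_cir / 1000.0)          # = 13.33 -> "ppp multilink fragment delay 13"
print(frag_size, delay_ms)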


R1:
interface Loopback0
 ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

R2:
interface Loopback0
 ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

Next we configure the PVC shaping settings using a legacy FRTS configuration. Note that Bc is set to CIR*10ms = 384000*0.01 = 3840 bits, which makes the shaping interval (Tc) equal to 10ms.


R1 & R2:
map-class frame-relay SHAPE_384K
 frame-relay cir 384000
 frame-relay mincir 384000
 frame-relay bc 3840
 frame-relay be 0

Finally we apply all the settings to the Frame-Relay interfaces:


R1:
interface Serial 0/0/0:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 112 ppp virtual-template 1
  class SHAPE_384K

R2:
interface Serial 0/0/1:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 211 ppp virtual-template 1
  class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. The first one is for the member link:


R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Link is a member of Multilink bundle Virtual-Access3   <---- MLP bundle member
  PPPoFR vaccess, cloned from Virtual-Template1
  Vaccess status 0x44
  Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:00:52, output never, output hang never
  Last clearing of "show interface" counters 00:04:17
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo       <---------- FIFO is the member link queue
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     75 packets input, 16472 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     86 packets output, 16601 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

The second one is for the MLPPP bundle itself:


R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP
  MLP Bundle vaccess, cloned from Virtual-Template1   <---------- MLP Bundle
  Vaccess status 0x40, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:01:29, output never, output hang never
  Last clearing of "show interface" counters 00:03:40
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing    <--------- CBWFQ is the bundle queue
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 232 kilobits/sec
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     17 packets input, 15588 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     17 packets output, 15924 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Verify the CBWFQ policy-map:


R1#show policy-map interface
 Virtual-Template1 

  Service-policy output: CBWFQ

    Service policy content is displayed for cloned interfaces only such as vaccess and sessions
 Virtual-Access3 

  Service-policy output: CBWFQ

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 136
        Bandwidth 48 (kbps) Burst 1200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: SIGNALING (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp cs3 (24)
      Queueing
        Output Queue: Conversation 137
        Bandwidth 8 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      17 packets, 15554 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 128
        (total queued/total drops/no-buffer drops) 0/0/0

Send large pings to trigger fragmentation, then check the PPP Multilink status:


R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
  Endpoint discriminator is R2
  Bundle up for 00:07:49, total bandwidth 384, load 1/255
  Receive buffer limit 12192 bytes, frag timeout 1000 ms
  Interleaving enabled            <------- Interleaving enabled
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x34 received sequence, 0x34 sent sequence   <---- MLP sequence numbers for fragmented packets
  Member links: 1 (max not set, min not set)
    Vi2, since 00:07:49, 624 weight, 614 frag size <------- Fragment Size
No inactive multilink interfaces

Verify the interleaving queue:


R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
  Hardware is GT96K Serial
  MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  LMI enq sent  10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent  0, LMI upd sent  0
  LMI DLCI 1023  LMI type is CISCO  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
  Last input 00:00:05, output 00:00:02, output hang never
  Last clearing of "show interface" counters 00:01:53
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: dual fifo                        <--------- Dual FIFO
  Output queue: high size/max/dropped 0/256/0         <--------- High Queue
  Output queue: 0/128 (size/max)                      <--------- Low (fragments) queue
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     47 packets input, 3914 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     47 packets output, 2149 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 output buffer failures, 0 output buffers swapped out
     1 carrier transitions
  Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay

About Petr Lapukhov, 4xCCIE/CCDE:

Petr Lapukhov's career in IT began in 1988 with a focus on computer programming, and progressed into networking with his first exposure to Novell NetWare in 1991. Initially involved with Kazan State University's campus network support and UNIX system administration, he went through the path of becoming a networking consultant, taking part in many network deployment projects. Petr currently has over 12 years of experience working in the Cisco networking field, and is the only person in the world to have obtained four CCIEs in under two years, passing each on his first attempt. Petr is an exceptional case in that he has been working with all of the technologies covered in his four CCIE tracks (R&S, Security, SP, and Voice) on a daily basis for many years. When not actively teaching classes or developing self-paced products, he is studying for the CCDE Practical and the CCIE Storage Lab Exam, and completing his PhD in Applied Mathematics.




19 Responses to “Fragmentation and Interleaving with MLPPP over Frame-Relay”

 
  1. nhatphuc says:

    I read the Cisco QoS Certification Exam Guide. On page 486, there’s an excerpt: “One common misconception is that fragmentation size should be based on the CIR of the VC, rather than on the access rate. Fragmentation attacks the problem of serialization delay, and serialization delay is based on how long it takes to encode the bits onto the physical interface, which in turn is determined by the physical clock rate on the interface. So, you should always base FRF.12 fragmentation sizes on the clock rate (access rate) of the slower of the two access links, not on CIR”

    In your example, PPP is run over Frame Relay, so the fragment size is based on the CIR, isn’t it? Why isn’t it the same?

    Thank you

  2. Petr Lapukhov, CCIE #16379 says:

    Correct, the fragment size is calculated based on the physical port speed. However, as I mentioned in the text, PPP multilink calculates fragment size using logical interface bandwidth, multiplied by configured multilink fragment delay. This is why we need to set the multilink fragment delay to a higher value, in order to make the fragment size match the physical interface clock rate.

    With FRF.12 it’s not an issue; you simply specify the fragment size in bytes (no indirect formula with a delay value).

  3. Bakassa says:

    Hi Petr

    I thought queueing could not be applied ‘directly’ to a logical interface (but shaping is OK) as you did with
    interface virtual-template1
    service-policy output CBWFQ

    I read a cisco example at url
    http://www.cisco.com/univercd/cc/td/doc/product/software/ios124/124cr/hdia_r/dia_p1h.htm#wp1187236
    in “ppp multilink interleave” section.

    What about this solution:

    policy-map SHAPE_384K
    class class-default
    shape average 384000 3840 0
    shape adaptive 384000
    service-policy CBWFQ

    and then apply that policy to virtual-template1

    interface Virtual-Template 1
    encapsulation ppp
    ip unnumbered Loopback 0
    bandwidth 384
    ppp multilink
    ppp multilink interleave
    ppp multilink fragment delay 13
    service-policy output SHAPE_384K

    and add

    multilink virtual-template1 ?

    In that case, no need to apply map class to frame-relay serial interface for legacy frame-relay traffic-shaping.

    Thanks for your help

    Jean-Philippe Bakassa-Traore

  4. Daniel Craig says:

    Hi, I was looking around for a while searching for physical security certifications and I happened upon this site and your post regarding , I will definitely this to my physical security certifications bookmarks!

  5. ExArmic says:

    Petr, I note your typo. MCMLPPP has been introduced in RFC 2686!

  6. will ham says:

    Got one more question. I’m confused about how the two virtual access interfaces (2&3) got created. I noticed that it says that int virtual-access 2 is a member of the bundle interface virtual-access 3. I’m a bit lost.

  7. Payal Patel says:

    refer:http://ieoc.com/forums/t/11788.aspx

    if task says use multilink interface for all qos and ip commands:

    map-class frame-relay FRTS
     frame-relay cir 256000
     frame-relay bc 16000
     frame-relay be 2000
     frame-relay fair-queue // do we need this or fair-queue in class-default

    class-map match-all VOIP
     match ip rtp 16384 16383 // do we need this for all voip traffic with precedence critical
     match ip precedence critical

    policy-map CBWFQ
     class VOIP
      priority percent 50
     class class-default // do we need this or frame-relay fair-queue
      fair-queue

    int s1/1
     frame-relay traffic-shaping

    interface Serial1/1.13 point-to-point
     snmp trap link-status
     no cdp enable
     bandwidth 256 // do we need this
     frame-relay interface-dlci 113 ppp Virtual-Template1
      class FRTS
    end

    interface Virtual-Template1
     no ip address
     ppp multilink
     ppp multilink group 1
     bandwidth 256 // do we need this

    interface Multilink1
     bandwidth 256
     ip unnumbered Loopback0
     no cdp enable
     ppp multilink
     ppp multilink interleave
     ppp multilink group 1
     ppp multilink endpoint hostname
     ppp multilink fragment delay 8
     service-policy output CBWFQ

  8. Tammy Burley says:

    Petr –

    I don’t understand why the VT delay is set to 13ms, resulting in 640 bytes/interval, when the FRTS Bc is set to 480 bytes/interval.

    I’m a little in the weeds here, as I know that FRF.12 is based on access rate. I just can’t seem to bend my head around the fact that the FRTS value is so much less than what the VT is set to.

    Help.

    r/Tammy

  9. ouki says:

    Petr,

    Thanks for the excellent work! One thing that confused me was why you are not creating a "Multilink 1" interface. Was that because there is no need, since only one interface is in the bundle? Please clarify whether we should apply the policy-map and ppp multilink fragment settings on Virtual-Template1 or on the Multilink interface.

  10. @ouki

    There is no need for creating the Multilink interface if the underlying logical connections are not bundled. We are using the “multilink” feature only to allow for interleaving in this situation.

  11. Kevin Hexley says:

    Thanks Petr, great post! Makes things much clearer for me.

  12. [...] INE Blog [4] Circuit Switching in the Internet, Pablo Molinero Fernandez [5] RFC 3945 [6] PPP Multilink Interleaving over Frame-Relay Tags: bandwidth, cbr, circuit-switching, llq, over-provisioning, packet-switching, [...]

  13. Min Pan says:

    Hi Petr,

    Thanks for your excellent article!

    I’ve read the Cisco configuration guide about implementing fragmentation and interleaving with MLPPP over Frame-relay, and the way of configuring is different from your article.

    Here is the link of the cisco document:

    http://www.cisco.com/en/US/docs/ios/qos/configuration/guide/mlppp_over_fr_ps6441_TSD_Products_Configuration_Guide_Chapter.html

    According to this document, the service-policy should be applied under the multilink group interface instead of under the virtual-template interface (the way you’ve configured it in your article), unless you’re using Cisco 7500 and 7600 series routers.

    Could you please advise which way is correct? As we’re dealing with Cisco 3825 and 1841 routers in the CCIE R&S exam, shouldn’t we follow the Cisco configuration guide and configure it under the multilink interface?

    Attach the configuration example here:

    Router> enable

    Router# configure terminal

    Router(config)# interface multilink 1

    Router(config-if)# ip address 10.10.100.1 255.255.255.0

    Router(config-if)# service-policy output policy1

    Router(config-if)# service-policy input policy1

    Router(config-if)# ppp multilink fragment delay 20

    Router(config-if)# ppp multilink interleave

    Router(config-if)# end

    Otherwise, if both ways are ok, what’s the difference between applying the service-policy under virtual-template interface and under mutilink group interface?

    Also, according to the configuration guide, when configuring Multilink PPP over Frame Relay on a multilink group interface, there is no bandwidth configuration, unlike when configuring a virtual-template interface. But the bandwidth configuration is still required, isn’t it? So where should the bandwidth be configured? On the physical interface?

    Looking forward to your reply!

    Min Pan — Sydney

    • The key idea is that you should apply the policy to the “IP” interface: be it virtual-template or multilink. In my case I was using the virtual-template for MLPPPoFR, but you could also aggregate PVCs using a multilink interface and apply the service-policy there.

  14. [...] PPP Multilink Interleaving over Frame-Relay [...]

  15. Leo Gal says:

    Hello Petr
    First off, Thanks for your outstanding work, that helps us all.

    I have a question regarding this scenario: could you point me to some link, or share some info, on how RSVP works on MLPPPoFR with LFI and how I could configure it?
    Thank you very much.
    Leo

  16. ovi says:

    hi, very useful example – both this article and the one where you show us how to replace legacy FRTS:

    http://blog.ine.com/2008/01/24/mqc-based-frame-relay-traffic-shaping/

    I don’t know if i’ll get an answer, given that this article is from 2008, BUT i have the same question as Jean-Philippe Bakassa-Traore:

    Why don’t you use MQC instead of legacy FRTS?
    Are there any restrictions when implementing MQC and doing LFI? Is that why you went for legacy FRTS?

  17. Dmitry Merzlyakov says:

    Hi Petr,

    Could you please comment on the question above from Ovi? The point is IOS drops a warning if I apply MQC-FRTS policy on the Multilink interface:

    %FR-3-MLPOFR_ERROR: MLPoFR not configured properly on Link Virtual-Access2 Bundle Multilink1 :Frame Relay traffic shaping must be enabled

    However, ‘show ppp multilink’ claims Interleaving is enabled. Looks like a bogus message, but not sure.

    Thanks,
    Best regards,
    Dmitry

 
