Posts Tagged ‘llq’

Nov
08

Abstract

This publication discusses the spectrum of problems associated with transporting Constant Bit Rate (CBR) circuits over packet networks, specifically focusing on VoIP services. It provides practical guidance on calculating voice bandwidth allocation in IP networks, including the maximum bandwidth proportion to allocate and LLQ queue settings. Lastly, the publication discusses the benefits and drawbacks of transporting CBR flows over packet-switched networks and demonstrates some effectiveness criteria.

Introduction

Historically, the main design goal of Packet Switched Networks (PSNs) was optimum bandwidth utilization for low-speed links. Compared to their counterparts, circuit-switched networks (CSNs, such as SONET/SDH networks), PSNs use statistical as opposed to deterministic (synchronous) multiplexing. This feature makes PSNs very effective for bursty traffic sources, i.e. those that send traffic sporadically. Indeed, with many sources, the transmission channel can be optimally utilized by sending traffic only when necessary. Statistical multiplexing is only possible if every node in the network implements packet queueing, because PSNs introduce link contention. One good historical example is the ARPANET: the network's theoretical foundation was developed in Kleinrock's work on distributed queueing systems (see [1]).
Continue Reading


Oct
17

Computing voice bandwidth is usually required for scenarios where you provision an LLQ based on the number of calls and the VoIP codec used. You need to account for the codec rate, Layer 3 overhead (IP, RTP and UDP headers) and Layer 2 overhead (Frame-Relay, Ethernet, HDLC etc. headers). Accounting for Layer 2 overhead is important, since the LLQ policer takes this overhead into account when enforcing the maximum rate.
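As a quick illustration, here is a minimal Python sketch of that calculation (the G.729 codec parameters and the 6-byte Frame-Relay overhead are assumptions chosen for the example, not values from this post):

# Per-call bandwidth = (payload + L3 overhead + L2 overhead) * 8 * packet rate.
# Illustrative assumptions: G.729 sends a 20-byte payload every 20 ms (50 pps).

L3_OVERHEAD = 20 + 8 + 12  # IP + UDP + RTP headers, in bytes

def voice_call_bw_kbps(payload_bytes, packets_per_sec, l2_overhead_bytes):
    """Per-call bandwidth in Kbps, including Layer 2/3 overhead."""
    frame_size = payload_bytes + L3_OVERHEAD + l2_overhead_bytes
    return frame_size * 8 * packets_per_sec / 1000.0

per_call = voice_call_bw_kbps(20, 50, 6)  # G.729 over Frame-Relay (assumed 6-byte header)
print(per_call)        # ~26.4 Kbps per call
print(per_call * 10)   # ~264 Kbps LLQ for ten concurrent calls

Under these assumptions, ten G.729 calls need roughly 264 Kbps in the LLQ, rather than the 80 Kbps the bare 8 Kbps codec rate would suggest.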

Continue Reading


Sep
16

The security appliance supports two kinds of priority queuing – standard priority queuing and hierarchical priority queuing. Let’s configure each in this third part of our blog series.

Standard Priority Queuing

This queuing approach allows you to place your priority traffic in a priority queue, while all other traffic is placed in a best effort queue. You can police all other traffic if needed.

Step 1: Create the priority queue on the interface where you want to configure the standard priority queuing. This is done in global configuration mode with the priority-queue interface_name command. Notice this will place you in priority queue configuration mode where you can optionally manipulate the size of the queue with the queue-limit number_of_packets command. You can also optionally set the depth of the hardware queue with the tx-ring-limit number_of_packets command. Remember that the hardware queue forwards packets until full, and then queuing is handled by the software queue (composed of the priority and best effort queues).

pixfirewall(config)# priority-queue outside
pixfirewall(config-priority-queue)#

Step 2: Use the Modular Policy Framework (covered in Part 2 of these blogs) to configure the prioritized traffic.

pixfirewall(config-priority-queue)# exit
pixfirewall(config)# class-map CM-VOICE
pixfirewall(config-cmap)# match dscp ef
pixfirewall(config-cmap)# exit
pixfirewall(config)# class-map CM-VOICE-SIGNAL
pixfirewall(config-cmap)# match dscp af31
pixfirewall(config-cmap)# exit
pixfirewall(config)# policy-map PM-VOICE-TRAFFIC
pixfirewall(config-pmap)# class CM-VOICE
pixfirewall(config-pmap-c)# priority
pixfirewall(config-pmap-c)# exit
pixfirewall(config-pmap)# class CM-VOICE-SIGNAL
pixfirewall(config-pmap-c)# priority
pixfirewall(config-pmap-c)# exit
pixfirewall(config-pmap)# exit
pixfirewall(config)# service-policy PM-VOICE-TRAFFIC interface outside
pixfirewall(config)# end

Hierarchical Priority Queuing

This queuing approach allows you to shape traffic and allow a subset of the shaped traffic to be prioritized. I have cleared the configuration from the security appliance in preparation for this new configuration. Notice with this approach, you do not configure a priority queue on the interface. Also notice with this approach the nesting of the Policy Maps.

pixfirewall(config)# class-map CM-VOICE
pixfirewall(config-cmap)# match dscp ef
pixfirewall(config-cmap)# exit
pixfirewall(config)# class-map CM-VOICE-SIGNAL
pixfirewall(config-cmap)# match dscp af31
pixfirewall(config-cmap)# exit
pixfirewall(config)# policy-map PM-VOICE-TRAFFIC
pixfirewall(config-pmap)# class CM-VOICE
pixfirewall(config-pmap-c)# priority
pixfirewall(config-pmap-c)# exit
pixfirewall(config-pmap)# class CM-VOICE-SIGNAL
pixfirewall(config-pmap-c)# priority
pixfirewall(config-pmap-c)# exit
pixfirewall(config-pmap)# exit
pixfirewall(config)# policy-map PM-ALL-TRAFFIC-SHAPE
pixfirewall(config-pmap)# class class-default
pixfirewall(config-pmap-c)# shape average 2000000 16000
pixfirewall(config-pmap-c)# service-policy PM-VOICE-TRAFFIC
pixfirewall(config-pmap-c)# exit
pixfirewall(config-pmap)# exit
pixfirewall(config)# service-policy PM-ALL-TRAFFIC-SHAPE interface outside
pixfirewall(config)# end

Verifications for Priority Queuing

These verification commands can be used for both forms of priority queuing. Obviously, you can examine portions of the running configuration to confirm your Modular Policy Framework components. For example:

pixfirewall# show run policy-map
!
policy-map PM-VOICE-TRAFFIC
 class CM-VOICE
  priority
 class CM-VOICE-SIGNAL
  priority
 class class-default
policy-map PM-ALL-TRAFFIC-SHAPE
 class class-default
  shape average 2000000 16000
  service-policy PM-VOICE-TRAFFIC
!

Another example:

pixfirewall# show run class-map
!
class-map CM-VOICE-SIGNAL
 match dscp af31
class-map CM-VOICE
 match dscp ef
!

To verify the statistics of the standard priority queuing configuration, use the following:

pixfirewall# show service-policy priority
Interface outside:
  Service-policy: PM-VOICE-TRAFFIC
    Class-map: CM-VOICE
      Priority:
        Interface outside: aggregate drop 0, aggregate transmit 0
    Class-map: CM-VOICE-SIGNAL
      Priority:
        Interface outside: aggregate drop 0, aggregate transmit 0

You can also view the priority queue statistics for an interface using the following:

pixfirewall# show priority-queue statistics outside
Priority-Queue Statistics interface outside
Queue Type         = BE
Tail Drops         = 0
Reset Drops        = 0
Packets Transmit   = 0
Packets Enqueued   = 0
Current Q Length   = 0
Max Q Length       = 0
Queue Type         = LLQ
Tail Drops         = 0
Reset Drops        = 0
Packets Transmit   = 0
Packets Enqueued   = 0
Current Q Length   = 0
Max Q Length       = 0

To verify the statistics on the shaping you have done with the hierarchical priority queuing, use the following:

pixfirewall# show service-policy shape
Interface outside:
  Service-policy: PM-ALL-TRAFFIC-SHAPE
    Class-map: class-default
      shape (average) cir 2000000, bc 16000, be 16000
      (pkts output/bytes output) 0/0
      (total drops/no-buffer drops) 0/0
      Service-policy: PM-VOICE-TRAFFIC

The next blog entry on this subject will focus on the shape tool available on the PIX/ASA.

Thanks so much for reading!


Sep
12

This blog focuses on QoS on the PIX/ASA and is based on 7.2 code, to be consistent with the CCIE Security Lab Exam as of the date of this post. I will create a later blog regarding the new features in 8.X code for all of you non-exam-biased readers :-)

NOTE: We have already seen, thanks to our readers, that some of these features are very model/license dependent! For example, we have yet to find an ASA that allows traffic shaping.

One of the first things that you discover about QoS on the PIX/ASA when you check the documentation is that none of the QoS tools these devices support are available when you are in multiple context mode. This jumped out at me as a bit strange and I just had to see for myself. So I went to a PIX device, switched to multiple mode, and then searched for the priority-queue global configuration mode command. Notice that, sure enough, the command was not available in the CUSTA context or the system context.

pixfirewall# configure terminal
pixfirewall(config)# mode multiple
WARNING: This command will change the behavior of the device
WARNING: This command will initiate a Reboot
Proceed with change mode? [confirm]
Convert the system configuration? [confirm]
pixfirewall> enable
pixfirewall# show mode
Security context mode: multiple
pixfirewall# configure terminal        
pixfirewall(config)# context CUSTA
Creating context 'CUSTA'... Done. (2)
pixfirewall(config-ctx)# context CUSTA
pixfirewall(config-ctx)# config-url flash:/custa.cfg
pixfirewall(config-ctx)# allocate-interface e2 
pixfirewall(config-ctx)# changeto context CUSTA
pixfirewall/CUSTA(config)# pri?     
configure mode commands/options:
   privilege
pixfirewall/CUSTA# changeto context system
pixfirewall# conf t
pixfirewall(config)# pr?
configure mode commands/options:
   privilege  

OK, so we have no QoS capabilities when in multiple context mode. :-| What QoS capabilities do we possess on the PIX/ASA when operating in single context mode? Here they are:

  • Policing – you will be able to set a “speed limit” for traffic on the PIX/ASA. The policer will discard any packets trying to exceed this rate. I always like to think of the Soup Guy on Seinfeld with this one – “NO BANDWIDTH FOR YOU!” 
  • Shaping – again, this tool allows you to set a speed limit, but it is “kinder and gentler”. This tool will attempt to buffer traffic and send it later should the traffic exceed the shaped rate (see the sketch after this list).
  • Priority Queuing – for traffic (like VoIP) that really hates delay and variable delay (jitter), the PIX/ASA does support priority queuing. The documentation refers to this as Low Latency Queuing (LLQ).
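Here is a minimal Python sketch of that policer-versus-shaper difference using a toy token bucket (the rates, bucket size, and packet sizes are made up purely for illustration, and the shaper's later transmission of queued packets is omitted):

from collections import deque

def token_bucket(packets, rate_bps, bucket_bits, shaper=False):
    """packets: list of (arrival_time_sec, size_bits) tuples."""
    tokens, last, queue, sent, dropped = bucket_bits, 0.0, deque(), 0, 0
    for t, size in packets:
        tokens = min(bucket_bits, tokens + (t - last) * rate_bps)  # refill
        last = t
        if size <= tokens:
            tokens -= size
            sent += 1              # conforming packet: transmit immediately
        elif shaper:
            queue.append(size)     # shaper: buffer the excess for later
        else:
            dropped += 1           # policer: "NO BANDWIDTH FOR YOU!"
    return sent, len(queue), dropped

burst = [(0.0, 12000)] * 5         # five packets arriving at the same instant
print(token_bucket(burst, 16000, 24000))               # (2, 0, 3): policer drops 3
print(token_bucket(burst, 16000, 24000, shaper=True))  # (2, 3, 0): shaper queues 3

The same burst loses three packets to the policer but keeps them (delayed) under the shaper – exactly the “kinder and gentler” distinction described above.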

Now before we get too excited about these tools, we must understand that we are going to face some pretty big limitations in their usage compared to shaping, policing, and LLQ on a Cisco router. We will detail these limitations in future blogs on the specific tools, but here is an example. We might get very excited when we see LLQ in relation to the PIX/ASA, but it is certainly not the LLQ that we are accustomed to on a router. On a router, LLQ is really Class-Based Weighted Fair Queuing (CBWFQ) with the addition of a strict Priority Queue (PQ). On the PIX/ASA, we are just not going to have that type of granular control over many traffic forms. In fact, with the standard priority queuing approach on the PIX/ASA, there is a single LLQ for your priority traffic and all other traffic falls into a best effort queue.

If you have been around QoS for a while, you are going to be very excited about how we set these mechanisms up on the security appliance. We are going to use the Modular Quality of Service Command Line Interface (MQC) approach! The MQC was invented for CBWFQ on the routers, but now we are seeing it everywhere. In fact, on the security appliance it is termed the Modular Policy Framework. This is because it not only handles QoS configurations, but also traffic inspections (including deep packet inspections), and can be used to configure the Intrusion Prevention and Content Management Security Service Modules. Boy, the ole’ MQC sure has come a long way.

While you might be frustrated with some of the limitations of the individual tools, at least there are a couple of combinations that feature the tools working together. Specifically, you can:

  • Use standard priority queuing (for example, for voice) and then police all of the other traffic.
  • Use traffic shaping for all traffic in conjunction with hierarchical priority queuing for a subset of that traffic. Again, in later blogs we will educate you more fully on each tool.

Thanks for reading and I hope you are looking forward to future blog entries on QoS with the ASA/PIX.


Aug
17

Try assessing your understanding of Cisco’s CBWFQ by looking at the following example:

class-map match-all HTTP_R6
 match access-group name HTTP_R6
!
policy-map CBWFQ
 class HTTP_R6
  bandwidth remaining percent 5
!
interface Serial 0/1
  bandwidth 128
  clock rate 128000
  service-policy output CBWFQ

and answering a question about this imaginary scenario: two TCP flows (think of them as HTTP file transfers) are going across the Serial 0/1 interface. One of the flows matches the class HTTP_R6, and the other flow, marked with IP Precedence of 7 (pretty high), does not match any class. The traffic overwhelms the interface, so the system engages CBWFQ. Now the question is: how will CBWFQ share the interface bandwidth between the flows?

Continue Reading


Jan
26

To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with interleaving over Frame-Relay? Well, back in the days when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, to transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking; however, it was specific to ATM and Frame-Relay.) Let’s imagine a situation where we have slow ATM and Frame-Relay links, used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 technology) over the Frame-Relay and ATM PVCs and using the PPP multilink and interleave features to implement fragmentation. (Actually, there was no good scheme for native fragmentation and interleaving with VoIP over ATM – the cell mode technology – how ironic!)

Before coming up with a configuration example, let’s discuss briefly how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with larger “effective” bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm where one large frame is split at Layer 2 and replaced with a bunch of sequenced (by the use of an additional MLPPP header) smaller frames, which are then sent over multiple physical links in parallel. The receiving side accepts the fragments, reorders some of them if needed, and assembles the pieces into the complete frame using the sequence numbers.

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number are added) and are simply inserted (intermixed) among the fragments of large data packets. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.

To summarize:

1) MLPPP uses a fragmentation scheme where large packets are sliced into pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with the fragments of the large packets using a special priority queue (see the sketch below)
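The following toy Python sketch (purely illustrative, not an implementation of the RFC) restates these two points: data frames are sliced into sequenced fragments, while voice packets are transmitted from the priority queue ahead of the fragments, with no sequence numbers of their own:

from collections import deque

def mlppp_fragment(frame, frag_size):
    """Point 1: slice a large frame into fragments tagged with sequence numbers."""
    return [(seq, frame[i:i + frag_size])
            for seq, i in enumerate(range(0, len(frame), frag_size))]

def transmit(data_frame, voice_packets, frag_size):
    """Point 2: drain the interleaving priority queue before each fragment.
    Voice packets carry no MLPPP header or sequence number."""
    priority_q = deque(voice_packets)  # all voice shown queued up front, for simplicity
    fragments = deque(mlppp_fragment(data_frame, frag_size))
    wire = []
    while fragments:
        while priority_q:
            wire.append(("voice", priority_q.popleft()))  # jumps ahead, unsequenced
        seq, data = fragments.popleft()
        wire.append(("fragment", seq, len(data)))
    return wire

print(transmit(b"D" * 1500, [b"v1", b"v2"], frag_size=640))
# [('voice', b'v1'), ('voice', b'v2'), ('fragment', 0, 640),
#  ('fragment', 1, 640), ('fragment', 2, 220)]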

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that voice (small) packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to different physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets may arrive at their final destination out of order, degrading voice quality.

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different “fragment streams” or classes are supported at the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP voice packets may be sent using an MLPPP header with a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at a time.

Now back to our MLPPPoFR example. Let’s imagine a situation where we have two routers (R1 and R2) connected via a FR cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.


[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2]

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.


R1 & R2:

!
! Voice bearer
!
class-map VOICE
 match ip dscp ef

!
! Voice signaling
!
class-map SIGNALING
 match ip dscp cs3

!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
 class VOICE
  priority 48
 class SIGNALING
  bandwidth 8
 class class-default
  fair-queue

Next, create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps, and the required serialization delay should not exceed 10ms (remember, fragment size is based on the physical port speed!), the fragment size must be set to 512000/8*0.01=640 bytes. How is the fragment size configured with MLPPP? By using the command ppp multilink fragment delay – however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could actually change the virtual-template bandwidth to match the physical interface speed, but this would affect the CBWFQ weights! Therefore, we take the virtual-template bandwidth (384Kbps) and adjust the delay to make sure the resulting fragment size matches the physical interface rate of 512Kbps. This way, the “effective” delay value would be set to “640*8/384 = 13ms” (Fragment_Size/CIR*8) to accommodate the physical and logical bandwidth discrepancy. (This may be unimportant if your physical port speed does not differ much from the PVC CIR. However, if you have, say, PVC CIR=384Kbps and port speed 768Kbps, you may want to pay attention to this issue.)
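Condensed into a few lines of Python, the arithmetic above (a worked restatement of the example's numbers, nothing more) looks like this:

port_speed = 512000      # physical port rate, bps
vt_bandwidth = 384000    # virtual-template bandwidth = PVC CIR, bps
target_delay = 0.010     # desired serialization delay on the wire, seconds

# Fragment size is dictated by the PHYSICAL port speed:
frag_size = port_speed / 8 * target_delay       # 640 bytes

# IOS computes fragment size as delay(ms) * virtual-template bandwidth,
# so back-compute the delay value to configure on the virtual-template:
delay_ms = frag_size * 8 / vt_bandwidth * 1000  # ~13.3 -> configure 13 ms
print(frag_size, delay_ms)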


R1:
interface Loopback0
 ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

R2:
interface Loopback0
 ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

Next we configure the PVC shaping settings using a legacy FRTS configuration. Note that Bc is set to CIR multiplied by a 10ms Tc: 384000 * 0.01 = 3840 bits.


R1 & R2:
map-class frame-relay SHAPE_384K
 frame-relay cir 384000
 frame-relay mincir 384000
 frame-relay bc 3840
 frame-relay be 0

Finally we apply all the settings to the Frame-Relay interfaces:


R1:
interface Serial 0/0/0:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 112 ppp virtual-template 1
  class SHAPE_384K

R2:
interface Serial 0/0/1:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 211 ppp virtual-template 1
  class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. First for the member link:


R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Link is a member of Multilink bundle Virtual-Access3   <---- MLP bundle member
  PPPoFR vaccess, cloned from Virtual-Template1
  Vaccess status 0x44
  Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:00:52, output never, output hang never
  Last clearing of "show interface" counters 00:04:17
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo       <---------- FIFO is the member link queue
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     75 packets input, 16472 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     86 packets output, 16601 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Second for the MLPPP bundle itself:


R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP
  MLP Bundle vaccess, cloned from Virtual-Template1   <---------- MLP Bundle
  Vaccess status 0x40, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:01:29, output never, output hang never
  Last clearing of "show interface" counters 00:03:40
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing    <--------- CBWFQ is the bundle queue
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 232 kilobits/sec
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     17 packets input, 15588 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     17 packets output, 15924 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Verify the CBWFQ policy-map:


R1#show policy-map interface
 Virtual-Template1 

  Service-policy output: CBWFQ

    Service policy content is displayed for cloned interfaces only such as vaccess and sessions
 Virtual-Access3 

  Service-policy output: CBWFQ

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 136
        Bandwidth 48 (kbps) Burst 1200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: SIGNALING (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp cs3 (24)
      Queueing
        Output Queue: Conversation 137
        Bandwidth 8 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      17 packets, 15554 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 128
        (total queued/total drops/no-buffer drops) 0/0/0

Check PPP multilink status:


R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
  Endpoint discriminator is R2
  Bundle up for 00:07:49, total bandwidth 384, load 1/255
  Receive buffer limit 12192 bytes, frag timeout 1000 ms
  Interleaving enabled            <------- Interleaving enabled
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x34 received sequence, 0x34 sent sequence   <---- MLP sequence numbers for fragmented packets
  Member links: 1 (max not set, min not set)
    Vi2, since 00:07:49, 624 weight, 614 frag size <------- Fragment Size
No inactive multilink interfaces

Verify the interleaving queue:


R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
  Hardware is GT96K Serial
  MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  LMI enq sent  10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent  0, LMI upd sent  0
  LMI DLCI 1023  LMI type is CISCO  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
  Last input 00:00:05, output 00:00:02, output hang never
  Last clearing of "show interface" counters 00:01:53
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: dual fifo                        <--------- Dual FIFO
  Output queue: high size/max/dropped 0/256/0         <--------- High Queue
  Output queue: 0/128 (size/max)                      <--------- Low (fragments) queue
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     47 packets input, 3914 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     47 packets output, 2149 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 output buffer failures, 0 output buffers swapped out
     1 carrier transitions
  Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay

