Posts Tagged ‘ppp’

May
15

 

The following question was recently sent to me regarding PPP and CHAP:

 

At the moment I only have Packet Tracer to practice on, and have been trying to set up CHAP over PPP.

It seems that the “PPP CHAP username xxxx” and “PPP CHAP password xxxx” commands are missing in packet tracer.

I have it set similar to this video… (you can skip the first 1 min 50 secs)

https://www.youtube.com/watch?v=5ltNfaPz0nA

As he doesn’t use the missing commands, if that were to be done on live kit would it just use the hostname and magic number to create the hash?

 

Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?

Thanks, Paul.

 

Here was my reply:

Hi Paul,

When using PPP CHAP keep in mind four fundamental things:

  1. The “magic number” that you see in PPP LCP messages has nothing to do with Authentication or CHAP.  It is simply PPP’s way of trying to verify that it has a bi-directional link with a peer. When sending a PPP LCP message a random Magic Number is generated.  The idea is that you should NOT see your own Magic Number in LCP messages received from your PPP Peer.  If you DO see the same magic number that you transmitted, that means you are talking to yourself (your outgoing LCP CONFREQ message has been looped back to you).  This might happen if the Telco that is providing your circuit is doing some testing and has temporarily looped back your circuit.
  2. At least one of the devices will be initiating the CHAP challenge.  In IOS this is enabled with the interface command, “ppp authentication chap”.  Technically it only has to be configured on one device (usually the ISP router that wishes to “challenge” the incoming caller) but with CHAP you can configure it on both sides if you wish to have bi-directional CHAP challenges.
  3. Both routers need a CHAP password, and you have a couple of options on how to do this.
  4. The “hash” that is generated in an outgoing PPP CHAP Response is created as a combination of three variables, and without knowing all three values the Hash Response cannot be generated:
  • A router’s Hostname
  • The configured PPP CHAP password
  • The PPP CHAP Challenge value

I do all of my lab testing on real hardware so I can’t speak to any “gotchas” that might be present in simulators like Packet Tracer.  But what I can tell you is that on real routers the side that is receiving the CHAP challenge must be configured with an interface-level CHAP password.

The relevant configurations are below as an example.

ISP router that is initiating the CHAP Challenge for incoming callers:

username Customer password cisco
!
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y
!

Customer router placing the outgoing PPP call to ISP:

hostname Customer
!
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

If you have a situation where you expect that the Customer Router might be using this same interface to “call” multiple remote destinations, and use a different CHAP password for each remote location, then you could add the following:

 

Customer router placing the outgoing PPP call to ISP-1 (CHAP password = Bob) and ISP-2 (CHAP password = Sally):

hostname Customer
!
username ISP-1 password Bob
username ISP-2 password Sally
!
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y
!

Notice in the example above, the “username x password y” commands supersede the interface-level command, “ppp chap password x”. But please note that the customer (calling) router always needs the “ppp chap password” command configured at the interface level.  A global “username x password y” in the customer router does not replace this command.  In this situation, if the Customer router placed a call to ISP-3 (for which there IS no “username/password” statement) it would fall back to using the password configured at the interface level.

Lastly, the “username x password y” command needs to be viewed differently depending on whether it is configured on the router that is RESPONDING to a Challenge or on the router that is GENERATING the Challenge:

  • When the command “username X password Y” is configured on the router that is responding to the CHAP Challenge (Customer router), the router’s local “hostname” and password in this command (along with the received Challenge) will be used in the Hash algorithm to generate the CHAP RESPONSE.

 

  • When the command “username X password Y” is configured on the router that is generating the CHAP Challenge (ISP Router), once the ISP router receives the CHAP Authentication Response (which includes the hostname of the Customer/calling router) it will match that received Hostname to a corresponding “username X password Y” statement. If one is found that matches, then the ISP router will perform its own CHAP hash of the username, password, and Challenge that it previously created to see if its own, locally-generated result matches the result that was received in the CHAP Response.

Finally, you asked, “Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?”

Hopefully from my explanations above it is now clear that in the case of bi-directional authentication, the passwords do indeed have to be the same on both sides.
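For illustration, here is a minimal sketch of bi-directional CHAP (the hostnames and the shared password “cisco” are placeholders, not taken from a real lab): each side challenges the other with “ppp authentication chap”, and each side holds a “username” entry for its peer containing the same shared secret.

ISP router:

hostname ISP
!
! same shared secret stored for the peer on both routers
username Customer password cisco
!
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y
!

Customer router:

hostname Customer
!
username ISP password cisco
!
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y
!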

 

Hope that helps!

Keith

 


 

 


Jan
13

In this post, we will examine the PAP and CHAP forms of PPP authentication. The emphasis here will be on the fact that these technologies are one-way in nature. Many of my CCIE-level students believe that they must be configured bidirectionally, I suspect because that is what traditional Cisco classes always demonstrate at the CCNA and CCNP levels.

OK – I have pre-configured two routers, R1 and R2, connected by their Serial 0/0 interfaces. Let us begin with R1 as the PPP PAP server and R2 as the PPP PAP client. If you ALWAYS think of these technologies (PAP and CHAP) in terms of CLIENT and SERVER commands, you will be in excellent shape.
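As a preview, a minimal one-way PAP sketch along those lines might look like this (interface numbers and the password are placeholders): R1 is the server that demands authentication, R2 is the client that sends its credentials, and nothing is configured for authentication in the reverse direction.

R1 (PAP server):

hostname R1
!
! the username/password the server expects from the client
username R2 password CISCO
!
interface Serial0/0
 encapsulation ppp
 ppp authentication pap

R2 (PAP client):

hostname R2
!
interface Serial0/0
 encapsulation ppp
 ! credentials sent when the server requests PAP
 ppp pap sent-username R2 password CISCO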

Continue Reading


Jul
06

If you have ever used IPCP for address allocation with PPP (“ip address negotiated” on the client side and “peer default ip address” on the server side) you may have noticed that the mask assigned to a client is always /32. It does not matter what mask the server uses on its side of the connection; PPP is simply designed to operate this way.
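For reference, the baseline setup looks roughly like this (addresses and the pool name are illustrative): the server hands out a host address from a local pool, and the client installs it as a /32.

Server:

ip local pool POOL 172.16.100.1 172.16.100.254
!
interface Serial1/2
 ip address 172.16.13.3 255.255.255.0
 encapsulation ppp
 ! assign the peer an address from the local pool via IPCP
 peer default ip address pool POOL

Client:

interface Serial0/1
 ip address negotiated
 encapsulation ppp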

However, many have noticed two strange commands, “ppp ipcp mask request” and “ppp ipcp mask X.X.X.X”, under PPP interface configuration mode. If an IPCP-assigned address never uses a custom mask, what would the purpose of those commands be? The answer is simple – to configure on-demand address pools on a client. That is, a client may request DHCP pool parameters from the server using IPCP – for example, a subnet and a mask. The client may then use this information to allocate IP addresses to its subordinates. Here is a configuration to verify this feature. Consider that R1 connects to R3 over a point-to-point link:

R1:
ip dhcp pool LOCAL
   import all
   origin ipcp
!
! Link to R3
!
interface Serial0/1
 ip address pool LOCAL
 encapsulation ppp
 ppp ipcp mask request

R3:
!
! Link to R1
!
interface Serial1/2
 ip address 172.16.13.3 255.255.255.0
 encapsulation ppp
 peer default ip address pool POOL
 clock rate 128000
 ppp ipcp mask 255.255.255.0
!
ip local pool POOL 172.16.100.1 172.16.100.254

Using the “debug ppp negotiation” command on R1 (the client) and R3 (the server) you may see the mask being requested and passed down to the client. Debug output from R1:

Se0/1 IPCP: I CONFREQ [REQsent] id 1 len 10
Se0/1 IPCP:    Address 172.16.13.3 (0x0306AC100D03)
Se0/1 IPCP: O CONFACK [REQsent] id 1 len 10
Se0/1 IPCP:    Address 172.16.13.3 (0x0306AC100D03)
Se0/1 CDPCP: Redirect packet to Se0/1
Se0/1 CDPCP: I CONFREQ [REQsent] id 1 len 4
Se0/1 CDPCP: O CONFACK [REQsent] id 1 len 4
Se0/1 IPCP: I CONFNAK [ACKsent] id 1 len 20
Se0/1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se0/1 IPCP:    Address 172.16.100.3 (0x0306AC106403)
Se0/1 IPCP: O CONFREQ [ACKsent] id 2 len 20
Se0/1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se0/1 IPCP:    Address 172.16.100.3 (0x0306AC106403)
Se0/1 CDPCP: I CONFACK [ACKsent] id 1 len 4
Se0/1 CDPCP: State is Open
Se0/1 IPCP: I CONFACK [ACKsent] id 2 len 20
Se0/1 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se0/1 IPCP:    Address 172.16.100.3 (0x0306AC106403)
Se0/1 IPCP: State is Open
Se0/1 IPCP: Subnet: address 172.16.100.3 mask 255.255.255.0

Debug output from R3:

Se1/2 IPCP: O CONFREQ [Closed] id 1 len 10
Se1/2 IPCP:    Address 172.16.13.3 (0x0306AC100D03)
Se1/2 CDPCP: O CONFREQ [Closed] id 1 len 4
Se1/2 PPP: Process pending ncp packets
Se1/2 IPCP: I CONFREQ [REQsent] id 1 len 20
Se1/2 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C0100000000)
Se1/2 IPCP:    Address 172.16.100.3 (0x0306AC106403)
Se1/2 IPCP: Use our explicit subbnet mask 255.255.255.0
Se1/2 IPCP: O CONFNAK [REQsent] id 1 len 14
Se1/2 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se1/2 CDPCP: I CONFREQ [REQsent] id 1 len 4
Se1/2 CDPCP: O CONFACK [REQsent] id 1 len 4
Se1/2 CDPCP: I CONFACK [ACKsent] id 1 len 4
Se1/2 CDPCP: State is Open
Se1/2 IPCP: I CONFACK [REQsent] id 1 len 10
Se1/2 IPCP:    Address 172.16.13.3 (0x0306AC100D03)
Se1/2 IPCP: I CONFREQ [ACKrcvd] id 2 len 20
Se1/2 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se1/2 IPCP:    Address 172.16.100.3 (0x0306AC106403)
Se1/2 IPCP: Use our explicit subbnet mask 255.255.255.0
Se1/2 IPCP: O CONFACK [ACKrcvd] id 2 len 20
Se1/2 IPCP:    VSO OUI 0x00000C kind 1 (0x000A00000C01FFFFFF00)
Se1/2 IPCP:    Address 172.16.100.3 (0x0306AC106403)

Now this is what you get when you configure “ip address negotiated” on R1:

R1#sh ip interface serial 0/1
Serial0/1 is up, line protocol is up
  Internet address is 172.16.100.5/32
  Broadcast address is 255.255.255.255
  Address determined by IPCP
  Peer address is 172.16.13.3

And this is what shows up when you use a local DHCP address pool for autoconfiguration (note the subnet mask):

R1#sh ip interface serial 0/1
Serial0/1 is up, line protocol is up
  Internet address is 172.16.100.4/24
  Broadcast address is 255.255.255.255
  Address determined by setup command
  Peer address is 172.16.13.3

However, the funniest part is that the R1 serial interface IP address is actually not allocated from the local (on-demand) DHCP pool! Observing the debug output you can see that R1 uses the IP address sent from R3, not one allocated from the local DHCP pool. Then again, the local DHCP pool still holds the requested subnet:

R1#sh ip dhcp pool 

Pool LOCAL :
 Utilization mark (high/low)    : 100 / 0
 Subnet size (first/next)       : 0 / 0
 Total addresses                : 254
 Leased addresses               : 0
 Pending event                  : none
 1 subnet is currently in the pool :
 Current index        IP address range                    Leased addresses
 172.16.100.1         172.16.100.1     - 172.16.100.254    0

R1#sh ip dhcp binding
Bindings from all pools not associated with VRF:
IP address          Client-ID/	 	    Lease expiration        Type
		    Hardware address/
		    User name
R1#

You can see the following on R3:

R3#sh ip local pool POOL
 Pool                     Begin           End             Free  In use
 POOL                     172.16.100.1    172.16.100.254   253       1
...
   172.16.100.1       Se1/2
   172.16.100.2       Se1/2
   172.16.100.3       Se1/2
   172.16.100.4       Se1/2
Inuse addresses:
   172.16.100.4       Se1/2

This is what is so funny about Cisco IOS – you can never be sure a feature works in the most logical way you would expect it to. You can play with this example further, for instance by changing the IP address allocation on R3 to a local DHCP pool or a static IP – there is always something to experiment with!

Further reading:

Configuring the DHCP Server On-Demand Address Pool Manager


Mar
25

A common question that I get from students in class is what the options are to resolve spoke-to-spoke reachability in a Frame-Relay network. Below are your “standard” choices in order of preference:

1) Use point-to-point subinterfaces on the spokes.  This option is preferred as all IP addresses on the subnet will automatically be mapped to the DLCI that is bound to the subinterface.
2) Multipoint interfaces (physical or multipoint subinterfaces) on the spokes with Frame-Relay mappings pointing to the hub’s DLCI to reach the other spokes (see the sketch after this list).
3) Multipoint interfaces on the spokes along with using the OSPF point-to-multipoint network type on all routers on the subnet. Each endpoint will advertise a /32 and this advertisement will be relayed to the other spokes by the hub. This is exactly what the OSPF point-to-multipoint network type was designed for (full layer 3 reachability in a network that doesn’t have full layer 2 connectivity).
4) Use PPP over Frame-Relay (PPPoFR). By using PPPoFR IP will now be running over PPP and not directly over Frame-Relay. This means that IP sees everything as point-to-point links and no layer 3 to layer 2 mappings are needed.
5) Static /32 routes on the spokes pointing to the hub to reach the other spokes. Not a pretty solution, but it will resolve the reachability issue.
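As an illustration of option 2, the spoke-side mappings might look something like this (the addresses and DLCIs are made up for the example): each spoke maps the other spoke’s IP address to its own local DLCI toward the hub, so spoke-to-spoke traffic is relayed through the hub.

R2 (spoke, local DLCI 201 to the hub):

interface Serial0/0
 ip address 10.0.0.2 255.255.255.0
 encapsulation frame-relay
 ! hub
 frame-relay map ip 10.0.0.1 201 broadcast
 ! other spoke, reached via the DLCI to the hub
 frame-relay map ip 10.0.0.3 201

R3 (spoke, local DLCI 301 to the hub):

interface Serial0/0
 ip address 10.0.0.3 255.255.255.0
 encapsulation frame-relay
 frame-relay map ip 10.0.0.1 301 broadcast
 frame-relay map ip 10.0.0.2 301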


Jan
26

To begin with, why would anyone need to run Multilink PPP (MLPPP or MLP) with Interleaving over Frame-Relay? Well, back in the day, when Frame-Relay and ATM were really popular, there was a need to interwork the two technologies: that is, transparently pass encapsulated packets between FR and ATM PVCs. (This is similar in concept to modern L2 VPN interworking, however it was specific to ATM and Frame-Relay.) Let’s imagine a situation where we have slow ATM and Frame-Relay links, used to transport a mix of VoIP and data traffic. As we know, some sort of fragmentation and interleaving scheme should be implemented in order to keep voice quality under control. Since there was no fragmentation scheme common to both ATM and Frame-Relay, people came up with the idea of running PPP (yet another L2 tech) over Frame-Relay and ATM PVCs and using the PPP multilink and interleave features to implement fragmentation. (Actually there was no good scheme for native fragmentation and interleaving with VoIP over ATM – the cell mode technology – how ironic!)

Before coming up with a configuration example, let’s discuss briefly how PPP Multilink and Interleave work. MLPPP is defined in RFC 1990, and its purpose is to group a number of physical links into one logical channel with larger “effective” bandwidth. As we discussed before, MLPPP uses a fragmentation algorithm, where one large frame is split at Layer 2 and replaced with a number of smaller, sequenced frames (using an additional MLPPP header), which are then sent over multiple physical links in parallel. The receiving side accepts the fragments, reorders some of them if needed, and reassembles the pieces into the complete frame using the sequence numbers.

So here comes the interleave feature: small voice packets are not fragmented by MLPPP (no MLPPP header and sequence number added) and are simply inserted (intermixed) among the fragments of a large data packet. Of course, a special interleaving priority queue is used for this purpose, as we have discussed before.

To summarize:

1) MLPPP uses fragmentation scheme where large packets are sliced in pieces and sequence numbers are added using special MLPPP headers
2) Small voice packets are interleaved with fragments of large packets using a special priority queue

We see that MLPPP was originally designed to work with multiple physical links at the same time. However, PPP Multilink Interleave only works with one physical link. The reason is that voice (small) packets are sent without sequence numbers. If we were using multiple physical links, the receiving side might start accepting voice packets out of their original order (due to different physical link latencies). And since voice packets bear no fragmentation headers, there is no way to reorder them. In effect, packets may arrive at their final destination out of order, degrading voice quality.

To overcome this obstacle, Multiclass Multilink PPP (MCMLPPP or MCMLP) was introduced in RFC 2686. Under this RFC, different “fragment streams” or classes are supported at the sending and receiving sides, using independent sequence numbers. Therefore, with MCMLPPP voice packets may be sent using an MLPPP header with a separate sequence number space. As a result, MCMLPPP permits the use of fragmentation and interleaving over multiple physical links at a time.

Now back to our MLPPPoFR example. Let’s imagine a situation where we have two routers (R1 and R2) connected via a FR cloud, with physical ports clocked at 512Kbps and PVC CIR values equal to 384Kbps (there is no ATM interworking in this example). We need to provide priority treatment to voice packets and enable PPP Multilink and Interleave to decrease serialization delays.


[R1]---[DLCI 112]---[Frame-Relay]---[DLCI 211]---[R2]

Start by defining the MQC policy. We need to make sure that the software queue gives voice packets priority treatment, or else interleaving will be useless.


R1 & R2:

!
! Voice bearer
!
class-map VOICE
 match ip dscp ef

!
! Voice signaling
!
class-map SIGNALING
 match ip dscp cs3

!
! CBWFQ: priority treatment for voice packets
!
policy-map CBWFQ
 class VOICE
  priority 48
 class SIGNALING
  bandwidth 8
 class class-default
  fair-queue

Next create a Virtual-Template interface for PPPoFR. We need to calculate the fragment size for MLPPP. Since the physical port speed is 512Kbps, and the required serialization delay should not exceed 10ms (remember, fragment size is based on physical port speed!), the fragment size must be set to 512000/8*0.01=640 bytes. How is the fragment size configured with MLPPP? By using the command “ppp multilink fragment delay” – however, the IOS CLI takes this delay value (in milliseconds) and multiplies it by the configured interface (virtual-template) bandwidth (in our case 384Kbps). We could actually change the virtual-template bandwidth to match the physical interface speed, but this would affect the CBWFQ weights! Therefore, we take the virtual-template bandwidth (384Kbps) and adjust the delay to make sure the fragment size matches the physical interface rate of 512Kbps. This way, the “effective” delay value would be set to “640*8/384 = 13ms” (Fragment_Size*8/CIR) to accommodate the discrepancy between the physical and logical bandwidth. (This may be unimportant if your physical port speed does not differ much from the PVC CIR. However, if you have, say, PVC CIR=384Kbps and port speed 768Kbps, you may want to pay attention to this issue.)


R1:
interface Loopback0
 ip address 177.1.101.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

R2:
interface Loopback0
 ip address 177.1.102.1 255.255.255.255
!
interface Virtual-Template 1
 encapsulation ppp
 ip unnumbered Loopback 0
 bandwidth 384
 ppp multilink
 ppp multilink interleave
 ppp multilink fragment delay 13
 service-policy output CBWFQ

Next we configure PVC shaping settings by using legacy FRTS configuration. Note that Bc is set to CIR*10ms.


R1 & R2:
map-class frame-relay SHAPE_384K
frame-relay cir 384000
frame-relay mincir 384000
frame-relay bc 3840
frame-relay be 0

Finally we apply all the settings to the Frame-Relay interfaces:


R1:
interface Serial 0/0/0:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/0:0.1 point-to-point
 no ip address
 frame-relay interface-dlci 112 ppp virtual-template 1
  class SHAPE_384K

R2:
interface Serial 0/0/1:0
 encapsulation frame-relay
 frame-relay traffic-shaping
!
! Virtual Template bound to PVC
!
interface Serial 0/0/1:0.1  point-to-point
 no ip address
 no frame-relay interface-dlci 221
 frame-relay interface-dlci 211 ppp virtual-Template 1
  class SHAPE_384K

Verification

Two virtual-access interfaces have been cloned. First for the member link:


R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Link is a member of Multilink bundle Virtual-Access3   <---- MLP bundle member
  PPPoFR vaccess, cloned from Virtual-Template1
  Vaccess status 0x44
  Bound to Serial0/0/0:0.1 DLCI 112, Cloned from Virtual-Template1, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:00:52, output never, output hang never
  Last clearing of "show interface" counters 00:04:17
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo       <---------- FIFO is the member link queue
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     75 packets input, 16472 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     86 packets output, 16601 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Second for the MLPPP bundle itself:


R1#show interfaces virtual-access 3
Virtual-Access3 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback0 (177.1.101.1)
  MTU 1500 bytes, BW 384 Kbit, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP
  MLP Bundle vaccess, cloned from Virtual-Template1   <---------- MLP Bundle
  Vaccess status 0x40, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 5 seconds on reset
  Last input 00:01:29, output never, output hang never
  Last clearing of "show interface" counters 00:03:40
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing    <--------- CBWFQ is the bundle queue
  Output queue: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 232 kilobits/sec
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     17 packets input, 15588 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     17 packets output, 15924 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Verify the CBWFQ policy-map:


R1#show policy-map interface
 Virtual-Template1 

  Service-policy output: CBWFQ

    Service policy content is displayed for cloned interfaces only such as vaccess and sessions
 Virtual-Access3 

  Service-policy output: CBWFQ

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 136
        Bandwidth 48 (kbps) Burst 1200 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: SIGNALING (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: ip dscp cs3 (24)
      Queueing
        Output Queue: Conversation 137
        Bandwidth 8 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      17 packets, 15554 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 128
        (total queued/total drops/no-buffer drops) 0/0/0

Check PPP multilink status:


R1#ping 177.1.102.1 source loopback 0 size 1500

Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 177.1.102.1, timeout is 2 seconds:
Packet sent with a source address of 177.1.101.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 64/64/64 ms

R1#show ppp multilink

Virtual-Access3, bundle name is R2
  Endpoint discriminator is R2
  Bundle up for 00:07:49, total bandwidth 384, load 1/255
  Receive buffer limit 12192 bytes, frag timeout 1000 ms
  Interleaving enabled            <------- Interleaving enabled
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x34 received sequence, 0x34 sent sequence   <---- MLP sequence numbers for fragmented packets
  Member links: 1 (max not set, min not set)
    Vi2, since 00:07:49, 624 weight, 614 frag size <------- Fragment Size
No inactive multilink interfaces

Verify the interleaving queue:


R1#show interfaces serial 0/0/0:0
Serial0/0/0:0 is up, line protocol is up
  Hardware is GT96K Serial
  MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation FRAME-RELAY, loopback not set
  Keepalive set (10 sec)
  LMI enq sent  10, LMI stat recvd 11, LMI upd recvd 0, DTE LMI up
  LMI enq recvd 0, LMI stat sent  0, LMI upd sent  0
  LMI DLCI 1023  LMI type is CISCO  frame relay DTE
  FR SVC disabled, LAPF state down
  Broadcast queue 0/64, broadcasts sent/dropped 4/0, interface broadcasts 0
  Last input 00:00:05, output 00:00:02, output hang never
  Last clearing of "show interface" counters 00:01:53
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: dual fifo                        <--------- Dual FIFO
  Output queue: high size/max/dropped 0/256/0         <--------- High Queue
  Output queue: 0/128 (size/max)                      <--------- Low (fragments) queue
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     47 packets input, 3914 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     1 input errors, 1 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     47 packets output, 2149 bytes, 0 underruns
     0 output errors, 0 collisions, 4 interface resets
     0 output buffer failures, 0 output buffers swapped out
     1 carrier transitions
  Timeslot(s) Used:1-24, SCC: 0, Transmitter delay is 0 flags

Further Reading

Reducing Latency and Jitter for Real-Time Traffic Using Multilink PPP
Multiclass Multilink PPP
Using Multilink PPP over Frame Relay


Jan
25

The need for fragmentation

We are going to briefly discuss Layer2 fragmentation schemes, their purpose and configuration examples. Let’s start with a general discussion. Usually, Layer2 fragmentation is used to accomplish one of two goals:

a) Link aggregation, e.g. making a number of physical channels look like one logical link from a Layer 2 standpoint. A good example is PPP Multilink, which breaks large packets into smaller pieces and sends them over multiple physical links simultaneously. Another example is FRF.16 (Multilink Frame-Relay).

b) Decreasing the serialization delay of large packets on slow links. By “slow link”, we mean a link with a “physical” speed (e.g. clock rate) of less than 1 Mbps. The issue usually arises when a mix of bulk data and delay-sensitive traffic (e.g. voice) shares the same link. Large bulky packets (say 1500 bytes in size) may block the interface transmission queue for a long time on slow links, forcing small voice packets (e.g. 60 bytes) to wait longer than the maximum tolerable threshold (say 10ms).

For example, if a physical interface has a clock rate of 384000bps, a large 1500-byte packet would take 1500*8/384000 > 30ms to serialize. So here comes the solution: break large packets into small pieces at Layer 2 to decrease the serialization delay. Say we break one 1500-byte packet into 3×500-byte frames on a 384Kbps link; we get a 10ms transmission delay for each fragment. Look at the following picture ([V] is a voice packet, and [D] is a data packet):


Before fragmentation:

--[DDD][V][DDD][V][V][DDD]--->

After fragmentation:

--[D][D][D][V][D][D][D][V][V][D][D][D]--->

There is still something wrong here: small pieces of a large packet are being sent in a row, effectively blocking the transmission queue the same way as before. So fragmenting alone is not enough – we need a way to make sure the fragments of large packets are “mixed” with voice packets. The technique is called “interleaving”, and it always accompanies fragmentation. With interleaving we get a picture like this:


---[D][V][D][V][V][D][D][D][D]--->

That is, voice packets are not separated by large “islands” of data packets.

So how does interleaving work? Usually, it is accomplished by inserting a special “interleaving” queue before the interface transmission (FIFO) queue. The interleaving queue usually has two parts: “high” and “low” FIFO queues. Small packets (packets smaller than the configured fragment size) go to the “high” queue, while large packets are first fragmented and then assigned to the “low” queue. With this strategy, the “high” queue is a priority queue – it always gets emptied first, and only then does the “low” queue get served.


[Interface Software Queue, e.g. WFQ] -->
  if (Packet.Size < FRAGMENT_SIZE)
    then { put to High_Queue }
    else { fragment and put fragments to Low_Queue }
--> { service High_Queue first, then Low_Queue } --> [Interface Xmit Queue]

We are not done yet! You’ve probably noticed the “Interface Software Queue” in the diagram above. It plays an important role too. Say this is a simple FIFO queue, and a bunch of large data packets sit there ahead of small voice packets. The data packets will get dequeued first and fragmented, and since the “high” interleaving queue is empty, the fragments will be sent one after another anyway. Therefore, the last component needed to make fragmentation and interleaving work properly is a software interface queue that gives voice packets priority treatment. This could be legacy WFQ or modern CBWFQ/LLQ – just remember that voice packets should be taken from the software queue first!

So here are the important things to remember about fragmentation:

1) Fragmentation is not effective without interleaving
2) Interleaving is accomplished by use of additional priority queue
3) The decision to put a packet into the “high” interleaving queue is based solely on packet size
4) Interleaving is inefficient without a software queue that gives small (e.g. voice) packets priority treatment

The situation becomes more complicated when we have multiple logical channels (e.g. PVCs) multiplexed over the same physical channel. For example, with a number of Frame-Relay PVCs assigned to the same physical interface, we get multiple software queues – one per PVC. They all share the same interleaving queue at the physical interface level. Because large packets on one PVC may affect the serialization delay of small packets on another PVC, fragmentation should be turned on for all PVCs simultaneously.
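As an example of what this can look like on Frame-Relay, here is a rough FRF.12 sketch (the fragment size, CIR and class name are arbitrary, and “CBWFQ” stands for a priority-queueing policy-map such as the one shown in the MLPPPoFR post above): the map-class is applied at the physical interface level, so fragmentation and the shaping/queueing settings apply to every PVC on that interface.

map-class frame-relay FRAG_384K
 frame-relay cir 384000
 frame-relay bc 3840
 ! FRF.12 fragmentation; fragment size chosen from the serialization delay target
 frame-relay fragment 640
 ! attach a CBWFQ/LLQ policy so voice is dequeued from the software queue first
 service-policy output CBWFQ
!
interface Serial0/0
 encapsulation frame-relay
 frame-relay traffic-shaping
 ! applies the map-class (and thus fragmentation) to all PVCs on this interface
 frame-relay class FRAG_384K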


Jan
20

Below are a couple of example configurations for PPPoE. Note that you can run into MTU issues when trying to use OSPF over PPPoE. This can easily be resolved with the “ip ospf mtu-ignore” command, as the dialer interface’s MTU is 1492 while the virtual-template’s (virtual-access) MTU is 1500.

*** Client ***
interface Ethernet0/0
 pppoe enable
 pppoe-client dial-pool-number 1
!
interface Dialer1
 ip address 142.1.35.5 255.255.255.0
 encapsulation ppp
 dialer pool 1
 dialer persistent

*** Server ***

vpdn enable
!
vpdn-group CISCO
 accept-dialin
 protocol pppoe
 virtual-template 1
!
interface Ethernet0/0
 pppoe enable
!
interface Virtual-Template1
 ip address 142.1.35.3 255.255.255.0
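If you do run OSPF across this link, a minimal sketch of the MTU workaround mentioned above would be the following (shown on both sides here, reusing the interface names from the example above):

*** Client ***

interface Dialer1
 ! ignore the MTU mismatch during OSPF adjacency formation
 ip ospf mtu-ignore

*** Server ***

interface Virtual-Template1
 ip ospf mtu-ignore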

The next example is using DHCP to assign the client their IP address:

*** Client ***

interface Ethernet0/1
 pppoe enable
 pppoe-client dial-pool-number 1
!
interface Dialer1
 ip address dhcp
 encapsulation ppp
 dialer pool 1
 dialer persistent

*** Server ***

ip dhcp excluded-address 191.1.45.1 191.1.45.3
!
ip dhcp pool MYPOOL
 network 191.1.45.0 255.255.255.0
!
vpdn enable
!
vpdn-group CISCO
 accept-dialin
 protocol pppoe
 virtual-template 1
!
interface Ethernet0/0
 pppoe enable
!
interface Virtual-Template1
 ip address 191.1.45.5 255.255.255.0
 peer default ip address dhcp-pool MYPOOL


Jan
07

Hello Brian,

Can you explain how PPP over Frame Relay works? Also what are the advantages and disadvantages of using it over normal Frame Relay configuration?

Thanks and regards,

Yaser

Hi Yaser,

Frame Relay does not natively support features such as authentication, link quality monitoring, and reliable transmission. Based on this it is sometimes advantageous to encapsulate an additional PPP header between the normal layer 2 Frame Relay encapsulation and the layer 3 protocol. By running PPP over Frame Relay (PPPoFR) we can then implement authentication of Frame Relay PVCs, or even bind multiple PVCs together using PPP Multilink.

PPPoFR is configured in Cisco IOS through the use of a Virtual-Template interface. A Virtual-Template is a PPP encapsulated interface that is designed to spawn a “template” of configuration down to multiple member interfaces. The traditional usage of this interface has been on dial-in access servers, such as the AS5200, to support multiple PPP dial-in clients terminating their connections on a single interface running IP.

The first step in configuring PPPoFR is to create the Virtual-Template interface. This interface is where all logical options, such as IP address and PPP authentication will be configured. The syntax is as follows:

interface Virtual-Template1
 ip address 54.1.7.6 255.255.255.0
 ppp chap hostname ROUTER6
 ppp chap password 0 CISCO

Note the lack of the “encapsulation ppp” command on the Virtual-Template. This command is not needed as a Virtual-Template is always running PPP. This can be seen by looking at the “show interface virtual-template1” output in the IOS. Additionally, in this particular case the remote end of this connection will be challenging the router to authenticate via PPP CHAP. The “ppp chap” subcommands have instructed the router to reply with the username ROUTER6 and an MD5 hash computed from the received CHAP challenge and the password CISCO.

Our next step is to configure the physical Frame Relay interface, and to bind the Virtual-Template to the Frame Relay PVC. This is accomplished as follows:

interface Serial0/0
 encapsulation frame-relay
 frame-relay interface-dlci 201 ppp Virtual-Template1

Note that the “no frame-relay inverse-arp” command is not needed on this interface. Since our IP address is located on the Virtual-Template interface, the Frame Relay process doesn’t actually see IP running over the link. Instead it simply sees a PPP header being encapsulated on the link, while the IPCP protocol of PPP takes care of all the IP negotiation for us. Note that the order in which these steps are performed is significant. If a Virtual-Template interface is applied to a Frame Relay PVC before it is actually created, you may see difficulties getting the link to become active.

Also, when using a Virtual-Template interface it’s important to understand that a Virtual-Access “member” interface is cloned from the Virtual-Template interface when the PPP connection comes up. Therefore the Virtual-Template interface itself will always be in the down/down state. This can affect certain network designs, such as using the backup interface command on a Virtual-Template. In our particular case we can see this effect in the output below:

R6#show ip interface brief | include 54.1.7.6
Virtual-Access1 54.1.7.6 YES TFTP up up
Virtual-Template1 54.1.7.6 YES manual down down

Aside from this there is no other configuration that directly relates to Frame Relay for PPP. Other options such as authentication, reliability, and multilink would be configured under the Virtual-Template interface.
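For instance, a rough sketch of the multilink case (the DLCI numbers are arbitrary here, and the addressing is borrowed from the example above) would bind two PVCs to the same Virtual-Template and enable “ppp multilink” on it, so the Virtual-Access interfaces cloned for each PVC are bundled together:

interface Virtual-Template1
 ip address 54.1.7.6 255.255.255.0
 ppp chap hostname ROUTER6
 ppp chap password 0 CISCO
 ! bundle the member links cloned from this template
 ppp multilink
!
interface Serial0/0
 encapsulation frame-relay
 frame-relay interface-dlci 201 ppp Virtual-Template1
 frame-relay interface-dlci 202 ppp Virtual-Template1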
