Posts Tagged ‘legacy’

Aug 12

This post is a partial excerpt from the QoS section of our IEWB-RS VOL1 V5. We'll skip the discussion of how much of a legacy Custom Queuing is, and get straight to the working details.

The Custom Queuing feature is similar to WFQ in that it tries to share bandwidth between packet flows using the max-min approach: each flow class gets a guaranteed share proportional to its weight, plus any class may claim the "unused" interface bandwidth. However, unlike WFQ, there are no dynamic conversations, just 16 static queues with configurable classification criteria. Custom Queuing assigns a byte counter to every queue (1500 bytes is the default value) and serves the queues in round-robin fashion, proportionally to the counters. Every dequeued packet decrements the queue's byte count by its size, until the count drops to zero.

Custom Queuing supports an additional system queue, number 0. The system queue is a priority queue and is always served first, before the other (regular) queues. By default, the system queue is used for Layer 2 keepalives, but not for routing update packets (e.g. RIP, OSPF, EIGRP). Therefore, it's recommended to map routing update packets to system queue 0 manually, unless the interface is Frame Relay, which uses a special broadcast queue to send broadcasts. Note that all unclassified packets are by default assigned to queue 1 (e.g. routing updates will use this queue unless mapped to some other queue), if the default queue number is not changed.
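As a rough illustration (this is not IOS code, and the queue contents and counter values are made up for the example), the service cycle described above can be sketched as follows:

```python
from collections import deque

def service_cycle(queues, byte_counters):
    """One Custom Queue round-robin pass: each queue keeps sending
    whole packets until its byte counter drops to zero or below."""
    sent = []
    for q, counter in zip(queues, byte_counters):
        remaining = counter
        while q and remaining > 0:
            pkt_size = q.popleft()   # whole packets only
            remaining -= pkt_size    # counter may overshoot below zero
            sent.append(pkt_size)
    return sent

# Two queues of 1500-byte packets with byte counters 3000 and 1500:
# queue 1 sends two packets per cycle, queue 2 sends one (a 2:1 ratio).
q1, q2 = deque([1500] * 4), deque([1500] * 4)
print(service_cycle([q1, q2], [3000, 1500]))  # [1500, 1500, 1500]
```

Note the overshoot: a queue whose counter is down to 100 bytes with a 1500-byte packet at its head still sends the whole packet, which is exactly the rounding problem discussed next.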

The limitation of round-robin scheduling is that it can't naturally dequeue less than one packet (the quantum) from each queue. Since queues may have different average packet sizes (e.g. 60-byte voice packets and 1500-byte TCP packets), this may lead to an undesirable bandwidth distribution ratio: for example, if a queue's byte counter is down to 100 bytes and a 1500-byte packet is at the head of the queue, the packet will still be sent, since the counter is non-zero. In order to make the distribution "fair", every queue's byte counter should be proportional to the queue's average packet size. To make that happen, try to classify traffic so that packets in every queue have approximately the same size. Next, use the following example to calculate byte counters.
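A minimal sketch of that calculation (the traffic classes, shares and packet sizes below are hypothetical):

```python
from fractions import Fraction
from math import ceil

def byte_counters(shares, avg_pkt_sizes):
    """Derive per-queue byte counters: express each share in packets
    per cycle (share / packet size), normalize to the smallest value,
    round up to whole packets, then convert back to bytes."""
    pkts_per_cycle = [Fraction(s, p) for s, p in zip(shares, avg_pkt_sizes)]
    base = min(pkts_per_cycle)
    whole_pkts = [ceil(r / base) for r in pkts_per_cycle]
    return [n * p for n, p in zip(whole_pkts, avg_pkt_sizes)]

# A 20/30/50 percent split across 60-byte voice, 600-byte transactional
# and 1500-byte bulk traffic:
print(byte_counters([20, 30, 50], [60, 600, 1500]))  # [600, 1200, 1500]
```

Note that rounding up to whole packets skews the resulting ratio (600:1200:1500 is roughly 18/36/45 percent rather than 20/30/50); scaling all counters up by a common factor reduces the rounding error.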

Consider the following scenario:


Jul 10

The subject is legacy and is no longer a big issue for the CCIE SP exam. However, it is still worth mentioning a few major features of cell-mode ATM. For this article we'll consider a small network consisting of three routers:

R1—ATM—R2

Basically, cell-mode ATM is an MPLS implementation which uses the native ATM tagging mechanism for label switching. In order for an ATM cloud to become MPLS-enabled, the following is required:

1) A label distribution protocol
2) An IGP running over the ATM cloud

Both require IP connectivity between the ATM devices. However, manually configuring a PVC between every pair of directly connected ATM LSRs would be quite a burden, so another solution was devised. A fixed (well-known) VPI/VCI pair is used on each MPLS-enabled ATM interface to establish a point-to-point link with the peer. This VC could be changed using the mpls atm control-vc command. As soon as you enable mpls ip on an ATM MPLS subinterface, the peers start the discovery phase and establish an LDP connection (TCP session) over the control-VC. Just as a side note, the control-VC uses ATM AAL5 SNAP encapsulation to run IP.

interface ATM3/0.1 mpls
 ip unnumbered Loopback0
 no atm enable-ilmi-trap
 mpls atm control-vc 1 64
 mpls label protocol ldp
 mpls ip

Note that the actual IP address on an MPLS subinterface does not matter much, since the connection is essentially point-to-point. So for the purpose of address conservation you may use ip unnumbered here. Also, you may configure TDP instead of LDP, if you are connecting to another network that uses a different label distribution protocol.

Next, you need to enable an IGP on the MPLS interface, e.g. configure OSPF. The IGP will use the same control-VC to exchange IP packets with the peer. As soon as the IGP converges, LDP will start the label binding process. The significant difference with ATM is that cell-mode MPLS uses the "downstream on-demand" label distribution model. That is, an upstream router will request a label binding (a VPI/VCI pair) from its downstream router with respect to a particular IGP prefix. The router being requested will, in turn, ask its own downstream for this prefix, and the process continues until the tail-end router responds. Let's see how the RIB and the ATM LDP bindings table look on R1 for our sample configuration:

Rack1R1#show ip ospf neighbor 

Neighbor ID     Pri   State           Dead Time   Address         Interface
150.100.100.254   0   FULL/  -        00:00:33    150.100.100.254 ATM3/0.1

Rack1R1#sh ip route ospf
     20.0.0.0/32 is subnetted, 1 subnets
O       20.0.0.1 [110/3] via 150.100.100.254, 00:32:35, ATM3/0.1
     150.100.0.0/32 is subnetted, 1 subnets
O       150.100.100.254 [110/2] via 150.100.100.254, 00:32:35, ATM3/0.1
     150.1.0.0/16 is variably subnetted, 2 subnets, 2 masks
O       150.1.2.2/32 [110/3] via 150.100.100.254, 00:32:35, ATM3/0.1

Rack1R1#show mpls atm-ldp bindings
 Destination: 150.1.1.0/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=10
 Destination: 150.1.2.2/32
    Headend Router ATM3/0.1 (1 hop) 1/34  Active, VCD=10
 Destination: 150.100.100.254/32
    Headend Router ATM3/0.1 (1 hop) 1/35  Active, VCD=11
 Destination: 20.0.0.1/32
    Headend Router ATM3/0.1 (1 hop) 1/33  Active, VCD=9
 Destination: 10.0.0.0/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=9

The label bindings table shows whether the prefix is local (we are the Tailend Router) or we have requested it from our downstream peer (we are the Headend, the requesting router). Also, you can see that the label imposed is a VPI/VCI pair, as should be expected with ATM. Check the other edge of the cloud:

Rack1R2#show ip route ospf
     10.0.0.0/32 is subnetted, 1 subnets
O       10.0.0.1 [110/3] via 150.100.100.254, 01:00:47, ATM3/0.1
     150.100.0.0/32 is subnetted, 1 subnets
O       150.100.100.254 [110/2] via 150.100.100.254, 01:00:47, ATM3/0.1
     150.1.0.0/16 is variably subnetted, 2 subnets, 2 masks
O       150.1.1.1/32 [110/3] via 150.100.100.254, 01:00:47, ATM3/0.1

Rack1R2#show mpls atm-ldp bindings
 Destination: 150.1.2.0/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=9
 Destination: 150.100.100.254/32
    Headend Router ATM3/0.1 (1 hop) 1/35  Active, VCD=10
 Destination: 150.1.1.1/32
    Headend Router ATM3/0.1 (1 hop) 1/34  Active, VCD=9
 Destination: 20.0.0.0/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=8
 Destination: 10.0.0.1/32
    Headend Router ATM3/0.1 (1 hop) 1/33  Active, VCD=8

This router is the "tailend" for two prefixes: 150.1.2.0/24 and 20.0.0.0/24 – the ones it advertises into the IGP.

Verify end-to-end connectivity now:

Rack1R2#ping 10.0.0.1 source loopback 1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
Packet sent with a source address of 20.0.0.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/18/24 ms

Two other features specific to cell-mode ATM are MPLS LDP loop detection and VC-merge. The first feature is specific to the "downstream on-demand" mode of LDP operation. When a downstream router responds to or forwards a label binding request, it prepends its LDP router-id to the list already in the binding request/response. Using this mechanism, any requesting node may check whether its own router-id is already present in the binding request, i.e. verify whether a loop has formed for some reason. You may enable loop detection for LDP using the global configuration command mpls ldp loop-detection on all ATM LSR routers. To verify the effect, issue the following command:
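The mechanism is essentially a path vector. A rough sketch (the router-ids are taken from our sample topology, the function name is invented for illustration):

```python
def forward_binding_request(path, router_id):
    """Each LSR adds its router-id to the path carried in the label
    binding request; seeing your own id again means a loop."""
    if router_id in path:
        raise ValueError(f"LDP loop detected at {router_id}: {path}")
    return path + [router_id]

# A request travels R1 -> R2; the path collects both router-ids.
path = []
for rid in ["150.1.1.1", "150.100.100.254"]:
    path = forward_binding_request(path, rid)
print(path)  # ['150.1.1.1', '150.100.100.254']

# If the request somehow looped back to R1, it would be rejected:
try:
    forward_binding_request(path, "150.1.1.1")
except ValueError as err:
    print(err)
```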

Rack1R1#show mpls atm-ldp bindings path
 Destination: 150.1.1.0/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=10
       Path:	150.100.100.254 	150.1.1.1*
 Destination: 150.1.2.2/32
    Headend Router ATM3/0.1 (1 hop) 1/34  Active, VCD=10
       Path:	150.1.1.1*	150.100.100.254
 Destination: 150.100.100.254/32
    Headend Router ATM3/0.1 (1 hop) 1/35  Active, VCD=11
       Path:	150.1.1.1*	150.100.100.254
 Destination: 20.0.0.1/32
    Headend Router ATM3/0.1 (1 hop) 1/33  Active, VCD=9
       Path:	150.1.1.1*	150.100.100.254
 Destination: 10.0.0.0/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=9
       Path:	150.100.100.254 	150.1.1.1*

The asterisk marks the local router, and the path shows the router-ids received in the request or response packets. The next feature is called "VC-merge", and it allows an ATM-LSR to decrease the number of labels requested downstream by associating the same label with the upstream requests going over the same downstream interface. Technically, it requires the switch to buffer all AAL5 SNAP cells to form a complete PDU before forwarding the packet down. This makes the LSR less efficient with respect to forwarding performance, but greatly reduces the number of label bindings required. You can enable VC-merge mode on a node that has multiple upstream neighbors using the mpls ldp atm vc-merge command.

And probably the last configuration option you need to consider is manually specifying the range of VPI/VCI values used in responses to label binding requests. Remember, you need to specify it on both ends of a point-to-point link between two LSRs, and this feature works only with TDP.

R2:
interface ATM3/0.1 mpls
 ip unnumbered Loopback0
 no atm enable-ilmi-trap
 mpls atm vpi 2 vci-range 64-128
 mpls label protocol tdp
 mpls ip

ATM:
interface ATM2/0.1 mpls
 ip unnumbered Loopback0
 ip router isis
 no atm enable-ilmi-trap
 mpls atm vpi 2 vci-range 64-128
 mpls label protocol tdp
 mpls ip

This results in the following output on the ATM LSR:

ATM#show mpls atm-ldp bindings
 Destination: 150.100.100.254/32
    Tailend Router ATM1/0.1 1/35 Active, VCD=19
    Tailend Router ATM2/0.1 2/66 Active, VCD=20
 Destination: 150.1.2.2/32
    Headend Router ATM2/0.1 (1 hop) 2/65  Active, VCD=19
    Tailend Router ATM1/0.1 1/34 Active, VCD=18
 Destination: 150.1.1.1/32
    Headend Router ATM1/0.1 (1 hop) 1/34  Active, VCD=18
    Tailend Router ATM2/0.1 2/65 Active, VCD=19
 Destination: 20.0.0.1/32
    Headend Router ATM2/0.1 (1 hop) 2/64  Active, VCD=18
    Tailend Router ATM1/0.1 1/33 Active, VCD=17
 Destination: 10.0.0.1/32
    Headend Router ATM1/0.1 (1 hop) 1/33  Active, VCD=17
    Tailend Router ATM2/0.1 2/64 Active, VCD=18

This is it for the basic configuration aspects of cell-mode MPLS pertaining to the CCIE SP exam. Remember, nobody is going to ask you to configure BPX switches there, so focus on more relevant SP topics, such as L3/L2 VPNs and SP Multicast :)


Jul 08

Generally, flow control is a mechanism that allows the receiving party of a connection to control the rate of the sending party. You may see many different implementations of flow-control technologies at different levels of the OSI model (e.g. XON/XOFF for RS-232, the TCP sliding window, B2B credits for Fibre Channel, FECN/BECN for Frame Relay, the ICMP source-quench message, etc). Flow control provides an explicit feedback loop and, in theory, allows building lossless networks that avoid congestion.

For the original Ethernet technology on half-duplex connections there was no way to implement explicit flow control, since only one side could send frames at a time. However, you may still remember Cisco's so-called "back-pressure" feature on some of the older switches, e.g. the Catalyst 1924. The idea was that the switch could send a barrage of dummy frames on a half-duplex link, effectively preventing the attached station from transmitting at given moments of time.


Jan 22

This is the most well-known FRTS method, which has been available for quite a while on Cisco routers. It is now being superseded by MQC-based configurations. The key characteristic is that all settings are configured under the map-class command mode and later applied to a particular set of PVCs. The same configuration concept was used for the legacy ATM configuration mode (map-class atm).

Legacy FRTS has the following characteristics:

- Enabled with the frame-relay traffic-shaping command at the physical interface level
- Incompatible with GTS or MQC commands at the subinterface or physical interface levels
- With FRTS you can enforce a bitrate per VC (VC-granular, unlike GTS) by applying a map-class to the PVC
- When no map-class is explicitly applied to a PVC, its CIR and Tc are set to 56K/125ms by default
- Shaping parameters are configured under the map-class frame-relay configuration submode
- Allows configuring fancy queueing (WFQ/PQ/CQ) or simple FIFO per VC
- No option to configure fancy queueing at the interface level: the interface queue is forced to FIFO (unless FRF.12 is configured)
- Allows for adaptive shaping (throttling down to minCIR) on BECN reception (just as GTS does), plus an option to reflect incoming FECNs as BECNs
- Option to enable adaptive shaping that responds to interface congestion (a non-empty interface queue)
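The shaping parameters relate as Tc = Bc/CIR. A quick sketch of the arithmetic (the helper name is invented; the values match the SHAPE_384K map-class used in the example):

```python
def frts_params(cir_bps, bc_bits, be_bits=0):
    """Token-bucket view of FRTS: Tc is the refill interval, the byte
    increment is Bc in bytes, and the queue byte limit is (Bc+Be)/8."""
    tc_ms = 1000 * bc_bits // cir_bps
    byte_increment = bc_bits // 8
    byte_limit = (bc_bits + be_bits) // 8
    return tc_ms, byte_increment, byte_limit

# CIR 384000 bps with Bc 3840 bits gives the minimal 10 ms Tc,
# a 480-byte increment and a 480-byte limit, as the show output confirms.
print(frts_params(384000, 3840))  # (10, 480, 480)
```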

Example: Shape the PVC to 384Kbps with the minimal Tc (10ms) and WFQ as the per-VC queue


map-class frame-relay SHAPE_384K
 frame-relay cir 384000
 frame-relay bc 3840
 frame-relay be 0
 !
 ! Adaptive shaping: respond to BECNs and interface congestion
 !
 frame-relay adaptive-shaping becn
 frame-relay adaptive-shaping interface-congestion
 !
 ! Per-VC fancy-queueing
 !
 frame-relay fair-queue
!
interface Serial 0/0/0:0
 frame-relay traffic-shaping
!
interface Serial 0/0/0:0.1
 ip address 177.0.112.1 255.255.255.0
 frame-relay interface-dlci 112
  class SHAPE_384K

Verification: Check FRTS settings for the configured PVC


Rack1R1#show frame-relay pvc 112

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 112, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0

  cir 384000    bc 3840      be 0         byte limit 480    interval 10   <------ Shaping parameters
  mincir 192000    byte increment 480   Adaptive Shaping BECN and IF_CONG <---- Adaptive Shaping enabled
  pkts 0         bytes 0         pkts delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  Queueing strategy: weighted fair  <---- WFQ is the per-VC queue
  Current fair queue configuration:
   Discard     Dynamic      Reserved
   threshold   queue count  queue count
    64          16           0
  Output queue size 0/max total 600/drops 0

The other PVC, with no class configured, has CIR set to 56Kbps and uses FIFO as per-VC queue:


Rack1R1#show frame-relay pvc 113

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 113, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0.2

  cir 56000     bc 7000      be 0         byte limit 875    interval 125 <---- CIR=56K
  mincir 28000     byte increment 875   Adaptive Shaping none
  pkts 74        bytes 5157      pkts delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  Queueing strategy: fifo <------------------ FIFO
  Output queue 0/40, 0 drop, 0 dequeued

Check the physical interface queue:


Rack1R1#show interfaces serial 0/0/0:0 | inc Queue
  Queueing strategy: fifo

- The interface queue can be changed to PIPQ (PVCs are assigned to 4 priority groups, with PQ as the physical interface queue)
- PIPQ stands for PVC Interface Priority Queueing

Example: Map PVC 112 traffic to high interface queue, and PVC 113 to low interface queue, with WFQ being per-VC queueing


!
! Shape to 384K and assign to the high interface-level queue
!
map-class frame-relay SHAPE_384K_HIGH
 frame-relay cir 384000
 frame-relay bc 3840
 frame-relay be 0
 !
 ! Per-VC fancy-queueing
 !
 frame-relay fair-queue
 frame-relay interface-queue priority high

!
! Shape to 256K and assign to the low interface-level queue
!
map-class frame-relay SHAPE_256K_LOW
 frame-relay cir 256000
 frame-relay bc 2560
 frame-relay be 0
 !
 ! Per-VC fancy-queueing
 !
 frame-relay fair-queue
 frame-relay interface-queue priority low

!
! Enable PIPQ as interface queueing strategy
!
interface Serial 0/0/0:0
 frame-relay traffic-shaping
 frame-relay interface-queue priority
!
interface Serial 0/0/0:0.1
 ip address 177.0.112.1 255.255.255.0
 frame-relay interface-dlci 112
  class SHAPE_384K_HIGH
!
interface Serial 0/0/0:0.2
 ip address 177.0.113.1 255.255.255.0
 frame-relay interface-dlci 113
  class SHAPE_256K_LOW

Verification: Check PVC interface-level priorities


Rack1R1#show frame-relay pvc 112

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 112, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0.1

  cir 384000    bc 3840      be 0         byte limit 480    interval 10
  mincir 192000    byte increment 480   Adaptive Shaping none
  pkts 1687      bytes 113543    pkts delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  Queueing strategy: weighted fair
  Current fair queue configuration:
   Discard     Dynamic      Reserved
   threshold   queue count  queue count
    64          16           0
  Output queue size 0/max total 600/drops 0
  priority high
  ^^^^^^^^^^^^^

Rack1R1#show frame-relay pvc 113

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 113, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0.2

  cir 256000    bc 2560      be 0         byte limit 320    interval 10
  mincir 128000    byte increment 320   Adaptive Shaping none
  pkts 1137      bytes 79691     pkts delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  Queueing strategy: weighted fair
  Current fair queue configuration:
   Discard     Dynamic      Reserved
   threshold   queue count  queue count
    64          16           0
  Output queue size 0/max total 600/drops 0
  priority low
  ^^^^^^^^^^^^

Verify interface-level queue:


Rack1R1#show interfaces serial 0/0/0:0 | inc (Output|high)
  Output queue (queue priority: size/max/drops):
     high: 0/20/0, medium: 0/40/0, normal: 0/60/0, low: 0/80/0

- With FRF.12 fragmentation configured on any PVC, the physical interface queue is changed to dual FIFO
- This is due to the fact that fragmentation is ineffective without interleaving
- Fragment size is calculated based on the physical interface speed, so as to bound serialization delay
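The fragment-size arithmetic can be sketched as follows (assuming the usual 10 ms serialization-delay target; the helper name is invented):

```python
def frf12_fragment_size(link_bps, target_delay_ms=10):
    """Fragment size in bytes such that one fragment serializes in
    roughly target_delay_ms on the physical link."""
    return link_bps * target_delay_ms // 1000 // 8

# A 512 Kbps physical interface yields 640-byte fragments,
# the value configured in the example.
print(frf12_fragment_size(512000))  # 640
```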

Example: Enable FRF.12 fragmentation for PVC DLCI 112 and physical interface speed 512Kbps

!
! PVC shaped to 384Kbps, with physical interface speed 512Kbps
!
map-class frame-relay SHAPE_384K_FRF12
 frame-relay cir 384000
 frame-relay bc 3840
 frame-relay be 0
 !
 ! Per-VC fancy-queueing
 !
 frame-relay fair-queue

 !
 ! Enable FRF.12 per VC. Fragment size = 512000 * 0.01 / 8 = 640 bytes
 !
 frame-relay fragment 640
!
!
!
interface Serial 0/0/0:0
 frame-relay traffic-shaping
!
interface Serial 0/0/0:0.1
 ip address 177.0.112.1 255.255.255.0
 frame-relay interface-dlci 112
  class SHAPE_384K_FRF12

Verification: Check PVC settings


Rack1R1#show frame-relay pvc 112

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 112, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0.1

  fragment type end-to-end fragment size 640
  cir 384000    bc   3840      be 0         limit 480    interval 10
  mincir 192000    byte increment 480   BECN response no  IF_CONG no
  frags 1999      bytes 135126    frags delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  Queueing strategy: weighted fair
  Current fair queue configuration:
   Discard     Dynamic      Reserved
   threshold   queue count  queue count
    64          16           0
  Output queue size 0/max total 600/drops 0

Look at physical interface queue:


Rack1R1#show interfaces serial 0/0/0:0 | inc Queu|Output q
  Queueing strategy: dual fifo
  Output queue: high size/max/dropped 0/256/0
  Output queue: 0/128 (size/max)

- You can map up to 4 priority queues to 4 different VCs (inverse PIPQ)
- This scenario usually implies multiple PVCs running between two sites (e.g. PVC for voice and PVC for data)

Example: Map voice packets to high interface-level priority queue and send them over PVC 112


!
! Voice bearer
!
access-list 101 permit udp any any range 16384 32767

!
! Simple priority list to classify voice bearer to high queue
!
priority-list 1 protocol ip high list 101

interface Serial 0/0/0:0
 ip address 177.1.0.1 255.255.255.0
 !
 ! We apply the priority group twice: first to implement queueing
 !
 priority-group 1
 !
 ! Next to map priority levels to DLCIs
 !
 frame-relay priority-dlci-group 1 112 112 113 113

Verification:


Rack1R1#show queueing interface serial 0/0/0:0
Interface Serial0/0/0:0 queueing strategy: priority

Output queue utilization (queue/count)
	high/217 medium/0 normal/1104 low/55 

Rack1R1#show frame-relay pvc 112

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 112, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0

  pvc create time 3d01h, last time pvc status changed 3d01h
  Priority DLCI Group 1, DLCI 112 (HIGH), DLCI 112 (MEDIUM)
  DLCI 113 (NORMAL), DLCI 113 (LOW)

- You can change the per-VC queue to CBWFQ/LLQ and still shape with FRTS
- Note that the available bandwidth will be calculated from the minCIR value, which is CIR/2 by default
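In other words, with CIR 384K the policy only has minCIR = 192 Kbps to allocate. A hypothetical sketch of the arithmetic (the helper name is invented):

```python
def cbwfq_available_kbps(cir_kbps, priority_kbps, mincir_kbps=None):
    """Bandwidth left for 'bandwidth' classes under an FRTS map-class:
    the budget is minCIR (CIR/2 unless configured) minus the LLQ
    priority reservation."""
    mincir = mincir_kbps if mincir_kbps is not None else cir_kbps // 2
    return mincir - priority_kbps

# CIR 384K with priority 64K leaves 128 Kbps for class-default, which
# is why a 'bandwidth 192' request is rejected in the verification step.
print(cbwfq_available_kbps(384, 64))  # 128
```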

Example: Implement CBWFQ Per-VC


!
! Classify voice using NBAR
!
class-map VOICE
 match protocol rtp

!
! Simple LLQ
!
policy-map CBWFQ
 class VOICE
  priority 64
 class class-default
  bandwidth 64

!
! Use CBWFQ as queueing strategy
! Note that MinCIR = 384/2=192Kbps
!
map-class frame-relay SHAPE_384K_CBWFQ
 frame-relay cir 384000
 frame-relay bc 3840
 frame-relay be 0
 service-policy output CBWFQ
!
!
!
interface Serial 0/0/0:0
 frame-relay traffic-shaping
!
interface Serial 0/0/0:0.1
 ip address 177.0.112.1 255.255.255.0
 frame-relay interface-dlci 112
  class SHAPE_384K_CBWFQ

Verification: Check the PVC queueing strategy


Rack1R1#show frame-relay pvc 112

PVC Statistics for interface Serial0/0/0:0 (Frame Relay DTE)

DLCI = 112, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0/0:0

  cir 384000    bc 3840      be 0         byte limit 480    interval 10
  mincir 192000    byte increment 480   Adaptive Shaping none
  pkts 0         bytes 0         pkts delayed 0         bytes delayed 0
  shaping inactive
  traffic shaping drops 0
  service policy CBWFQ
 Serial0/0/0:0: DLCI 112 -

  Service-policy output: CBWFQ

    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: protocol rtp
      Queueing
        Strict Priority
        Output Queue: Conversation 40
        Bandwidth 64 (kbps) Burst 1600 (Bytes)
        (pkts matched/bytes matched) 0/0
        (total drops/bytes drops) 0/0

    Class-map: class-default (match-any)
      32 packets, 2560 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Output Queue: Conversation 41
        Bandwidth 128 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 0/0
        (depth/total drops/no-buffer drops) 0/0/0
  Output queue size 0/max total 600/drops 0

To verify that only minCIR worth of bandwidth is allocated to CBWFQ under the map-class, do the following:


Rack1R1(config)#policy-map CBWFQ
Rack1R1(config-pmap)# class VOICE
Rack1R1(config-pmap-c)#  priority 64
Rack1R1(config-pmap-c)# class class-default
Rack1R1(config-pmap-c)#  bandwidth 192
I/f Serial0/0/0:0 DLCI 112 Class class-default requested bandwidth 192 (kbps) Only 128 (kbps) available


Jan 21

As a first and very basic option, you may use Generic Traffic Shaping to implement FRTS. This is a common technique, not unique to Frame Relay, with the following properties:

- Configured using the traffic-shape interface command
- As with standard GTS, the internal shaper queue is basic WFQ
- Configured per interface/subinterface (no PVC granularity)
- GTS may adapt to BECNs and reflect FECNs (a BECN received on any interface/subinterface PVC will cause the shaper rate to throttle back to minCIR)
- FECN reflection activates sending BECNs in response to incoming FECN frames. Note that FECN/BECN response requires the provider to mark frames with the FECN/BECN bits
- You can configure fancy queueing (WFQ/PQ/CQ) at the physical interface level with GTS

Example:


!
! Physical Interface, fancy-queueing may apply here
!

interface Serial 0/0/0:0
 no ip address
 encapsulation frame-relay
 fair-queue

!
! Subinterface, apply GTS here
!

interface Serial0/0/0:0.1 point-to-point
 ip address 177.0.112.1 255.255.255.0
 !
 ! Shaping rate
 !
 traffic-shape rate 512000
 !
 ! "MinCIR", adapt to BECNs
 !
 traffic-shape adaptive 256000
 !
 ! Reflect FECNs as BECNs
 !
 traffic-shape fecn-adapt
 frame-relay interface-dlci 112

Verification:


Rack1R1#show traffic-shape 

Interface   Se0/0/0:0.1
       Access Target    Byte   Sustain   Excess    Interval  Increment Adapt
VC     List   Rate      Limit  bits/int  bits/int  (ms)      (bytes)   Active
-             512000    3200   12800     12800     25        1600      BECN

Rack1R1#show traffic-shape statistics
                  Acc. Queue Packets   Bytes     Packets   Bytes     Shaping
I/F               List Depth                     Delayed   Delayed   Active
Se0/0/0:0.1             0     157       10500     0         0         no

Rack1R1#show traffic-shape queue
Traffic queued in shaping queue on Serial0/0/0:0.1
  Queueing strategy: weighted fair
  Queueing Stats: 0/1000/64/0 (size/max total/threshold/drops)
     Conversations  0/0/32 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 512 kilobits/sec
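The numbers in the show traffic-shape output follow from the token-bucket math. A sketch, assuming the 25 ms interval IOS chose here and the default Be equal to Bc (the helper name is invented):

```python
def gts_params(rate_bps, interval_ms=25):
    """Reconstruct the show traffic-shape fields from rate and interval:
    Bc (sustain bits/interval), Be (excess, defaulting to Bc), the byte
    increment per Tc, and the total byte limit (Bc+Be)/8."""
    bc_bits = rate_bps * interval_ms // 1000
    be_bits = bc_bits                   # default excess = sustain
    increment_bytes = bc_bits // 8
    byte_limit = (bc_bits + be_bits) // 8
    return bc_bits, be_bits, increment_bytes, byte_limit

# Rate 512000 bps, 25 ms interval -> sustain 12800 bits/int,
# excess 12800 bits/int, increment 1600 bytes, byte limit 3200,
# matching the show traffic-shape output above.
print(gts_params(512000))  # (12800, 12800, 1600, 3200)
```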


Jan 15

People commonly run into issues with the ip default-network command putting static routes into their configuration, because they select a network that cannot be considered a candidate default network. I'll show the two common mistakes with this command that cause this to happen.

In the scenario below R4 is receiving a subnet of the 10.0.0.0/8 network (10.1.1.0/24) and has the 172.16.1.0/24 network directly attached to its E0/0 interface. We can also see that the router does not have any static routes configured.

Rack4R4(config)#do show ip route rip
10.0.0.0/24 is subnetted, 1 subnets
R       10.1.1.0 [120/1] via 172.16.1.7, 00:00:10, Ethernet0/0
Rack4R4(config)#
Rack4R4(config)#do show ip interface brief | exclude unassigned
Interface                  IP-Address      OK? Method Status                Protocol
Ethernet0/0                172.16.1.4      YES manual up                    up
Rack4R4(config)#
Rack4R4(config)#do show run | include ip route
Rack4R4(config)#

Now I’ll set the default network to a network that I have a connected route for.

Rack4R4(config)#ip default-network 172.16.1.0
Rack4R4(config)#do show run | include ip default-network
Rack4R4(config)#
Rack4R4(config)#do show run | include ip route
ip route 172.16.0.0 255.255.0.0 172.16.1.0
Rack4R4(config)#

We can see that the ip default-network command put a static route in the configuration since the network I tried to mark as default was directly connected. Now I’ll try to remove it.

Rack4R4(config)#no ip route 172.16.0.0 255.255.0.0 172.16.1.0
%No matching route to delete

And as we can see from the error message the static route wasn’t removed. To remove the static route just do a “no” on the ip default-network command.

Rack4R4(config)#no ip default-network 172.16.1.0
Rack4R4(config)#do show run | include ip route
Rack4R4(config)#

Now we'll try to set the default network to a subnet of a classful network that we have a route for in the routing table (see above).

Rack4R4(config)#ip default-network 10.1.1.0
Rack4R4(config)#do show run | include ip default-network
Rack4R4(config)#
Rack4R4(config)#do show run | include ip route
ip route 10.0.0.0 255.0.0.0 10.1.1.0
Rack4R4(config)#

Once again a static route was added and I’ll need to remove it.

Rack4R4(config)#no ip default-network 10.1.1.0

In order for the ip default-network command to actually work, I'll need to select a classful network. To do this I summarized the subnet to 10.0.0.0/8 on the router that was advertising it to R4.

Rack4R4(config)#do show ip route rip
R    10.0.0.0/8 [120/1] via 172.16.1.7, 00:00:01, Ethernet0/0
Rack4R4(config)#
Rack4R4(config)#ip default-network 10.0.0.0
Rack4R4(config)#do show run | include ip default-network
ip default-network 10.0.0.0
Rack4R4(config)#
Rack4R4(config)#do show run | include ip route
Rack4R4(config)#
Rack4R4(config)#do show ip route | include \*
ia - IS-IS inter area, * - candidate default, U - per-user static route
R*   10.0.0.0/8 [120/1] via 172.16.1.7, 00:00:02, Ethernet0/0
Rack4R4(config)#

As we can see, 10.0.0.0/8 is now flagged as our candidate default network.

