Posts from ‘MPLS’
Update: Congrats to Mark, our winner of 100 rack rental tokens for the first correct answer: XR2 is missing a BGP router-id. In regular IOS, the router-id is chosen from the highest IP address on a Loopback interface. If there is no Loopback interface, the highest IP address of all up/up interfaces is chosen. In IOS XR, however, the router-id will not be chosen from a physical link; it is taken only from the highest Loopback interface, or from the manual router-id command. Per the Cisco documentation:
For BGP sessions between neighbors to be established, BGP must be assigned a router ID. The router ID is sent to BGP peers in the OPEN message when a BGP session is established.
BGP attempts to obtain a router ID in the following ways (in order of preference):
- By means of the address configured using the bgp router-id command in router configuration mode.
- By using the highest IPv4 address on a loopback interface in the system if the router is booted with saved loopback address configuration.
- By using the primary IPv4 address of the first loopback address that gets configured if there are not any in the saved configuration.
If none of these methods for obtaining a router ID succeeds, BGP does not have a router ID and cannot establish any peering sessions with BGP neighbors. In such an instance, an error message is entered in the system log, and the show bgp summary command displays a router ID of 0.0.0.0.
After BGP has obtained a router ID, it continues to use it even if a better router ID becomes available. This usage avoids unnecessary flapping for all BGP sessions. However, if the router ID currently in use becomes invalid (because the interface goes down or its configuration is changed), BGP selects a new router ID (using the rules described) and all established peering sessions are reset.
Since XR2 in this case does not have a Loopback configured, the BGP process cannot initialize. The kicker with this problem is that the documentation states that when it occurs "an error message is entered in the system log"; in this case, however, no syslog message was generated about the error. At least this is the last time this problem will bite me.
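As a quick sketch of the fix (the address and AS number below are hypothetical), you can either give XR2 a Loopback interface or set the router-id explicitly under the BGP process on IOS XR:

```
! Option 1: give XR2 a Loopback from which the router-id is derived
interface Loopback0
 ipv4 address 10.0.0.2 255.255.255.255
!
! Option 2: hard-code the router-id under the BGP process
router bgp 100
 bgp router-id 10.0.0.2
```

Hard-coding the router-id is generally the safer choice, since it keeps the value stable regardless of later interface changes.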
In this blog post we’re going to discuss the fundamental logic of how MPLS tunnels allow applications such as L2VPN & L3VPN to work, and how MPLS tunnels enable Service Providers to run what is known as the “BGP Free Core”. In a nutshell, MPLS tunnels allow traffic to transit over devices that have no knowledge of the traffic’s final destination, similar to how GRE tunnels and site-to-site IPsec VPN tunnels work. To accomplish this, MPLS tunnels use a combination of IGP learned information, BGP learned information, and MPLS labels.
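As a minimal sketch of the "BGP Free Core" idea (process IDs, networks, and interface names here are hypothetical), a P router runs only an IGP and LDP, while BGP lives only on the PE routers; labeled transit traffic never requires the P router to hold BGP routes:

```
! P router: IGP + LDP only -- no BGP configuration at all
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0
!
mpls ldp router-id Loopback0
!
interface GigabitEthernet0/0
 mpls ip
```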
In this blog post we are going to review a number of MPLS scaling techniques. Theoretically, the main factors that limit MPLS network growth are:
- IGP Scaling. Route summarization, the core procedure for scaling all commonly used IGPs, does not work well with MPLS LSPs. We'll discuss the reasons for this and see what solutions are available for deploying MPLS in the presence of IGP route summarization.
- Forwarding State growth. Deploying MPLS TE may be challenging in large networks, as the number of tunnels grows like O(N^2), where N is the number of TE endpoints (typically the number of PE routers). While most networks are nowhere near the breaking point, we are still going to review techniques that allow MPLS TE to scale to very large networks (tens of thousands of routers).
- Management Overhead. MPLS requires additional control-plane components and is therefore more difficult to manage than classic IP networks. This only gets harder as the network grows.
This blog post summarizes some recently developed approaches that address the first two of the issues above. Before we begin, I would like to thank Daniel Ginsburg for introducing me to this topic back in 2007.
Last week we wrapped up the MPLS bootcamp, and it was a blast! A big shout out to all the students who attended, as well as to the many INE staff who stopped by (you know who you are). Thank you all.
Here is the topology we used for the class, as we built the network, step by step.
The class was organized and delivered in 30 specific lessons. Here is the "overview" slide from class:
In the previous MPLS Components post, we discussed the many benefits that MPLS can bring to the network, and we detailed the typical components found in a Layer 3 MPLS VPN design. In this post, we will provide more details for the MPLS components and their important, inner workings. We will make reference to the previous diagram in this post as well:
When PE1 receives a packet from CE1, it will engage in what we call a Push operation. PE1 is considered the ingress PE router and engages in label imposition. (Notice that we like to speak in fancy terminology here; when we add a label to a packet, it is termed a push or an imposition).
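A hedged way to observe the push operation on the ingress PE (the VRF name and prefix below are hypothetical) is to inspect the label stack that will be imposed:

```
! Outgoing label(s) per prefix, including push/swap/pop operations
show mpls forwarding-table
! Full label stack imposed for a specific VRF prefix
show ip cef vrf CUSTOMER_A 10.1.1.0 detail
```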
We know from the 5-Day QoS bootcamp that Differentiated Services is one of the three major overall approaches to providing Quality of Service in an enterprise. The other options are Integrated Services and Best Effort.
When we studied Differentiated Services, we saw that the primary marking technology was the Differentiated Services Code Point (DSCP), the high-order 6 bits of the IP packet's ToS byte. But how can MPLS use these markings in order to provide QoS treatment (Per-Hop Behaviors (PHBs)) to various traffic types?
The first major issue to solve is the fact that Label Switch Routers (LSRs) rely solely on the MPLS header when making forwarding decisions. These devices will no longer analyze the IP Header information, thus negating the use of the ToS Byte. This was solved through the creation of the Experimental Bits field in the MPLS header. The IETF has now renamed the field to the Traffic Class field. See RFC 5462.
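The Traffic Class (EXP) bits are typically set at imposition on the ingress PE. A minimal MQC sketch, assuming a hypothetical class that matches DSCP EF voice traffic:

```
class-map match-all VOICE
 match dscp ef
!
policy-map MARK-EXP
 class VOICE
  ! copy the desired PHB into the MPLS header as the label is pushed
  set mpls experimental imposition 5
!
interface GigabitEthernet0/0
 service-policy input MARK-EXP
```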
One of our students in the INE RS bootcamp today asked about an OSPF sham-link. I thought it would make a beneficial addition to our blog, and here it is. Thanks for the request, Christian!
Reader's Digest version: MPLS networks aren't free. If a customer is using OSPF to peer between the CE and PE routers, and also has an OSPF CE-to-CE neighborship, the CEs will prefer the Intra-Area CE-to-CE routes (sometimes called the "backdoor" route in this situation) instead of the Inter-Area routes learned from the PE that use the MPLS network as a transit path. OSPF sham-links correct this behavior.
This blog post walks through the problem and the solution, including the configuration steps to create and verify a sham-link.
To begin, MPLS is set up in the network as shown with R2 and R4 acting as Provider Edge (PE) routers, and MPLS is enabled throughout R2-R3-R4.
R1 and R5 are Customer Edge (CE) routers, and the Serial0/1.15 interfaces of R1 and R5 are temporarily shut down (this means the backdoor route isn't in place yet, so at the moment there is no problem).
Currently, R1 and R5 see the routes to each other's local networks through the VPNv4 MPLS network, and the routes show up as Inter-Area OSPF routes with the PE routers as the next hop.
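To preview the fix the rest of the post builds toward, a sham-link is anchored on a /32 loopback placed in the VRF on each PE and configured under the VRF-aware OSPF process. A sketch with hypothetical names and addresses (R2 side shown):

```
! R2 (PE): the sham-link endpoint loopback must live in the VRF
! and be advertised via BGP, not OSPF
interface Loopback100
 ip vrf forwarding CUSTOMER_A
 ip address 10.99.0.2 255.255.255.255
!
router ospf 2 vrf CUSTOMER_A
 area 0 sham-link 10.99.0.2 10.99.0.4
```

A mirror-image configuration would go on the other PE, with the source and destination addresses reversed.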
The Inter-AS Multicast VPN solution introduces some challenges in cases where the peering autonomous systems implement a BGP-free core. This post illustrates a known solution to this problem, as implemented in Cisco IOS software. The solution involves the use of special MP-BGP and PIM extensions. The reader is assumed to have an understanding of Cisco's basic mVPN implementation, the PIM protocol, and the Multi-Protocol BGP extensions.
mVPN – Multicast VPN
MSDP – Multicast Source Discovery Protocol
PE – Provider Edge
CE – Customer Edge
RPF – Reverse Path Forwarding
MP-BGP – Multi-Protocol BGP
PIM – Protocol Independent Multicast
PIM SM – PIM Sparse Mode
PIM SSM – PIM Source Specific Multicast
LDP – Label Distribution Protocol
MDT – Multicast Distribution Tree
P-PIM – Provider Facing PIM Instance
C-PIM – Customer Facing PIM Instance
NLRI – Network Layer Reachability Information
Inter-AS mVPN Overview
In this post we are going to discuss the operation of the "traceroute" and "ping" commands in an MPLS environment. The reader is expected to have a solid understanding of MPLS VPN technologies prior to reading this document. Note the use of the terms "MPLS ping/traceroute", which are interchangeable with "LSP ping/traceroute".
The following is the testbed topology we are going to use for simulations. All PE/P routers are 7206s running IOS version 12.0(33)S. Unfortunately, the MPLS ping and traceroute commands are a fairly recent addition to IOS code, and thus you only see them in later 12.4T versions and recent 12.0S images. The IOS versions currently used in the CCIE SP lab do not support the MPLS ping/trace features.
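For reference, on images that do support the feature, LSP ping and traceroute are invoked along these lines (the target FEC below is hypothetical):

```
! Validate the LSP toward the egress PE's loopback FEC
ping mpls ipv4 10.0.0.1/32
! Hop-by-hop LSP verification
traceroute mpls ipv4 10.0.0.1/32
```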
Classic Ping and Traceroute
This subject is legacy and no longer a big issue for the CCIE SP exam. However, it is still worth mentioning a few major features of cell-mode ATM. For our article we'll consider a small network consisting of three routers:
Basically, cell-mode ATM is an MPLS implementation that uses the native ATM tagging mechanism for label switching. In order for an ATM cloud to become MPLS-enabled, the following is required:
1) A label distribution protocol
2) An IGP running over the ATM cloud
Both require IP connectivity between the ATM devices. However, manually configuring a PVC between every pair of directly connected ATM LSRs would be quite a burden, so another solution was devised. A fixed (well-known) VPI/VCI pair is used on each MPLS-enabled ATM interface to establish a point-to-point link with the peer. This VC can be changed using the mpls atm control-vc command. As soon as you enable mpls ip on an ATM MPLS subinterface, the peers start the discovery phase and establish an LDP connection (TCP session) over the control-VC. Just as a side note, the control-VC uses ATM AAL5 SNAP encapsulation to run IP.
interface ATM3/0.1 mpls
 ip unnumbered Loopback0
 no atm enable-ilmi-trap
 mpls atm control-vc 1 64
 mpls label protocol ldp
 mpls ip
Note that the actual IP address on an MPLS subinterface does not matter much, since the connection is essentially point-to-point. So, for the purpose of address conservation, you may use ip unnumbered here. Also, you may configure TDP instead of LDP if you are connecting to another network that uses the other protocol.
Next, you need to enable an IGP on the MPLS interface, e.g. configure OSPF. The IGP will use the same control-VC to exchange IP packets with the peer. As soon as the IGP converges, LDP will start the label binding process. The significant difference with ATM is that cell-mode MPLS uses the "downstream on-demand" label distribution model. That is, an upstream router will request a label binding (a VPI/VCI pair) from its downstream router with respect to a particular IGP prefix. The router being asked will, in turn, query its own downstream for this prefix, and the process continues until the tail-end router responds. Let's see how the RIB and the ATM LDP bindings table look on R1 for our sample configuration:
Rack1R1#show ip ospf neighbor

Neighbor ID     Pri   State      Dead Time   Address           Interface
18.104.22.168     0   FULL/  -   00:00:33    22.214.171.124    ATM3/0.1

Rack1R1#sh ip route ospf
     126.96.36.199/32 is subnetted, 1 subnets
O       188.8.131.52 [110/3] via 184.108.40.206, 00:32:35, ATM3/0.1
     220.127.116.11/32 is subnetted, 1 subnets
O       18.104.22.168 [110/2] via 22.214.171.124, 00:32:35, ATM3/0.1
     126.96.36.199/16 is variably subnetted, 2 subnets, 2 masks
O       188.8.131.52/32 [110/3] via 184.108.40.206, 00:32:35, ATM3/0.1

Rack1R1#show mpls atm-ldp bindings
 Destination: 220.127.116.11/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=10
 Destination: 18.104.22.168/32
    Headend Router ATM3/0.1 (1 hop) 1/34 Active, VCD=10
 Destination: 22.214.171.124/32
    Headend Router ATM3/0.1 (1 hop) 1/35 Active, VCD=11
 Destination: 126.96.36.199/32
    Headend Router ATM3/0.1 (1 hop) 1/33 Active, VCD=9
 Destination: 10.0.0.0/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=9
The label bindings table shows whether a prefix is local (we are the Tailend Router) or whether we have requested it from our downstream peer (we are the Headend, the requesting router). You can also see that the label imposed is a VPI/VCI pair, as you would expect of ATM. Check the other edge of the cloud:
Rack1R2#show ip route ospf
     10.0.0.0/32 is subnetted, 1 subnets
O       10.0.0.1 [110/3] via 188.8.131.52, 01:00:47, ATM3/0.1
     184.108.40.206/32 is subnetted, 1 subnets
O       220.127.116.11 [110/2] via 18.104.22.168, 01:00:47, ATM3/0.1
     22.214.171.124/16 is variably subnetted, 2 subnets, 2 masks
O       126.96.36.199/32 [110/3] via 188.8.131.52, 01:00:47, ATM3/0.1

Rack1R2#show mpls atm-ldp bindings
 Destination: 184.108.40.206/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=9
 Destination: 220.127.116.11/32
    Headend Router ATM3/0.1 (1 hop) 1/35 Active, VCD=10
 Destination: 18.104.22.168/32
    Headend Router ATM3/0.1 (1 hop) 1/34 Active, VCD=9
 Destination: 22.214.171.124/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=8
 Destination: 10.0.0.1/32
    Headend Router ATM3/0.1 (1 hop) 1/33 Active, VCD=8
This router is the tailend for two prefixes, 126.96.36.199/32 and 188.8.131.52, the ones it advertises into the IGP.
Verify end-to-end connectivity now:
Rack1R2#ping 10.0.0.1 source loopback 1

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
Packet sent with a source address of 184.108.40.206
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 8/18/24 ms
Two other features specific to cell-mode ATM are MPLS LDP loop detection and VC-merge. The first is specific to the "downstream on-demand" mode of LDP operation. When a downstream router responds to or forwards a label binding request, it prepends its LDP router-id to the list already in the binding request/response. Using this mechanism, any requesting node can check whether its own router-id is already present in the binding request, i.e. detect whether a loop has formed for some reason. You may enable loop detection with LDP using the global configuration mode command mpls ldp loop-detection on all ATM LSR routers. To verify the effect of this command, issue the following:
Rack1R1#show mpls atm-ldp bindings path
 Destination: 220.127.116.11/24
    Tailend Router ATM3/0.1 1/34 Active, VCD=10
        Path: 18.104.22.168 22.214.171.124*
 Destination: 126.96.36.199/32
    Headend Router ATM3/0.1 (1 hop) 1/34 Active, VCD=10
        Path: 188.8.131.52* 184.108.40.206
 Destination: 220.127.116.11/32
    Headend Router ATM3/0.1 (1 hop) 1/35 Active, VCD=11
        Path: 18.104.22.168* 22.214.171.124
 Destination: 126.96.36.199/32
    Headend Router ATM3/0.1 (1 hop) 1/33 Active, VCD=9
        Path: 188.8.131.52* 184.108.40.206
 Destination: 10.0.0.0/24
    Tailend Router ATM3/0.1 1/33 Active, VCD=9
        Path: 220.127.116.11 18.104.22.168*
The asterisk marks the local router, and the path shows the router-ids collected in the request or response packets. The next feature is called "VC-merge", and it allows an ATM LSR to decrease the number of labels requested downstream by associating the same label with upstream requests that exit over the same downstream interface. Technically, this requires the switch to buffer all AAL5 SNAP cells to form a complete PDU before forwarding the packet down. This makes the LSR less efficient with respect to forwarding performance but greatly reduces the number of label bindings required. You can enable VC-merge on a node that has multiple upstreams using the mpls ldp atm vc-merge command.
And probably the last configuration option you need to consider is manually specifying the range of VPI/VCI values used in responses to label binding requests. Remember, you need to specify it on both ends of the point-to-point link between two LSRs, and this feature works only with TDP.
R2:
interface ATM3/0.1 mpls
 ip unnumbered Loopback0
 no atm enable-ilmi-trap
 mpls atm vpi 2 vci-range 64-128
 mpls label protocol tdp
 mpls ip

ATM:
interface ATM2/0.1 mpls
 ip unnumbered Loopback0
 ip router isis
 no atm enable-ilmi-trap
 mpls atm vpi 2 vci-range 64-128
 mpls label protocol tdp
 mpls ip

This will result in the following output on the ATM LSR:
ATM#show mpls atm-ldp bindings
 Destination: 22.214.171.124/32
    Tailend Router ATM1/0.1 1/35 Active, VCD=19
    Tailend Router ATM2/0.1 2/66 Active, VCD=20
 Destination: 126.96.36.199/32
    Headend Router ATM2/0.1 (1 hop) 2/65 Active, VCD=19
    Tailend Router ATM1/0.1 1/34 Active, VCD=18
 Destination: 188.8.131.52/32
    Headend Router ATM1/0.1 (1 hop) 1/34 Active, VCD=18
    Tailend Router ATM2/0.1 2/65 Active, VCD=19
 Destination: 184.108.40.206/32
    Headend Router ATM2/0.1 (1 hop) 2/64 Active, VCD=18
    Tailend Router ATM1/0.1 1/33 Active, VCD=17
 Destination: 10.0.0.1/32
    Headend Router ATM1/0.1 (1 hop) 1/33 Active, VCD=17
    Tailend Router ATM2/0.1 2/64 Active, VCD=18
This is it for the basic configuration aspects of cell-mode MPLS pertaining to the CCIE SP exam. Remember, nobody is going to ask you to configure BPX switches there, so focus on more relevant SP topics, such as L3/L2 VPNs and SP Multicast.