Posts from ‘IGP’
One of the most important technical protocols on the planet is Open Shortest Path First (OSPF). This highly tunable and very scalable Interior Gateway Protocol (IGP) was designed as the replacement technology for the very problematic Routing Information Protocol (RIP). As such, it has become the IGP chosen by many corporate enterprises.
OSPF’s design, operation, implementation and maintenance can be extremely complex. The 3-Day INE bootcamp dedicated to this protocol will be the most in-depth coverage in the history of INE videos.
This course will be developed by Brian McGahan and Petr Lapukhov. It will be delivered online in a Self-Paced format. The course will be available for purchase soon for $295.
Here is a preliminary outline:
Day 1 OSPF Operations
● Dijkstra Algorithm
● Neighbors and Adjacencies
○ OSPF Packet Formats
○ OSPF Authentication
○ Link-State Information Flooding
About the Protocol
- The algorithm used for this advanced Distance Vector protocol is the Diffusing Update Algorithm (DUAL).
- As we discussed at length in this post, the metric is based upon Bandwidth and Delay values.
- For updates, EIGRP uses Update and Query packets that are sent to the multicast address 224.0.0.10.
- Split horizon and DUAL form the basis of loop prevention for EIGRP.
- EIGRP is a classless routing protocol that supports Variable Length Subnet Masking (VLSM).
- Automatic summarization is on by default, but summarization and filtering can be accomplished anywhere inside the network.
EIGRP forms “neighbor relationships” as a key part of its operation. Hello packets are used to help maintain the relationship. A hold time dictates the assumption that a neighbor is no longer accessible and causes the removal of topology information learned from that neighbor. This hold timer value is reset when any packet is received from the neighbor, not just a Hello packet.
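The hold-timer behavior described above can be sketched in a few lines of Python. This is an illustrative model, not actual EIGRP code; the class and method names are made up, and the 15-second value reflects the common default hold time on high-bandwidth interfaces.

```python
import time

# Illustrative model of the EIGRP neighbor hold timer (not real EIGRP code).
class Neighbor:
    def __init__(self, hold_time=15.0):        # 15 s is the common default
        self.hold_time = hold_time             # on high-bandwidth interfaces
        self.last_heard = time.monotonic()

    def packet_received(self):
        # ANY packet from the neighbor resets the hold timer,
        # not just a Hello packet.
        self.last_heard = time.monotonic()

    def is_expired(self):
        # When the hold timer expires, the neighbor is declared down and
        # topology information learned from it is removed.
        return time.monotonic() - self.last_heard > self.hold_time

n = Neighbor(hold_time=0.05)
assert not n.is_expired()
time.sleep(0.06)
assert n.is_expired()        # nothing heard within the hold time
n.packet_received()
assert not n.is_expired()    # any received packet resets the timer
```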
To start my reading from Petr’s excellent CCDE reading list for his upcoming LIVE and ONLINE CCDE Bootcamps, I decided to start with:
EIGRP for IP: Basic Operation and Configuration by Russ White and Alvaro Retana
I was able to grab an Amazon Kindle version for about $9, and EIGRP has always been one of my favorite protocols.
The text dives right into none other than the composite metric of EIGRP, and it brought a smile to my face as I thought about all of the misconceptions I had regarding this topic from early on in my Cisco studies. Let us review some key points regarding this metric and hopefully put some of your own misconceptions to rest.
- While we have been taught since CCNA days that the EIGRP metric consists of five possible components (Bandwidth, Delay, Load, Reliability, and MTU), a look at the actual formula for the metric computation reveals that MTU is not part of the metric at all. Why have we been taught this, then? Cisco indicates that MTU is used as a tie-breaker in situations that might require it. To review the actual formula that is used to compute the metric, click here.
- Notice from the formula that the K values (constants) control which components of the metric are actually considered. By default, K1 and K3 are set to 1 to ensure that Bandwidth and Delay are used in the calculation. If you wanted to make Bandwidth twice as significant in the calculation, you could set K1 to 2, for example. The metric weights command is used for this manipulation. Note that it starts with a TOS parameter that should always be set to 0; Cisco never fully implemented this functionality.
- The Bandwidth value that affects the metric is taken from the bandwidth command used in interface configuration mode. If you do not provide this value, the Cisco router selects a default based on the interface type.
- The Delay value that affects the metric is taken from the delay command used in interface configuration mode. The default depends on the interface hardware type, e.g. it is lower for Ethernet and higher for Serial interfaces. Note how the Delay parameter allows you to influence EIGRP path selection without manipulating the Bandwidth value. This is convenient, since other mechanisms may rely heavily on the bandwidth setting, e.g. EIGRP bandwidth pacing or absolute QoS reservation values for CBWFQ.
- The actual metric value for a prefix is derived from the SUM of the Delay values along the path and the LOWEST Bandwidth value along the path. This is yet another reason to prefer the more predictable Delay manipulation when changing EIGRP path preference.
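The points above can be tied together with a short Python sketch of the classic, widely published composite metric formula. This is a sketch using the default K values, not INE's or Cisco's actual code; the function name and parameter names are my own.

```python
# Sketch of the classic EIGRP composite metric formula with default
# K values (K1=1, K2=0, K3=1, K4=0, K5=0). Inputs follow the rules above:
# bandwidth is the LOWEST bandwidth along the path (in kbit/s) and delay
# is the SUM of interface delays along the path (in tens of microseconds).
def eigrp_metric(min_bw_kbps, total_delay_tens_usec,
                 load=1, reliability=255,
                 k1=1, k2=0, k3=1, k4=0, k5=0):
    bw = 10_000_000 // min_bw_kbps       # scaled inverse bandwidth term
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * total_delay_tens_usec
    if k5 != 0:                          # reliability term applies only if K5 != 0
        metric = metric * k5 // (reliability + k4)
    return metric * 256

# FastEthernet end to end: BW term is 10^7/100000 = 100, and the default
# delay of 100 usec is 10 tens of microseconds -> (100 + 10) * 256 = 28160
print(eigrp_metric(100_000, 10))         # -> 28160

# Doubling K1 makes Bandwidth twice as significant, as described above:
print(eigrp_metric(100_000, 10, k1=2))   # -> 53760
```

Note that MTU appears nowhere in the formula, which is exactly the point made in the first bullet.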
In the next post on the EIGRP metric, we will examine this at the actual command line, and discuss EIGRP load balancing options. Thanks for reading!
The purpose of event dampening is to reduce the effect of oscillations on routing systems. In general, a periodic process that affects the routing system as a whole should have a period no shorter than the system's convergence time (relaxation time); otherwise, the system will never stabilize and will be constantly updating its state. In reality, complex systems have multiple periodic processes running at the same time, which results in harmonic interference between processes and a complex process spectrum. Considering such behavior is outside the scope of this paper. What we want to do is find optimal settings to filter high-frequency events out of the routing system. In our particular case, the events are interface flaps occurring periodically. We want to make sure that oscillations with period T or less are not reported to the routing system, where T is found empirically, based on the observed or estimated convergence time as suggested above.
Event dampening uses an exponential back-off algorithm to suppress event reporting to the upper-level protocols. Effectively, every time an interface flaps (goes down, to be accurate), a penalty value P is added to the interface penalty counter. If at some point the accumulated penalty exceeds the "suppress" value S, the interface is placed in the suppressed state and further link events are not reported to the upper protocol modules. At all times, the interface penalty counter follows an exponential decay process based on the formula P(t)=P(0)*2^(-t/H), where H is the half-life time setting for the process. As soon as the accumulated penalty decays below the lower boundary R, the reuse value, the interface is unsuppressed, and further changes are again reported to the upper-level protocols.
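The accumulate-then-decay behavior can be simulated directly from the formula above. The numeric values in this sketch are assumptions loosely modeled on Cisco's IP Event Dampening defaults (penalty 1000 per flap, suppress 2000, reuse 1000); treat them as illustrative rather than authoritative.

```python
# Hedged simulation of the dampening penalty model described above.
# Numeric values are assumed defaults, not authoritative.
PENALTY_PER_FLAP = 1000
SUPPRESS = 2000
REUSE = 1000
HALF_LIFE = 5.0  # seconds

def decayed(p0, t):
    """P(t) = P(0) * 2^(-t/H) -- exponential decay of the penalty counter."""
    return p0 * 2 ** (-t / HALF_LIFE)

# Two flaps in quick succession push the penalty to the suppress
# threshold, so the interface enters the suppressed state...
penalty = 2 * PENALTY_PER_FLAP
suppressed = penalty >= SUPPRESS

# ...then the penalty halves every HALF_LIFE seconds until it decays
# below REUSE, at which point the interface is unsuppressed.
tenths = 0
while decayed(penalty, tenths / 10) >= REUSE:
    tenths += 1
t_unsuppress = tenths / 10

print(f"suppressed={suppressed}, unsuppressed after ~{t_unsuppress:.1f}s")
# -> suppressed=True, unsuppressed after ~5.1s
```

With these values the penalty needs just over one half-life to fall from the suppress threshold back to the reuse threshold, which matches the intuition that each half-life halves the penalty.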
In this blog post we are going to discuss some OSPF features related to convergence and scalability. Specifically, we are going to discuss Incremental SPF (iSPF), LSA group pacing, and LSA generation/SPF throttling. Before we begin, let's define convergence as the process of restoring a stable view of the network after a change, and scalability as the ability of the routing protocol to remain stable and well-behaved as the network grows. In general, these two properties are inversely related, i.e. faster convergence generally means a less stable and scalable network, and vice versa. The reason is that faster convergence makes the routing protocol "more sensitive" to oscillating or "noisy" processes, which in turn makes it less stable.
This is the follow up discussion for the post titled, “Have you seen my Router ID?”
The underlying issue here was trying to get OSPF to bypass the usual selection process for Router ID. The normal selection order is:
Manual router ID configured under ospf process
Highest IP address of a loopback in the up state in the respective routing table
Highest IP address of an interface in the up state in the respective routing table
If there are no up interfaces and you have not manually configured a router ID, you will get an error when you try to configure an OSPF process.
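The selection order above can be sketched as a small Python function. The `Interface` class and the names used here are illustrative, not IOS internals; this just models the three-step priority described in the list.

```python
from dataclasses import dataclass

# Illustrative model of the OSPF router ID selection order (not IOS code).
@dataclass
class Interface:
    name: str
    ip: str
    up: bool

def select_router_id(configured_id, interfaces):
    # 1. A manually configured router-id always wins.
    if configured_id:
        return configured_id

    def highest(ifaces):
        # Compare addresses numerically, octet by octet.
        up = [i.ip for i in ifaces if i.up and i.ip]
        return max(up, key=lambda a: tuple(map(int, a.split('.'))),
                   default=None)

    # 2. Highest IP address of a loopback in the up state.
    loopbacks = [i for i in interfaces if i.name.startswith('Loopback')]
    rid = highest(loopbacks)
    if rid:
        return rid

    # 3. Highest IP address of any interface in the up state.
    rid = highest(interfaces)
    if rid:
        return rid

    # No up interfaces and no manual ID: IOS reports an error instead.
    raise ValueError('no router ID available')

ifaces = [Interface('Loopback0', '1.1.1.1', True),
          Interface('FastEthernet0/0', '10.0.0.1', True)]
print(select_router_id(None, ifaces))   # -> 1.1.1.1
```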
In general, most solutions focused around either using the OSPF selection process to one’s advantage, or trying to “hide” the loopback from OSPF, so that it would select something else.
In yesterday’s post, titled “Have you seen my Router ID?”, a challenge section was provided. This post will focus on scrutinizing the section itself, from a strategy / analysis point of view.
From a high level overview, we have two devices peering OSPF over a FastEthernet link, with some loopback networks advertised by one side, and received on the other router. If that was all that the section was asking for, then it should be a task that anyone at CCNA level could complete. When looking at the higher levels of certification, to some point you’re still configuring some of the same items, but you are expected to know more and more about the underlying technologies, theories, and interactions.
Just like other practice lab sections, we are provided with a list of bullet points. In order to get full credit for the section, we need to make sure that we are meeting ALL of the stated requirements. In the event that you don’t know how to complete ALL the items, you are usually better off skipping the section, unless it is needed for core connectivity. When you’re in the studying phase, it’s also a good idea to play “what if”, and ask yourself how you would achieve the task if the section were worded slightly differently.
Make sure that you take the time to read carefully, rather than just diving into a configuration, and that you understand exactly what is being asked.
Taking a look at the individual bullet points:
There is more than one possible solution for this challenge. Feel free to post your proposed answer in the comments section. We will try to keep comments hidden from public view, so that the fun isn’t spoiled for others. Also, don’t feel bad if the answer(s) aren’t immediately apparent. A number of very bright people have been puzzled by this scenario. Answers will be posted on Friday, September 18th.
R1 and R2 are configured with their FastEthernet interfaces on the same subnet. R1 will be forming an OSPF neighbor adjacency to R2 over the FastEthernet interface, and will also be advertising some loopback networks into OSPF.
R1's Relevant Configuration:

interface Loopback1
 ip address 188.8.131.52 255.255.255.255
interface Loopback11
 ip address 184.108.40.206 255.255.255.255
interface Loopback111
 ip address 220.127.116.11 255.255.255.255
interface FastEthernet0/0
 ip address 18.104.22.168 255.255.255.0
 no shut
R2's Relevant Configuration:

interface FastEthernet0/0
 ip address 22.214.171.124 255.255.255.0
 no shut
router ospf 1
 network 126.96.36.199 0.0.0.255 area 0
Your task is to configure R1 while meeting all of the following criteria for requirements and restrictions:
- R2 should see the networks 188.8.131.52/32, 184.108.40.206/32, and 220.127.116.11/32 as OSPF routes in R2's routing table, but they should not appear as IA, E2, or E1.
- The output of “show ip ospf neighbor” on R2 should show 18.104.22.168 as the Neighbor ID for the adjacency to R1, even if R1 is reloaded. No other Neighbor IDs should show up on R2.
- You are not allowed to use the “router-id” command on R1.
- You are not permitted to shut down any interfaces on R1, or remove any of the existing configuration on R1.
- No additional configuration may be added to R2, all configuration for this challenge is done on R1.