Edit: Thanks for playing! You can find the official answer and explanation here.

I had an interesting question come across my desk today which involved a very common area of confusion in OSPF routing logic, and now I'm posing this question to you as a challenge!

The first person to answer correctly will get free attendance to our upcoming CCIE Routing & Switching Lab Cram Session, which runs the week of June 1st 2015, as well as a free copy of the class in download format after it is complete.  The question is as follows:

Given the below topology, where R4 mutually redistributes between EIGRP and OSPF, which path(s) will R1 choose to reach the network, and why?

Bonus Questions:

  • What will R2's path selection be, and why?
  • What will R3's path selection be, and why?
  • Assume R3's link to R1 is lost.  Does this affect R1's path selection? If so, how?

Tomorrow I'll post the topology and config files for CSR1000v, VIRL, GNS3, etc., so you can try this out yourself. But first, answer the question without seeing the result, and see whether your expected result matches the actual one!


Good luck everyone!


One of the most important technical protocols on the planet is Open Shortest Path First (OSPF). This highly tunable and very scalable Interior Gateway Protocol (IGP) was designed as the replacement technology for the very problematic Routing Information Protocol (RIP). As such, it has become the IGP chosen by many corporate enterprises.

OSPF’s design, operation, implementation and maintenance can be extremely complex. The 3-Day INE bootcamp dedicated to this protocol will be the most in-depth coverage in the history of INE videos.

This course will be developed by Brian McGahan, and Petr Lapukhov. It will be delivered online in a Self-Paced format. The course will be available for purchase soon for $295.

Here is a preliminary outline:

Day 1 OSPF Operations

●      Dijkstra Algorithm

●      Neighbors and Adjacencies

○   OSPF Packet Formats

○   OSPF Authentication

○   Link-State information Flooding

●      Concept of Areas

○   Notion of ABR

○   Notion of ASBR

●      Network Types

○   Flooding on P2P Links

○   Flooding with DR

○   Topological Representation

●      The Link State Database

○   LSA Format (Checksum, Seq#, etc)

○   LSA Types

○   LSA Purging

●      The Routing Table

○   How is RIB computed from LSDB

●      Flooding Reduction

○   DNA bit

○   DC Circuits

○   Database Filter

Day 2 Configuring OSPF

●      Basic Configurations

○   Setting Router IDs

○   OSPF and Secondary Addresses

●      NBMA Networks

○   Selecting Network Type

○   Ensuring peer reachability

●      Special Areas

○   Stub Area Types

○   Routing in NSSA Areas

●      OSPF Summarization

○   Internal vs External

●      Virtual Links

○   Transit Capability

○   Summarization and Virtual Links

Day 3 Advanced Topics and Troubleshooting

●      OSPF Fast Convergence

○   L3 and L2 interaction

○   SPF and LSA Throttling

●      OSPF Tuning

○   LSA Pacing

○   Hello Timer Tuning

○   Max-Metric LSA

●      OSPF in MPLS Layer 3 VPNs

○   Superbackbone

○   MP-BGP extensions for OSPF

○   Loop-Prevention Concepts

○   Sham-Link

●      Inter-Area Loop Prevention Caveats

●      Key OSPF Verifications

●      OSPF Troubleshooting Process

○   Adjacency Problems (e.g. MTU issues)

○   Intra-area reachability (e.g. network types mismatch)

○   Inter-area reachability (e.g. summary LSA blocking)

○   Troubleshooting VLs and SLs


Continuing my review of titles from Petr’s excellent CCDE reading list for his upcoming LIVE and ONLINE CCDE Bootcamps, here are further notes to keep in mind regarding EIGRP.

About the Protocol

  • The algorithm used for this advanced Distance Vector protocol is the Diffusing Update Algorithm.
  • As we discussed at length in this post, the metric is based upon Bandwidth and Delay values.
  • For updates, EIGRP uses Update and Query packets that are sent to a multicast address.
  • Split horizon and DUAL form the basis of loop prevention for EIGRP.
  • EIGRP is a classless routing protocol that is capable of Variable Length Subnet Masking.
  • Automatic summarization is on by default, but summarization and filtering can be accomplished anywhere inside the network.

Neighbor Adjacencies

EIGRP forms "neighbor relationships" as a key part of its operation. Hello packets are used to establish and maintain the relationship. When the hold timer expires, the neighbor is assumed to be no longer reachable, and the topology information learned from that neighbor is removed. This hold timer is reset when any packet is received from the neighbor, not just a Hello packet.

EIGRP uses the network type in order to dictate default Hello and Hold Time values:

  • For all point-to-point links - the default Hello is 5 seconds and the default Hold is 15 seconds
  • For all links with a bandwidth above 1.544 Mbps (T1) - the defaults are also 5 and 15 seconds, respectively
  • For multipoint links with a bandwidth of 1.544 Mbps (T1) or less - the default Hello is 60 seconds and the default Hold is 180 seconds

Interestingly, these values are carried in the Hello packets themselves and do not need to match in order for an adjacency to form (unlike OSPF).

Reliable Transport

By default, EIGRP sends updates and other information to the multicast address 224.0.0.10 and the associated multicast MAC address of 01-00-5E-00-00-0A.

For multicast packets that need to be reliably delivered, EIGRP waits until an RTO (retransmission timeout) expires before beginning a recovery action. This RTO value is based on the SRTT (smooth round-trip time) for the neighbor. Both values can be seen in the show ip eigrp neighbors command.

If the router sends out a reliable packet and does not receive an Acknowledgement from a neighbor, the router informs that neighbor to stop listening to multicast until it is told otherwise, and then begins unicasting the update information. Once the router begins unicasting, it will retry 16 times or until the Hold timer expires, whichever is longer. It will then reset the neighbor and declare a Retransmission Limit Exceeded error.

Note that not all EIGRP packets follow this reliable routine - just Updates, Queries, and Replies. Hellos and Acknowledgements are examples of packets that are not sent reliably.


To start my reading from Petr's excellent CCDE reading list for his upcoming LIVE and ONLINE CCDE Bootcamps, I decided to start with:
EIGRP for IP: Basic Operation and Configuration by Russ White and Alvaro Retana
I was able to grab an Amazon Kindle version for about $9, and EIGRP has always been one of my favorite protocols.
The text dives right in to none other than the composite metric of EIGRP and it brought a smile to my face as I thought about all of the misconceptions I had regarding this topic from early on in my Cisco studies. Let us review some key points regarding this metric and hopefully put some of your own misconceptions to rest.

  • While we are taught since CCNA days that the EIGRP metric consists of 5 possible components - BW, Delay, Load, Reliability, and MTU; we realize when we look at the actual formula for the metric computation, MTU is actually not part of the metric. Why have we been taught this then? Cisco indicates that MTU is used as a tie-breaker in a situation that might require it. To review the actual formula that is used to compute the metric, click here.
  • Notice from the formula that the K (constant values) impact which components of the metric are actually considered. By default K1 is set to 1 and K3 is set to 1 to ensure that Bandwidth and Delay are utilized in the calculation. If you wanted to make Bandwidth twice as significant in the calculation, you could set K1 to 2, as an example. The metric weights command is used for this manipulation. Note that it starts with a TOS parameter that should always be set to 0. Cisco never did fully implement this functionality.
  • The Bandwidth that affects the metric is taken from the bandwidth command used in interface configuration mode. If you do not set this value, the Cisco router selects a default based on the interface type.
  • The Delay value that affects the metric is taken from the delay command used in interface configuration mode. Its default depends on the interface hardware type, e.g. it is lower for Ethernet but higher for Serial interfaces. Note how the Delay parameter allows you to influence EIGRP path selection without manipulating the Bandwidth value. This is nice since other mechanisms may rely heavily on the bandwidth setting, e.g. EIGRP bandwidth pacing or absolute QoS reservation values for CBWFQ.
  • The actual metric value for a prefix is derived from the SUM of the delay values in the path, and the LOWEST bandwidth value along the path. This is yet another reason to use more predictive Delay manipulations to change EIGRP path preference.
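To make the arithmetic above concrete, here is a minimal Python sketch of the classic (non-wide) composite metric with the default K values (K1=K3=1, K2=K4=K5=0). The function name and structure are illustrative, not Cisco code; the scaled bandwidth term uses the lowest bandwidth along the path in kbps, and delay is the path sum counted in tens of microseconds:

```python
def eigrp_metric(min_bw_kbps, total_delay_usec,
                 k1=1, k2=0, k3=1, k4=0, k5=0, load=1, reliability=255):
    """Classic EIGRP composite metric (illustrative sketch).

    min_bw_kbps      -- LOWEST interface bandwidth along the path, in kbps
    total_delay_usec -- SUM of interface delays along the path, in microseconds
    """
    bw = 10**7 // min_bw_kbps           # scaled bandwidth term
    delay = total_delay_usec // 10      # delay is counted in tens of microseconds
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                         # reliability term only applies when K5 > 0
        metric = metric * k5 // (reliability + k4)
    return metric * 256                 # note: MTU appears nowhere in the formula

# Two 10 Mbps hops (10000 kbps each) with 1000 usec of delay apiece:
# lowest bandwidth 10000 kbps, total delay 2000 usec.
print(eigrp_metric(10000, 2000))        # -> 307200
```

Doubling K1 (e.g. via metric weights 0 2 0 1 0 0) makes the bandwidth term twice as significant, exactly as described above.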

In the next post on the EIGRP metric, we will examine this at the actual command line, and discuss EIGRP load balancing options. Thanks for reading!


The purpose of event dampening is to reduce the effect of oscillations on routing systems. In general, a periodic process that affects the routing system as a whole should have a period no shorter than the system's convergence (relaxation) time; otherwise, the system will never stabilize and will be constantly updating its state. In reality, complex systems have multiple periodic processes running at the same time, which results in harmonic interference and a complex process spectrum; considering such behavior is outside the scope of this paper. What we want to do is find the optimal settings to filter high-frequency events out of the routing system. In our particular case, the events are interface flaps occurring periodically. We want to make sure that oscillations with period T or less are not reported to the routing system, where T is found empirically, based on the observed or estimated convergence time as suggested above.

Event dampening uses an exponential back-off algorithm to suppress event reporting to the upper-level protocols. Effectively, every time an interface flaps (goes down, to be accurate), a penalty value P is added to the interface penalty counter. If at some point the accumulated penalty exceeds the suppress value S, the interface is placed in the suppressed state and further link events are not reported to the upper protocol modules. At all times, the interface penalty counter follows an exponential decay process based on the formula P(t)=P(0)*2^(-t/H), where H is the half-life setting for the process. As soon as the accumulated penalty decays to the lower boundary R, the reuse value, the interface is unsuppressed and further changes are again reported to the upper-level protocols.


What we want to find is the lowest value of H that suppresses all harmonic oscillation processes with a period of T or lower, but does not suppress longer-period processes. E.g., for T=5 we want to block all oscillations happening every 5 seconds or more often, but still report interface flaps happening every 6 seconds or less often. Look at the figure above and consider that there have been two flap events, separated by the time period T. At the moment of the second flap, the suppress condition is: P*2^(-T/H)+P >= S. Here the left part is the penalty accumulated at the moment of the second flap, assuming the penalty at the moment of the first flap was zero. From this inequality, we quickly find that H >= T/log2(P/(S-P)). If we could make P/(S-P)=2, the formula would be greatly simplified. In Cisco's implementation, P (the penalty) is fixed at 1000, and by setting S=1500 we get 1000/(1500-1000)=2. Therefore, if we select S=1500 and P=1000, our condition becomes H >= T. Since we are looking for the minimal value of H, we can set H=T. Seeded with these values, the event dampening filter will reject all oscillating processes with a period shorter than T. However, there is one more parameter left to find: R, the reuse value.

We may apply the following logic here. If we observe no further events for a duration of 2xT after the last flap, we may assume that the periodic process has stopped; therefore, we may unblock the interface after 2xT seconds. The reuse value can be found by taking the penalty accumulated after the second flap and decaying it for 2xT more seconds: (P*2^(-T/H)+P)*2^(-2T/H) <= R. Since we set H=T, we quickly find that R >= 3/8*P = 375. At this point we have all the parameters we need in order to apply optimal event dampening settings based on the cut-off period for oscillating processes. Here is a sample configuration for T=10 seconds. Notice the last parameter, known as the maximum suppress time: the maximum time that the interface can be kept in the suppressed state. Since our goal is to hold the interface suppressed for at least 2xT seconds, the maximum suppress time is twice the half-life value.

interface FastEthernet0/0
 dampening 10 375 1500 20
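As a sanity check on the derivation, here is a short Python sketch (an illustration of the math above, not router code) for T=10 seconds with Cisco's fixed P=1000 and our chosen S=1500, H=T, R=375:

```python
def penalty(p0, t, half_life):
    """Exponential decay: P(t) = P(0) * 2^(-t/H)."""
    return p0 * 2 ** (-t / half_life)

P, S, R, H, T = 1000, 1500, 375, 10, 10

# Two flaps separated by T seconds: the decayed first penalty plus the
# fresh penalty reaches the suppress threshold S exactly.
after_second_flap = penalty(P, T, H) + P      # 500.0 + 1000 = 1500.0 -> suppressed
assert after_second_flap >= S

# After 2*T further quiet seconds, the penalty decays to the reuse value R.
after_quiet_period = penalty(after_second_flap, 2 * T, H)
print(after_quiet_period)                     # 375.0 -> interface unsuppressed
```

This confirms that the interface suppresses on the second flap within T seconds and becomes reusable exactly 2xT seconds later.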

Lastly, a few words on figuring out the convergence time for your network. To begin with, we only consider IGP protocols in this discussion. Dampening in BGP is more complicated, due to the scale of the routing system involved; the general consensus nowadays is that using dampening in BGP may do more harm than good, due to cascading route withdrawals. Next, for the IGPs, you are generally concerned with a single fault domain, which in a properly designed network is bounded to one IGP area (or EIGRP query scope zone). Convergence time for a single area depends on the following factors:

  • Area size - impacts routing database sizes, affects LSA/Query propagation time and SPF runtime.
  • Weakest (in terms of CPU/memory) router in the area - this is the last router to complete its SPF computations.
  • RIB/FIB sizes - a significant amount of time is spent updating the RIB/FIB tables after IGP re-convergence. Again, this depends on the area size.

To summarize, the main factor is the area size and the number of links in the area (which normally follows a power law based on the number of nodes). However, knowing this fact does not give us a formula for the convergence time. In most cases, you should rely on empirical evidence to obtain it. Starting with one to two seconds could be reasonable, but you should scale this value by a factor of two or three to account for multiple oscillations that may run in the network concurrently. Still, once again, there is no magical formula for this - that is what network engineers and designers are for!


In this blog post we are going to discuss some OSPF features related to convergence and scalability. Specifically, we are going to discuss Incremental SPF (iSPF), LSA group pacing, and LSA generation/SPF throttling. Before we begin, let's define convergence as the process of restoring the stable view of the network after a change, and scalability as the property of the routing protocol to remain stable and well-behaved as the network grows. In general, these two goals pull in opposite directions, i.e. faster convergence generally means a less stable and scalable network, and vice versa. The reason is that faster convergence makes the routing protocol "more sensitive" to oscillating or "noisy" processes, which in turn makes it less stable.

Incremental SPF

The classic Dijkstra SPF algorithm complexity (roughly speaking, the process run-time) depends on the particular network topology, but it is said to be proportional to N*log(N) in a non-densely connected topology, where N is the number of nodes in the area; see [RFC1245]. The computational complexity used to be a limiting factor for the old, slow routers of the 90’s, where a single SPF run could hog the CPU dramatically. Modern routers take on the order of tens, at most hundreds, of milliseconds for a single full SPF run even on the largest topologies. The runtime can be further reduced using an SPF algorithm optimization known as incremental SPF, which was developed quite some time ago; see [ARPOPT] (notice the year 1980 on the paper and the name of the main author: none other than Eric Rosen!).

The implementation might look a bit sophisticated, but the main idea of iSPF is keeping the SPT structure after the first SPF calculation and using it for further computation optimizations. As you remember, the goal of SPF is building an SPT (shortest-path tree) on the network topology graph, rooted at the node that runs the computations. Look at the sample topology (taken from [JDOYLE]) and the SPT for router R1 below:


If R1 would retain the SPT after SPF calculations (at the expense of extra memory) the following three properties could be used for SPF calculation optimization:

Property 1: If a node added to or removed from the topology turns out to be a leaf node of the saved SPT, only a very simple computation is needed to add the new routes. Essentially, the existing tree is simply “extended” by one node, and distance-vector-like computations can be performed:


The same optimization property might be utilized when a stub link is added or removed to or from any node in the network.
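Property 1 can be illustrated with a toy example (hypothetical node names and link costs): after one full Dijkstra run, attaching a leaf only requires adding the link cost to the parent's already-known distance, with no full SPF re-run:

```python
import heapq

def dijkstra(graph, root):
    """Full SPF run: shortest distance from root to every reachable node."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale queue entry, skip it
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def add_leaf(dist, parent, leaf, cost):
    """Property 1: a new leaf never changes existing paths, so its
    distance is simply dist(parent) + link cost -- no full SPF needed."""
    new_dist = dict(dist)
    new_dist[leaf] = dist[parent] + cost
    return new_dist

# Hypothetical topology: R1 -> R2 (cost 10) -> R3 (cost 5)
dist = dijkstra({"R1": [("R2", 10)], "R2": [("R3", 5)]}, "R1")
print(add_leaf(dist, "R3", "N1", 2))        # new leaf N1 lands at 15 + 2 = 17
```

The incremental answer matches what a full Dijkstra re-run over the extended graph would produce, which is precisely why this optimization is safe for leaf changes.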

Property 2: A link fails, and the link is NOT part of the saved SPT. For example, consider that the link between R4 and R5 fails. This link is not part of the saved SPT, and therefore there is no need to perform any SPF calculation at all!


Even though there is great benefit in not making any SPF calculations, it’s hard to predict how many link failures would cause such effect. Besides, different routers would have different SPTs and a link failure that does not affect one SPT, might affect the others.

Notice that in the case of a transit link addition or a link cost change, i.e. when the graph “connectivity” increases, this property does not apply and a new tree must be built.

Property 3: The last property is more generic. Assume there is a transit link failure in the topology and it affects a part of the saved SPT, which means properties (1) and (2) do not apply. For example, imagine there is a link fault between R2 and R4. Still, we only need to re-calculate the paths for the nodes downstream of the failure, based on our existing SPT. So the router would initiate SPT calculations from R1 to R4 only, not ever bothering with other nodes. However, if there is a link failure between R1 and R5, then the router would have to recalculate the paths to R5, R6 and R7 – the nodes on the downstream tree under the failed link.


One effect is that the farther away from the root node the failure is, the fewer computations potentially have to be done. Even though remote link failures take more time to propagate to the local node via LSA flooding, they result in shorter iSPF run-time, since the number of affected downstream nodes is smaller. Once again, since different routers have different SPTs for the topology, the same failure may have different effects on iSPF efficiency for different routers.

Another important fact is that this feature performs better in sparsely connected networks. In the asymptotic case of a fully-meshed topology, a single transit-link failure would cause re-running the SPF for all nodes, resulting in the same performance as classic SPF.

iSPF and PRC

Among all the iSPF properties, property (1) is probably the most important and effective in practice. This property puts OSPF on par with the Partial Route Computation (PRC) feature found in IS-IS. The PRC feature made ISIS very effective in situations when new stub links were added, as ISIS propagates network reachability information separately from topological information, thanks to LSP’s TLV-based structure. Different TLVs are used for IS-node reachability information and network prefixes associated with the node. When an ISIS router receives an LSP that only lists network prefix change, it performs partial SPF recalculation, based on distance-vector logic, by adding the new prefix in the routing table with the cost of reaching the originating router, similar to property (1). Only a change in the transit link status would trigger full SPF computation in ISIS.

In the past, the PRC feature made ISIS more scalable than OSPF for a single-area design. The problem with OSPF was that topological information and network reachability information for router links were conveyed in a single type-1 LSA. Whether there was a transit link failure or simply a stub link going up or down, OSPFv2 had to perform a complete SPF calculation, because both changes arrived as the same kind of LSA update. Only type-3 and type-5 LSA changes would trigger PRC-like partial computation in OSPFv2. There was a trick to make better use of this with OSPFv2 – advertise all connected interfaces via redistribution, which resulted in type-5 LSAs being flooded. However, the tradeoff was that the type-5 LSA flooding scope is the whole routing domain, type-5 LSAs have the largest size among the LSA types, and the configuration is slightly more complex.

The introduction of iSPF made OSPF as effective as ISIS as far as SPF computation goes. To be fair, we should mention that iSPF was also added to ISIS, so now both protocols are equally effective for SPF computations. By default, iSPF is disabled; it can be enabled using the command ispf under the routing process configuration mode. By enabling iSPF you make OSPF use slightly more memory than by default, proportional to 2*N, where N is the number of nodes in the area. This is due to the fact that a spanning tree over N nodes has exactly N-1 edges.

LSA Group Pacing

As you remember, OSPF LSAs have two important attributes – age and checksum. The age field is needed to guarantee wiping of the outdated information and the checksum is needed to maintain the information integrity. By default, the maximum LSA age is one hour, and the originating router is supposed to re-flood the LSAs every 30 minutes. In addition to that, the router needs to run periodic check-summing on all LSAs.

The way Cisco routers originally did that was by running a refresh procedure every 30 minutes and refreshing every self-originated LSA in the database, no matter how old it was. This would result in sudden CPU spikes every 30 minutes in case of large databases in addition to bursty LSA flooding. Every router in the routing domain in turn would have to receive and process a large amount of LSA information. This is a good example of a “synchronization” problem. In addition to this refreshing, every 10 minutes a router would run periodic check-summing and an aging procedure, and flush any aged non-self-originated or corrupted LSAs.

In order to alleviate the 30-minute refreshing problem, Cisco IOS implemented an independent aging procedure for every LSA in the LSDB. A short-period scheduler would scan the database and decrement every LSA’s age individually. Only LSAs close to their “half-life” of 30 minutes would be re-flooded. This is the opposite of doing a “complete” refresh every 30 minutes. However, this process would result in many quick floods during the 30-minute interval, as a result of independent aging. This might be viewed as a “fragmentation” problem.

The “balanced” solution is known as “group pacing”. Instead of refreshing an LSA instantly as soon as it reaches its half-life age, the router would wait a “pacing interval” amount of time to group various LSAs with similar age. The pacing interval is normally shorter than the 30 minute “grand interval” and defaults to 240 seconds. Thus, a router would attempt to group LSAs with similar lifetime and refresh them simultaneously.
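The grouping idea can be sketched as follows (a simplification with hypothetical LSA ids, not the actual IOS implementation): LSAs whose refresh deadlines fall within the same pacing window are flooded together as one batch:

```python
def pacing_groups(lsa_ages, pacing_interval=240, refresh_age=1800):
    """Bucket LSAs by the pacing window in which their refresh is due.

    lsa_ages        -- mapping of LSA id -> current age in seconds
    pacing_interval -- group pacing timer (defaults to 240 seconds)
    refresh_age     -- LSAs are refreshed at their half-life of 30 minutes
    """
    groups = {}
    for lsa_id, age in lsa_ages.items():
        seconds_until_refresh = max(0, refresh_age - age)
        window = seconds_until_refresh // pacing_interval
        groups.setdefault(window, []).append(lsa_id)
    return groups

# Three LSAs: two are close to their 30-minute refresh, one is still young.
# lsa-a and lsa-b land in the same window (0) and are refreshed together;
# lsa-c is batched much later, in window 7.
print(pacing_groups({"lsa-a": 1700, "lsa-b": 1650, "lsa-c": 100}))
```

A shorter pacing interval produces more, smaller buckets, which is exactly the "smaller bursts" behavior described below.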


Look at the diagram above for an illustration of the concept. The original refreshing procedure would produce large bursts of LSA flooding every 30 minutes. Individual aging would result in fragmentary re-flooding. The group pacing feature introduces controlled bursting: the shorter the interval, the smaller the “bursts”.

The same pacing concept can be applied to check-summing and aging. Specifically, if we ran individual timers for every LSA, aiming at check-summing and aging every 10 minutes, we would get the same fragmentary “CPU-spiking” pattern. Instead of running the process individually for every LSA, grouping based on the same group pacing interval can be used for check-summing and aging, so that a small batch of LSAs close to being aged out or check-summed is processed together.

The IOS commands to control the various group pacing intervals are:

timers pacing ?
flood OSPF flood pacing timer
lsa-group OSPF LSA group pacing timer
retransmission OSPF retransmission pacing timer

The LSA group interval is used for the refreshing/aging/check-summing grouping discussed above. The “retransmission” keyword is a bit more interesting. Every time the router needs to retransmit an unacknowledged LSA over an adjacency, it may wait some time to group it with other unacknowledged LSAs. This is the same grouping principle, and it allows for better packing of LSA information into IP packets. Of course, the “retransmission” grouping interval is much shorter than the LSA grouping interval and is measured in milliseconds.

The “flood” keyword serves a similar purpose, but controls the interface LSA flood list. For every interface the OSPF process keeps the “flood list”, which contains the LSAs that have been generated or received and are destined to be flooded out of this interface. Instead of flooding every LSA as soon as it hits the list, the OSPF process would wait the “pacing interval” for more potential LSAs and pack them in a single update packet. This process optimizes bandwidth and CPU usage on both sides of the adjacency. Of course, the resolution for this timer is set in milliseconds, due to the real-time nature of the process.

So what are the optimal group pacing timer values? Probably the defaults, which could be found as follows:

Rack1R5#show ip ospf | inc transmission|pacing
LSA group pacing timer 240 secs
Interface flood pacing timer 33 msecs
Retransmission pacing timer 66 msecs

For very large LSDBs, you generally want to set the LSA group pacing timer to be inversely proportional to the size of the database. This ensures fewer “surges” when flooding large LSA batches. Keep in mind that tuning LSA group pacing improves OSPF performance and thus protocol scalability, but does not affect convergence speed.

SPF and LSA Generation Throttling

Throttling is the general process of slowing down responses to frequently oscillating events such as link flaps. The general idea is to reduce resource waste in unstable situations and wait until the situation calms down. If you have a link that flaps up and down frequently, you may want to suppress the link-state information flooding until it becomes stable (either stably down or up). Throttling is critical for ensuring network stability and thus protocol scalability. The procedure is very similar to event dampening, though they differ slightly: dampening suppresses events outright, while throttling simply increases the response times, hoping that the oscillations stop, or at least that the responses no longer follow the same oscillating pattern and thus filter out the high-frequency noise.

The general idea is as follows. When an event occurs, e.g. a link goes down or new LSA arrives, do not respond to it immediately, e.g. by generating an LSA or running SPF, but wait some time, hoping to accumulate more similar events, e.g. waiting for the link to go back up, or more LSAs arriving. This could potentially save a lot of resources, by reducing the number of SPF runs or amount of LSAs flooded. The question is – how long should we hold or throttle the responses? Ideally, it would be nice to adapt this interval according to the network conditions – i.e. make it longer when the network is unstable and shorter under stable conditions. Cisco implements an exponential back-off timer to implement this idea. Here is how it works.

The exponential back-off is defined using three parameters: start interval, increment, and max_wait time, specified using the command timers throttle spf start increment max_wait. Suppose the network was stable for a relatively long time, and then an event such as an LSA arrival occurs. The router delays SPF computation for start milliseconds and sets the next hold-time to increment milliseconds. If another event occurs after the start window has expired, its processing is held until the increment-millisecond window expires, and the next hold-time is doubled, i.e. set to 2*increment. Effectively, every time an event occurs during the current hold-time window, the processing is delayed until the current window expires and the next hold-time interval is doubled. The hold-time grows exponentially until it reaches the max_wait value. After this, every event received during the current hold-time window results in the next interval being equal to the constant max_wait. This ensures that the exponential growth is limited by a ceiling value. If there are no events for 2*max_wait milliseconds, the hold-time window is reset back to start, assuming the network has returned to a stable condition.

Look at the figure below.


The first event schedules an SPF run in start milliseconds. At the same time, the next hold interval is set to increment milliseconds. Since there is an event during the second hold interval, the third hold interval is set to 2xincrement. There is another event during the third window, which would set the fourth window to 4xincrement; however, in our case this exceeds the max_wait value, so the fourth hold-time interval equals max_wait milliseconds. There are more events during the fourth interval, but since the maximum hold-time value has been reached, the fifth interval is set to max_wait milliseconds as well. Since there are no events during the fifth and sixth intervals, the hold-time is reset to start milliseconds again.
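The hold-time progression under constant event pressure can be sketched in a few lines of Python (an illustration of the back-off logic, not IOS source; the example parameters are hypothetical):

```python
def throttle_delays(num_events, start, increment, max_wait):
    """Delay applied to each of num_events back-to-back events, where every
    event arrives inside the previous hold-time window (the worst case)."""
    delays = [start]                     # first event after a quiet period
    hold = increment
    for _ in range(num_events - 1):
        delays.append(hold)
        hold = min(hold * 2, max_wait)   # double, capped at the max_wait ceiling
    return delays

# e.g. "timers throttle spf 10 100 1000" under constant event pressure:
print(throttle_delays(7, 10, 100, 1000))
# -> [10, 100, 200, 400, 800, 1000, 1000]
```

Once the network stays quiet for 2*max_wait milliseconds, the sequence starts over at the start value.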

Although SPF response to LSA arrivals was used in the examples above, the same idea applies to generating new LSAs in response to local link events. This can be controlled using the LSA generation throttling command timers throttle lsa start increment max_wait. Both SPF and LSA generation throttling are on by default, and you will probably want to reduce their values only if you really need to speed up network convergence. Keep in mind, however, that improving response time automatically results in less stable routing protocol behavior.

Summary and Further Reading

We briefly discussed three extensions to the OSPFv2 protocol: iSPF, LSA group pacing, and event throttling. The first two features improve OSPF performance, while the last allows for better scalability and dynamic adaptation to unstable network topologies. Even though these and other OSPFv2 enhancements significantly increase its scalability, more general design practices such as area partitioning, network summarization, and event dampening should not be neglected. Lastly, if you haven’t done so yet, I would strongly suggest you read the following publications:

[RFC1245] “OSPF Protocol Analysis”, J. Moy
[JDOYLE] “OSPF and IS-IS: Choosing an IGP for Large-Scale Networks”, J. Doyle
[LSAP] ”OSPF LSA Group Pacing”
[THROT] "OSPF Shortest Path First Throttling"
[ARPOPT] “ARPANET Routing Algorithm Improvements”, E. Rosen et al.

And Happy New Year to all of you!


This is the follow up discussion for the post titled, "Have you seen my Router ID?"

The underlying issue here was trying to get OSPF to bypass the usual selection process for Router ID. The normal selection order is:

1. Manual router ID configured under the OSPF process
2. Highest IP address of a loopback interface in the up state in the respective routing table
3. Highest IP address of any other interface in the up state in the respective routing table
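The selection order can be expressed as a short sketch (a hypothetical helper, simplified to IPv4 dotted-quad comparison; not IOS source):

```python
def ip_key(addr):
    """Numeric comparison key for an IPv4 dotted-quad string."""
    return tuple(int(octet) for octet in addr.split("."))

def select_router_id(manual=None, loopbacks=(), interfaces=()):
    """Sketch of the router-ID selection order described above.

    loopbacks / interfaces -- iterables of (ip_address, is_up) pairs
    """
    if manual:
        return manual                        # 1. explicit router-id always wins
    for pool in (loopbacks, interfaces):     # 2. up loopbacks, then 3. other up interfaces
        up_addresses = [ip for ip, is_up in pool if is_up]
        if up_addresses:
            return max(up_addresses, key=ip_key)
    raise ValueError("no usable interface: router ID must be configured manually")
```

The final error branch mirrors the behavior described below: with no up interfaces and no manual ID, the OSPF process cannot come up.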

If there are no up interfaces and you have not manually configured a router ID, you will get an error when you try to configure an OSPF process.

In general, most solutions focused around either using the OSPF selection process to one's advantage, or trying to "hide" the loopback from OSPF, so that it would select something else.

Some solutions were flat out wrong because they broke the section requirements (mostly either configuring a router-id, or shutting down a loopback interface). Be sure to read the lab requirements carefully.

Solutions that didn't work (at least on the versions that I tested)

Backup interface
The idea was that the interface would be down and therefore not selected as the router ID. This worked fine initially, but the router grabbed the (wrong) address after a reload, violating the requirement of functionality after the reload.

Interface Dampening with a restart penalty
The idea here was presumably that the interface would start off dampened, and not be selected as the router ID. After a reload, the interface did indeed show as dampened, but the interface was still up, and was chosen as the router ID.
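For reference, interface dampening with a restart penalty might have been attempted along these lines. The numbers are arbitrary examples, and exact parameter support varies by platform and release:

interface Loopback1
 ! half-life, reuse threshold, suppress threshold, max suppress time, restart penalty
 dampening 30 1500 10000 100 restart 16000

As noted above, a dampened interface is suppressed from the routing table but remains in the up state, which is why it was still eligible as the router ID.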

Solutions that worked:

Easiest / most common answer:

Two OSPF routing processes

If two OSPF processes are configured on the router, the highest loopback will be assigned as the router ID of the first process CONFIGURED, and the second-highest loopback will be assigned to the second process. The first process doesn't need any networks assigned, nor does its process number need to be numerically smaller than the second's.
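A minimal sketch of this approach (the process numbers are placeholders, and the network statement would carry the task's actual prefixes):

! configured first: consumes the highest loopback as its router ID
router ospf 2
!
! configured second: picks up the second-highest loopback and runs the real adjacency
router ospf 1
 network ... area 0

The key point is configuration order, not process numbering: process 2 claims the highest loopback simply because it was created first.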

More complex:

One or more VRFs, possibly with secondary addresses.

Instructor Favorite (of the proposed solutions):

Kron scheduling to kick off an EEM applet when the router reloads to "no shut" the loopback interface. Since the loopback starts off shut down, the OSPF process doesn't use it and grabs the other one. The EEM applet then runs, enabling the loopback and allowing the networks to be advertised properly to the neighbor. Although this configuration did include the loopback being shut down, it was only down for a brief period while the device was loading.

(Note: Scott Morris laughed out loud when informed of this one.)
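A rough sketch of that approach is below. The applet and policy names are made up, and both EEM and kron syntax vary noticeably between IOS releases (in particular, check whether your release supports triggering a kron occurrence at system startup):

event manager applet UNSHUT-LOOPBACK
 event none
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface Loopback1"
 action 4.0 cli command "no shutdown"
!
kron policy-list BOOT-POLICY
 cli event manager run UNSHUT-LOOPBACK
!
kron occurrence BOOT-RUN in 1 oneshot
 policy-list BOOT-POLICY

The applet uses event none so it only runs when invoked, and kron invokes it shortly after the reload, once the OSPF process has already selected its router ID from the other loopback.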


In yesterday's post, titled "Have you seen my Router ID?", a challenge section was provided. This post will focus on scrutinizing the section itself, from a strategy / analysis point of view.

From a high-level overview, we have two devices peering OSPF over a FastEthernet link, with some loopback networks advertised by one side and received on the other router.  If that was all the section asked for, it would be a task that anyone at the CCNA level could complete.  At the higher levels of certification you are, to a point, still configuring some of the same items, but you are expected to know more and more about the underlying technologies, theories, and interactions.

Just like other practice lab sections, we are provided with a list of bullet points.  In order to get full credit for the section, we need to make sure that we are meeting ALL of the stated requirements.  In the event that you don't know how to complete ALL the items, you are usually better off skipping the section, unless it is needed for core connectivity.  When you're in the studying phase, it's also a good idea to play "what if", and ask yourself how you would achieve the task if the section were worded slightly differently.

Make sure that you take the time to read carefully, rather than just diving into a configuration, and that you understand exactly what is being asked.

Taking a look at the individual bullet points:

1.  R2 should see networks A, B, and C as OSPF routes in R2's routing table, but they should not appear as IA, E2, or E1.

Here we have a general requirement, but it is coupled with a restriction.  With a solution in place, this should be easy to verify.

Knowledge questions:

  • What are the different types of OSPF routes that you might see in the routing table?
  • Of the various OSPF route types, are there any that are NOT possible, given the information in the section?
  • If there are route types that are NOT possible, why are they not possible?
  • If the section was worded differently, and for example said they SHOULD appear as a certain type, do you know how you would configure that?

Self assessment: Will your proposed solution meet the requirements of this bullet point?
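As a refresher for the first knowledge question, the OSPF-related codes in the show ip route legend are:

O     - OSPF (intra-area)
O IA  - OSPF inter area
O N1  - OSPF NSSA external type 1
O N2  - OSPF NSSA external type 2
O E1  - OSPF external type 1
O E2  - OSPF external type 2

The bullet point's restriction rules out IA, E2, and E1, which narrows down what kind of routes the solution must produce.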

2.  The output of “show ip ospf neighbor” on R2 should show as the Neighbor ID for the adjacency to R1, even if R1 is reloaded.  No other Neighbor IDs should show up on R2.

More requirements and restrictions, but still very straightforward regarding how to verify that a solution meets the requirements of this bullet point.

Knowledge questions:

  • What is the significance of R2 showing as the Neighbor ID for the adjacency?
  • What types of situations would cause a reload to have an effect on the Neighbor ID?
  • What would cause R2 to see other Neighbor IDs?

Self assessment: Will your proposed solution meet these requirements?

3.  You are not allowed to use the "router-id" command on R1.

This bullet point is an explicit restriction.

Knowledge questions:

  • What does the "router-id" command do?
  • If this restriction was not here, would your solution be different?
  • If the section was worded differently, and you were REQUIRED to use this command, do you know how to use it?

Self assessment: Were you careful to make sure your proposed solution was not using this command?

4.  You are not permitted to shut down interfaces on R1, or remove any of the existing configuration on R1.

Knowledge questions:

  • How could shutting down an interface affect this scenario?
  • How could removing configuration affect this scenario?

5.  No additional configuration may be added to R2; all configuration for this challenge is done on R1.

Mostly just excess clarification here, to point out that only R1 needs to be configured.

Knowledge question: Would being able to make configuration changes on R2 affect anything in this scenario?

Note:  The questions listed here are just some samples of the types of things that you might be thinking or asking yourself when practicing a technology, or working through a lab scenario.


There is more than one possible solution for this challenge. Feel free to post your proposed answer in the comments section. We will try to keep comments hidden from public view, so that the fun isn't spoiled for others. Also, don't feel bad if the answer(s) aren't immediately apparent. A number of very bright people have been puzzled by this scenario.  Answers will be posted on Friday, September 18th.


R1 and R2 are configured with their FastEthernet interfaces on the same subnet. R1 will be forming an OSPF neighbor adjacency to R2 over the FastEthernet interface, and will also be advertising some loopback networks into OSPF.


R1's Relevant Configuration:

interface Loopback1
 ip address

interface Loopback11
 ip address

interface Loopback111
 ip address

interface FastEthernet0/0
 ip address
 no shut

R2's Relevant Configuration:

interface FastEthernet0/0
 ip address
 no shut

router ospf 1
 network area 0


Your task is to configure R1 while meeting all of the following criteria for requirements and restrictions:

  • R2 should see the networks,, and as OSPF routes in R2's routing table, but they should not appear as IA, E2, or E1.
  • The output of "show ip ospf neighbor" on R2 should show as the Neighbor ID for the adjacency to R1, even if R1 is reloaded.  No other Neighbor IDs should show up on R2.
  • You are not allowed to use the "router-id" command on R1.
  • You are not permitted to shut down any interfaces on R1, or remove any of the existing configuration on R1.
  • No additional configuration may be added to R2; all configuration for this challenge is done on R1.
