Author Archive


This course covers the basics of implementing inter-VLAN routing by explaining the theory behind two common methodologies, as well as their implementation on Cisco routers and switches. By the end of this course students will be able to explain the differences between “Router-On-A-Stick” and “Switched Virtual Interfaces,” as well as how to implement inter-VLAN routing using either of these techniques.

Why You Should Watch:

Virtually all organizations that implement VLANs into their switched networking topologies also need to know how to route IP traffic between those VLANs. Knowing the techniques available to accomplish this kind of routing is essential whether you are managing a network, or simply pursuing a networking certification (like the Cisco CCNA).

Many learners are confused about the differences between VLANs and SVIs (Switched Virtual Interfaces), as well as their inter-relationship. This course is meant to clarify any confusion you may have about those differences, and teach you both the theory and implementation (utilizing Cisco IOS software) of Inter-VLAN Routing.

Who Should Watch:

This course is intended for anyone wanting to learn about inter-VLAN routing, with an emphasis on the techniques to do so using Cisco routers and switches. A basic familiarity with the Cisco IOS command line, and with the high-level concepts of VLANs, switches, routers, and IP routing, is recommended.

About The Instructor

Keith Bogart has been in the IT field since 1998. Keith started as a Customer Service Representative at Cisco Systems, and then transitioned into the Cisco Technical Assistance Center (TAC). For almost twenty years Keith has served as both a Technical Instructor and a Course Developer for Cisco Systems and (for the past few years) INE. Keith is a Cisco Certified Internetwork Expert, as well as a Cisco Certified Networking Associate. He is currently employed as a Technical Instructor and Course Developer at INE.



You may recall that, when using Named-Mode EIGRP configuration, you have automatic access to EIGRP Wide Metrics.  In addition to providing a new K-Value (K6, which is used against Jitter and Energy), Wide Metrics revise (or, as Cisco calls it, "scale") the EIGRP Distance formula to account for links above and beyond 10Gbps.  Remember that with Classic-Mode EIGRP, the formula looked like this:

metric = ([K1 * bandwidth + (K2 * bandwidth) / (256 - load) + K3 * delay] * [K5 / (reliability + K4)]) * 256

(Note that when K5 = 0, the [K5 / (reliability + K4)] term is defined as 1.)

In the formula, the "bandwidth" value was represented as:

BW = 10^7 / minimum BW


In the formula above, the "minimum BW" is expressed in Kbps. The problem with this "classic" method was that all links with a bandwidth of 10Gbps (10,000,000,000 bps, represented as 10,000,000 Kbps in the formula) or higher were given the same BW value. In other words, whether you put a single 10Gbps link into that formula, a 40Gbps link, or an Etherchannel with a combined bandwidth of 80Gbps…they all equated to "1". So Classic-Mode EIGRP couldn't distinguish between these types of links when developing an accurate path to a destination.
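Here is a quick sketch of that limitation in Python (the floor at 1 simply mirrors the point above that everything 10Gbps and faster equates to "1"):

```python
def classic_bw(min_bw_kbps: int) -> int:
    # Classic EIGRP scaled bandwidth: 10^7 / minimum BW, with BW in Kbps.
    # Integer math floored at 1, so every link of 10Gbps or faster
    # produces the same scaled value.
    return max(1, 10**7 // min_bw_kbps)

# A 10Gbps link, a 40Gbps link, and an 80Gbps Etherchannel all equate to 1:
print([classic_bw(kbps) for kbps in (10_000_000, 40_000_000, 80_000_000)])
# [1, 1, 1]
```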

When EIGRP Wide-Metrics were developed, Cisco applied an "EIGRP_WIDE_SCALE" factor (equal to 65,536) against some portions of the formula to account for faster links (as well as smaller delay values).  They also changed the terminology in the formula from "bandwidth" to "throughput". So the "new" formula for EIGRP Wide-Metrics does the following to the "minimum bandwidth" portion of the formula:

Minimum Throughput = (10^7 * 65536) / BW (remember that BW is in Kbps), where 65536 is the "EIGRP_WIDE_SCALE" constant.

By multiplying 10^7 by 65,536, EIGRP Wide-Metrics can now accurately differentiate between links of any speed/bandwidth. EIGRP Wide-Metrics also multiply the Delay sum by this same 65,536 ("EIGRP_WIDE_SCALE") constant.
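A matching sketch with the Wide-Metrics scaling shows why the extra 65,536 factor matters; the same three links that classic scaling collapsed to one value now yield distinct results:

```python
EIGRP_WIDE_SCALE = 65536

def wide_throughput(min_bw_kbps: int) -> int:
    # Minimum Throughput = (10^7 * 65536) / BW, with BW in Kbps.
    return (10**7 * EIGRP_WIDE_SCALE) // min_bw_kbps

# 10Gbps, 40Gbps, and 80Gbps links now scale to distinct values:
print([wide_throughput(kbps) for kbps in (10_000_000, 40_000_000, 80_000_000)])
# [65536, 16384, 8192]
```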

But here's the problem: the computed value of this new formula might NOT FIT into the IP Routing Table (called the "RIB" – Routing Information Base).

When you view the output of "show ip route" for any given route, you see two values contained in brackets.  For an EIGRP-learned route, the first number in the brackets represents the Administrative Distance.  The second value represents what I call the "EIGRP Distance"; others call this simply the route "metric" or the "EIGRP Composite Cost".  No matter what term you use, this field in the RIB is only 4 bytes long.


Here is the problem: because EIGRP wide metrics apply the "EIGRP_WIDE_SCALE" multiplier of 65,536 against several of the vector metrics (such as bandwidth and delay), they can come up with a distance value so large that it doesn't FIT within the 4-byte field in the RIB.

The maximum decimal value that can be contained within a 4-byte number is 4,294,967,295 (2^32 - 1).  However, if you were to plug a minimum bandwidth of 1 Kbps into the EIGRP wide-metrics formula, the resulting bandwidth value (by itself) would be so large that it would break the boundaries of a 4-byte placeholder in the RIB:

BW = (10^7 * 65536) / 1 = 655,360,000,000

and that is even BEFORE adding the sum-of-the-delays into the mix:

metric = [K1 * 655,360,000,000 + (K2 * Scaled BW) / (256 - Load) + K3 * Scaled Delay] * [K5 / (Reliability + K4)]
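A two-line check in Python confirms that this bandwidth component alone already exceeds what a 4-byte field can hold:

```python
RIB_METRIC_MAX = 2**32 - 1            # 4,294,967,295: largest 4-byte value
bw_component = (10**7 * 65536) // 1   # scaled bandwidth for a 1 Kbps link
print(bw_component)                   # 655360000000
print(bw_component > RIB_METRIC_MAX)  # True: it cannot fit in the RIB
```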

The result would be that, while EIGRP was able to calculate a Distance value, that value would be too large to be placed into the RIB. This could happen in a couple of scenarios:

  • A path containing a really slow link (like a 56Kbps dialup link)
  • Redistribution of other protocols into EIGRP with a "bandwidth" value (within the "metric" keyword) that was too low.

(Figure: bandwidth is too small)

And so here's the rub…EIGRP Wide-Metrics supply the ability to differentiate between links of all kinds of different bandwidth values (thanks to the additional "EIGRP_WIDE_SCALE" factor of 65,536), but the resulting EIGRP Distance value could be too large to fit into the 4-byte "Metric" field within the RIB. If that were the case, this is what you'd see (notice the words "FD is Infinity" for the affected EIGRP routes):

Well…those engineers at Cisco were pretty smart and incorporated a special little "tweak" into Wide-Metrics to account for just this problem. This tweak is called the "metric rib-scale". What it does is take all EIGRP Feasible Distance values (which may or may not be too large to fit into the 4-byte RIB "metric" field) and DIVIDE them by a value called…you guessed it, the "metric rib-scale". The default value of the "metric rib-scale" is 128, which, for most normal routes, is enough to bring them down to a size that fits into the RIB. This value can be seen in the following output:

This explains why, when viewing the EIGRP Topology Table, an entry for a prefix will display both the 64-bit EIGRP Distance value…as well as the "scaled" value (the distance divided by 128) as the "RIB" value:

And here you can see that scaled RIB metric reflected in the IP Routing Table (since the original EIGRP Feasible Distance was too large to fit):

But sometimes the 64-bit Feasible Distance of a route is so large that scaling/dividing it by the default RIB-Scale value of 128 simply isn't enough. As I previously showed you, these types of EIGRP Topology entries will show as "FD is Infinity". It is for this reason that one may need to adjust this value to a larger RIB-Scale factor (using the EIGRP command "metric rib-scale") such that the resulting quotient is small enough to fit into the RIB.

For example, let’s take a look at this output again…

Even if we divide the FD of 656,671,375,360 by the default RIB-Scale value of 128, the quotient would be 5,130,245,120, which is still larger than our maximum allowable RIB metric of 4,294,967,295. It is for this reason that we would need to adjust the RIB-Scale to something larger than 128 to create a quotient smaller than 4,294,967,295. The RIB-Scale is a configurable number between 1 and 255. So by increasing the number beyond the default of 128, we can create quotients that are small enough to fit within the RIB (IP Routing Table).
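Here is a quick sketch of that arithmetic. The helper below is hypothetical (it is not an IOS feature); it just brute-forces the smallest rib-scale that would make a given Feasible Distance fit:

```python
RIB_METRIC_MAX = 2**32 - 1   # largest value the 4-byte RIB metric field can hold

def smallest_rib_scale(fd: int) -> int:
    # Hypothetical helper: find the smallest rib-scale (1-255) whose
    # quotient fits into the 4-byte RIB metric field.
    for scale in range(1, 256):
        if fd // scale <= RIB_METRIC_MAX:
            return scale
    raise ValueError("FD too large even for rib-scale 255")

fd = 656_671_375_360
print(fd // 128)               # 5130245120 — the default scale still overflows
print(smallest_rib_scale(fd))  # 153
```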

So let's apply a new RIB-Scale value to EIGRP and see how that same route, which was previously listed as "Infinity", can fit into the RIB:

(BEFORE…with the default RIB-Scale value)


(AFTER applying a larger RIB-Scale value)
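For reference, in Named-Mode EIGRP the rib-scale is configured under the base topology of the address-family. Here is a sketch (the virtual-instance name and autonomous-system number are hypothetical); a value of 153 or higher works for the example above, since 656,671,375,360 / 153 is roughly 4,291,969,773, which just fits under the 4-byte maximum:

router eigrp MYLAB
 address-family ipv4 unicast autonomous-system 100
  topology base
   metric rib-scale 153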



Hello everyone!
I recently received an email from a learner who is studying for his CCNA Routing-and-Switching certification, and he had a few excellent questions about the OSI model and how, exactly, data moves from one layer to the next. I figured my response might prove valuable to others studying for their CCNA so…here it is!

  1. Learner-Question: In the video on the OSI model, you said that the session layer should provide the source and destination port numbers, but the fields for those ports are in the transport header. My question is: how does the session layer put this number in a field that does not exist at that time (when I send the data, the encapsulation process goes down from the app layer)?

    In order to thoroughly answer all of your questions below, one really needs to know about computer programming, APIs, etc…which, frankly, I know very little about. But what I do know, I'll try to explain. From my understanding, there are software "links" or "hooks" which allow a program at one layer of the OSI model to communicate with a program at another layer. Many applications have built-in software that provides multi-layer functionality. For example, imagine that you open some kind of Terminal Client (like Hyperterminal, SecureCRT, PuTTY, etc.). That software you've opened technically does not reside at ANY of the OSI layers. It just provides the graphical display, such as the buttons you can press, the pulldown menus available, etc. Now imagine that within PuTTY or Hyperterminal you press a button to initiate a Telnet connection. At that moment, the PuTTY software informs your CPU that it must start the Telnet program. PuTTY provides the interface so you can see…and control…what is going on, but PuTTY itself is NOT Telnet. It's simply the user interface so you can control Telnet.

    The functionality of Telnet is actually composed of an Application-Layer process as well as a Session-Layer process, all rolled into one. At the Application layer, the Telnet protocol answers such questions as: what is a "username" and what is a "password", and are they required? Shall it send data downstream to lower levels of the OSI model one bit at a time or several bytes at a time? How is the user supposed to know when Telnet is waiting for input, versus currently transmitting output? The Session-Layer component of Telnet knows that it should be "listening" for incoming sessions on port 23, and that when initiating outgoing sessions, it should use a destination port of 23. At some point, the Telnet protocol creates a hook (I believe these are called APIs) that allows it to invoke the Transmission Control Protocol (TCP). TCP knows that as part of the data structure it creates, it must reserve 2 bytes for a "destination port-number" field and another 2 bytes for a "source port-number" field, but what TCP DOESN'T know is what numbers to place in those fields. So this API (or whatever it is) allows the Session-Layer component of Telnet to tell TCP to place the value of "23" in either the Source or Destination Port Number field (depending on who is initiating the Telnet session).

    You may now be thinking, "but what about the Presentation Layer? You didn't include that in the Telnet process!". I believe that once SecureCRT (or PuTTY or Hyperterminal) invokes your Application-Layer protocol (such as Telnet or SSH), SecureCRT/Hyperterminal provides the Presentation-Layer component. SecureCRT knows whether, when you press a keystroke on your keyboard, that key should be represented in ASCII or EBCDIC. SecureCRT/Hyperterminal also knows if you pressed the button indicating that encryption should be used. So it kind of "merges" or "blends" all of that information into Telnet, thus providing the Presentation-Layer components. I'm not sure HOW it does this…but it does.
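To make those "hooks" a little more concrete, here is a minimal sketch using Python's standard sockets API (a real-world descendant of the kind of inter-layer interface described above). The application chooses the port number and hands it to TCP through the API call; TCP itself never decides the port:

```python
import socket

# The "application" (think: the Session-Layer side of Telnet) picks a port
# and listens on it. TCP reserves the 2-byte port fields, but the number
# itself comes from the application, down through the sockets API.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]         # our stand-in for "port 23"

# The "client" passes that same number to TCP when it calls connect():
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()
carried_port = conn.getsockname()[1]   # the port TCP actually used
print(carried_port == port)            # True: TCP carried the app-chosen port
client.close(); conn.close(); server.close()
```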

  2. This question is about the Type code field, which lies at the LLC sublayer. I understood that its purpose is to tell the upper layers what protocol is "talking". How does that happen if the NIC strips off the frame header during the decapsulation process?

    Basically, what I wrote above happens in reverse here. There is some kind of internal software "hook" (probably another API) that allows your Layer-2 protocol (Ethernet) to communicate the value in the EtherType field to the CPU. In this way the CPU knows if it needs to invoke a Layer-3 process (like IP) or…if that process is already running…to take the Data from the Ethernet frame and forward it to the correct Layer-3 process. So IP itself does NOT see that Ethernet frame or any of the fields within it. But that "hook" (API?) provides the interface so that Ethernet data can be transferred upstream to the IPv4 process. At this point, my knowledge of the specific details of how this works ends.

  3. If the Type code provides the protocol (and its version), why does the IP header have a "vers" field?

    Once again, to answer this question I believe it's all about the APIs that allow protocols at different layers to talk to each other. Moving downstream (from Layer-3 to Layer-2), when IPv4 (as an example) has created a full IP Packet, it will "call" the API that allows it to hook into the Layer-2 protocol. IP doesn't even CARE what that Layer-2 protocol is. It probably does something like, "Hey, Layer-2 hooking API!! I've got some data here. Please pass it on to whatever protocol is operating at the Datalink Layer for me!!" The API, because it is talking to IPv4, will then invoke whatever Layer-2 protocol is running (Ethernet, HDLC, Frame-Relay, etc.) and say, "I've got some Layer-3 data for you!!" At that point, the Layer-2 protocol (Ethernet in this case) will say, "Great! Can you give me some number that I can shove into my Ethertype field that indicates WHICH Layer-3 protocol created the data? I don't really care personally…but the device at the other end of the link receiving this data will need to know!" So the API (which was originally called by the IPv4 process and was DESIGNED to be an interpreter between IPv4 and Ethernet) will say, "Sure…the number you need is 0x800!" and thus…Ethernet places that value into the Ethertype field. Receiving an Ethernet frame would work the same way, but in reverse. This time the Layer-2 protocol would "call" that L2-to-L3 API and provide the data, ALONG WITH the value of the Ethertype field, to that API. In turn, the API would then know it needs to call out to IPv4 and transfer the data upstream.
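As a concrete sketch of the two fields being discussed, here is an illustrative Ethernet II header and IPv4 first byte built by hand in Python (the MAC addresses are made up). The EtherType (0x0800) is what gets handed upstream so the right Layer-3 process is invoked; the Version field lives inside the IP header itself:

```python
import struct

dst_mac = bytes.fromhex("ffffffffffff")      # broadcast, purely illustrative
src_mac = bytes.fromhex("aabbccddeeff")      # made-up source MAC
ETHERTYPE_IPV4 = 0x0800                      # tells the receiver "IPv4 follows"

# Ethernet II header: destination MAC, source MAC, 2-byte EtherType.
eth_header = dst_mac + src_mac + struct.pack("!H", ETHERTYPE_IPV4)

# First byte of an IPv4 header: Version (4) in the high nibble,
# header length in 32-bit words (5) in the low nibble -> 0x45.
ip_first_byte = (4 << 4) | 5

print(len(eth_header), hex(ip_first_byte))   # 14 0x45
```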




The following question was recently sent to me regarding PPP and CHAP:


At the moment I only have packet tracer to practice on, and have been trying to setup CHAP over PPP.

It seems that the “PPP CHAP username xxxx” and “PPP CHAP password xxxx” commands are missing in packet tracer.

I have it set similar to this video… (you can skip the first 1 min 50 secs)

As he doesn’t use the missing commands, if that were to be done on live kit would it just use the hostname and magic number to create the hash?


Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?

Thanks, Paul.


Here was my reply:

Hi Paul,

When using PPP CHAP keep in mind four fundamental things:

  1. The "magic number" that you see in PPP LCP messages has nothing to do with Authentication or CHAP.  It is simply PPP's way of trying to verify that it has a bi-directional link with a peer. When sending a PPP LCP message, a random Magic Number is generated.  The idea is that you should NOT see your own Magic Number in LCP messages received from your PPP peer.  If you DO see the same magic number that you transmitted, that means you are talking to yourself (your outgoing LCP CONFREQ message has been looped back to you).  This might happen if the Telco that is providing your circuit is doing some testing and has temporarily looped-back your circuit.
  2. At least one of the devices will be initiating the CHAP challenge.  In IOS this is enabled with the interface command, “ppp authentication chap”.  Technically it only has to be configured on one device (usually the ISP router that wishes to “challenge” the incoming caller) but with CHAP you can configure it on both sides if you wish to have bi-directional CHAP challenges.
  3. Both routers need a CHAP password, and you have a couple of options on how to do this.
  4. The "hash" that is generated in an outgoing PPP CHAP Response is created from a combination of three values, and without knowing all three the Hash Response cannot be generated:
  • The Identifier from the received CHAP Challenge
  • The configured PPP CHAP password (the shared secret)
  • The received PPP CHAP Challenge value
  (The router's hostname is not itself hashed; it is carried in the Name field of the Response so the peer knows which password to verify against.)
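For the curious, that computation is defined in RFC 1994: the Response is the MD5 digest of the Challenge Identifier, the shared secret, and the Challenge value, concatenated in that order. A sketch (the secret and challenge bytes are made up):

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: Response = MD5(Identifier || secret || Challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = bytes(range(16))             # made-up 16-byte challenge value
resp = chap_response(1, b"cisco", challenge)
print(resp.hex())                        # the 16-byte MD5 response, in hex
```

Note that changing any one of the three inputs changes the entire digest, which is why both peers must agree on the shared secret.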

I do all of my lab testing on real hardware so I can’t speak to any “gotchas” that might be present in simulators like Packet Tracer.  But what I can tell you, is that on real routers the side that is receiving the CHAP challenge must be configured with an interface-level CHAP password.

The relevant configurations are below as an example.

ISP router that is initiating the CHAP Challenge for incoming callers:

username Customer password cisco
interface Serial1/3
 encapsulation ppp
 ppp authentication chap
 ip address x.x.x.x y.y.y.y

Customer router placing the outgoing PPP call to ISP:

hostname Customer
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y

If you have a situation where you expect that the Customer Router might be using this same interface to “call” multiple remote destinations, and use a different CHAP password for each remote location, then you could add the following:


Customer router placing the outgoing PPP call to ISP-1 (CHAP password = Bob) and ISP-2 (CHAP password = Sally):

hostname Customer
username ISP-1 password Bob
username ISP-2 password Sally
interface Serial1/3
 encapsulation ppp
 ppp chap password cisco
 ip address x.x.x.x y.y.y.y

Notice in the example above, the "username x password y" commands supersede the interface-level command "ppp chap password x". But please note that the customer (calling) router always needs the "ppp chap password" command configured at the interface level; a global "username x password y" in the customer router does not replace this command.  In this situation, if the Customer router placed a call to ISP-3 (for which there IS no "username/password" statement), it would fall back to using the password configured at the interface level.

Lastly, the “username x password y” command needs to be viewed differently depending on whether or not it is configured on the router that is RESPONDING to a Challenge…or is on the router that is GENERATING the Challenge:

  • When the command "username X password Y" is configured on the router that is responding to the CHAP Challenge (the Customer router), the password from this command (along with the received Challenge and its Identifier) will be used in the hash algorithm to generate the CHAP RESPONSE; the router's local "hostname" is sent alongside that hash in the Response's Name field.


  • When the command "username X password Y" is configured on the router that is generating the CHAP Challenge (the ISP Router): once the ISP router receives the CHAP Authentication Response (which includes the hostname of the Customer/calling router in its Name field), it will match that received hostname to a corresponding "username X password Y" statement. If a match is found, the ISP router performs its own CHAP hash of the Identifier, that password, and the Challenge it previously sent, to see if its locally generated result matches the result received in the CHAP Response.

Lastly, you asked, "Also, in bi-directional authentication, do both routers have to use the same password or can they be different as long as they match what they expect from the other router?"

Hopefully from my explanations above it is now clear that in the case of bi-directional authentication, the passwords do indeed have to be the same on both sides.


Hope that helps!







CCIE Bloggers