Posts Tagged ‘CCDE’
INE would like to send a big congratulations out to our very own Brian McGahan, who just moments ago took and passed the Cisco CCDE exam! Numbers are still forthcoming, but as this was the first exam given in 2013, he may well be CCDE #2013::1. Brian will be writing a more in-depth blog post here outlining some of the more fundamental aspects of reading and studying for the exam, but in the interim we just wanted to say CONGRATULATIONS!!
I’ll be in London the week of November 19th for a Service Provider Bootcamp that I’m teaching until Friday the 23rd. I’m going to keep the facility and run a CCDE Open Study session like the one I did in San Jose a couple of weeks ago. I’ll be able to accommodate up to 25 people, and of course it’s free of charge. We’ll go over one CCDE scenario per day for the 3 days prior to the exam, which is being held in London on the 27th of November.
Even if you aren’t going to take the CCDE but are interested in a chance to discuss real-world networking with other senior engineers, you should attend. We had a great time in San Jose, as nearly everyone was at least a CCIE and many were double or triple CCIEs, which allowed for interesting discussions on the scenarios and the technologies in general.
I’ll try to keep doing these sessions a few times a year, free of charge and in various locations, to help promote the CCDE program.
Our London location for the CCDE Open Study Session will be the Rydges Hotel in Kensington. This is the same location as my R&S and SP London bootcamps.
61 Gloucester Rd, London, Greater London
SW7 4PE, United Kingdom
The Rydges Hotel is a little on the pricey side as it’s in a really nice area, but if you want to stay there, Jeremy Brown, our Bootcamp Coordinator, can get you a discounted INE rate. You’ll also be right down the street from the Natural History Museum, the Victoria and Albert Museum, and Harrods.
To sign up for the session, email Jeremy Brown and he’ll reserve your seat.
I’ll post a CCDE recommended reading list, and I’m trying to get a CCDE simulation released over the next couple of months. I have the content developed and am just waiting for the testing engine to be finalized.
Lastly, during the last session in San Jose we “cracked” the CCDE demo (https://learningnetwork.cisco.com/docs/DOC-2438) to get a 100% score. If you haven’t taken the new demo, try it out and see what score you get. Next week I’ll post the answers to the demo here on the blog.
The CCDE bootcamp is coming up shortly on May 1st, and we would like to provide some information to those of you who have already registered for the class or are considering joining us. The class will run for five days and finish right before the CCDE practical exam in Chicago. The class is interactive for the most part: the instructor presents documents, diagrams, slides, and questions on the board, and then the whole class goes through the solutions live, discussing the various options and the correct answers. The class is centered around three generic “large-scale” network topologies, or “platforms,” which are used to construct the various network design cases:
- Internet Service Provider. A fictitious ISP that provides VPN and Internet services to enterprises in addition to wholesale Internet services. It is a generic two-layer network featuring a mix of interconnection technologies and using IS-IS/BGP for routing. This platform is mainly used for scenarios relating to transit traffic services.
- Application Service Provider. A company with its own wide-area network interconnecting data centers and points of presence. The company provides server application services – e.g. a virtual call center, an online support desk, etc. – to multiple customers. Customers connect either directly or by tunneling over the Internet. This platform is used to demonstrate issues arising in networks that provide centralized services to different customers. The network uses OSPF and BGP for routing, and traffic flows are mainly “client-server” flows between different networks.
- Large Enterprise Network. A generic enterprise network with a diverse set of offices and a private WAN. The network serves just one company, but has to support a large variety of applications and different connection types. Traffic flows are mainly contained within the network, but there are multiple “concentration” points. The network uses EIGRP for routing.
Each platform is used to construct 5 different scenarios, each featuring 15 to 20 questions. Answering each question requires analyzing the network baseline and the additional information presented over the course of the class, then selecting the optimal answer. Similar to the actual exam, each scenario will have one of the following logical structures:
- Merge two networks or spin off a new network.
- Add a service or application – e.g. deploy L3 VPNs or add VoIP.
- Scale the network – accommodate technologies to network growth, e.g. IGP/BGP/MPLS scaling.
- Replace a technology – e.g. replace routing protocol or link type with another one.
You will be required either to produce a “fresh” design or to fix a faulty/suboptimal one and propose a better solution. For example, you may be asked to fix a network in which a newly deployed application is not working as required. The class will focus on live discussion of design problems as well as strategy tips for passing the CCDE practical exam. Once again, students are assumed to have knowledge equivalent in scope to the CCIE Written exam blueprint. And lastly, the following is a link to a sample CCDE scenario – the baseline and questions in the format in which they will be presented during the class.
To begin working through Petr’s excellent CCDE reading list for his upcoming LIVE and ONLINE CCDE Bootcamps, I decided to start with:
EIGRP for IP: Basic Operation and Configuration by Russ White and Alvaro Retana
I was able to grab an Amazon Kindle version for about $9, and EIGRP has always been one of my favorite protocols.
The text dives right into none other than the composite metric of EIGRP, and it brought a smile to my face as I thought about all of the misconceptions I had regarding this topic early on in my Cisco studies. Let us review some key points regarding this metric and hopefully put some of your own misconceptions to rest.
- While we have been taught since our CCNA days that the EIGRP metric consists of 5 possible components – Bandwidth, Delay, Load, Reliability, and MTU – when we look at the actual formula for the metric computation, we realize that MTU is not actually part of it. Why have we been taught this, then? Cisco indicates that MTU is used as a tie-breaker in situations that might require it. To review the actual formula used to compute the metric, click here.
- Notice from the formula that the K values (constants) determine which components of the metric are actually considered. By default, K1 and K3 are set to 1 to ensure that Bandwidth and Delay are used in the calculation. If you wanted to make Bandwidth twice as significant in the calculation, for example, you could set K1 to 2. The metric weights command is used for this manipulation. Note that it starts with a TOS parameter that should always be set to 0; Cisco never fully implemented this functionality.
- The Bandwidth that affects the metric is taken from the bandwidth command in interface configuration mode. If you do not provide this value, the Cisco router will select a default based on the interface type.
- The Delay value that affects the metric is taken from the delay command in interface configuration mode. The default depends on the interface hardware type, e.g. it is lower for Ethernet and higher for Serial interfaces. Note how the Delay parameter allows you to influence EIGRP path selection without manipulating the Bandwidth value. This is nice, since other mechanisms may rely heavily on the bandwidth setting, e.g. EIGRP bandwidth pacing or absolute QoS reservation values for CBWFQ.
- The actual metric value for a prefix is derived from the SUM of the Delay values in the path and the LOWEST Bandwidth value along the path. This is yet another reason to prefer the more predictable Delay manipulation for changing EIGRP path preference.
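The points above can be sketched numerically. Below is a minimal Python model of the classic (non-wide) EIGRP composite metric with default K values (K1 = K3 = 1, K2 = K4 = K5 = 0); the Load and Reliability terms drop out entirely. The interface values in the example are the familiar defaults for FastEthernet and a T1 serial link.

```python
def eigrp_metric(bandwidths_kbps, delays_usec, k1=1, k3=1):
    """Classic EIGRP composite metric with default K values.

    With K1 = K3 = 1 and K2 = K4 = K5 = 0 the formula reduces to:
        metric = 256 * (K1 * 10^7 / min(bandwidth) + K3 * sum(delay) / 10)
    where bandwidth is in kbps and delay is in microseconds.
    """
    bw_term = 10**7 // min(bandwidths_kbps)   # the slowest link in the path wins
    delay_term = sum(delays_usec) // 10       # delays accumulate, in tens of usec
    return 256 * (k1 * bw_term + k3 * delay_term)

# Two FastEthernet hops (default BW 100000 kbps, delay 100 usec each):
print(eigrp_metric([100000, 100000], [100, 100]))   # 30720

# A single T1 serial hop (default BW 1544 kbps, delay 20000 usec):
print(eigrp_metric([1544], [20000]))                # 2169856
```

Both results match the metric values you would see in show ip eigrp topology for those classic interface defaults.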
In the next post on the EIGRP metric, we will examine this at the actual command line, and discuss EIGRP load balancing options. Thanks for reading!
INE is happy to announce a new class dedicated to the recently introduced Cisco Certified Design Expert (CCDE) certification. The first CCDE Practical Bootcamp will run on May 1-5th in Chicago, right before the actual CCDE practical exam scheduled on May 6th. Our goal was to design a “last-week” refresher and booster class to finalize your CCDE exam preparation. Students are assumed to have solid theoretical knowledge of the exam’s technology base prior to attending. This blog post gives you a quick overview of the class structure and the prerequisites you should meet in order to benefit the most from this training offer.
BGP is the de-facto standard protocol for inter-AS connectivity nowadays. Even though it is commonly accepted that BGP’s design is far from ideal and there have been attempts to develop a better replacement, none of them has been successful. Further adding to BGP’s widespread adoption, the MP-BGP extension allows BGP to transport almost any kind of control-plane information, e.g. to provide auto-discovery functions or control-plane interworking for MPLS/BGP VPNs. However, despite BGP’s success, the problems with the protocol’s design did not disappear. One of them is slow convergence, which is a serious limiting factor for many modern applications. In this publication, we are going to discuss some techniques that could be used to improve BGP convergence for intra-AS deployments.
BGP-Only Convergence Process
Tuning BGP Transport
BGP Fast Peering Session Deactivation
BGP and IGP Interaction
BGP PIC and Multiple-Path Propagation
Practical Scenario: BGP PIC + BGP NHT
Considerations for Implementing BGP PIC
Appendix: Practical Scenario Baseline Configuration
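As a small taste of the techniques listed above, here is a hypothetical IOS-style fragment combining two of them: fast peering session deactivation via fall-over, and tuning how quickly BGP reacts to next-hop tracking (NHT) events. The addresses, AS number, and timer value are illustrative only, not taken from the scenarios themselves.

```
router bgp 65000
 ! Tear down the iBGP session as soon as the IGP route to the peer is lost,
 ! instead of waiting for the (much slower) BGP hold timer to expire
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 fall-over
 ! Shorten the delay before BGP processes a next-hop change reported by NHT
 bgp nexthop trigger delay 1
```

Both knobs trade a little extra churn for much faster reaction to failures; the sections above discuss when that trade-off is appropriate.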
The below is a compilation of the blog posts I have made over time that seemed to be helpful, based on the feedback I received. I noticed a lot of blog visitors walking through our archives looking for this information. Hopefully, this compilation will help you navigate our blog better. Feel free to post your feedback and suggestions!
- Troubleshooting Multicast Routing
- Understanding Third-Party Next-Hop
- A High-Level Overview of LISP
- OSPF Fast Convergence
- A Simple IPv4 Prefix Summarization Procedure
- Tuning OSPF Performance
- OSPF Prefix-Filtering Using Forwarding Address
- Understanding OSPF Transit Capability
- OSPF Route Filtering Demystified
- EIGRP Query Scoping
- Introduction to Optimized Edge Routing
- Understanding Unequal Cost Load-Balancing
- Understanding PIM BSR protocol
- Understanding Redistribution: Part 1
- Understanding Redistribution: Part 2
- Understanding Redistribution: Part 3
- Inter-AS mVPNs: MDT SAFI, BGP Connector and RPF Proxy Vector
- The Long Road to M-LSPs
- Using MPLS and M-LDP Signaling for Multicast VPNs
- Scaling MPLS Networks
- What is Overlay Transport Virtualization
- Understanding MPLS Ping and Traceroute
- Understanding Cell Mode MPLS
- Curious Scenario: Poor Man’s VPLS
- Curious Scenario: Turning Switch into Hub
- Understanding EIGRP SoO and BGP Cost Community
- RSTP and Fast Convergence
- Understanding MSTP
- Understanding STP and RSTP Convergence
- MSTP Tutorial Part 2: Outside a Region
- MSTP Tutorial Part 1: Inside a Region
- Understanding PVST+
- Understanding STP Convergence: Part 2
- Understanding STP Convergence: Part 1
This publication briefly covers the use of third-party next-hops in the OSPF, RIP, EIGRP, and BGP routing protocols. Common concepts are introduced and protocol-specific implementations are discussed. A basic understanding of routing protocol function is required before reading this blog post.
The third-party next-hop concept applies only to distance-vector protocols, or to the parts of link-state protocols that exhibit distance-vector behavior. The idea is that a distance-vector update carries an explicit next-hop value, which is used by the receiving side, as opposed to the “implicit” next-hop calculated as the sending router’s address – the source address in the IP header carrying the routing update. Such an “explicit” next-hop is called a “third-party” next-hop, since it allows pointing at a next-hop other than the advertising router. Intuitively, this is only possible if the advertising and receiving routers are on a shared segment, though the “shared segment” concept can be generalized and abstracted. Every popular distance-vector protocol supports third-party next-hop – RIPv2, EIGRP, OSPF, and BGP all carry an explicit next-hop value. Look at the figure below – it illustrates a situation where two different distance-vector protocols run on a shared segment, but neither of them runs on all routers attached to the segment. The protocols “overlap” at a “pivotal” router, and redistribution is used to provide inter-protocol route exchange.
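The shared-segment condition can be modeled in a few lines. Below is a simplified, hypothetical Python sketch of the decision an advertising router makes: if the route’s current next-hop and the peer sit in the same subnet, the next-hop can be preserved as a third-party next-hop; otherwise the router advertises itself. This is an illustration of the concept only, not any vendor’s exact logic.

```python
import ipaddress

def advertised_next_hop(current_nh, my_addr, peer_addr, prefixlen):
    """Decide the next-hop to place in an update sent to peer_addr.

    If both the existing next-hop and the peer are on our shared segment,
    preserve it (third-party next-hop); otherwise rewrite it to ourselves.
    All arguments except prefixlen are IPv4 addresses as strings.
    """
    segment = ipaddress.ip_network(f"{my_addr}/{prefixlen}", strict=False)
    if (ipaddress.ip_address(current_nh) in segment
            and ipaddress.ip_address(peer_addr) in segment):
        return current_nh   # third-party next-hop preserved
    return my_addr          # implicit next-hop: the advertising router

# R3 (10.0.0.3) is on our segment, so the peer can reach it directly:
print(advertised_next_hop("10.0.0.3", "10.0.0.1", "10.0.0.2", 24))  # 10.0.0.3

# A next-hop off-segment must be rewritten to our own address:
print(advertised_next_hop("192.168.1.1", "10.0.0.1", "10.0.0.2", 24))  # 10.0.0.1
```

The first case is exactly what happens at the “pivotal” router in the figure: it redistributes routes but leaves the original router on the segment as the next-hop, avoiding an extra forwarding hop through itself.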
In this blog post we are going to review a number of MPLS scaling techniques. Theoretically, the main factors that limit MPLS network growth are:
- IGP Scaling. Route summarization, the core procedure for scaling all commonly used IGPs, does not work well with MPLS LSPs. We’ll discuss the reasons for this and see what solutions are available to deploy MPLS in the presence of IGP route summarization.
- Forwarding State Growth. Deploying MPLS TE may be challenging in a large network, as the number of tunnels grows like O(N^2), where N is the number of TE endpoints (typically the number of PE routers). While most networks are not even near the breaking point, we are still going to review techniques that allow MPLS-TE to scale to very large networks (tens of thousands of routers).
- Management Overhead. MPLS requires additional control-plane components and is therefore more difficult to manage compared to classic IP networks. This becomes more complicated as the network grows.
This blog post summarizes some recently developed approaches that address the first two of the above-mentioned issues. Before we begin, I would like to thank Daniel Ginsburg for introducing me to this topic back in 2007.
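The O(N^2) growth mentioned above is easy to quantify: a full mesh of unidirectional TE tunnels between N PE head-ends requires N * (N - 1) LSPs. A quick illustrative sketch:

```python
def full_mesh_tunnels(n_pe: int) -> int:
    """Unidirectional MPLS-TE tunnels in a full mesh of n_pe head-ends.

    Each PE signals one tunnel to every other PE, so the total is
    n_pe * (n_pe - 1), i.e. O(N^2) growth in forwarding state.
    """
    return n_pe * (n_pe - 1)

for n in (10, 100, 1000):
    print(n, "PEs ->", full_mesh_tunnels(n), "tunnels")
# 10 PEs -> 90, 100 PEs -> 9900, 1000 PEs -> 999000
```

Going from 100 to 1000 PEs multiplies the tunnel count by roughly 100, which is why the hierarchical and mesh-reduction techniques discussed in the post matter long before a network reaches “very large” size.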
The goal of this post is a brief discussion of the main factors controlling fast convergence in OSPF-based networks. “Network convergence” is a term that is used under various interpretations. Before we discuss the optimization procedures for OSPF, we define network convergence as the process of synchronizing network forwarding tables after a topology change. A network is said to be converged when none of the forwarding tables are changing for “some reasonable” amount of time, which could be defined as an interval based on the expected maximum time to stabilize after a single topology change. Network convergence based on native IGP mechanisms is also known as network restoration, since it heals lost connections. Traffic-protection mechanisms such as ECMP, MPLS FRR, or IP FRR, which offer a different approach to failure handling, are outside the scope of this article. We further take multicast routing fast recovery out of scope as well, even though that process is tied to IGP re-convergence.
It is interesting to note that IGP-based “restoration” techniques have one (more or less) important problem. During re-convergence, temporary micro-loops may exist in the topology due to inconsistency between the FIB (forwarding) tables of different routers. This behavior is fundamental to link-state algorithms, as routers closer to the failure tend to update their forwarding databases before the other routers. The only popular routing protocol that lacks this property is EIGRP, which is loop-free at any moment during re-convergence thanks to the explicit termination of its diffusing computations. For the link-state protocols, there are some enhancements to the FIB update procedures that allow avoiding such micro-loops, described in the document [ORDERED-FIB].
Even though we are mainly concerned with OSPF, IS-IS will be mentioned in the discussion as well. It should be noted that, compared to IS-IS, OSPF provides fewer “knobs” for convergence optimization. The main reason is probably that IS-IS is developed and supported by a separate team of developers, more geared towards the ISPs for whom fast convergence is a critical competitive factor. The common optimization principles, however, are the same for both protocols, and during the discussion we will point out the tuning features that OSPF lacks while IS-IS has. Finally, we start our discussion with a formula, which is further explained in the text:
Convergence = Failure_Detection_Time + Event_Propagation_Time + SPF_Run_Time + RIB_FIB_Update_Time
The formula reflects the fact that the convergence time for a link-state protocol is the sum of the following components:
- Time to detect the network failure, e.g. interface down condition.
- Time to propagate the event, i.e. flood the LSA across the topology.
- Time to perform SPF calculations on all routers upon reception of the new information.
- Time to update the forwarding tables for all routers in the area.
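One of the knobs behind the SPF_Run_Time component is SPF throttling (IOS: timers throttle spf start hold max-wait), where consecutive topology events back off exponentially before triggering a new SPF run. Below is a small Python sketch of that backoff; the timer values are illustrative, not the IOS defaults.

```python
def spf_waits(n_events, start=50, hold=200, max_wait=5000):
    """Model exponential SPF throttle backoff (all values in milliseconds).

    The first event after a quiet period waits `start` ms; each subsequent
    event waits the current hold time, which doubles per event up to
    `max_wait`. Returns the wait applied before each of n_events SPF runs.
    """
    waits = []
    current_hold = hold
    for i in range(n_events):
        if i == 0:
            waits.append(start)            # fast initial reaction
        else:
            waits.append(min(current_hold, max_wait))
            current_hold = min(current_hold * 2, max_wait)  # back off
    return waits

print(spf_waits(5))   # [50, 200, 400, 800, 1600]
```

The pattern shows the design trade-off directly: a single failure is reacted to in tens of milliseconds, while a flapping link quickly gets dampened toward the maximum wait instead of consuming CPU with back-to-back SPF runs.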