Jul 10

This past Monday I passed the CCIE Data Center Lab Exam in San Jose, CA, making me a four-time Cisco Certified Internetwork Expert (CCIE) #8593 in Routing & Switching, Service Provider, Security, and Data Center, as well as Cisco Certified Design Expert (CCDE) #20130013.  This was my first - and thankfully last - attempt at the DC lab exam, and also my first experience at the San Jose CCIE lab location.  In this post I'm going to outline my preparation process for CCIE Data Center, as well as talk about my experience on the actual day of the lab.

 

The Initial Commitment

When the new CCIE Data Center track was first announced last year, it was a no-brainer that I was going to pursue it.  As I already had 15+ years of experience in Enterprise networking, with a large focus on campus LAN switching and IGP and BGP routing, plus some minor exposure to the Nexus platforms, I thought it would be a cinch.  After all, Nexus is just a fancy Catalyst 6500, right? The major hurdle for the track, however, was not the technologies but procuring the equipment.  After debating back and forth for quite a while, Brian Dennis and I decided that INE would hold off on the company Ferraris and instead invest in the equipment for CCIE Data Center.  One of our deciding factors was the sheer volume of customers at our CCIE Candidate Party at Cisco Live 2012 who kept asking us all night long, "when are you guys going to do Nexus training!"  As they say, ask and you shall receive… or was it if you build it, they will come?

Coincidentally, our initial build plans for DC started in early July 2012, which makes it almost one year to the day from when we committed to the track until when I finally had a chance to take the lab exam.

Originally I had planned to try to get the very first available slot for the DC lab exam, but as always life happened and a few things got in the way, such as the birth of my daughter, as well as a short pit stop along the way to pick up the Cisco Certified Design Expert (CCDE). Anyways, onto my preparation…

Once our equipment build was finalized - which, by the way, was the most grueling and complicated build of my 15+ year career - Mark Snow and I decided to implement a divide-and-conquer approach to the blueprint: we would split the Nexus topics, I would take Storage, he would take Unified Computing System (UCS), and then we'd come back and meet in the middle.  Nexus, I assumed, would be simple, since I had some experience using it as a basic 10GigE aggregation switch, but no exposure to the advanced DC-specific topics (e.g. vPC, FabricPath, OTV, FCoE, etc.).  In hindsight, yes, Nexus is just a glorified Cat6k, however there are caveats, caveats, and more caveats galore.  Did I mention Nexus has a lot of caveats?

Recommended Reading or: How I Learned to Stop Worrying and Love the Documentation

Since a lot of the DC-specific technologies are so new, there aren't many traditional books out there that can help you, unlike something like OSPF, which is over 20 years old.  The Nexus topics are so cutting edge that the NX-OS software team is literally pushing out hotfixes and new features as we speak.  Therefore the main resource available for reading about a lot of these technologies is the Cisco documentation.  I can already hear the collective groan from the audience about reading the documentation, but I can't stress this enough: you must read the Nexus documentation if you are serious about these technologies and about passing the CCIE DC Lab Exam.

To give you an idea, this is what my Chrome bookmarks toolbar still looks like today.

 

Personally the way I did this was to download every single configuration guide for Nexus 7K, 5K, and 2K in PDF format, and then load them on my iPad.  Starting with Nexus 7K I worked from basic system administration up to interfaces, layer 2 switching, layer 3 routing, SAN switching, etc.  Don’t count on having access to the PDF versions of the documentation in the actual lab exam, but for preparation purposes these are much more convenient than clicking through every section of the documentation in HTML format.

Each configuration guide can be downloaded as a single complete PDF.

 

Note that for MDS you don't need to read through as much, since the SAN switching syntax is essentially the same between the Nexus 7K, 5K, and MDS, as they all run NX-OS.  The sections of MDS documentation that I did read end-to-end, however, are the Cisco MDS 9000 Family NX-OS Interfaces Configuration Guide and the Cisco MDS 9000 Family NX-OS IP Services Configuration Guide.  Both of these are key, as some topics such as FC Trunking and Port Channeling work differently in MDS than they do in Nexus, and the IP Storage features such as FCIP and iSCSI are unique to MDS and are not supported in Nexus.
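To make the FCIP point a bit more concrete, here is a minimal sketch of what an FCIP tunnel looks like on an MDS, with entirely made-up addressing and interface numbers; the exact requirements (licensing, which Gigabit Ethernet ports are usable, and so on) vary by model and release, so treat this as illustrative only:

    feature fcip
    !
    interface GigabitEthernet1/1
      ip address 10.1.1.1 255.255.255.0
      no shutdown
    !
    fcip profile 10
      ip address 10.1.1.1
    !
    interface fcip10
      use-profile 10
      peer-info ipaddr 10.1.1.2
      no shutdown

Nothing like this exists on the Nexus side, which is exactly why the MDS Interfaces and IP Services guides deserve a full read.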

Another key point about the documentation for Data Center, just like for other CCIE tracks and other technologies in general, is that once you know how to use the documentation and where things are located you don’t need to worry about the default values for features, or other inane details about syntax.  For example there was a discussion recently on the CCIE DC Facebook group about how to create a mnemonic device (e.g. All People Seem To Need Data Processing / Please Do Not Throw Sausage Pizza Away) in order to remember in which features higher values are better and in which features lower values are better, e.g. LACP system-priority, vPC role priority, FabricPath root tree priority, etc.  I responded, who cares?  Why waste time remembering default values that likely will change between versions anyways?  Instead, your time would be better spent making sure that you know the manual navigation path for all features that you will be tested on in the exam.

Lower is higher and higher is lower… makes perfect sense, right?
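To make it concrete, these are the kinds of knobs the discussion was about - a minimal NX-OS sketch with made-up values, with the "which direction wins" noted in comments.  My point stands: verify this in the configuration guide for your platform and release rather than memorizing it.

    feature lacp
    feature vpc
    !
    lacp system-priority 100     ! LACP: the LOWER system priority wins
    !
    vpc domain 10
      role priority 100          ! vPC: the LOWER role priority becomes the primary
    !
    ! FabricPath multidestination tree root election, on the other hand,
    ! prefers the HIGHER root priority - hence "lower is higher and higher is lower"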

 

Another point to consider is that in the actual Lab Exam, access to the documentation web pages is not very fast.  I’m assuming this is due to the strict content filtering that all the pages have to go through before they show up on your desktop.  Regardless as to the reason, if you need to use the documentation in the exam and you don’t already know exactly where the page you want is located, you’re gonna have a bad time.

Additionally, don’t limit your reading of the documentation to just the configuration guides.  There are a number of other very useful portions of the documentation that you should read – again, end-to-end, there are no shortcuts here – such as the white papers, design guides, and troubleshooting guides.

The Nexus 7000 White Papers are an essential read.

 

This is especially true since some of the verification and troubleshooting syntax for Nexus is just out of this world.  I swear whoever works on the actual syntax parser for the NX-OS software team must get paid based on the number of characters that the commands contain.  Did you say that your Fibre Channel over Ethernet isn’t working to your Nexus 2232PP Fabric Extenders that have multiple parent Nexus 5548UP switches paired in Enhanced Virtual Port Channel?  I hope you remember how to troubleshoot them with the command show system internal dcbx info interface ethernet 101/1/1!  Err… how about we just know where to find it in the FCoE troubleshooting guide instead then.
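For what it's worth, when an FCoE-attached FEX is misbehaving, these are the sorts of checks I would reach for.  The list is illustrative, not exhaustive, and exact command availability varies by platform and NX-OS release:

    show fex detail              ! is the FEX online and associated with its parent(s)?
    show vpc                     ! for Enhanced vPC, are both parent 5Ks consistent?
    show interface vfc101        ! is the virtual Fibre Channel interface up and trunking?
    show flogi database          ! did the CNA actually FLOGI into the fabric?
    show system internal dcbx info interface ethernet 101/1/1    ! the monster from above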

The troubleshooting guides are an often overlooked section of the documentation.

 

The real point of using the documentation is as follows: you must understand, in detail, the design caveats and hardware caveats that the Nexus, MDS, and UCS platforms have as they relate to the DC technologies.

Pictured above, some light reading on the Design Considerations for Classical Ethernet Integration of the Cisco Nexus 7000 M1 and F1 Modules

 

Recommended Books

Beyond the documentation, there are a select few regular books that I used during my studies.  The vast majority of them are either available on the Safari Online site, or in the case of the IBM Redbooks, free in PDF form direct from IBM’s website.  These books, in no particular order are:

Cisco Live 365

For those of you that have never heard of Cisco Live 365 before, you're welcome. ;)  This is where you'll find all the video recordings and slide-deck PDFs from the various Cisco Live (i.e. Cisco Networkers) conventions that have taken place over the past few years, across multiple locations.  A lot of these sessions introduce new products, e.g. the new Nexus 7700 that was just announced at Cisco Live 2013 Orlando, while others are technical deep dives into specific topics.  In the case of CCIE Data Center there are a lot of really good presentations that I would recommend looking at during your preparation.  You don't need to have physically attended Cisco Live in the past to get access; just sign up for an account for free and you can search all the content.  The Data Center sessions generally start with "BRKDCT" (Breakout Data Center Technologies), so that's a good place to start your search.  Notable ones that I personally think are worth looking at are, in no particular order, as follows:

  • BRKDCT-2204 - Nexus 7000/5000/2000/1000v Deployment Case Studies
  • BRKDCT-2237 - Versatile architecture of using Nexus 7000 with F1 and M-series I/O modules to deliver FEX, FabricPath edge and Multihop FCoE all at the same time
  • BRKCRS-3145 - Troubleshooting Cisco Nexus 5000 / 2000 Series Switches
  • BRKDCT-2048 - Deploying Virtual Port Channel in NXOS
  • BRKCRS-3146 - Advanced VPC operation and troubleshooting
  • BRKDCT-2081 - Cisco FabricPath Technology and Design
  • BRKDCT-2202 - FabricPath Migration Use Case
  • BRKDCT-1044 - FCoE for the IP Network Engineer
  • BRKSAN-2047 - FCoE - Design, Operations and Management Best Practices
  • BRKCOM-2002 - UCS Supported Storage Architectures and Best Practices with Storage
  • BRKVIR-3013 - Deploying and Troubleshooting the Nexus 1000v virtual switch
  • BRKRST-2930 - Implementing QoS with Nexus and NX-OS
  • BRKCOM-2005 - UCS and Nexus Fabric and VM's - Extending FEX direct to VM's in UCS and Nexus 5500
  • BRKCOM-2003 - UCS Networking - Deep Dive

INE’s Videos, Workbooks, & Classes

Now in my personal case, when I am learning a new technology, I know that I have truly absorbed and understood a topic when I can explain it to someone else in a clear and concise manner - hence my day job as an author and instructor at INE.  From the culmination of reading these books, reading the documentation, and testing essentially every feature that the platforms have to offer, Mark Snow and I developed INE's Nexus, Storage, and UCS classes, as well as the associated workbook labs and the live Bootcamp classes for these technologies.

As I’ve done many write-ups before on these offerings, and without getting too much into a sales pitch, you can find more information here about INE’s CCIE Data Center Video Series, here about INE’s CCIE Data Center Workbook, here about our CCIE Data Center 10-Day Bootcamp, and here about our CCIE Data Center Rack Rentals.  Note that we are currently adding more capacity to rack rentals and adding more Bootcamp classes to the schedule, both of which I’ll be posting separate announcements about shortly.

Read, Test, Rinse, and Repeat

While learning and developing the content for Data Center I followed the same methodology that Brian Dennis and I have been personally using and have been teaching for the past 10 years (yes, I can’t believe it’s been that long).  This methodology is essentially a four step process of learning and testing incrementally.  This is also the same methodology that has helped Brian Dennis obtain five CCIEs, and for me to obtain four CCIEs and the CCDE, so trust me when I say that it works.

The methodology is a basic four step process as follows:

  • (A) Gain a basic understanding of the technologies
  • (B) Gain basic hands-on experience to reinforce and expand your understanding
  • (C) Gain an expert level of understanding
  • (D) Gain an expert level of hands-on experience

It might seem self-explanatory that you need to start at the bottom and work your way up, i.e. A then B then C then D, however over the years we’ve seen so many CCIE candidates try to shortcut this process and try to go from A directly to D.  Traditionally these are the candidates that end up taking the lab exam 5, 6, 7 times before passing, essentially trying to brute force their way through the lab.  Believe it or not, we have had customers in the past that have attempted the CCIE Lab Exam in the same track 10 or more times before passing.  Coincidentally, these are also usually the customers that don’t want to hear that they don’t know something or that their methodology is wrong. Go figure.

Pictured above, how to make a hobby out of visiting building C at Cisco’s San Jose campus.

 

At least for me personally, obtaining a CCIE is more about the journey than it is the destination.  I feel that I would have cheated myself coming out of the process without truly being an expert at the technologies covered, so I made sure to really take the time and be meticulous about going through everything.

Pictured above, how to astound the engineers at the technical interview for your new job after getting your CCIE.

 

The CCIE Data Center Written Exam

Before scheduling the Lab Exam, I of course had to tackle the necessary evil that is the CCIE Data Center Written Exam. In my opinion this exam should be renamed the "how many stupid facts can I memorize about the Nexus and UCS platforms exam."  I didn't pass the DC written exam on my first attempt, or on my second attempt.  I'm not going to say exactly how many times I took the DC written exam, but let's just say that it's somewhere more than two and somewhere less than infinity, and that I likely have seen every question in the test pool multiple times.

For those of you who have passed this exam on your first try, more power to you.  I, on the other hand, try not to memorize any facts that I can quickly look up instead. While whoever wrote the CCIE DC Written Exam may think it's important that the UCS B420 M3 Blade Server has a Matrox G200e video controller with integrated 2D graphics core with hardware acceleration and supports all display resolutions up to 1920 x 1200 x 16 bpp resolution at 60 Hz with a 24-bit color depth for all resolutions less than 1600x1200 and up to 256 MB video memory, I do not, but I digress.

Scheduling the CCIE Data Center Lab Exam

One of the biggest hurdles in obtaining the CCIE DC that I had not initially planned for was the availability, or lack thereof, of lab dates open for scheduling.  I’m not normally one to complain about this, because when I took the CCIE R&S Lab Exam back in January 2002 I believe that I scheduled the date somewhere around July of 2001. Back then it was the norm to have a 6 month wait for dates, so when you went to the lab you had better be really prepared for it, otherwise you had a long 6 months ahead of you afterwards trying to think of what you did wrong.  With Data Center though, this was a completely different ballgame.

By the time I got around to being ready to schedule a date, there was literally not a single date open on the schedule for any location.  Mark Snow had even scheduled a lab date in Brussels, Belgium, and was going to fly from Los Angeles to take the lab because that was literally his only option.  Luckily, right around that time the CCIE Program added new dates to the schedule, and he was able to move his lab attempt to San Jose, where he ended up passing.

Anyways, once these new dates were added to the schedule I knew that I had to act fast, or risk having to wait until 2015 (not really, but that's what it felt like).  Unfortunately the date that I ended up with was only a week after Cisco Live 2013 Orlando, so I couldn't help but feel that while we were partying it up at the conference, I should have been at home studying instead.  Also I would have much preferred to go to RTP over San Jose, since RTP is much closer to Chicago and I'm much more familiar with the area.  In hindsight SJC was probably the better choice anyways, since I have lots of friends in RTP, which means there would have been more distractions.

Traveling To San Jose

I purposely scheduled my exam on a Monday, which meant that I could get to San Jose on Friday or Saturday and then leisurely spend the rest of the weekend doing some last-minute review and relaxing in the hotel without any distractions.  This is the first time I've done it this way, and if you have the option, this is the approach that I would recommend.

Having had all my previous attempts in RTP, I was never worried about travel time, since it's only about two hours from Chicago.  Normally I would fly in the afternoon of the day before the lab, and then immediately go to the airport after the exam to fly home.  Worst case scenario, I could drive from Chicago to RTP, which I actually have done in the past.  I remember one time when teaching a class at Cisco's RTP office I left the campus at about 5:30 on a Friday, drove to RDU and parked my car, bought a ticket at the desk, and still had time to make a 6:15 Southwest flight back to Chicago.  I could only dream that Chicago O'Hare or Midway was as delay-free as RDU.

SJC on the other hand doesn't have as many flights to Chicago, so I wanted to play it safe and arrive more than one day early.  Luckily I did plan it this way, because otherwise, with the Asiana Flight 214 incident at SFO this past weekend, I might not be writing this post at all right now; the moral of the story being that if you have the option to travel an extra day early before the exam, take it.

For the hotel I stayed at the Hyatt Regency Santa Clara, which was nice.  They have a pleasant outdoor pool area where I spent some time relaxing with my laptop.  It's fairly close to the Cisco campus - about a 5 - 10 minute cab ride to the office in the morning - and after the lab I walked back to the hotel, which took about half an hour.  If you're familiar with the area, it's directly next to the Techmart and the Santa Clara Convention Center.

The Day of the Lab

San Jose’s lab starts at 8:15am, so I scheduled a cab from the hotel at 7:20am.  I figured this way even if the cab didn’t show up I’d still have time to walk over to the office.  Admittedly I did arrive much too early to the office, but it’s always better to be early than late.  If you’ve ever had a class with Brian Dennis you’ve probably heard the same joke he’s been telling for the last 10 years: “I’ve been both early for the CCIE Lab Exam and I’ve been late for the CCIE Lab Exam.  The preferred method is to be early.”

Since it was only about 7:30am when I got there I walked around the campus for a while just to try to calm my nerves.  Ultimately I checked in with the receptionist, and made some small talk with some of the other candidates.  I was hoping to go incognito for the day, but immediately the first guy I said hi to said “aren’t you Brian McGahan from INE?”  Oh well… that's the price of being nerd famous.

The proctor Tong Ma came out to the lobby around 8:15am or so to collect us all and check IDs, and then did his spiel about the logistics of the lab location (e.g. where the bathroom was, the breakroom, lunch time, etc.).  8:30am was our official start time, so I sharpened my colored pencils, sat down at my terminal, logged in, and prepared for the fun.

Immediately all around me I heard the other candidates furiously pounding away on their keyboards.  This is what I like to call the "panic approach."  I, on the other hand, started with a different approach that had already worked for me three times in the past.  I took my first sheet of scratch paper and started a quick drawing of the diagram I was presented with.  This was my first lab attempt using the new lab delivery system where everything is electronic, but even in past attempts you couldn't draw on their diagrams anyways.

One point of drawing out the diagram for myself was to help me learn the topology, but more importantly so that I could take quick notes as to which technologies would need to be configured in which portions of the network, e.g. which devices were running vPC, FabricPath, OTV, FCoE, etc.

The next step was to read through the exam, to see which technologies were actually being tested, and to plan my order of attack for how I was going to build the network.  One thing that I have found with my past CCIE tracks is that the order in which they give you the questions isn't necessarily the best order in which to actually configure things.  After all, they're only grading the result at the end of the day, not the actual steps that you used to get there.

Once I had a basic understanding of what was covered, and had taken some notes on my diagram as to which features went where, I took my two other pieces of scratch paper (there were 3 total, but you can always ask for more if you need them), and drew out the two tables that I use to track my work.  For those of you that have attended a live class with me in the past, or watched any videos I've done on lab strategy, you may be familiar with this, but for those of you that haven't seen it, there are basically two tables that I use to track my work during the day.  The first I use to track which sections I have configured, how comfortable I am with the answer I gave, and which sections I skipped.  Throughout the day this helps me to know which sections I need to go back to at a later time.  At the end of the day this is also the sheet I use to go back and check everything with a fine-tooth comb.  The end result looks something like the picture below, though this one is just something I made up now; it's not from any real lab.
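The original screenshot isn't reproduced here, but a purely made-up mock-up of that first table would look something along these lines:

    Task    Done    2nd
    1.1      ✓       ✓
    1.2      ✓       ✓
    1.3      ?
    2.1      ✓       ✓
    2.2      ✓       ✓
    2.3
    3.1      ?
    3.2      ✓       ✓
    ...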

The way I read this at the end of the day is that for all the tasks with a check mark, I'm 100% confident that the solution is correct.  Tasks with a ? mean that I configured something, but I'm not 100% sure that it's correct or that I answered the question the way that they wanted.  Anything that is blank, like section 2.3, means that I completely skipped that task and will come back to it at a later time.  Once I'm done with all the tasks, I circle back around to the tasks that I completely skipped to see if I can answer them, then revisit the ones with a ? that I wasn't 100% sure about.  Finally, the "2nd" column is for my double checking, where I start all the way back at the beginning of the lab and re-read each question, re-interpret it, verify my answer, and, if satisfied, check it off again and continue.  In the case of the DC Lab Exam I ended up with two tasks with a question mark and one task left blank at the end of the day.  In other words, by my count there was one task I was definitely getting wrong, two tasks I had completed but wasn't sure I had interpreted exactly the way they wanted, and all other tasks I was 100% confident were correct.

The second of these scratch paper tables was to track my timing.  After all, if they gave you a week to configure the lab, most candidates would probably pass.  With the 8 hour time limit ticking down though, not so much.  This is why it's important not only to track your progress throughout the day to see which sections you're confident about, but also to track how long it's taking you to configure them.  The end result of this table looks something like this:
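The original sheet isn't shown here either, so below is a rough mock-up based on the description that follows; only the first two rows reflect real numbers from my day, the rest is just the shape of the table:

    Time      Points completed          Total
    8 - 9     (drawing and reading)       0
    9 - 10    2, 2, 3, 2, 4, 2, 2        17
    10 - 11   ...                        ...
    11 - 12   ...                        ...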

The "Time" column represents the hour of the day.  The lab starts around 8 and ends around 5.  Between 8am and 9am, I got zero points.  The rest of the values in the table are made up - I don't remember what the point values of the sections in my attempt were - but the first row is actually true.  From 8:30am to 9am I did not configure a single section, and I did not gain a single point.  Why?  Because I spent that half hour drawing the topology, reading through the tasks, and planning my attack.  While most people take the "panic approach" and immediately begin configuring the lab blind, I knew that even though it would cost me time up front to draw and read, it would save me time in the long run.  It did in fact save time in the long run, because I finished about 95% of the exam by 2:30pm, which gave me a very relaxed two and a half hours at the end of the day to double, triple, and quadruple check my work.

Getting back to the table above, between 9am and 10am I completed - and was confident with the answers of - sections that were worth 2, 2, 3, 2, 4, 2, and 2 points.  Basically, each time I completed a task and it had a check mark in the other table, I wrote the point value down here.  Each time I completed a section I also checked the clock on the wall to make sure I was writing the point value in the correct row.  The logic of using this table is simple: the exam is broken down into sections totaling 100 points.  Excluding your lunch break, you get 8 hours to configure the lab.  This means at an absolute minimum you need to be averaging 10 points per hour in order to hit your goal of 80 points.  Ultimately the totals on the right should consistently be reading above 10 points for the early portion of the day, because you don't want to configure exactly 80 points worth of sections with zero time left over at the end of the day to check your work.  In that situation it's very likely that you're failing the exam.  Instead you want to be consistently hitting 14 points, 16 points, etc., especially early in the day, because then you're more relaxed knowing you're not as rushed for time.  Remember that in the CCIE Lab Exam your biggest enemy is stress - well, other than simply not knowing the technologies, that's kind of an issue too - so whatever you can do to help calm yourself down during the day, do it.

For me personally, constantly tracking my timing is one of those methods that helps me relax.  When I hit about 1pm/2pm that day, I looked at the sheets that were tracking my work, sat back, and said to myself "there's no possible way you're not passing this exam." Now of course I didn't really know for a fact that I was passing - ultimately only the score report can tell you that - but based on my point counts and how much time I had left to go back and double check, I knew that I was golden.  This brings us to my next point, which is that "golden moment."

The Golden Moment

Every CCIE track and its associated Lab Exam has what is commonly referred to as the "golden moment."  This is basically the point in the exam that, if you can reach it with everything working, your chances of passing are very high, i.e. you're "golden."  In the case of CCIE R&S it's having full IP reachability everywhere.  In Security it's when all your LAN-to-LAN and Remote Access VPNs work; in Service Provider it's when you can ping between all your L2VPN and L3VPN sites; in Voice it's when you can make all your phones ring, etc.  In the case of CCIE Data Center, the golden moment is undoubtedly marked by one question: can you get your UCS blades to boot from the SAN?

Pictured above, the Zen of Buddha passing over me when I knew that I had reached the golden moment.
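As a rough illustration of what chasing that golden moment looks like from the SAN side, these are the kinds of NX-OS checks that tell you whether the blade even has a path to its boot LUN; this is illustrative only, and the UCS side of the verification lives in UCSM:

    show flogi database       ! did the blade's vHBA and the storage array FLOGI into the fabric?
    show fcns database        ! are both WWPNs registered in the name server?
    show zoneset active       ! are the initiator and target actually zoned together?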

 

The CCIE Data Center Lab Exam is unique, in my opinion, in that essentially all tasks in the exam are cumulative and somehow interrelated.  For example, in the case of CCIE R&S you could theoretically skip complete sections, such as IPv6, Multicast, QoS, etc., and still pass the exam, as long as you can gain enough points from all the other sections.  For Data Center though, this is not the case.  All tasks in the exam are essentially getting you to work towards the golden moment, which is to actually get your servers to boot their OS.  All the technologies are so highly interrelated that the DC lab exam is a delicate house of cards. If one minor task is wrong, you've essentially bought yourself a $100 rack rental and a $1400 lunch for the day.

For example, let's take a theoretical CCIE DC lab scenario and look at how the most minor of mistakes can snowball and cause you to have a very bad day. Suppose we have two Data Center sites, "DC-A" and "DC-B."  In DC-A we have our UCS B-Series blade servers, while in DC-B we have our Fibre Channel SAN.  Our ultimate goal is to get the blades in DC-A to boot from the SAN in DC-B. DC-A and DC-B have a Data Center Interconnect (DCI) provider between them that runs both IPv4 Unicast and IPv4 Multicast routing, and it's up to us to configure everything so that it functions properly.  On one of our edge Nexus 7Ks, though, we forgot to enable jumbo MTU on the link to the DCI.  Minor problem, right?  Wrong, we just failed the exam! But why?

The UCS blade server was trying to boot to the Fibre Channel SAN.  Fibre Channel doesn’t natively run over the DCI though because it’s a routed IP network.  To fix this we first sent the FC traffic to our MDS 9200 switches.  The MDS switches in DC-A and DC-B then encapsulated the Fibre Channel into a Fibre Channel over IP (FCIP) tunnel between each other.  Additionally the MDSes were in the same IP subnet and VLAN in both DC-A and DC-B, so their FCIP traffic had to go over an Overlay Transport Virtualization (OTV) tunnel across the DCI.  The OTV tunnel was up and working.  The FCIP tunnel was up and working.  Both the UCS blade and the FC SAN successfully FLOGI’d into the Fibre Channel Fabric.  All of our Fibre Channel Zoning was configured properly.  The UCS blade server’s Service Profile associated properly.  We clicked the “Boot Server” button, connected to the KVM console of the blade, crossed our fingers, and got this:

Pictured above, someone having a very bad day in the CCIE Data Center Lab Exam.

 

No!  It didn’t boot the VMware ESXi instance!  This means that our Nexus 1000v didn’t come up either! I just lost 22 points and failed the exam!  Why did the CCIE Lab Exam Gods hate me!

Pictured above, 2274 bytes > 1500 bytes

 

Our OTV tunnel is actually Ethernet over MPLS over GRE over IP, or what is sometimes referred to as a Fancy GRE Tunnel. Our SAN traffic is SCSI over Fibre Channel over TCP over IP.  FCIP frames can go up to about 2300 bytes in size, but our default Ethernet MTU was 1500 bytes.  OTV doesn’t support fragmentation, and sets the DF bit in its header.  This means that since we forgot to type the single command mtu 9216 on the edge switch connected to the DCI, our SCSI over Fibre Channel over TCP over IP over Ethernet over MPLS over GRE over IP was dropped, and we had to do the walk of shame out of building C in San Jose as we knew the CCIE Lab Exam had defeated us that day.
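For illustration, the fix is embarrassingly small.  Below is a minimal, hypothetical sketch of the relevant edge-switch configuration - interface numbers, groups, VLANs, and addressing are made up, and the rest of the OTV configuration (site VLAN, site identifier, and so on) is omitted - showing the one line that would have saved the day:

    feature otv
    !
    interface Ethernet1/1
      description Uplink to DCI provider (OTV join interface)
      ip address 192.0.2.1/30        ! made-up addressing
      mtu 9216                       ! the single missing command
      no shutdown
    !
    interface Overlay1
      otv join-interface Ethernet1/1
      otv control-group 239.1.1.1
      otv data-group 232.1.1.0/28
      otv extend-vlan 100-110
      no shutdown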

This of course is just one of a thousand different possible examples of how the house of cards that is the Data Center Lab Exam fits together, but you get the idea.  Luckily for me however, when I clicked the “Boot Server” button in UCSM this week, the results were closer to the output below.

Pictured above, someone doing the happy dance in the CCIE DC Lab Exam.

 

In Conclusion

For those of you still reading, I hope that you found this post both helpful and informative.  If you're still on the fence about pursuing the CCIE Data Center track, I would definitely say that it's worth it.  If you had told me 12 years ago, when I got out of server support, that I'd be back in the server support market today, I'd never have believed it, but without throwing around too many buzzwords like public/private cloud, this Data Center space is clearly here to stay and will only continue to grow.  Especially with how rapidly the market has been changing within the past few years, with virtualization and an entire plethora of new technologies and design philosophies, it's more important than ever to differentiate your skill set in the market.

Thanks for all the well wishes, and good luck in your studies!

May 02

First off I’d like to thank all of you that participated in beta testing both the CCIE Data Center Technology Lab Workbook and DC Rack Rentals, and for all the constructive feedback that was submitted. Yesterday the DC rack system left beta, and is now publicly available for bookings.

Data Center rack rentals cost 20 tokens for the base topology, which includes Nexus 7000, Nexus 5000, and Virtual Machines. There are three add-ons to the topology: Nexus 2000 & SAN, UCS & SAN, and ACE 4710. Each add-on costs 5 tokens, which means the maximum cost for the full topology is 35 tokens per session. With bulk token pricing this equates to about $11 USD per hour for a rental, which is much more affordable than any other vendor out there.

For more detailed information on how to book sessions and use the system, please see the CCIE Data Center Rack Rental Access Guide.

Before the end of the month we will also be launching a second "Mock Lab" Data Center topology that will be used for full-scale 8-hour lab scenarios. More information about the availability of rentals on this topology will be posted soon.

Happy Labbing!

Apr 18

INE's CCIE Data Center Rack Rentals are now available for public beta testing.  During this beta testing period, racks are 100% free to book for CCIE DC Workbook customers.  Simply log in to your http://members.ine.com account, click the Rack Rentals option on the left, and you will see the CCIE Data Center racks listed, as seen below:

Click on "Schedule/Cancel Session" and the calendar will appear as below:

Each rack rental session for DC includes a "base" topology, and then one or more possible add-ons.  In the scheduler you can check which add-ons, if any, you want, and it will then search the schedule to show you which time slots are currently available for that variation.  You can see these as the check boxes at the top of the scheduler above.  The possible variations are as follows:

During the public beta you will be limited to four concurrent bookings.  After the beta period is finished, the token cost for DC rack rentals will be 20 tokens for the base rack, plus 5 additional tokens for each add-on, making the maximum cost 35 tokens per session.

The goal of the public beta is to get feedback from the community on the usability of both our new content delivery system and the DC rack rental system.  Additionally we will be finalizing and fine-tuning some of the automation features for the racks and the control panel, such as config backup, restore, etc.  As the system approaches full launch I will be posting a video walkthrough of the system and the new workbook content delivery system to talk about all the new features we have integrated into it.

As always, feedback is both welcomed and encouraged.  As you are going through labs in the workbook please use the "Feedback" button on the bottom of the page to submit questions, comments, etc. about that particular lab.  If your comments are more general feel free to post them here or to email me directly at bmcgahan@ine.com.

Happy Labbing!

Apr 15

After many months of development, INE is happy to announce that our CCIE Data Center Technology Lab Workbook is now available! The initial release of the workbook contains over 100 hands-on lab scenarios that walk you through the technologies used in Cisco’s modern Data Center architecture. Whether you are preparing for the CCIE Data Center Lab Exam, have an upcoming implementation project with Cisco’s Nexus switches or Cisco’s Unified Computing System (UCS), or you simply want to gain hands-on experience with these cutting-edge technologies, this workbook is for you.

The labs in this workbook are all individually focused advanced technology labs that present topics in an easy-to-follow, goal-oriented, step-by-step approach. Every scenario features detailed breakdowns and thorough verifications to help you completely understand the technology. The workbook is subdivided into two sections, Nexus Technology Labs and UCS Technology Labs. A sample of the Nexus Technology Labs can be found here, and a sample of the UCS Technology Labs can be found here. The workbook is priced at $399, and can be purchased here.

This workbook is also the first to use INE’s new online content delivery system, which will continue to evolve over the next few weeks to add new features as well as additional content. Ultimately this new content delivery system will be extended to all workbooks and tracks, i.e. R&S, Service Provider, Security, Voice, Wireless, etc. We encourage all users to submit feedback not only on the technical nature of the content but also about the usability of the system. To do so use the “Feedback” link in the bottom right portion of the page of each lab, as seen below:

The rack rental scheduler for Data Center will appear on Wednesday for public beta bookings.  We've already been piloting both the workbook and the rack rentals in private beta for a few weeks, but this week we will be releasing the racks in public beta.  Basically what this means is that if you are a workbook customer you will be able to book DC rack sessions for free.  The goal during the beta period is to get feedback from our customers on the usability of the system as well as to fine-tune our automation system.  Once the public beta is completed in about a week or two, the rack scheduler will be available for public consumption.

During this next week a lot of additional content will be added to the Nexus section of the workbook, including the OTV and FabricPath sections, just to name a few, which are currently in final editing.  As I mentioned, this is the initial release of both the workbook and the new content system, so we are encouraging users to submit as much feedback as possible on it.  I will be posting a more detailed overview of the content system later this week, along with some videos that walk through the specific features that are available and the ones that are in development and slated for upcoming release.

 

Thanks, and happy labbing!

Mar 24

Update: The CCIE Data Center Technology Lab Workbook is now available here

After a long and highly anticipated wait, INE's CCIE Data Center Workbook and Rack Rentals are (almost ;) ) finally here! This post covers three items of business: the state of the DC Workbook release, the state of the DC Rack Rentals release, and the Implementing Nexus 5-Day Bootcamp that I am running online next week.  This post ran a little longer than I had initially anticipated, so below are some quick links to the particular sections that you may be interested in:

Implementing Nexus 5-Day Bootcamp
DC Workbook Delivery
DC Rack Rental Topology
DC Workbook and Rack Rental Pricing

Implementing Nexus 5-Day Bootcamp

Like I said, next week I'm running an online version of our Implementing Nexus 5-Day Bootcamp.  Attendance for this class is split into two groups: those who formally registered, and AAP members. For those who formally registered for class, you already know who you are, and you will be getting equipment access, as this class is mostly a hands-on one. All other AAP members are more than welcome to attend the lecture portion of class, which will include me showing hands-on examples, explaining the technologies, and taking questions. Class starts tomorrow morning at 07:00 PDT (GMT -7). The link to attend can be found in your members account under the All Access Pass section, as seen below:

Data Center Workbook Delivery

Now onto the real order of business, the DC Workbook and Rack Rentals! Those of you that have been following our DC products know that we’re a little behind schedule for the release of both the workbook and the rack rentals, and I wanted to address here specifically the reasons why. Since Data Center is a completely brand new product line for INE, it has given us carte blanche to build it differently from the get-go, taking into account a lot of feedback that we’ve gotten from customers over the years on how INE’s content is produced and delivered for other tracks. This August will mark the 10-year anniversary since Brian Dennis and I started INE, and we’ve learned a lot over the years as to what works and doesn’t work in respect to building product lines.

The first big change for the DC product line is that the workbook will be delivered exclusively as online content. In the past our workbooks were formatted primarily for print, with online delivery secondary. With DC and with our other workbook products going forward, we're going to be focusing primarily on online delivery, with print still available at your own discretion, but not the main focus. To accomplish this we've internally developed a new content management and delivery system, which is part of the reason for the delay in our release. As they say in project management, you have the option to have it cheap, fast, and good, but you only get to pick two out of the three. Actually in this case I think we only got to pick one out of three, because it's been neither cheap nor fast ;) Without boring you with all the specifics of the new system, here are the important highlights about it:

  • Content is still DRM-free

For me personally this is a big one, because I really, really despise all forms of DRM. DRM only hurts paying customers, and at the end of the day doesn't stop someone from stealing your content if they really want to. A lot of INE's competitors still use DRM, and I'm wholeheartedly against it. Companies that use DRM do so because they care more about their bottom line than about their end customer's experience, plain and simple. Case in point is EA's recent debacle with the release of their new SimCity game. Because of DRM, SimCity is on track to be the worst-rated product in Amazon's history.  If you are like me and don't want DRM-based products, then vote with your dollars and don't buy from companies that use it. Okay, now I'll get down off of my DRM soapbox ;)

  • Print is still available

For the first time in a long time, I recently bought a hardcopy book. This particular book was on VMware; I got about one chapter in, and then found myself reading the same book on my iPad through Safari Online the next day.  That poor book is still sitting in the same place that I set it down a few months ago, but I digress. Maybe you work for the LaserJet business unit at HP, or at Dunder Mifflin Paper Company, or maybe you just really hate trees, who knows? Regardless, if you want to print out the content you're more than welcome to. This means you can also print to PDF to take the documents offline.

  • Content is now searchable and bookmark-able

This one is pretty self-explanatory. Bookmark a place in the content so you know where you left off next time. Search will become increasingly relevant for the delivery system once we begin to port our other content into it, such as the CCIE R&S Lab Workbook Volume 1, which has over 500 pages in the QoS section alone!

  • Making updates and maintaining errata are now much easier for us

With the sheer volume of content that we currently manage, making even the most minor updates becomes a very large project for us. The new system fixes this on our end so we can make updates on the fly that everyone immediately has access to, which also eliminates the need to maintain errata.  This also means we can respond more quickly to feedback and bug reports, which leads us into the next point…

  • Simplified feedback submission and bug reporting

While most errors in our content are minor, it can be very frustrating for candidates when I wrote "R1" but really meant "R2". Technical accuracy of our content is key, and if you find a problem we want to know about it so it can be fixed. With the new system you'll be able to submit feedback directly on the page where the content lives, which means that on our end the tracking and fixing of problems will be much faster and more efficient.  We also have more staff dedicated to technical editing and quality assurance of content, so this process will continue to improve from now on.

  • More community interaction

Discussion of a particular lab/task will be more tightly integrated with the content itself. This way you can see other people's questions, comments, etc. that relate to what you are currently working on at the time.

The system of course will continue to grow and improve over time. As it stands, the content for the DC workbook is essentially done and the backend for the content management and delivery system is done, but we're still tweaking the front end. The system will be going into beta sometime this week, and with your feedback we hope to go to full public launch shortly afterwards. Below you can see some preliminary screen shots of the front end of the system:

 

Data Center Rack Rental Topology

Last, but definitely not least, is the Data Center Rack Rentals.  After all, without equipment to actually do the labs on the content is useless, right?

The equipment build for DC has actually been the most complicated of my career, as our main goal has been to provide a mix of feature availability and affordability to you, our end users.  I thought that our CCIE SPv3 build was complicated, where, when everything was said and done, a rack cost us about $15,000 - $20,000 to build, give or take, including two IOS XR routers.  With DC though it's not that simple, as a single 10GigE M1 line card for the Nexus 7000 retails for about $50,000.  It's not even eligible for free Amazon Prime shipping!  What a rip-off! ;)

In the end we got a bit creative with the build, which actually resulted in two separate topologies.  The first topology, which is the one I used for the Implementing Nexus 5-Day Bootcamp last week and am using again for the class next week, can be thought of as the "DC Technologies Topology."  This is the rack rental topology that will be launching first, as it allows the most affordability and flexibility in which equipment you get access to.  The second topology can be thought of as the "CCIE Data Center Mock Lab Topology," as it will more closely mirror the topology used in the CCIE DC Lab Exam.  First I'm going to talk about the specifics of the technologies topology, and then afterwards talk about the mock lab topology.

Specifically, the DC Technologies Topology is made up of the following equipment:

  • 2 x Nexus 7010s, each with 10GigE M1 & F1 Linecards
  • 8 x Nexus 5020s, each with a Fibre Channel expansion module
  • 2 x Nexus 2232PP 10GigE Fabric Extenders, paired to the 5Ks
  • 8 x Windows VMs, dual 1GigE attached to the 5Ks
  • 1 x Windows Bare Metal, dual 10GigE attached to the 2Ks
  • 2 x UCS 6248 Fabric Interconnects
  • 1 x UCS B5108 Blade Chassis
  • 2 x UCS B22 M3 with VIC 1240
  • 1 x UCS B200 M2 with Emulex CNA
  • 1 x MDS 9222i Fibre Channel & IP Storage Switch
  • 3 x MDS 9216i Fibre Channel & IP Storage Switches
  • 1 x ACE 4710
  • 3 x Apache VMs for ACE testing
  • 2 x Catalyst 3750Gs for misc. 1GigE connectivity
  • Fibre Channel SAN

The above physical topology then breaks down into four logical racks.  In other words, the rack rental scheduler will start with four racks in inventory for every time slot.  Each rack rental is assigned a base set of equipment that all rentals get, and there are then three possible "add-ons" that you can choose.  The add-ons include one for Nexus 2K & SAN, one for UCS & SAN, and one for ACE.  From a high-level overview the equipment groupings are as follows:

Specifically the groupings include the following equipment:

  • Base N7K & N5K Rental
    • 2 x Nexus 7K VDCs
    • 2 x Nexus 5Ks
    • 2 x Dual 1GigE attached Windows VMs
  • ACE Add-on
    • ACE 4710
    • 3 x Apache VMs
  • N2K & SAN Add-on
    • 2 x Nexus 2Ks
    • 1 x Dual 10GigE attached Windows bare metal
    • 2 x MDS 9200’s
    • Fibre Channel SAN
  • UCS & SAN Add-on
    • 2 x UCS 6248 Fabric Interconnects
    • 1 x UCS B5108 Blade Chassis
    • 2 x UCS B22 M3 with VIC 1240
    • 1 x UCS B200 M2 with Emulex CNA
    • 2 x MDS 9200’s
    • Fibre Channel SAN

Physically the racks are highly interconnected, which down the road will allow candidates to build larger topologies by renting multiple racks at the same time.  At launch time only one base rental will be allowed per booking, but this will be extended in the future.  The overall physical topology looks as follows:

 

A more detailed version of the diagram can be found here.

Data Center Workbook and Rack Rental Pricing

This of course now begs the question: how much will it cost?  After all, a $700 workbook that requires you to spend $12,000 in rack rentals with little-to-no availability probably isn't the best investment of your time or money ;)  For INE's pricing, the DC Technology Workbook will be priced at $399, and base rack pricing starts at 20 tokens for a 2.5 hour session.  Each add-on costs an additional 5 tokens per 2.5 hour session.  In other words, if you wanted the base rental plus all three add-ons, the maximum cost is 35 tokens per 2.5 hour session.  With bulk token pricing this equates to about $11 per hour for the entire topology, or about $6 per hour for the base N7K & N5K topology.  As I said, our main goal has been to find a happy medium between functionality and affordability, so that people can actually afford to rent the sessions.

Feature-wise there are a few limitations of this topology, for example the Nexus 5020s don’t support Adapter FEX or Multihop FCoE like the 5500s do, but in the scheme of things the topics that you cannot do on this topology are minor.  However, for the sake of completeness of content we will also be building a second more advanced topology that will remove these few feature limitations.  This second “CCIE DC Mock Lab Topology” will more closely mirror the actual lab hardware list, and when you rent the mock lab topology you will always get all equipment, including all of the Nexus 7K VDCs, Nexus 5548s, Nexus 2232PPs, UCS B & C servers, MDSes, etc.

This then leads us to the question, how does the workbook relate to the rack rental variations?  From a high level, the workbook is split into three main sections: Nexus Technology Labs, UCS Technology Labs, and CCIE DC Mock Labs.  The initial release of the workbook will be the first two technology labs sections, followed up by a separate product that is the CCIE DC Mock Labs.  As a parallel to our other product lines you could think of this as similar to the division between CCIE R&S Volumes 1 & 2.  The CCIE DC Mock Lab Workbook will require the CCIE DC Mock Lab Topology, as the name implies.  However for the technology labs, not all labs require access to all equipment.

As part of the introductory section to each technology lab, a note is included to tell you which particular topology is required.  For example the ACE labs obviously require you to rent the ACE 4710, but they don’t require you to rent the Nexus 2K or UCS add-ons.  Likewise a large number of the Nexus Technology Labs only require the base Nexus 7K/5K rental.  As you can probably guess, essentially all of the UCS Technology Labs require that you rent the UCS add-on.  Mostly this is self-explanatory, but it is clearly spelled out in each lab so that you can plan ahead with your rack rental and study schedule.

As I mentioned this week I’m running an Implementing Nexus 5-Day Bootcamp, so all of the equipment will be utilized for that class and won’t be available for beta testing the rack rentals.  The current plan is for DC rack rentals to go into public beta (we’ve already had a number of people doing a private beta for us) the week of April 1st, along with the workbook.  If you are interested in beta testing just send me an email at bmcgahan@ine.com with the subject line “DC Beta Testing”.  For those of you that already previously submitted an email I still have them and I will be contacting everyone shortly with more details on the beta process.

As always, questions and comments are welcome.  Also feel free to visit the CCIE Data Center General and CCIE Data Center Technical sections of our online community for more detailed discussions.  I’ll be posting another follow-up blog post as the beta progresses and then of course as we go into full public release.

 

Thanks!

Sep 05

This past week Mark Snow and I completed the first of three of our CCIE Data Center Live Online Bootcamps - Nexus Switching. This class focused on the core Layer 2 Switching and Layer 3 Routing features of Nexus NX-OS on the 7000, 5500, and 2000 platforms, and the Data Center specific applications of the platforms with technologies such as vPC, FabricPath, and OTV, just to name a few. The videos from class are now in post-processing, and will be available both for download and in streaming format, both of which have cross platform support (Desktop, iPhone/iPad, Android, Windows phone, etc.) Also as usual our videos are DRM free, so once you purchase them they are yours to do with as you please.

All Access Pass subscribers will get access to the videos in streaming format for no additional fee, and can also purchase the download version at a discounted rate. The download version can be purchased standalone here for people who are not AAP subscribers. Our next two classes, CCIE Data Center Storage and CCIE Data Center Unified Computing, are coming up at the end of September and October respectively. All AAP subscribers can attend the live online classes for free, while anyone who wants to purchase the download in advance also gets access to attend these classes.

Below are some excerpts from the class relating to the new FabricPath technology. FabricPath is a new alternative to running Spanning-Tree Protocol in the Layer 2 DC core, and is a pre-standard version of the TRansparent Interconnection of Lots of Links (TRILL) feature. The videos below cover the underlying theory of FabricPath, its basic configuration, its more advanced configurations and verifications, and its integration with Virtual Port Channels (vPCs) via the vPC+ feature.
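For a taste of what the "basic configuration" portion covers, here is a minimal FabricPath sketch on NX-OS; the values are made up, and the exact steps (licensing, default VDC requirements, and so on) depend on your platform and release:

    install feature-set fabricpath
    feature-set fabricpath
    !
    fabricpath switch-id 10            ! optional - auto-assigned if omitted
    !
    vlan 100
      mode fabricpath                  ! VLANs to be carried over the fabric
    !
    interface Ethernet1/1
      switchport mode fabricpath       ! core-facing link runs FabricPath instead of STP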

Enjoy!

CCIE Data Center :: Nexus :: FabricPath Overview


http://www.youtube.com/watch?v=Jcge3biM4Fk

CCIE Data Center :: Nexus :: FabricPath Initial Configuration


http://www.youtube.com/watch?v=wlr4vqU-MqE

CCIE Data Center :: Nexus :: FabricPath Configuration & Verification


http://www.youtube.com/watch?v=MKabiav9kLE

CCIE Data Center :: Nexus :: FabricPath & vPC+


http://www.youtube.com/watch?v=9P8nX5totNI
Jul 25

INE is proud to bring you news that 2 CCIE Data Center seats per month will be available for testing in the Cisco RTP facility, with this number expected to increase to at least 1 per week (possibly 2 seats per week) very shortly based on demand. This is in addition to the CCIE DC seats that will already be available in San Jose, CA and Brussels, Belgium when the lab debuts in October. Also, another Security testing seat has been added to RTP, and many Saturday Security dates have been added as well, giving the RTP site many open Security seats ahead of the pending blueprint update.

Jul 24

This weekend Mark Snow and I delivered a vSeminar on CCIE Data Center, and INE's plans for our CCIE DC training product lines. The vSeminar, which is broken down into two sections of about an hour apiece, is now available in its recorded form below for those of you that weren't able to join us live this weekend.


http://www.youtube.com/watch?v=0zcYbQqDrCs

http://www.youtube.com/watch?v=Xu3xpW6yq4Q

The first portion of the vSeminar deals with the hardware and software blueprint for the upcoming CCIE DC Written Exam and CCIE DC Lab Exam. Specifically this includes the following:

  • Nexus 7009 v6.0(2) w/ SUP1, 10GE F1, 10GE M1
  • Nexus 5548 v5.1(3)
  • Nexus 2232
  • Nexus 1000v v4.2(1)
  • UCS 5108 Blade Chassis
    • B200 M2 Blade Servers
    • 6248UP Fabric Interconnects v2.0(1x)
    • 6204 FEX
  • UCS C200 Series Server
  • Application Control Engine (ACE) 4710 vA5(1.0)
  • MDS 9222i v5.2(2)
  • JBODs
  • Cisco Data Center Manager software v5.2(2)
  • 2511 Router & Catalyst 3750 for mgmt access

The next section deals with the technical topics that are within the scope of the exam, and goes through a high level design overview of the hardware platforms and where their software features fit into the modern Data Center design. Specifically we have subdivided the topics into three main topic domains, which are:

  • Nexus Switching
  • Storage
  • Computing

Beyond this we talk more about our plans for delivery methods for CCIE Data Center training, the expected release schedule, and additional training domains moving forward.  Specifically the four main delivery methods we will be offering are:

  • Online & Live Onsite Classes
  • Streaming & Downloadable Videos
    • DRM free & cross platform support
  • Self-Paced Labs
  • Rack Rentals

The first of these products that we will be releasing is the Live Online Classes.  You can sign up for these classes here, which will give you access to attend the classes live, and also to the recorded versions of the classes, which will be available about a week after each class's completion.  These classes and dates are:

More details about other products, such as the primer, deep dive, & troubleshooting video series, self-paced labs, rack rentals, and live onsite classes, will be available in the coming days and weeks and will be posted here on our blog.

Let us know what questions you have going forward with CCIE Data Center training, and Data Center focused training in general, as we are as excited about this brand new track addition as you are!
