Archive for July, 2018

Jul
31

 

You may recall that, when using Named-Mode EIGRP configuration, you automatically gain access to EIGRP Wide Metrics.  In addition to providing a new K-Value (K6, which is applied against Jitter and Energy), the EIGRP Distance formula has been revised (what Cisco calls “scaled”) to account for links faster than 10Gbps.  Remember that with Classic-Mode EIGRP, the formula looked like this:

metric = ([K1 * bandwidth + (K2 * bandwidth) / (256 - load) + K3 * delay] * [K5 / (reliability + K4)]) * 256

In the formula, the “bandwidth” value was represented as:

BW = 10^7 / minimum BW

 

In the formula above, the “minimum BW” was expressed in Kbps. The problem with this “classic” method was that all links with a bandwidth higher than 10Gbps (10,000,000,000 bps, represented as 10,000,000 Kbps in the formula) were given the same BW value as 10Gbps.  In other words, whether you put a single 10Gbps link into that formula, a 40Gbps link, or an EtherChannel with a combined bandwidth of 80Gbps, they all equated to “1”. So Classic-Mode EIGRP couldn’t distinguish between these types of links to develop an accurate path to a destination.
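If you want to play with these numbers yourself, here is a quick Python sketch of the classic scaling just described. The 10,000,000 Kbps ceiling is my own way of modeling the “everything faster than 10Gbps looks like 10Gbps” behavior, for illustration only:

```python
# A rough sketch of the classic-mode BW term. The ceiling models the fact
# that links faster than 10Gbps are all reported the same way (assumption
# for illustration, matching the behavior described above).

CLASSIC_SCALE = 10**7              # the 10^7 constant from the formula
BW_CEILING_KBPS = 10_000_000       # 10Gbps expressed in Kbps

def classic_bw_term(min_bw_kbps):
    reported = min(min_bw_kbps, BW_CEILING_KBPS)
    return CLASSIC_SCALE // reported

for label, kbps in [("10Gbps", 10_000_000),
                    ("40Gbps", 40_000_000),
                    ("80Gbps EtherChannel", 80_000_000)]:
    print(label, "->", classic_bw_term(kbps))   # all three print 1
```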

When EIGRP Wide Metrics were developed, Cisco applied an “EIGRP_WIDE_SCALE” factor (equal to 65,536) against some portions of the formula to account for faster links (as well as smaller delay values).  They also changed the terminology in the formula from “bandwidth” to “throughput”. So the “new” formula for EIGRP Wide Metrics does the following to the “minimum bandwidth” portion of the formula:

Minimum Throughput = (10^7 * 65536) / Bw (remember that Bw is in Kbps), where 65536 is the “EIGRP_WIDE_SCALE” constant.

By multiplying 10^7 by 65,536, EIGRP Wide Metrics can now accurately differentiate between links of any speed/bandwidth. EIGRP Wide Metrics also multiply this value of 65,536 (the “EIGRP_WIDE_SCALE” constant) against the Delay sum.
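Plugging the same three links into the wide-metric throughput formula shows the difference. This little Python sketch simply evaluates the formula above:

```python
# Evaluating the wide-metric throughput term for three link speeds.

EIGRP_WIDE_SCALE = 65_536

def scaled_throughput(min_bw_kbps):
    # Minimum Throughput = (10^7 * 65536) / Bw, with Bw in Kbps
    return (10**7 * EIGRP_WIDE_SCALE) // min_bw_kbps

for label, kbps in [("10Gbps", 10_000_000),
                    ("40Gbps", 40_000_000),
                    ("80Gbps", 80_000_000)]:
    print(label, "->", scaled_throughput(kbps))
# 10Gbps -> 65536, 40Gbps -> 16384, 80Gbps -> 8192: three distinct values
```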

But here’s the problem: the computed value of this new formula might NOT FIT into the IP Routing Table (called the “RIB” – Routing Information Base).

When you view the output of “show ip route” for any given route, you see two values contained in brackets.  For an EIGRP-learned route, the first number in the brackets represents the Administrative Distance.  The second value represents what I call the “EIGRP Distance”.  Others simply call this the route “metric” or the “EIGRP Composite Cost”.  No matter which term you use, this field in the RIB is only 4 bytes long.

Routing-Metric

Here is the problem: because EIGRP Wide Metrics apply the “EIGRP_WIDE_SCALE” multiplier of 65,536 against several of the vector metrics (such as bandwidth and delay), they can come up with a distance value so large that the resulting value doesn’t FIT within the 4-byte field in the RIB.

The maximum decimal value that can be contained within a 4-byte (32-bit) field is 4,294,967,295 (2^32 - 1).  However, if you were to plug a minimum bandwidth of just 1 Kbps into the EIGRP wide-metrics formula, the resulting bandwidth value (by itself) would be so large that it would break the boundaries of that 4-byte placeholder in the RIB:

BW = (10^7 * 65536) / 1 = 655,360,000,000

and that is even BEFORE adding the sum of the delays into the mix:

metric = [(K1 * 655,360,000,000) + (K2 * Scaled Bw)/(256 - Load) + (K3 * Scaled Delay)] * [K5/(Reliability + K4)]
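To put numbers on that, here is a quick check assuming the default K-values (K1 = 1, K3 = 1, K2 = K4 = K5 = 0), in which case the wide-metric distance reduces to Scaled Bandwidth + Scaled Delay. Even with the delay term left out entirely, the bandwidth term alone overflows a 32-bit field:

```python
# With default K-values (K1=1, K3=1, K2=K4=K5=0) the wide metric reduces to
# Scaled Bandwidth + Scaled Delay. Even ignoring delay, a 1 Kbps minimum
# bandwidth already exceeds what a 32-bit RIB field can hold.

RIB_MAX = 2**32 - 1                      # 4,294,967,295
scaled_bw = (10**7 * 65_536) // 1        # 655,360,000,000

print(f"Scaled bandwidth : {scaled_bw:,}")
print(f"32-bit maximum   : {RIB_MAX:,}")
print("Fits in the RIB? ", scaled_bw <= RIB_MAX)   # False
```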

The result would be that, while EIGRP was able to calculate a Distance value, that value would be too large to be placed into the RIB. This could happen in a couple of scenarios:

  • A path containing a really slow-speed link (like a 56Kbps dialup link)
  • Redistribution of other protocols into EIGRP while selecting a “bandwidth” value (within the “metric” keyword) that is too low

 
Bandwidth is too small
 

And so here’s the rub: EIGRP Wide Metrics provide the ability to differentiate between links with all kinds of different bandwidth values (due to the additional “EIGRP_WIDE_SCALE” factor of 65,536), but the resulting EIGRP Distance value could be too large to fit into the 4-byte “Metric” field within the RIB. If that were the case, this is what you’d see (notice the words “FD is Infinity” below for the EIGRP routes to 111.111.111.1/32 as well as 1.1.1.0/24):

Well…those engineers at Cisco were pretty smart and incorporated a special little “tweak” into Wide Metrics to account for just this problem. This tweak is called the “metric rib-scale”. What it does is take all EIGRP Feasible Distance values (which may or may not be too large to fit into the 4-byte RIB “metric” field) and DIVIDE THEM by a value called…you guessed it, the “metric rib-scale”. The default value of the “metric rib-scale” is 128, which, for most normal routes, is enough to bring them down to a size that fits into the RIB. This value can be seen in the following output:

This explains why, when viewing the EIGRP Topology Table, an entry for a prefix will display both the 64-bit EIGRP Distance value as well as the “scaled” value (divided by 128) as the “RIB” value:

And here you can see that scaled RIB metric reflected in the IP Routing Table (since the original EIGRP Feasible Distance was too large to fit):

But sometimes the 64-bit Feasible Distance of a route is so large that scaling/dividing it by the default RIB-Scale value of 128 simply isn’t enough. As I previously showed you, these types of EIGRP Topology entries will show as “FD is Infinity”. It is for this reason that one may need to adjust this value to a larger RIB-Scale factor (using the EIGRP command “metric rib-scale”) such that the resulting quotient is small enough to fit into the RIB.

For example, let’s take a look at this output again…

Even if we divide the FD of 656,671,375,360 by the default RIB-Scale value of 128, the quotient would be 5,130,245,120, which is still larger than our maximum allowable RIB metric of 4,294,967,295. It is for this reason that we would need to adjust the RIB-Scale value to something larger than 128 to create a quotient smaller than 4,294,967,295. The RIB-Scale is a configurable number between “1” and “255”. So by increasing the number beyond the default of 128, we can create quotients that are small enough to fit within the RIB (IP Routing Table).
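Here is that arithmetic worked out in a few lines of Python, including the smallest RIB-Scale value that would make this particular FD fit:

```python
import math

FD = 656_671_375_360                     # the FD from the output above
RIB_MAX = 2**32 - 1                      # 4,294,967,295

print(FD // 128)                         # 5,130,245,120 -- still too large
min_scale = math.ceil(FD / RIB_MAX)      # smallest rib-scale that fits
print(min_scale)                         # 153
print(FD // min_scale)                   # 4,291,969,773 -- fits in the RIB
```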

So let’s apply a new RIB-Scale value to EIGRP and see how that same route, which was previously listed as “Infinity”, can fit into the RIB:

(BEFORE…with the default RIB-Scale value)

 

(AFTER applying a larger RIB-Scale value)

 

Jul
27

East Coast, West Coast, or International, we have a Bootcamp in a city near you! Check out our 2019 Bootcamp locations below, including a brand-new location: Salt Lake City, Utah.

Don’t see a city that works for you? We now offer online-live Bootcamp options as well. Check out our Bootcamps Site or contact a training advisor for more information.


Contact Us:

info@ine.com, +1 877-224-8987, +1 775-826-4344 (International Customers)

Jul
26




This course is taught by Esteban Herrera and is 3 hours and 28 minutes long. You can view the course here if you’re an All Access Pass member.


About The Course:

The Certificate of Cloud Security Knowledge (CCSK) is currently one of the most important cloud computing certifications you can get. The CCSK addresses core security concepts in cloud computing such as governance and enterprise risk management, compliance and audit management, infrastructure, virtualization & containers, data security & encryption, and much more. This course is based on the documentation provided by the Cloud Security Alliance.

Jul
23

Did you know INE Inc. is partnering with Aviator Brewing in this year’s Hops for Hope Competition to raise money for Children’s Flight of Hope?





To aid us in our efforts, we’re offering a chance to win a FREE All Access Pass if you donate to this great cause. From now until July 31st, donate $25 or more to Children’s Flight of Hope and you’ll be entered into a drawing to win a one-year All Access Pass on us! Click here to donate!


What is Hops for Hope?

Triangle Hops for Hope is a fundraising event that pairs corporate teams with local breweries to create an original beer and raise money for charity. Teams showcase their creations to hundreds of attendees at an epic beer competition on September 22, 2018 at the Raleigh Beer Garden. It’s the perfect opportunity to mix corporate social responsibility and employee engagement while supporting local craft breweries.

All proceeds benefit Children’s Flight of Hope, a 501(c)(3) organization that provides air transportation for children to access specialized medical care. Last year’s event raised more than $70,000 for CFOH!

If you’re in the Raleigh-Durham area and want to buy tickets to this event you can do so here.

Jul
20






Data Science in the Cloud

Data Science, machine learning, deep learning: these are the driving forces of the current revolution that is changing the way businesses, companies, and people make decisions, work, and innovate. Data Science is triggering profound innovations in healthcare, finance, transportation, manufacturing, and many other sectors.

Data science is evolving at lightning speed, with a multiplication of approaches, tools, and platforms. In parallel to script-based data science (think Python and scikit-learn), the major cloud providers are developing platforms to power and facilitate the data scientist’s daily work and the implementation of data science projects in production. The Google Cloud Platform offers one of the most innovative and user-friendly data science ecosystems.

My name is Alexis Perrier. I am a data science consultant, and I’m very excited to be the instructor of this data science course on the Google Cloud Platform. I teach data science in colleges, bootcamps, and company training sessions. I recently wrote a couple of books on Machine Learning on the Google Cloud Platform and on AWS, both with Packt Publishing.

I have a PhD in signal processing from Telecom-ParisTech, followed by over 20 years in software engineering, and 5 years ago I went back to data science. Although the data science ecosystem is fast evolving, it is deeply grounded in solid applied mathematics. Case in point: in the mid 90s, my signal processing PhD was on echo cancellation in hands-free phones, and for that we were already working with the stochastic gradient algorithm that is widely used nowadays to train deep learning networks.

In 2015, Google released TensorFlow, which is now the most popular deep learning framework. And in recent years Google has launched several high-performing services across the whole data science workflow: data storage with Google Storage and BigQuery, distributed computing with the Google Compute Engine, specific deep learning APIs for text, images, video, and speech, and the Google Machine Learning Engine dedicated to training deep learning models.


Learning Outcomes

In this course, you will focus on the cloud infrastructure, data storage, and machine learning services of the Google Cloud Platform.

At the end of this course you will be able to:

  • Launch your own Compute Engine instances and build a data science stack running Jupyter notebooks
  • Use advanced features of Google Storage such as synchronization, access control lists, signed URLs, and others
  • Host and query your data in BigQuery, Google’s data warehouse solution
  • Launch Datalab instances to work collaboratively in Jupyter notebooks using data from BigQuery and other sources
  • Use Google deep learning APIs to extract information from text, images, video, and speech
  • Use Google ML Engine to rapidly train TensorFlow models in the cloud without having to configure virtual instances

Throughout the course we will work with the Google Cloud SDK command-line tools in the terminal and develop simple Python scripts to interact with the Google services.
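As a taste of what those scripts look like, here is a minimal, purely illustrative example using the google-cloud-bigquery Python client to query one of the public BigQuery sample datasets. The dataset and query are my own placeholders, not taken from the course material:

```python
# Purely illustrative: query a public BigQuery sample table with the
# google-cloud-bigquery client (pip install google-cloud-bigquery).
# bigquery.Client() assumes default credentials and a default GCP project.

from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT word, SUM(word_count) AS total
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY word
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(query):     # runs the job and iterates the results
    print(row.word, row.total)
```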

I’ve created this course with the following objectives:

  • To enable you to leverage the powerful Google infrastructure for all your data science projects
  • To demonstrate some of the limitations of these Google services
  • To make sure that the important implementation details stand out from the overall technical documentation
  • To reflect real-world scenarios by using non-trivial datasets whenever possible


Course Requirements

This course is intended for data scientists of all levels who want to get a full understanding as well as hands-on practice of the Google Cloud Platform services for data science projects.

You should be familiar with the overall concepts in data science but, more importantly, have some minimal experience with Python scripting, SQL queries, and shell commands. Nothing we do in this course requires deep knowledge of shell or Python scripting, but being comfortable working from the terminal will help.

All the shell commands and scripts used in the videos are available in this GitHub repository.

Conclusion

The Google Cloud Platform offers a very powerful set of services for data science, and in my personal experience it is quite a user-friendly environment to work with. Since most services are serverless, you are able to leverage the amazing power of the Google Cloud infrastructure without the pain of setting up, launching, and scaling servers manually or programmatically.

Google Cloud is a fast-evolving ecosystem with periodic updates and frequent alpha and beta releases of new features and services. Mastering these Google services will definitely boost your data science knowledge and skills.

Please feel free to drop me a line if you have any questions or comments.

Jul
19

Rohit has been in the networking industry for more than 17 years, with a focus on Cisco networking for the past 15 years. Rohit not only brings his years of teaching experience to the classroom, but also years of real-world enterprise and service provider experience. Rohit has assisted hundreds of engineers in obtaining their CCIE certification, and has been conducting CCIE RS, CCIE SEC, CCIE SP, and CCIE Collaboration training for Cisco Systems worldwide. Rohit currently holds 5 CCIEs (Routing & Switching, Service Provider, Security, Voice, and Collaboration). When not teaching or developing new products, Rohit consults with large ISPs and enterprise customers in India and the UK. Rohit is currently pursuing his CCIE Data Center certification.

Jul
17

This course covers the basics of Docker and Kubernetes by showing how to build modern clouds with these technologies. By the end of this course, students will be able to launch a Kubernetes cluster, deploy self-healing and scalable applications, and create their own continuous integration and continuous delivery pipeline.



Why You Should Watch:

Kubernetes has quickly become the standard platform for running containerized workloads. All the major public clouds now have a Kubernetes-as-a-Service offering, and popular container management tools, like Rancher, have migrated their underlying platform from in-house software, like Cattle, to Kubernetes. Even Docker itself now natively supports Kubernetes.

This course is meant to teach you how to get started building modern clouds with Kubernetes and Docker, while covering the basic concepts of DevOps.


Who Should Watch:

This course is intended for anyone wanting to learn about Kubernetes and Docker. Basic familiarity with the Linux command line and the high-level concepts of the public cloud is recommended. The recommended public cloud platform for this course is Google Cloud Platform.


About The Instructor

David Coronel has been in the IT field since 2002. David started as a call center agent and quickly made his way to systems administration. David is a Certified Kubernetes Administrator, a Docker Certified Associate, a Certified OpenStack Administrator as well as an AWS Certified Solutions Architect Associate. David is currently employed as a Technical Account Manager at Canonical, the company behind Ubuntu.

Jul
13

This is a two-and-a-half-hour introductory course in Machine Learning. It’s taught by Yogesh Kulkarni, a practitioner and instructor in the field of Machine Learning.


Why You Should Study This Topic:

Machine Learning is getting more popular each day. It is not just hype, but an essential technique made possible and powerful by the availability of data. Studying Machine Learning is imperative, and Python is a good programming environment to get started with the basics. This workshop will not only familiarize you with these powerful and popular techniques, but will also give you the confidence necessary to venture into this field on your own, thereby improving your chances of a lucrative career ahead.


Who Should Watch:

This course is for anyone who wants to become more familiar with Machine Learning. It is recommended to have some knowledge of college-level mathematics and programming using Python. You can be from any domain, such as Finance, Engineering, Agriculture, or Biology; you will come away knowing a new problem-solving technique that could be of great help in your own domain.


What You’ll Learn:

You will learn what Artificial Intelligence is, how it relates to Machine Learning and Deep Learning, what the core idea behind problem solving is, and how Machine Learning techniques go about finding solutions. You will also learn how Machine Learning solutions are implemented using Python programming and see an example of a practical real-life case.


About The Instructor:

Yogesh H. Kulkarni is a consultant and an instructor in the Data Science space, with a doctoral degree in Mechanical Engineering, specializing in Geometric Modeling algorithms. He has taught Python, Machine Learning, and Natural Language Processing in the corporate world as well as to students in engineering colleges.

Jul
12

According to the 2018 CIO Survey, many organizations are having trouble finding and retaining talent with the necessary skillset to fill positions related to some of today’s most popular and cutting-edge technologies. Organizations point to education programs’ inability to keep up with rapid changes in modern technology, as well as generally high demand for certain positions, as the culprits (Florentine).

Luckily, at INE we add new courses every week on a variety of topics, including those that are considered among the newest and most cutting-edge. Continue reading to see which IT jobs the CIO report has dubbed the most in-demand.

This blog post is based on an original CIO article by Sharon Florentine. To read the original article, click here.

Jul
09

This course was created by Piotr Kaluzny and is 2 hours and 32 minutes long. It consists of multiple videos in which the instructor discusses all relevant theoretical concepts and technologies (in-depth explanations, whiteboarding) and shows how to implement them on the current CCIE Security v5 lab exam hardware.




Why You Should Watch:

Security is no longer just an “important” component of an organization. A constantly increasing number of aggressive cyber criminals launch their attacks not only from outside, but also from inside the organization, making security an inherent component of any modern network/system design.

This course, like all other courses that are part of the “CCIE Security v5 Technologies” series, is meant to teach you Cisco security technologies and solutions using the latest industry best practices to secure systems and environments against modern security risks, threats, and vulnerabilities, and to meet modern security requirements.


Who Should Watch:

This course is intended not only for students preparing for the current CCNA/CCNP/CCIE Security exams, but also for experienced Network (Security) Engineers or Administrators looking to refresh their knowledge of important Network Security concepts before moving forward with other certifications.


What You’ll Learn:

By completing this course, you will learn about the different Layer 2 attacks and mitigation techniques, such as attacks on STP and the switching infrastructure, Dynamic ARP Inspection, DHCP Snooping, IP Source Guard, Storm Control, Private VLANs, and Protocol Storm Protection.


About the Instructor

Piotr Kaluzny has been in the IT field since 2002 when he was exposed to networking and programming during his studies. His career in production networks began in 2007, right after graduating with MSc in Computer Science. Piotr quickly progressed his career by working for multiple enterprise and non-enterprise companies in different Routing and Switching and Security roles, with his responsibilities ranging from operations and engineering to consulting and management.



From the very beginning, Piotr has been heavily focused on the Security track, finally proving his skills in 2009 by passing the CCIE Security certification exam (#25565) on the first attempt (he also holds R&S and Security CCNP and CCNA certifications).



Piotr already has an extensive background as a Senior Technical Instructor. For the past several years he has been solely responsible for designing, developing, and conducting CCNA, CCNP, and CCIE training courses for one of the largest and most respected Cisco training companies in the world.
