We've Updated Our Bootcamps Schedule!

Below you'll find all of the current Bootcamp dates and locations for our 2019 CCNA, CCNP and CCIE Bootcamps. For more information regarding our updated schedule visit


2019 CCIE Written:

January: Security, 14 - 18 ~ Online
March: Routing & Switching, 4 - 8 ~ Reno, NV & Online; Security, 11 - 15 ~ RTP, NC & Online
June: Routing & Switching, 17 - 21 ~ RTP, NC & Online; Security, 10 - 14 ~ RTP, NC & Online

No other 2019 CCIE Written Bootcamps are currently scheduled.



2019 CCIE Lab:

(Tracks: Routing & Switching, Security, Data Center, Service Provider, Collaboration)

January:
1 - 7 ~ Online
21 - 27 ~ Amsterdam, NL & Online
28 - 3 (Feb.) ~ Amsterdam, NL & Online
28 - 3 (Feb.) ~ RTP, NC

February:
4 - 10 ~ RTP, NC
4 - 10 ~ Orlando, FL
18 - 24 ~ Reno, NV
25 - 3 (Mar.) ~ Reno, NV & Online
25 - 3 (Mar.) ~ Orlando, FL

March:
18 - 24 ~ RTP, NC & Online
25 - 31 ~ RTP, NC & Online
25 - 31 ~ Reno, NV

April:
1 - 7 ~ Reno, NV
1 - 7 ~ RTP, NC & Online
8 - 14 ~ Amsterdam, NL
22 - 28 ~ Amsterdam, NL & Online
29 - 5 (May) ~ Amsterdam, NL & Online
29 - 5 (May) ~ RTP, NC

May:
6 - 12 ~ RTP, NC

June:
3 - 9 ~ Online
3 - 9 ~ RTP, NC & Online
17 - 23 ~ Orlando, FL
24 - 30 ~ RTP, NC & Online
24 - 30 ~ Orlando, FL
24 - 30 ~ Amsterdam, NL

July - December: no Lab Bootcamps currently scheduled.




2019 CCNP:

   Routing & Switching
January  28 - 3 (Feb.) ~ Online
February X
March 11 - 17 ~ Online
April 8 - 14 ~ RTP, NC
May X
June 24 - 30 ~ Online
July X
August X
September X
October X
November X
December X


2019 CCNA:

   Routing & Switching
January  14 - 18 ~ RTP, NC
February 18 - 22 ~ Online
March X
April 29 - 3 (May) ~ Online
May 13 - 17 ~ RTP, NC
June 10 - 14 ~ Online
July X
August X
September X
October X
November X
December X







Javier Guillermo's SDN: OpenDaylight course is the perfect building block for those interested in becoming an expert in SDN technologies.




OpenDaylight Fundamentals is a comprehensive course covering the basics of SDN technology and its evolution from the traditional networking model to the architecture of the open source OpenDaylight controller. It also covers the most common use cases and includes lab exercises that will teach you how to set up, configure, and use OpenDaylight.



Have you wondered why your tablet, computer, Xbox, PS4 or streaming TV is getting disconnected from your home network? Have you met that person who claims they can fix your home network and all they do is reset everything? Why does that only work sometimes? Becoming a certified technology professional will help you understand the answers to your “why” questions. You will also learn “how.” Sometimes, the “how” answer includes resetting equipment involved in the information technology infrastructure as an acceptable remedy, but do you understand why it had to be reset?

It could be the equipment is too close or too far from the wireless router or access point. How do you know which is correct? Maybe there was an update to the system that impacted the network. Do you understand how an update to a different device can impact the quality of your connection?

The Certified Technology Professional is the person who asks why this is happening, finds the how, and applies it to the answer. They then make the proper changes to minimize the chance of the issue recurring.

When thinking about IT certifications, one of the best places to start is CompTIA. Why? Because, drawing on their 36 years of experience in the Information Technology training field, they created the three primary certifications that are still the industry standard for Information Technology knowledge.

Did you know that almost every modern device has an Operating System, also called an OS? Your automobile, television, tablet, cell phone, some smart appliances (such as refrigerators that connect to the internet), and every item that is part of the “Internet of Things” has one. Basically, if it connects to the internet, it has an operating system and an IP address. Are these devices secure? How many items in your home are connected? Are you overloading your router with device data requests? Are you getting the internet speed you are paying for? By earning the A+ certification, followed by Network+ and Security+, you will understand how the technology works together in our modern world.

The A+ certification covers all the basics: hardware; Operating Systems, including the Windows command line, which is where the true power is; the fundamentals of networking; how to troubleshoot and secure a network such as your home or small business, including why you should have a “guest” network and not give out the main password; and some of the best practices in the Information Technology field.

That means you can make your home even faster, maybe even help your friends and family enjoy modern technology even more.

Now what? You have mastered part of the foundation of the modern world, everywhere you go, everywhere you look you will see Information Technology and how it impacts the world. You are now ready for the next step to expand your foundation of understanding how the modern world works.

The next step you will take is Network+. This certification expands your understanding of networking, covering topics such as how to implement various styles of networks, the difference between the physical and logical network, the pros and cons of wired and wireless networks, how to troubleshoot the network, how to tell a network issue from an application issue, the best practices to keep the network running, and which tools you need to monitor your network.

The final certification to complete your foundation of knowledge of Information Technology in the modern world is Security+. You have learned how devices interact to create the modern world through the ever-expanding global internet; now how do you protect yourself, your devices, your home, and your company? That is what you will learn in this certification. This is one of the most challenging and rewarding parts of the Information Technology field, as it can change by the hour. What tools are used to gain access to your devices and network? What tools do you use to stop them? How secure a password do you really need? Where are your security weak points? Are they the Operating Systems, the Applications, or the Hardware on your network? With these three certifications, you have the best foundation for understanding how the modern world works, and how to support it, fix it, and improve it.


Mel Hallock

About The Author 

With more than 15 years of industry experience, Melissa's background includes multiple CompTIA certifications, an MCTS, a Bachelor of Applied Science, and a Master of Information Systems. Melissa's most loved challenge is bringing the "aha" moment to every learner.





CCIE Security v5 Technologies: Network Management Security is the ninth installment of our CCIE Security v5 Technology series. This course focuses on features commonly deployed to protect the Management Plane, including fundamental network infrastructure hardening techniques, logging, and Management Plane protocol security.



Key Topics Covered Include:
  • Securing Administrative Access
  • Management Plane Protection
  • Logging
  • Securing SNMP
  • Securing NTP


Every day we are bombarded with articles about the latest security breach and how cyber-crime is an ever-growing market for the unscrupulous among us. While it is always hard to apportion blame in these instances, security is very much like an onion: it has many layers, and the more layers you have, the less likely you are to cry.

Let’s take a look at three external security layers that we can wrap around our application or service.

  • Service/Application account: The security context in which the code executes;
  • Linux Capabilities: Granular assignment of administrative privileges;
  • Mandatory Access Control Lists: Quantify with exact precision what the application can have access to.

Because of the free nature of Linux, many internet facing or consumer items are based on this OS. If you are developing an internet connected refrigerator to reorder your favorite beer before the big game on Saturday, you don’t want to have to worry about licenses for the operating system that runs within the fridge. This means that you, as a developer, will increasingly work with applications that sit on top of Linux.

1. The first thing to consider is creating a user account for your application. Your application can then run in the context of this user, with only the privileges that it needs. This user should not be able to run an interactive login shell, ensuring that the account does not become a gateway into the system. We don’t want rye-bread appearing in the fridge in place of the hallowed beer. A user that cannot log in to the system interactively is shown in the following extract from the /etc/passwd file on a Raspberry Pi running Debian Linux:

systemd-timesync:x:100:103:Time Synchronization,,,:/run/systemd:/bin/false

The user account name, systemd-timesync, is the first value. The shell is shown as the last value, /bin/false. This prevents the user from running an interactive shell, locally or remotely. The user account you create for your application is now effectively restricted to the code that you permit within the application.
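Such an account can be created with a non-login shell from the start. The sketch below is illustrative: the account name fridgeapp is hypothetical, and the creation command is shown commented out because it requires root. The awk line, which runs unprivileged, lists every account whose shell field ends in nologin or false:

```shell
# Create a system account with no interactive shell (requires root;
# "fridgeapp" is a hypothetical service name):
#   useradd --system --shell /usr/sbin/nologin --home /nonexistent fridgeapp

# List accounts that cannot open an interactive shell by checking the
# seventh (shell) field of each /etc/passwd entry:
awk -F: '$7 ~ /(nologin|false)$/ {print $1}' /etc/passwd
```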

2. Your application or service may need super-user access. If it does, please don’t use the Set User ID bit for this. The Set User ID, or SUID, bit is a Linux permission usually used to give an application root access to the system. The application is executed by the service account you created but gains full access as root. This works perfectly well, in much the same way as removing the locks from your house gives great access to your home. Given this scenario, you should realize that giving your application too many rights is not a great idea. The internet-enabled fridge is now able to order our beer, which is good, and also to empty our bank account, which is not so good.

Suppose your application needs to read a single restricted file, and nothing else requires elevated privileges. In that case, Linux Capabilities are a better approach than the SUID bit, letting us be very granular with the root privileges that we assign.

By assigning the CAP_DAC_READ_SEARCH capability to your application instead of the SUID bit, it can read restricted files but not write to them. No other supervisory permissions are applied. With Capabilities we can be very granular in the assignment of privileges; this is not possible with the SUID bit, which is all or nothing. Of course, security is still controlled by what we program into the application or service, but we double up with the extra security granted through Capabilities. For more information on the Capabilities available on any Linux system, use the command:

man 7 capabilities
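As a sketch, assigning and checking that capability from the shell could look like the following. The binary path /usr/local/bin/fridge-monitor is hypothetical, and the setcap/getcap commands are commented out because they require root and the libcap tools:

```shell
# Grant only "read any file" to the binary, instead of full root via SUID
# (hypothetical path; requires root):
#   sudo setcap cap_dac_read_search+ep /usr/local/bin/fridge-monitor
# Confirm the assignment:
#   getcap /usr/local/bin/fridge-monitor

# Unprivileged: inspect the capability sets of the current process
grep '^Cap' /proc/self/status
```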

3. Utilize Mandatory Access Control Lists. Mandatory Access Control List (MACL) systems like SELinux or AppArmor take your application security to the next level of granularity. For example, a MACL can ensure that your application can only read from the /etc/shadow file, even when it is executed with the Linux Capability we saw previously. Mandatory Access Control Lists affect the root account as well as standard users, essentially preventing malicious activity even within trusted applications. AppArmor profiles can be added to /etc/apparmor.d/ during the installation of the application or service. An example of restricting access to the shadow file to read-only is shown in the following policy extract.

/etc/shadow r,
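For context, a policy line like that sits inside a profile for the application binary. A minimal, hypothetical AppArmor profile might look like this (the path and profile name are illustrative, not from a real package):

```
# /etc/apparmor.d/usr.local.bin.fridge-monitor (hypothetical)
#include <tunables/global>

/usr/local/bin/fridge-monitor {
  #include <abstractions/base>

  # the application binary itself, readable and mappable
  /usr/local/bin/fridge-monitor mr,

  # read-only access to the shadow file, and nothing else
  /etc/shadow r,
}
```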

It has been proven that adequate use of Mandatory Access Control Lists does prevent malicious activity on your system. Designing your application to make use of AppArmor or SELinux is going to improve the security and reliability of your application. Security should be part of the design process and not an afterthought applied by administrators trying to secure the system.

The Linux Security and Server Hardening course from INE instructor Muhamad Elkenany will help you understand more about MACL and Linux Security.

Applying a MACL to the application will ensure it has access to exactly what it needs whilst preventing it from having access where it shouldn’t. It can read the beer level in the fridge and place orders to the liquor store, but it cannot spend money elsewhere or access other parts of the system. Your enjoyment of the game on Saturday is assured and your health app on your IoT health scales has no idea of your beverage consumption.



Andrew Mallet

About The Author

Andrew is an avid Linux author and advocate with 5 book titles to his name and over 900 videos on his YouTube channel. Having started training and consulting in Linux during the late 1990s, he is well versed in many Linux distributions and services.







This associate-level course provides an introduction and general overview of REST APIs, making it a great basis for IT professionals looking to sharpen their software development skills.



Topics covered in this course include:

  • What is REST?

  • Principles of REST

  • What is not REST?

  • Examples of what a REST interface looks like

  • REST in transit (structure of requests)

  • Security Principles

  • REST patterns and best practices


NFP: Control Plane is the third of eight modules in this CCNA Security certification curriculum. Network Foundation Protection is a security framework that provides strategies to protect three functional areas of a device: the Management Plane, the Control Plane, and the Data Plane. In this course we will focus on Control Plane functionality and look at topics such as the MAC address table, CAM table overflow, VTP, CDP, routing protocol authentication (RIP, OSPF, EIGRP, BGP), BGP TTL-Security, FHRP, passive interfaces, control plane policing, and control plane protection.
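As a taste of one of those topics, here is a minimal control plane policing sketch in Cisco IOS syntax. The class name, ACL number, and rate are illustrative values, not taken from the course:

```
! Classify telnet traffic aimed at the device (illustrative)
access-list 111 permit tcp any any eq telnet
class-map match-all COPP-MGMT
 match access-group 111
! Rate-limit that traffic as it hits the control plane
policy-map COPP-POLICY
 class COPP-MGMT
  police 64000 conform-action transmit exceed-action drop
! Apply the policy to the control plane
control-plane
 service-policy input COPP-POLICY
```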



What Pre-Requisites are There for This Course?

If this were a single course covering the entire CCNA Security blueprint, the pre-requisite would be the CCENT certification or equivalent knowledge. Since this is just a subset of the CCNA Security blueprint, for this specific portion it is recommended that you start with the INE Security Concepts and NFP: Management Plane courses. Additionally, you should have basic routing and switching knowledge.


Why Should You Watch This Course?

This course covers topics for the CCNA Security certification and is the perfect first step towards becoming a security expert. Our CCNA Security content gives you the foundational network infrastructure security knowledge to not only become a hero at work, but eventually become a master in the security field. With the expertise obtained from this course, you will be able to implement valuable skills such as maximizing device uptime and implementing secure routing solutions.


Who Should Watch?

This course is for anyone interested in pursuing the CCNA Security certification, or simply interested in gaining knowledge about network security in the control plane functionality of infrastructure devices.


About the Instructor

Gabe Rivas

Gabe started his network engineering career in 2010 as a Co-Op at Cisco Systems in Herndon, VA. He landed a full-time position as a network consulting engineer and moved to Raleigh, NC, where he worked at Cisco from 2011-2013. He later moved to a network support role at ePlus Technology, a Cisco Gold Partner, where he worked from 2013-2016. Gabe is currently working at Cisco as a test engineer and has been teaching CCNA R&S, CCNA Security, and CCDA classes at Wake Technical Community College. Certifications that Gabe holds are: CCNA R&S, CCNA Security, CCNA Wireless, CCDA, CCNP R&S, and CCDP. Gabe is currently busy developing the CCNA Security course for INE and studying for his CCNP Security certification.


Our new Microsoft Azure: Infrastructure Implementations course is essential for any budding Azure expert in the IT field.



With the help of expert instructor Mbong Hudson Ekwoge, you will learn how to provision and manage services in Azure, and how to implement infrastructure components such as virtual networks, virtual machines (VMs), web and mobile apps, and storage in Azure. As a student, you will also discover how to plan for and manage Azure Active Directory and configure its integration with on-premises Active Directory domains.


It is undeniable that Artificial Intelligence and Automation are on the minds of the public. With major corporations such as Google, Amazon, Facebook, and Microsoft making the news with their artificial-intelligence research and products, and personalities such as Elon Musk, Bill Gates, and Stephen Hawking holding interviews warning of an A.I. apocalypse, it's no wonder people are talking about it.

Artificial Intelligence has recently migrated into Information Technology, with several companies providing solutions for IT Operations. Executives and managers are quickly eyeing it up, excited by its abilities to make employees more efficient, reduce downtime, and minimize staffing. The marketing for these products is very positive, extolling the simplicity of operations and their effectiveness. The algorithms, as it is explained, will handle everything.

There is a configuration cost to get it up and running and to keep it running smoothly that management may not see at first. There is no "Easy" button here. Depending on the organization, implementing an A.I. and automation platform may require thousands of hours of work. This article aims to provide some thoughts on prerequisites to using A.I. in your IT infrastructure.

The first requirement is management access. These A.I. algorithms work with large amounts of data; they want to see everything so that it can all be potentially correlated. Thus, we need access to everything from where the A.I. system will be installed. All devices need to be accessible via some form of management network, including servers, switches, routers, firewalls, power strips, UPSs, KVMs, and more. Effectively, anything that has the option of connecting an Ethernet cable and configuring an IP address needs to have that done.

Unless you have an existing inventory of every device that uses a power cable, this step will probably also require a full inventory of all equipment at every location. Many of these devices may be managed by other departments as well, requiring internal resources and collaboration. This is also an important step for many other reasons, and is highly recommended before continuing.

Be sure to name these devices in a consistent manner. Most of the algorithms in use require similar wording used between devices in a logical or physical area in order to increase matching probability. This will require the formulation of a corporation-wide naming standard, and potentially renaming hundreds or thousands of devices.
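As an illustration, a naming standard can be made machine-checkable. The pattern below encodes a hypothetical <country>-<site>-<role>-<type><nn> convention; both the convention and the sample names are invented for this example:

```shell
# Hypothetical convention: <country>-<site>-<role>-<type><nn>,
# e.g. us-nyc-core-sw01. Filter an inventory for conforming names:
pattern='^[a-z]{2}-[a-z]{3}-[a-z]+-(sw|rt|fw|ups)[0-9]{2}$'
printf '%s\n' us-nyc-core-sw01 FridgeSwitch3 | grep -E "$pattern"
```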

Regarding the network itself, depending on your environment you may not have a management network, or you may have an unfinished one; you'll need to design and create one for each of your locations and get it routed properly. Or maybe you have a very large environment with many management networks for various purposes and departments. Those will need to be identified, routes may need to be created, VPN SAs may need reconfiguration, and ACLs opened to the location of the A.I. system.

Now that there is a management network that can communicate between all devices and your A.I. system, you need to provide management services to it. The first thing that comes to mind is SNMP. A modern network should have SNMPv3 configured wherever a device supports it, which requires some security design effort as well. MIBs may have to be found, or OIDs walked. Devices will need to be configured to report all possible SNMP traps, and to allow polling from the A.I. collector.
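A hedged IOS-style sketch of that SNMPv3 setup follows; the group name, user name, collector address, and passphrases are placeholders, not values from any real deployment:

```
! SNMPv3 group requiring authentication and encryption (priv)
snmp-server group AI-MONITOR v3 priv
snmp-server user ai-collector AI-MONITOR v3 auth sha AuthPass123 priv aes 128 PrivPass123
! Enable all traps and send them to the A.I. collector
snmp-server enable traps
snmp-server host 192.0.2.50 version 3 priv ai-collector
```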

Next up is Syslog, preferably with encryption where a device supports it. This step is best designed as a series of Syslog collection servers, local to each location, forwarding those localized collections to the A.I. collector. That requires design and implementation time for such a distributed Syslog system. Part of that system would most likely include an ELK stack on top of it for additional analysis, which can be very involved.
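The forwarding leg of that design can be as small as one rsyslog rule on each local collector; the file path, hostname, and port below are placeholders:

```
# /etc/rsyslog.d/50-forward.conf on a local Syslog collector:
# forward every facility and severity to the central A.I. collector
# over TCP (@@ = TCP, @ = UDP)
*.* @@ai-collector.example.com:514
```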

There may be other monitoring systems already in-place, performing up/down detection, resource utilization alerting, and synthetic transactions. Similarly, systems such as vCenter and AWS Cloudwatch may be used. Each of these systems would need to be configured to copy all alerts to the A.I. collector. These configurations may also need to be customized for the collector, as the A.I. will want to know about events sooner and more frequently than an email alert to IT personnel.

It’s very likely these reporting systems may send alerts to a ticketing system or collaboration service, which should also be integrated into the A.I. platform as an output. Once the algorithms detect a highly-probable issue, a ticket can be created for front line personnel. This may also require configuration and scaling considerations for your email server, depending on how it is integrated.

So far, we’ve talked about the setup of the networked devices to allow for the detection of issues. Once these alerts are investigated, they need an action performed. If an organization wishes to enable automation, that is, the automatic resolution of alerts from these A.I. systems, remote management access must be provided to all devices: not the flow of data from the networked devices, but remote access to them. Methods such as SSH and PowerShell are the most common today. If a device is too old or not licensed to run SSH or PowerShell, for example, that device will need to be replaced or upgraded. Meeting this remote access requirement may also be a lengthy configuration task.

The automation methods provided usually rely on scripts of some kind, scripts you may want to run via an automation system such as Ansible rather than as individual shell scripts. Again, we find a system that needs planning and implementation. This also requires personnel to write resolution scripts and playbooks for each issue that is detected, which calls for people who know how to code and will certainly take a lot of time initially.
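A sketch of what one such resolution playbook might look like in Ansible; the host group, service name, and triggering scenario are all hypothetical:

```yaml
# restart-dnsmasq.yml - remediation run by the A.I. platform when its
# correlator flags a failed DNS service (hypothetical scenario)
- name: Restart a DNS service flagged by the alert correlator
  hosts: dns_servers
  become: true
  tasks:
    - name: Ensure dnsmasq is restarted and running
      ansible.builtin.service:
        name: dnsmasq
        state: restarted
```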

Finally, these A.I. alerts and resolutions only happen when the algorithm has a high level of confidence that a detected issue is real. That means personnel need to train the system, especially in the beginning. There are usually many algorithms working together, each one using a different set of rules, which requires care and validation. Algorithms are diverse and may include the ability to detect relationships between alerts based on source type, physical or logical proximity, time, language usage, and topology analysis.

As you can see, there is no "Easy" button here. A.I. platforms, their automation systems, and their algorithms are extremely powerful today, but they require planning, lots of preparatory work, and training once running. They cannot be implemented quickly, as a quick fix for lack of enough personnel, and in fact, will require more personnel during the implementation and configuration phase. When properly planned for and implemented, an A.I. system can be an important enhancement to IT Operations.


Andrew Crouthamel

About The Author

Andrew is a seasoned IT engineer with over 12 years of experience. He started out in IT as an Assistant Computer Technician, blowing dust out of computers for a school district, moving up through the ranks to Systems Administrator, Network Engineer, and IT Manager. He currently works for an international satellite communications company, ensuring LAN and WAN connectivity for a large network of ground stations and customers such as NASA, ESA, JAXA, Boeing, The U.S. Air Force, and more. Andrew holds numerous Cisco and CompTIA certifications and is a part-time Cisco Instructor.

Andrew's hobbies outside of technology include many outdoor activities, such as hiking and canoeing. He is currently learning woodworking, and is working on a 17' cedar wood-strip canoe in his garage, much to his wife's dismay. He lives in Pennsylvania, where his family has been for generations, dating back to 1754. Andrew lives with his wife, young daughter, and too many pets. You can reach Andrew by visiting his website.




Tune into Marco Alves' new SD-WAN Overview course to learn about the past, present, and future of the SD-WAN landscape and technologies.

This course provides an overview of SD-WAN, including a basic introduction to the technology, a vendor landscape highlighting the differences between the main competitors in the space, as well as a market adoption discussion and industry forecast. After completing this course you will be able to discuss what SD-WAN is and what the current offerings are.

You can learn more about this course by visiting or by logging into your All Access Pass members account. Don't have an All Access Pass? Start your 7-day free trial here.


Subscribe to INE Blog Updates

New Blog Posts!