Oct 23

I love to drive! Whether it's a car, truck, tractor, bus, or lawn mower, I've always enjoyed driving. Since I grew up on a farm in rural East Texas, I've had plenty of opportunity and room to practice.

I don't know exactly why I love to drive. If you were to ask my wife, I'm sure she'd say it's all about "control". While I disagree with that idea, I must admit to finding myself stomping my right foot on the floorboard repeatedly when riding in the passenger seat with others.

There's just something about the open road that I really enjoy. My favorite is driving down a road I've never been on before, preferably something rural, winding through farmland and large acreage, full of animals or crops. Though I like driving down new roads, I never do so in an unplanned fashion. I'm no free spirit in that regard. My engineering mind won't let me waste time, money, or gas on my trips. I pre-plan all trips, including short trips to local places. Then, I route out all stops and destinations along the way to make the most efficient use of my travel time. There is nothing worse than being in the car for more time than necessary.

I generally only need two basic things for a trip: my keys and directions. With keys in one hand and my mapped-out directions in the other, I take off on my adventure...my completely planned-out and optimized adventure, mind you.

Recently, while driving across central North Carolina with my wife, a thought occurred to me: DevOps and its automated processes operate much like a road trip.

  • There is a starting point and destination.
  • There are roads, or pipelines (either Continuous Integration or Continuous Delivery), you must follow to reach your destination.
  • There is a vehicle used to move along those roadways, such as the source code of your project.
  • There are forks in the road you must navigate. For DevOps, your forks tend to focus on different environments in which you must deploy.

Both a road trip and DevOps require a set of keys and clear directions. For a road trip, your car keys and phone can take you anywhere you want to go. From a DevOps perspective, Git is able to serve as the keys to ignite your DevOps pipelines as well as the set of directions needed to navigate your code through the maze of your pipeline infrastructure.

In this post, I will cover two use cases for Git you need to consider in your DevOps pipeline infrastructure: Git as the Keys and Git as the Directions.


Git as the Keys

Just as keys are used to start the engine of your automobile, Git should start the engine of your DevOps pipelines and processes. Changes made in your Git repository can trigger automated jobs that carry out the build, deployment, and test phases of your pipeline automatically, removing reliance on manual or timed processes.

Depending on your Git implementation, these changes can be detected through either a pull-based or a push-based system.

In a pull-based system, tools such as Jenkins periodically poll the selected Git repository looking for new changes. Once a change is detected, the associated Jenkins job(s) executes and carries out the stages described.
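As a rough sketch of what such a poller does under the hood, the snippet below asks a remote branch for its tip SHA with `git ls-remote` and compares it against the last SHA it acted on. The function names are illustrative, not Jenkins internals, and the trigger decision is split out so the logic is clear:

```python
# Sketch of pull-based change detection, assuming a hypothetical
# pipeline trigger; tools like Jenkins implement this loop internally.
import subprocess

def remote_head(repo_url: str, branch: str = "main") -> str:
    """Ask the remote for its branch tip SHA without cloning."""
    out = subprocess.check_output(
        ["git", "ls-remote", repo_url, f"refs/heads/{branch}"], text=True
    )
    return out.split()[0] if out else ""

def change_detected(current_sha: str, last_seen_sha: str) -> bool:
    """Decide whether the poller should fire the pipeline."""
    return bool(current_sha) and current_sha != last_seen_sha
```

A polling loop would call `remote_head` on a schedule and fire its configured jobs whenever `change_detected` returns true, then record the new SHA as the last one seen.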

When using Git implementations such as GitHub or Bitbucket in a push-based system, a webhook is created between the Git system and your CI system of choice. Once a change is made in Git, the webhook fires and the associated pipeline executes.
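To make the webhook flow concrete, here is a minimal standard-library sketch of a receiver for a GitHub-style push payload. The field names (`ref`, `after`, `repository.full_name`) follow GitHub's push event; the handler class itself is purely illustrative, since real CI systems ship their own endpoints:

```python
# Minimal sketch of a push-based webhook receiver; a real CI system
# provides this endpoint for you.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_push_event(body: bytes) -> dict:
    """Extract the fields a pipeline trigger typically needs
    from a GitHub-style push payload."""
    payload = json.loads(body)
    return {
        "ref": payload.get("ref", ""),      # e.g. refs/heads/main
        "after": payload.get("after", ""),  # SHA of the new branch tip
        "repo": payload.get("repository", {}).get("full_name", ""),
    }

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = parse_push_event(self.rfile.read(length))
        # A real handler would enqueue the pipeline run for `event` here.
        self.send_response(204)
        self.end_headers()

# To listen: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The key contrast with polling: nothing runs on a timer. The pipeline starts the moment Git pushes the event.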

Automatic triggering of CI/CD pipelines means that teams can build out complex testing and deployment scenarios, testing smaller changesets and identifying potential bugs earlier than when testing larger changesets. However, this approach to DevOps pipeline triggering is not without its challenges. Teams will need to make sure their pipelines and infrastructure are built in such a way as to support this approach.

Creating the Right Environments

One option is to use ephemeral environments when deploying. Ephemeral environments are short-lived, highly separated environments used for testing purposes. The goal would be to create such an environment on demand, use it as intended for a short duration, then tear it down.
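The create-use-tear-down lifecycle can be sketched with a context manager, using a temporary directory as a stand-in for a real ephemeral environment (a container, namespace, or short-lived VM):

```python
# Sketch of the ephemeral-environment lifecycle: create on demand,
# use briefly, tear down. A temp directory stands in for the real thing.
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def ephemeral_environment(name: str):
    """Provision an isolated workspace, then guarantee teardown."""
    root = Path(tempfile.mkdtemp(prefix=f"{name}-"))
    try:
        yield root           # run deploy + tests against this workspace
    finally:
        shutil.rmtree(root)  # always torn down, even if tests fail
```

The `finally` block is the point: the environment disappears whether the tests pass, fail, or crash, so nothing lingers between pipeline runs.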

The second option would be to have a single environment able to support multiple changes over time. If you are unable to support ephemeral environments, you must at least make sure your deployments are idempotent, meaning they can be deployed over and over without breakage or adverse impacts to the environment.
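Idempotency in this sense can be illustrated with a small sketch: the deploy step writes the desired state only when the environment differs from it, so running the deployment repeatedly is harmless. The function and file names here are assumptions for illustration:

```python
# Sketch of an idempotent deploy step: re-running it against the same
# environment never breaks anything or piles up side effects.
from pathlib import Path

def deploy_config(target: Path, desired: str) -> bool:
    """Converge `target` to `desired`; return True only when a change
    was actually made."""
    if target.exists() and target.read_text() == desired:
        return False           # already converged: nothing to do
    target.write_text(desired)
    return True
```

Declaring the desired end state and converging toward it, rather than scripting one-shot steps, is what lets the same deployment run over and over in a long-lived environment.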

Git is the key giving the engine of your DevOps pipeline the start it needs. But simply starting the engine only accomplishes so much. You must also provide a set of directions. Fortunately, Git is great at that as well.


Git as the Directions

Most deployments require some level of configuration details in order to properly deploy a solution. These details give instructions on where and how code is to be deployed, built, or tested.

Consider configuration details (typically stored in a configuration file) as a set of directions on a map. They define when, where, and how a deployment should take place. Teams use these configuration files to differentiate between environments.

One type of environment defined by a configuration file might be a development environment, where things often change. Another could be a testing environment, where teams are looking for more stability as they execute manual and automated tests against the deployed solution. There may be a staging environment, where teams carry out demos for executives or run last-minute test cases. Finally, a production environment can be used for internal or hosted solutions.

Each of these environments is different and may require varying levels of configuration. Yes, you could manually configure each environment as needed, but why not use an automated solution? Tools such as Ansible and Chef make this really easy.

For completely automated Continuous Integration and Continuous Delivery processes, build your solution using your configuration file as a parameter. This will allow your pipelines to be much more dynamic. Utilize a separate configuration file for each environment and provide the appropriate configuration file when executing your pipeline.
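A minimal sketch of this parameterized approach, assuming hypothetical file names like `dev.json` and keys like `target_host`:

```python
# Sketch of a pipeline that takes its configuration file as a
# parameter; one file per environment (dev.json, staging.json, ...).
import json
from pathlib import Path

def load_pipeline_config(config_path: str) -> dict:
    """Read the environment-specific settings the pipeline will use."""
    return json.loads(Path(config_path).read_text())

def run_pipeline(config_path: str) -> None:
    cfg = load_pipeline_config(config_path)
    # The same pipeline code deploys anywhere: only the file changes.
    print(f"Deploying to {cfg['target_host']} as {cfg['environment']}")
```

Because the pipeline itself never hardcodes an environment, promoting a build from dev to staging to production is just a matter of passing a different file.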

Now that you have configuration files accepted by your pipelines and various sets created for each environment, it's time to decide how to handle them. Fortunately, since your configuration values are stored in a file, utilizing Git for configuration storage is a natural fit.

Configuration Files in Git

Git can serve as the single source of truth in your DevOps implementation. When I say "single source of truth", I'm not only referring to the source code housed in your Git repository. I'm also speaking of any associated environment configuration files as well. Git excels at storing files, so utilizing it when storing your configuration files makes sense.

There are several advantages to storing and maintaining your configuration files in Git:

  • A single location for all environment files - No need for any member of your team to hunt around trying to find the file for each environment or navigate through different VMs to find configuration details. All configurations are nicely stored together in Git.
  • Automatic versioning of your configuration files - If you need to revert to a previous version of a configuration file, it's easy to do using the Git commands you're already adept at using.
  • Automatic triggering of Continuous Deployment jobs - In the same way you're able to trigger automated builds when checking code into Git, triggering jobs based on configuration file changes is also possible. This is your first step towards Infrastructure as Code (a topic to be explored in a future post).

One warning when storing configuration files in your Git repository: make sure you don't include sensitive information, such as passwords, as part of your configuration file. Utilize dedicated security management tools for such information.
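A naive pre-commit-style check along these lines might scan a parsed configuration for key names that look like credentials before the file ever lands in Git; the key list below is illustrative, not exhaustive:

```python
# Naive sketch of a secrets check for configuration files headed to Git.
# The SENSITIVE_KEYS set is illustrative; real scanners go much further.
SENSITIVE_KEYS = {"password", "secret", "api_key", "token"}

def find_suspect_keys(config: dict) -> list:
    """Return configuration keys that look like they hold credentials."""
    return [key for key in config
            if any(marker in key.lower() for marker in SENSITIVE_KEYS)]
```

If the check returns anything, the commit should be rejected and the value moved into a dedicated secrets store instead of the repository.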


Conclusion

When you were a kid, did you ever sit in the front seat of your parents' car and pretend to be a race car driver, frantically trying to move the locked steering wheel from side to side? Sure, it was a lot of fun, but you never actually went anywhere, at least not anywhere outside of your imagination. The same is true with DevOps. You can talk about DevOps, maybe even dream about DevOps, but you will never actually achieve your goals until you implement the basic DevOps infrastructure. Utilize Git as the key to start your engine and a set of directions to navigate your code to the end user. Then you're ready to hit the road.

 

Learn more about Git and DevOps, with expert instruction and hands-on activities, in Eric Parker's INE course "Git Fundamentals". Sign in with your All Access Pass today!

About Eric Parker

Eric is a native Texan, a graduate of Texas A&M University, and more than happy to tell you all about it! He has spent the last 12+ years building software solutions in the Raleigh, NC area and has architected Java, .NET, and JavaScript-based software projects in both mid-size and enterprise-level companies. He currently focuses on DevOps, native AWS Cloud development, and the Internet of Things (IoT).
