This is the second edition of my hands-on DevOps course building upon nemonik/hands-on-DevOps.
The content of this course is actively under development whereas the prior is not.
This newest version of my Hands-on DevOps class is a re-platforming of sorts -- a rewrite, if you will. The prior version relied on multiple Vagrants (i.e., virtual machines) and was a beast to maintain. Several Vagrants were created through automation to run a multi-node Kubernetes cluster as well as a development VM. The approach modeled how I pre-flighted my work on my laptop vice using minikube. The Kubernetes cluster my class made use of was k3s, and as k3s matured, k3d was introduced. K3d is a lightweight wrapper to run k3s in Docker and provides a rather elegant solution for creating and managing a single or multi-node k3s cluster for development vice standing up multiple VMs and the burden they impose on the host (in my case, my laptop). Around the same time I was considering k3d, I also gave Docker Desktop's means of providing a Kubernetes cluster a try, but I found it lacking, so I stuck with k3d. Around this same time I also moved fully off my MacBook for personal development, moving to Arch Linux, where I used Docker. This new version of my class's infrastructure-as-code automation focuses instead on directly configuring the host or, if need be, a single Vagrant for the purpose of development.
A hands-on DevOps course covering the culture, methods and repeated practices of modern software development involving Vagrant, VirtualBox, Ansible, Kubernetes, k3s, k3d, Traefik, Docker, Taiga, GitLab, Drone CI, SonarQube, Selenium, InSpec, Heimdall 2, Arch Linux...
A reveal.js presentation written to accompany this course can be found at https://nemonik.github.io/hands-on-DevOps/.
This course will
- Discuss DevOps,
- Have you spin up a DevOps toolchain and development environment, and then
- Author two applications and their accompanying pipelines, the first a continuous integration (CI) and the second a continuous delivery (CD) pipeline.
After this course, you will
- Be able to describe and have hands-on experience with DevOps methods and repeated practices (e.g., use of Agile methods, configuration management, build automation, test automation and deployment automation orchestrated under a CICD orchestrator), and why they matter;
- Address challenges transitioning to DevOps methods and repeated practices;
- Have had hands-on experience using infrastructure-as-code to provision and configure an entire DevOps Factory (i.e., a toolchain and development environment) including a Docker Registry, a Kubernetes cluster, Taiga, GitLab, Drone CI, SonarQube, and Heimdall 2;
- Have had hands-on experience authoring code, to include authoring and running automated tests in a CICD pipeline, all under Configuration Management to ensure an application follows style, adheres to good coding practices, builds, identifies security issues, and functions as expected;
- Have had hands-on experience with
- using Infrastructure as Code (IaC) in Vagrant and Ansible;
- creating and using a Kanban board in Taiga;
- managing code configuration in git and GitLab;
- authoring code in Go;
- using style checkers and linters;
- authoring a Makefile;
- various commands in Docker (e.g., building a container image, pushing a container into a registry, creating and running a container);
- authoring a pipeline for Drone CI;
- using Sonar Scanner CLI to perform static analysis;
- authoring security tests in InSpec;
- authoring an automated functional test in Selenium;
- authoring a dynamic security test in OWASP ZAP; and
- using a container platform to author and scale services;
We will be spending most of the course hands-on, working with the tools and in the Unix command line, making the methods and repeated practices of DevOps happen, so as to grow an understanding of how DevOps actually works. Although not necessary, I would encourage you to pick up a free PDF of The Linux Command Line by William Shotts if you are not familiar with the Linux command line.
Don't fixate on the tools used, nor the apps we develop in the course of learning how and why. How and why are far more important. This course, like DevOps, is not about tools, although we'll be using them. You'll spend far more time writing code. (Or at the very least cutting-and-pasting code.)
- Michael Joseph Walsh [email protected], [email protected]
See the License file at the root of this project.
The following skills would be useful in following along but aren't strictly necessary.
What you should bring:
- Managing Linux or Unix-like systems would be tremendously helpful, but not necessary, as we will be living largely within the terminal.
- A basic understanding of Vagrant, Docker, and Ansible would also be helpful, but not necessary.
- 1. Preface
- 2. DevOps
- 3. Author
- 4. Copyright and license
- 5. What you should bring
- 6. Table of Contents
- 7. DevOps unpacked
- 7.1. What is DevOps?
- 7.2. What DevOps is not
- 7.3. The tool exist to
- 7.4. To succeed at DevOps you must
- 7.5. If your effort doesn't
- 7.6. Conway's Law states
- 7.7. DevOps is really about
- 7.8. What is DevOps culture?
- 7.9. How is DevOps related to the Agile?
- 7.10. How do they differ?
- 7.11. Why?
- 7.12. What are the principles of DevOps?
- 7.13. Much of this is achieved
- 7.14. What is Continuous Integration (CI)?
- 7.15. How?
- 7.16. CI best practices
- 7.16.1. Utilize a Configuration Management System
- 7.16.2. Automate the build
- 7.16.3. Employ one or more CI services/orchestrators
- 7.16.4. Make builds self-testing
- 7.16.5. Never commit broken
- 7.16.6. Stakeholders are expected to pre-flight new code
- 7.16.7. The CI service/orchestrator provides feedback
- 7.17. What is Continuous Delivery?
- 7.18. But wait. What's a pipeline?
- 7.19. How is a pipeline manifested?
- 7.20. What underlines all of this?
- 7.21. But really why do we automate err. code?
- 7.22. Monitoring
- 7.23. Crawl, walk, run
- 8. Reading list
- 9. Prerequisites
- 10. Installing the software factory
- 10.1. Ansible
- 10.2. Run the Ansible playbook
- 10.3. Spin up the Factory
- 10.4. The long-running tools
- 10.4.1. Taiga, an example of Agile project management software
- 10.4.2. GitLab CE, an example of configuration management software
- 10.4.3. Drone CI, an example of CICD orchestrator
- 10.4.4. SonarQube, an example of a platform for the inspection of code quality
- 10.4.5. PlantUML Server, an example of light-weight documentation
- 10.4.6. Heimdall 2
- 11. Golang `helloworld` project
- 11.1. Create the project's backlog
- 11.2. Create the project in GitLab
- 11.3. Setup the project
- 11.4. Author the application
- 11.5. Align source code with Go coding standards
- 11.6. Lint your code
- 11.7. Build the application
- 11.8. Run your application
- 11.9. Author the unit tests
- 11.10. Automate the build (i.e., write the `Makefile`)
- 11.11. Author Drone-based Continuous Integration
- 11.12. The completed source for `helloworld`
- 12. Golang `helloworld-web` project
- 12.1. Create the project's backlog
- 12.2. Create the project in GitLab
- 12.3. Setup the project
- 12.4. Author the `helloworld-web` application
- 12.5. Build and run the `helloworld-web` application
- 12.6. Run golangci-lint on the `helloworld-web` application
- 12.7. Author the unit tests
- 12.8. Perform static analysis (i.e., sonar-scanner) on the command line
- 12.9. Automate the build (i.e., write the Makefile)
- 12.10. Containerize the application
- 12.11. Run the container
- 12.12. Push the container image to the private registry
- 12.13. Configure Drone to execute your CICD pipeline
- 12.14. Add Static Analysis (`sonar`) step to your CICD pipeline
- 12.15. Add the `build` step to the pipeline
- 12.16. Add the `nemonik/helloworld-web:latest` container image `publish` step to pipeline
- 12.17. Deploy `helloworld-web` application to the Kubernetes cluster
- 12.18. Add a `deploy` rule to the Makefile
- 12.19. Add a `deploy` step to the pipeline
- 12.20. Add compliance-as-code (`inspec`) test to the pipeline
- 12.20.1. Author our InSpec tests
- 12.20.2. Execute the InSpec tests on your `helloworld-web` deployment
- 12.20.3. Add an `inspec` rule to the Makefile
- 12.20.4. Add an `inspec` step to the pipeline
- 12.20.5. Viewing the `inspec` results in Heimdall 2
- 12.20.6. Add an automated functional test (`selenium`) step to the pipeline
- 12.21. Add the DAST (`owasp-zap`) step to the pipeline
- 12.22. All the source for `helloworld-web`
- 13. Additional best practices to consider around securing containerized applications
- 14. That is all
DevOps (a clipped compound of the words development and operations) is a software development methodology with an emphasis on a reliable release pipeline, automation, and stronger collaboration across all stakeholders with the goal of delivery of value in close alignment with business objectives into the hands of users (i.e., production) more efficiently and effectively.
Ops in DevOps gathers up all the IT operations stakeholders (i.e., cybersecurity, testing, DB admin, infrastructure and operations practitioners -- essentially, any stakeholder not commonly thought of as directly part of the development team in the system development life cycle).
Yeah, that's the formal definition.
In the opening sentences of Security Engineering: A Guide to Building Dependable Distributed Systems — Third Edition, author Ross Anderson defines what security engineering is:
Security engineering is about building systems to remain dependable in the face of malice, error, or mischance. As a discipline, it focuses on the tools, processes, and methods needed to design, implement, and test complete systems, and to adapt existing systems as their environment evolves.
The words security engineering could be replaced in the opening sentence with each one of the various stakeholders (e.g., development, quality assurance, technology operations).
The point I'm after is everyone is in it to collectively deliver dependable software.
Also, there is no need to overload the DevOps term -- to Dev wildcard (i.e., *) Ops to include your pet interest(s), such as security, test, whatever... forming DevSecOps, DevTestOps, DevWhateverOps... DevOps has you covered.
About the tools or deploying faster.
There are countless vendors out there, who want to sell you their crummy tool.
Facilitate collaboration between the stakeholders.
Combine software development and information technology operations in the systems development life cycle with a focus on collaboration across the life cycle to deliver features, fixes, and updates frequently in close alignment with business objectives.
If the effort cannot combine both Dev and Ops in collaboration with this focus the effort will most certainly fail.
grok (i.e., understand intuitively) what DevOps is in practice, perform the necessary analysis of the existing culture, and develop a strategy for how to affect change, then the effort will likely fail.
I say this because culture is the largest influencer over the success of both Agile and DevOps and ultimately the path taken (i.e., the plans made).
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
From "How Do Committees Invent?"
Followed with
Ways must be found to reward design managers for keeping their organizations lean and flexible.
This was written over 50 years ago.
If your communication structure is broken, so shall your systems be.
Providing the culture, methods and repeated practices to permit stakeholders to collaborate.
culture noun \ ˈkəl-chər
the set of shared attitudes, values, goals, and practices that characterizes an institution or organization
I love when a word means precisely what you need it to mean.
With the stakeholders sharing the same attitudes, values, goals, using the same tools, methods and repeated practices for their particular discipline you have DevOps Culture.
Agile Software Development is an umbrella term for a set of methods and practices based on the values and principles expressed in the Agile Manifesto.
For Agile, solutions evolve through collaboration between self-organizing, cross-functional teams utilizing the appropriate practices for their context.
DevOps builds on this.
DevOps extends Agile methods and practices by adding communication and collaboration between
- development,
- security,
- quality assurance, and
- technology operations
functionaries as stakeholders into the broader effort to ensure software systems are delivered in a reliable, low-risk manner.
In Agile Software Development, there is rarely an integration of these individuals outside the immediate application development team with members of technology operations (e.g., network engineers, administrators, testers, security engineers.)
As DevOps matures, several principles have emerged, namely the necessity for product teams to:
- Apply holistic thinking to solve problems,
- Develop and test against production-like environments,
- Deploy with repeatable and reliable processes,
- Remove the drudgery and uncertainty through automation,
- Validate and monitor operational quality, and
- Provide rapid, automated feedback to the stakeholders
Through the repeated practices of Continuous Integration (CI) and Continuous Delivery (CD) often conflated into simply "CI/CD" or "CICD".
WARNING: After tools, CICD is the next (albeit mistakenly) thing thought to be the totality of DevOps.
It is a repeated Agile software development practice lifted specifically from Extreme programming, where members of a development team frequently integrate their work to detect integration issues as quickly as possible thereby shifting discovery of issues "left" (i.e., early) in the software release.
Each integration is orchestrated through a CI service/orchestrator (e.g., Jenkins, Drone CI, GitLab Runners, Concourse CI) that essentially assembles a build, runs unit and integration tests every time a predetermined trigger has been met; and then reports with immediate feedback.
For the software's source code, where the mainline (i.e., master branch) is the most recent working version, past releases held in branches, and new features not yet merged into the mainline branch worked in their own branches.
By accompanying build automation (e.g., Gradle, Apache Maven, Make) alongside the source code.
To perform source code analysis via automating formal code inspection and assessment.
In other words, ingrain testing by including unit and integration tests (e.g., Spock, JUnit, Mockito, SOAPUI, Go's testing package) with the source code, to be executed by the build automation run by the CI service.
Or untested source code to the CMS mainline or otherwise risk breaking a build.
Prior to committing source code in their own workspace.
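As a concrete sketch of what pre-flighting might look like for the Go projects built later in this class (treat the tool and binary names below, such as golangci-lint and `helloworld`, as assumptions drawn from the course outline rather than a prescribed sequence):

```bash
# A hypothetical local pre-flight before committing, mirroring what the
# CI orchestrator will run later in the class.
go fmt ./...             # align the source with Go's canonical style
go vet ./...             # catch common mistakes
golangci-lint run        # run the configured linters
go test ./... -cover     # run the unit tests with coverage
go build -o helloworld   # confirm it still builds before you commit
```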
On the success or fail of a build integration to all its stakeholders.
It is a repeated software development practice of providing a rapid, reliable, low-risk product delivery achieved through automating all facets of building, testing, and deploying software.
With additional stages/steps aimed to provide ongoing validation that a newly assembled software build meets all desired requirements and thereby is releasable.
Is achieved through delivering applications into production via individual repeatable pipelines of ingrained system configuration management and testing
A pipeline automates the various stages/steps (e.g., Static Application Security Testing (SAST), build, unit testing, Dynamic Application Security Testing (DAST), secure configuration acceptance compliance, integration, function and non-functional testing, delivery, and deployment) to enforce quality conformance.
Each delivery pipeline is manifested as Pipeline as Code (i.e., software automation) accompanying the application's source code in its version control repository.
I and the community of practice argue DevOps will struggle without ubiquitous access to shared pools of software configurable system resources and higher-level services that can be rapidly provisioned (i.e., cloud).
Although, it is actually possible to DevOps on mainframes. The video is in the context of continuous delivery, but read between the lines.
I think Larry Wall put it best in the 1st edition of his Programming Perl book: "We will encourage you to develop the three great virtues of a programmer:
laziness,
impatience, and
hubris."
The second edition of the same book provided definitions for these terms
Well...
Let me explain.
The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don't have to answer so many questions about it. Hence, the first great virtue of a programmer. (p.609)
The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to. Hence, the second great virtue of a programmer. (p.608)
Excessive pride, the sort of thing Zeus zaps you for. Also, the quality that makes you write (and maintain) programs that other people won't want to say bad things about. Hence, the third great virtue of a programmer. (p.607)
- Faster, coordinated, repeatable, and therefore more reliable deployments.
- Discover bugs sooner. Shifting their discovery left in the process.
- To accelerate the feedback loop between Dev and Ops (again, Ops is everyone not typically considered part of the development team.)
- Reduce tribal knowledge, where one group or person holds the keys to how things get done. Yep, this is about making us all replaceable.
- Reduce shadow IT (i.e., hardware or software within an enterprise that is not supported by IT. Just waiting for its day to explode.)
Once deployed, the work is done, right?
A development team's work is not complete once a product leaves CICD and enters production; especially under DevOps, where the development team includes members of ops (e.g., security and technology operations).
Is working software, but this is not the only measurement. The key to successful DevOps is knowing how well the methodology and the software it produces are performing. Is the software truly dependable?
Is achieved by collecting and analyzing data produced by environments used for CICD and production.
So, that improvements can be gauged and anomalies detected.
To formulate and prioritize reactions, weighing factors such as the frequency at which an anomaly arises and who is impacted.
Could be as simple as operations instructing users through training to not do something that triggers the anomaly, or more ideally, result in an issue being entered into the product's backlog culminating in the development team delivering a fix into production.
Are surfaced through monitoring, resulting in, for example, additional testing for an issue discovered in production.
Yep. News flash. DevOps will not entirely stop all bugs or vulnerabilities from making it into production, but this was never the point.
Through re-scoping of requirements, re-prioritizing of a backlog, or the deprecation of unused features. Again, all surfaced through monitoring.
- With DevOps one does not simply hit the ground running.
- One must first crawl, walk and then ultimately run as you embrace the necessary culture change, methods, and repeated practices.
- Collaboration and automation are expected to continually improve so to achieve more frequent and more reliable releases.
AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis William J. Brown, Raphael C. Malveau, Hays W. "Skip" McCormick, and Thomas J. Mowbray ISBN: 978-0-471-19713-3 Apr 1998
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler)) David Farley and Jez Humble ISBN-13: 978-0321601919 August 2010
The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations Gene Kim Jez Humble, Patrick Debois, and John Willis ISBN-13: 978-1942788003 October 2016
Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations Nicole Forsgren PhD, Jez Humble, and Gene Kim ISBN-13: 978-1942788331 March 27, 2018
Site Reliability Engineering: How Google Runs Production Systems 1st Edition Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy ISBN-13: 978-1491929124 April 16, 2016 Also, available online at https://landing.google.com/sre/book/index.html
Release It!: Design and Deploy Production-Ready Software 2nd Edition Michael T. Nygard ISBN-13: 978-1680502398 January 18, 2018
The SPEED of TRUST: The One Thing That Changes Everything Stephen M. R. Covey ISBN-13: 978-1416549000 February 5, 2008 The gist of the book can be found at SlideShare https://www.slideshare.net/nileshchamoli/the-speed-of-trust-13205957
RELATIONSHIP TRUST: The 13 Behaviors of High-Trust Leaders Mini Session Franklin Covey Co. https://archive.franklincovey.com/facilitator/minisessions/handouts/13_Behaviors_MiniSession_Handout.pdf
How to Deal With Difficult People Ujjwal Sinha Oct 25, 2014 The SlideShare can be found here https://www.slideshare.net/abhiujjwal/how-2-deal-wid-diiclt-ppl
Leadership Secrets of the Rogue Warrior: A Commando's Guide to Success Richard Marcinko w/ John Weisman ISBN-13: 978-0671545154 June 1, 1996
Security Engineering: A Guide To Building Dependable Distributed Systems Ross Anderson ISBN-13: 978-0470068526 April 14, 2008 The second edition of this book can be downloaded in whole from https://www.cl.cam.ac.uk/~rja14/book.html and Mr Anderson has released chapters from his 3rd edition under development.
How Do Committees Invent? Melvin E. Conway Copyright 1968, F. D. Thompson Publications, Inc. http://www.melconway.com/Home/Conways_Law.html
The Pragmatic Programmer: Your Journey To Mastery, 20th Anniversary Edition (2nd Edition) David Thomas and Andrew Hunt ISBN-13: 978-0135957059 September 23, 2019
The supported host operating systems for this class are OSX, Windows 11 and Arch Linux. By "host operating system", I mean the computer you will use to work the class.
It's a good idea to inspect the install scripts from projects you don't yet know. You can do that now by tromping around the project on GitHub. The project makes use of a Makefile, several Bash scripts, Vagrant and Ansible code. Look through everything before you run it. If you dork up your host, this was never my intention, nor can I be held responsible as per the License, but I've made every effort to prevent this from happening. If you read anything beyond this README, please read the License file.
The class automation will configure the Bash, Zsh and fish shells, as well as neovim (nvim). A Unix shell is a command-line interpreter, a command-line interface for Unix or Unix-like operating systems, such as Linux. The shell runs in a terminal emulator. In this course we will be using either iTerm2 on OSX or Gnome Terminal on Arch Linux.
NOTE
- This class will link to an application, tool, library, etc's canonical git repository whenever possible, Wikipedia or its landing page.
- This class makes use of NOTE sections like this to call out things that are important to know or to drop a few tidbits. Reading these notes may save you some aggravation.
If you are on OSX and already have iTerm2 installed, open a terminal window, or use OSX's built-in Terminal application by searching for "terminal" after clicking the Spotlight Search icon (if shown) in the Apple menu bar that runs along the top of the screen on your Mac beginning with an Apple icon. The Spotlight Search icon is a magnifying glass. Optionally, you can press `Command`-`Space bar` to open Spotlight Search.
If you have spent considerable time configuring your chosen shell, neovim editor, etc it is advisable to back up your configuration by performing the following in the shell:
cp ~/.bash_profile ~/.bash_profile.back
cp ~/.profile ~/.profile.back
cp ~/.zshrc ~/.zshrc.back
cp ~/.zshenv ~/.zshenv.back
cp ~/.zprofile ~/.zprofile.back
cp ~/.zlogin ~/.zlogin.back
cp ~/.config/fish/config.fish ~/.config/fish/config.fish.back
cp ~/.config/nvim/init.vim ~/.config/nvim/init.vim.back
cp ~/.config/nvim/coc-settings.json ~/.config/nvim/coc-settings.json.back
NOTE
- When cutting-and-pasting from GitHub click on the clipboard icon in the upper-right section of the code block.
- One or more of the above commands may fail if you don't have the file on your host. If you are sure you typed the command correctly you can ignore the error message.
- Use these backup files to recover your prior shell configuration. Just reverse the direction of the copy, as in the sketch following this note.
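For example, to restore a backed-up configuration later, reverse the copies (only for the files you actually backed up):

```bash
# Restore prior shell and editor configuration from the .back copies made above
cp ~/.zshrc.back ~/.zshrc
cp ~/.config/fish/config.fish.back ~/.config/fish/config.fish
cp ~/.config/nvim/init.vim.back ~/.config/nvim/init.vim
```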
Now, we're going to reset our configuration by performing the following
rm ~/.bash_profile ~/.zshrc ~/.zshenv ~/.zprofile ~/.zlogin ~/.config/fish/config.fish ~/.config/nvim/init.vim ~/.config/nvim/coc-settings.json
touch ~/.bash_profile ~/.zshrc
You will need to install a number of upfront dependencies.
If your host (e.g., your laptop, personal computer) is running Windows or OSX you will need to install Docker Desktop. If you're using Arch Linux, the Ansible automation will take care of installing Docker for you and you can skip ahead to installing Ansible. If you are using a version of Linux other than Arch then what's wrong with you? I'm kidding. You can use the Vagrant to execute the factory. Subsequent versions of this class will be updated to support Ubuntu, Rocky Linux, etc.
This class will use Docker and so Docker Desktop must be installed and configured.
If you're on an OSX host perform the following:
- Download https://www.docker.com/products/docker-desktop
- Drag the Docker app to your Application folder.
- Find the Docker app in your applications folder and click to start the application.
- You will need to verify that you want to trust the application by clicking `Open`.
- The Docker Engine, actually a virtual machine (VM), will take some time to start. You will then be asked to deny or allow `com.docker.backend` to accept incoming network connections. Click `Allow`.
- Find the Docker icon on the right side of your Apple menu bar, click it, and then select `Preferences` from the menu.
- In the `Docker` window that opens, select the gear icon in the upper-right portion of the window.
- Under `General` make sure `Start Docker Desktop when you log in` is checked; otherwise, you will need to start Docker every time you restart your host.
- Then select `Resources` on the left-hand side of the window.
- As Docker runs its containers in a virtual machine (VM), you will need to give this VM more processing power and host memory to run the heavier container load. What you give the [Docker Desktop](https://www.docker.com/products/docker-desktop) VM is dependent on two factors: the resources your host can spare and the load the class containers will place on your host. I'd advise trying 8 CPUs and 12 GBs of memory and scaling as you see fit. Also, the class brings in a good number of container images, so give yourself a `Disk image size` of 100 GB or larger. You may need more depending on whether or not you pull other images and are caching them. Consider doing a `docker image prune` to recover space. Pay attention to how much of your disk image is being used.
- Click `Apply and Restart` to restart the Docker Desktop VM. The VM will take some time to restart. The containers on the back of the whale icon (Moby Dock) in the Apple menu bar will cycle until Docker is ready.
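Once Docker Desktop settles, a few commands are handy for sanity checking the VM and keeping the disk image in check (a sketch; the CPU and memory reported will reflect whatever you allocated above):

```bash
docker info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes of memory'  # what the VM was given
docker system df                                                      # what images, containers and volumes consume
docker image prune                                                    # reclaim space from dangling images
```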
TODO: Complete this section. I have plans to add this within a few weeks.
Docker will be installed for you via the Ansible automation. Ansible will be discussed later.
If you're using an OSX host, you can use Apple's default Terminal app as your command-line terminal, but I'd advise you to install the superior iTerm2.
Perform the following tasks:
- Download the latest release from the iTerm2 website.
- Find the iTerm2 release zip file in your Downloads folder and double click it.
- Drag the iTerm2 app to your Application folder to install.
- You will need to verify that you want to trust the application by clicking `Open`.
- Use iTerm2 to perform the remaining command line tasks for this class.
This class uses a software factory hosted on a Kubernetes (K8s) cluster. K8s is an abbreviation of Kubernetes ("K" followed by 8 letters "ubernete" followed by "s"). (What a Kubernetes cluster is will be covered later.) To spin up the K8s cluster you will need to perform the following tasks in your shell.
The class uses Ansible to install operating systems dependencies necessary for the class.
Ansible is a "configuration management" tool that automates software provisioning, configuration management and application deployment, core repeated practices in DevOps. For the class, Ansible addresses this concern in the configuration of either your host operating system or a VM, if you've chosen to execute the class from a Vagrant.
Ansible was open-sourced and then later subsumed by Red Hat.
There are other notable open-source "configuration management" tools, such as Chef and Puppet. Further, still there are others, such as BOSH and Salt, but they hold little or no community of practice or market share.
In his seminal essay, "The Cathedral and the Bazaar", Eric S. Raymond states
while coding remains an essentially solitary activity, the really great hacks come from harnessing the attention and brainpower of entire communities
You want to leverage the work of a vibrant community and not some backwater effort.
In Ansible, one defines playbooks to declaratively state the desired configuration of a host. Yes, utilizing declarative programming vice imperative programming. With declarative programming your code essentially describes what you want, but not how to get what you want. With imperative programming, one's code states what you want to happen step-by-step. The class will make use of Ansible, Kubernetes resource files and Helm charts to declare the desired end-state. These will be unpacked later in the class material. The truth is the two are often intermixed. Your Ansible playbooks can be a mix of declarative and imperative programming. One strives for the former rather than the latter.
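To make the distinction concrete, here is a rough sketch using git as a stand-in package on Arch Linux; the imperative form spells out the check and the step yourself, while the declarative form states the desired end-state and lets Ansible decide whether anything needs to happen:

```bash
# Imperative: you write the check and the action yourself
if ! command -v git >/dev/null 2>&1; then
  sudo pacman -S --noconfirm git
fi

# Declarative: state the end-state; Ansible figures out if any action is needed
ansible localhost -m ansible.builtin.package -a "name=git state=present" --become --ask-become-pass
```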
Each Ansible playbook is written in a YAML-based DSL (domain specific language) following the ansible-playbook schema enumerating all the tasks to be performed.
The playbooks for this class are located in the ansible/ project sub-folder
ansible
├── common.yaml
├── docker.yaml
├── files
│ ├── coc-settings.json
│ └── init.vim
├── go.yaml
├── inspec.yaml
├── inventory.yaml
├── main.yaml
├── neovim.yaml
├── pyenv.yaml
├── ruby.yaml
├── sonar-scanner-cli.yaml
├── template-shell-configs.yaml
├── templates
│ ├── bash_profile.tpl
│ ├── config.fish.tpl
│ └── zshrc.tpl
└── yay.yaml
Each playbook is responsible for a unit of configuration. ansible/files/ contains a number of files copied into the user space to configure the neovim editor.
It is also possible to collect these tasks into a collection referred to as a `role`. This class presently doesn't make use of roles.
The following sub-sections detail how to install Ansible. Skip to the section that applies for your supported host.
I prefer to install the Xcode Command Line tools myself, but you could skip this step and have HomeBrew install it for you.
- In iTerm2 enter the following into the command line:

xcode-select --install

It is possible your host may already have the Xcode Command Line Tools installed; if this is the case, you will be told so immediately and can skip to the next section.

- A dialog will pop up on the screen asking if you'd like to install the command line developer tools. Click `Install`.
- You will then be presented a License Agreement. After consulting your lawyer, click `Agree`.
- Wait for the download and install to complete, then click `Done`.
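If you're unsure whether the tools were already present, the following prints the active developer directory (e.g., /Library/Developer/CommandLineTools) when they are installed and errors out when they are not:

```bash
xcode-select -p   # print the path to the active developer directory
```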
Homebrew is, as the project refers to itself, "The Missing Package Manager for macOS." These days the project also tacks on "(or Linux)". A package manager automates the process of installing, upgrading, configuring, and removing binaries from an operating system.
I could have had the Ansible playbook install this dependency, but I'd rather you become familiar with the fact that there is in fact a community-driven package manager for OSX.
Install brew by performing the following:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Success resembles
Password:
==> This script will install:
/usr/local/bin/brew
/usr/local/share/doc/homebrew
/usr/local/share/man/man1/brew.1
/usr/local/share/zsh/site-functions/_brew
/usr/local/etc/bash_completion.d/brew
/usr/local/Homebrew
Press RETURN to continue or any other key to abort
==> /usr/bin/sudo /usr/sbin/chown -R nemonik:admin /usr/local/Homebrew
==> Downloading and installing Homebrew...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (8/8), done.
remote: Total 20 (delta 8), reused 8 (delta 8), pack-reused 12
Unpacking objects: 100% (20/20), 4.12 KiB | 175.00 KiB/s, done.
From https://github.com/Homebrew/brew
* [new branch] dependabot/bundler/Library/Homebrew/sorbet-0.5.6442 -> origin/dependabot/bundler/Library/Homebrew/sorbet-0.5.6442
Updating files: 100% (2703/2703), done.
HEAD is now at 63ed6da2c Merge pull request #11564 from cnnrmnn/new-maintainer-checklist-typo
Updated 2 taps (homebrew/core and homebrew/cask).
==> Installation successful!
==> Homebrew has enabled anonymous aggregate formulae and cask analytics.
Read the analytics documentation (and how to opt-out) here:
https://docs.brew.sh/Analytics
No analytics data has been sent yet (or will be during this `install` run).
==> Homebrew is run entirely by unpaid volunteers. Please consider donating:
https://github.com/Homebrew/brew#donations
==> Next steps:
- Run `brew help` to get started
- Further documentation:
https://docs.brew.sh
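Before moving on, it doesn't hurt to confirm brew landed on your PATH and is healthy:

```bash
brew --version   # confirm brew resolves on your PATH
brew doctor      # Homebrew's own sanity check of the installation
```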
On Arch ensure you have Python 3 and pip installed by performing the following in the shell
sudo pacman -Syu python3 python-pip
Output will resemble
:: Synchronizing package databases...
core 134.2 KiB 2033 KiB/s 00:00 [########################################] 100%
extra 1565.3 KiB 25.5 MiB/s 00:00 [########################################] 100%
community 5.6 MiB 58.3 MiB/s 00:00 [########################################] 100%
:: Starting full system upgrade...
resolving dependencies...
looking for conflicting packages...
Packages (2) python-3.9.6-1 python-pip-20.3.4-1
Total Installed Size: 54.20 MiB
Net Upgrade Size: 0.00 MiB
:: Proceed with installation? [Y/n] y
(2/2) checking keys in keyring [########################################] 100%
(2/2) checking package integrity [########################################] 100%
(2/2) loading package files [########################################] 100%
(2/2) checking for file conflicts [########################################] 100%
(2/2) checking available disk space [########################################] 100%
:: Processing package changes...
(1/2) reinstalling python [########################################] 100%
(2/2) reinstalling python-pip [########################################] 100%
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
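You can confirm the Arch packages are in place before moving on:

```bash
python --version          # Arch's python package provides Python 3
python -m pip --version   # pip should report the Python 3 it is bound to
```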
Rocky 8 already includes both Python 3 and pip, but Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12, so let's fix that
sudo dnf install python39 python39-pip -y
Output will resemble
Last metadata expiration check: 0:05:12 ago on Mon 06 Sep 2021 05:36:33 PM UTC.
Dependencies resolved.
========================================================================================================================================================================================================================================================================================
Package Architecture Version Repository Size
========================================================================================================================================================================================================================================================================================
Installing:
python39 x86_64 3.9.2-1.module+el8.4.0+574+843c4898 appstream 31 k
python39-pip noarch 20.2.4-3.module+el8.4.0+574+843c4898 appstream 2.0 M
Installing dependencies:
python39-libs x86_64 3.9.2-1.module+el8.4.0+574+843c4898 appstream 8.1 M
python39-pip-wheel noarch 20.2.4-3.module+el8.4.0+574+843c4898 appstream 1.3 M
python39-setuptools-wheel noarch 50.3.2-3.module+el8.4.0+574+843c4898 appstream 496 k
Installing weak dependencies:
python39-setuptools noarch 50.3.2-3.module+el8.4.0+574+843c4898 appstream 870 k
Transaction Summary
========================================================================================================================================================================================================================================================================================
Install 6 Packages
Total download size: 13 M
Installed size: 45 M
Downloading Packages:
(1/6): python39-3.9.2-1.module+el8.4.0+574+843c4898.x86_64.rpm 63 kB/s | 31 kB 00:00
(2/6): python39-pip-wheel-20.2.4-3.module+el8.4.0+574+843c4898.noarch.rpm 1.0 MB/s | 1.3 MB 00:01
(3/6): python39-pip-20.2.4-3.module+el8.4.0+574+843c4898.noarch.rpm 1.0 MB/s | 2.0 MB 00:01
(4/6): python39-setuptools-50.3.2-3.module+el8.4.0+574+843c4898.noarch.rpm 1.0 MB/s | 870 kB 00:00
(5/6): python39-setuptools-wheel-50.3.2-3.module+el8.4.0+574+843c4898.noarch.rpm 370 kB/s | 496 kB 00:01
(6/6): python39-libs-3.9.2-1.module+el8.4.0+574+843c4898.x86_64.rpm 1.7 MB/s | 8.1 MB 00:04
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 2.6 MB/s | 13 MB 00:05
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : python39-setuptools-wheel-50.3.2-3.module+el8.4.0+574+843c4898.noarch 1/6
Installing : python39-pip-wheel-20.2.4-3.module+el8.4.0+574+843c4898.noarch 2/6
Installing : python39-libs-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 3/6
Installing : python39-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 4/6
Running scriptlet: python39-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 4/6
Installing : python39-setuptools-50.3.2-3.module+el8.4.0+574+843c4898.noarch 5/6
Running scriptlet: python39-setuptools-50.3.2-3.module+el8.4.0+574+843c4898.noarch 5/6
Installing : python39-pip-20.2.4-3.module+el8.4.0+574+843c4898.noarch 6/6
Running scriptlet: python39-pip-20.2.4-3.module+el8.4.0+574+843c4898.noarch 6/6
Verifying : python39-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 1/6
Verifying : python39-libs-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 2/6
Verifying : python39-pip-20.2.4-3.module+el8.4.0+574+843c4898.noarch 3/6
Verifying : python39-pip-wheel-20.2.4-3.module+el8.4.0+574+843c4898.noarch 4/6
Verifying : python39-setuptools-50.3.2-3.module+el8.4.0+574+843c4898.noarch 5/6
Verifying : python39-setuptools-wheel-50.3.2-3.module+el8.4.0+574+843c4898.noarch 6/6
Installed:
python39-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 python39-libs-3.9.2-1.module+el8.4.0+574+843c4898.x86_64 python39-pip-20.2.4-3.module+el8.4.0+574+843c4898.noarch python39-pip-wheel-20.2.4-3.module+el8.4.0+574+843c4898.noarch
python39-setuptools-50.3.2-3.module+el8.4.0+574+843c4898.noarch python39-setuptools-wheel-50.3.2-3.module+el8.4.0+574+843c4898.noarch
Complete!
As we have multiple Python versions installed, we have to configure Rocky to default to version 3.9. We can do that by performing the following
sudo update-alternatives --set python /usr/bin/python3.9
sudo update-alternatives --set python3 /usr/bin/python3.9
We can verify
python --version
python3 --version
Output will resemble for both
Python 3.9.2
Also, let's update pip
python -m pip install --upgrade pip --user
Output will resemble
Collecting pip
Downloading pip-21.2.4-py3-none-any.whl (1.6 MB)
|████████████████████████████████| 1.6 MB 484 kB/s
Installing collected packages: pip
Successfully installed pip-21.2.4
Ansible is based on Python and is distributed as a Python module that you can install with pip. Pip refers to itself as "the package installer for Python". There are others, but most everyone uses pip.
In the command line perform the following task:
Type the following
python3 -m pip install --user ansible
This will install the Ansible module into the Python user install directory for your platform. Typically this results in the Ansible binaries being installed into the `.local/bin` sub-folder in the user's home directory (i.e., `$HOME/.local/bin`).
Output will resemble
In order to use the paramiko connection plugin or modules that require paramiko, install paramiko
python3 -m pip install paramiko
Output will resemble
Ansible is now installed in your home directory in the `$HOME/.local/bin` path, where `$HOME` is an environment variable holding the path to your home directory.
But if you enter the following into the shell
which ansible-playbook
the output will likely be
ansible-playbook not found
The `which` command will attempt to locate a program file in the user's path.
You are likely using the Bash shell at this point. To check, type `echo $SHELL` into your shell. Depending on what is returned, perform what is appropriate for your shell.
If your shell is Bash:
if [[ "$OSTYPE" == "darwin"* ]] && [[ "${PATH}" != *"$HOME/Library/Python/3.8/bin"* ]]; then echo 'export PATH=$HOME/Library/Python/3.8/bin:$PATH' >> ~/.bash_profile; fi
if [[ "${PATH}" != *"/usr/local/bin"* ]]; then echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bash_profile; fi
if [[ "${PATH}" != *"$HOME/.local/bin"* ]]; then echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bash_profile; fi
source ~/.bash_profile
If your shell is Zsh:
if [[ "$OSTYPE" == "darwin"* ]] && [[ "${PATH}" != *"$HOME/Library/Python/3.8/bin"* ]]; then echo 'export PATH=$HOME/Library/Python/3.8/bin:$PATH' >> ~/.zshrc; fi
if [[ "${PATH}" != *"/usr/local/bin"* ]]; then echo 'export PATH=/usr/local/bin:$PATH' >> ~/.zshrc; fi
if [[ "${PATH}" != *"$HOME/.local/bin"* ]]; then echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc; fi
source ~/.zshrc
If you're using fish:
echo 'set -U fish_user_paths $HOME/Library/Python/3.8/bin $fish_user_paths' >> ~/.config/fish/config.fish
echo 'set -U fish_user_paths $HOME/.local/bin $fish_user_paths' >> ~/.config/fish/config.fish
echo 'set -U fish_user_paths /usr/local/bin $fish_user_paths' >> ~/.config/fish/config.fish
source ~/.config/fish/config.fish
to add the Ansible executables, and the binaries the Ansible automation may install, to your path. The `PATH` environment variable is a list of directories that your shell searches through when you enter a command.
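If you're curious what your shell will search after the change, you can print the entries one per line:

```bash
# Bash/Zsh; fish users can instead run: printf '%s\n' $PATH
echo "$PATH" | tr ':' '\n'
```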
Now that we've updated our `PATH` and sourced our shell configuration, thereby updating our present shell, we can verify `ansible` has been installed via
ansible-playbook --version
Output will resemble
ansible-playbook [core 2.11.4]
config file = None
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.6/site-packages/ansible
ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
executable location = /home/vagrant/.local/bin/ansible-playbook
python version = 3.6.8 (default, May 19 2021, 03:00:47) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 3.0.1
libyaml = True
Let's test to see if Ansible works on our host by executing
ansible localhost -m ping
Output should resemble
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | SUCCESS => {
"changed": false,
"ping": "pong"
}
The fact that `ping` returns `pong` indicates Ansible has been installed correctly.
You will need to perform the following in the shell to add the community.general Ansible collection from Ansible Galaxy, Ansible's official hub for sharing Ansible content.
For example, on an OSX host we'll need this to install `brew` packages, and on Arch Linux we'll need it to install operating system packages via the pacman package manager.
ansible-galaxy collection install community.general
Successful output should resemble
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/download/community-general-3.3.1.tar.gz to /home/nemonik/.ansible/tmp/ansible-local-1333943849hwygv/tmplq_el1ud/community-general-3.3.1-d99hf7_o
Installing 'community.general:3.3.1' to '/home/nemonik/.ansible/collections/ansible_collections/community/general'
community.general:3.3.1 was installed successfully
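You can confirm the collection landed where Ansible expects it:

```bash
ansible-galaxy collection list community.general   # lists the installed collection and its version
```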
The rest of the class will require a number of operating system dependencies be installed. We will accomplish this by executing the ./ansible/main.yaml playbook.
First we will need to git clone the class repository by performing the following in your shell
mkdir -p $HOME/Development/workspace
cd $HOME/Development/workspace
git clone https://github.com/nemonik/hands-on-DevOps-gen2.git
cd hands-on-DevOps-gen2
If you haven't reviewed the playbooks, now is a good time to do so.
Enter the project's ansible/ sub-folder
cd ansible
The `ansible-playbook` command will execute a series of playbooks across an inventory of hosts. For the class we have just one host and this is `localhost`, your host.
The contents of the `inventory.yaml` file in the `ansible` folder are written in YAML following the Ansible Inventory schema and resemble
all:
children:
factory:
hosts:
localhost:
ansible_connection: local
ansible_host: localhost
ansible_python_interpreter: /usr/bin/python3
vars:
default_delay: 10
default_retries: 60
ruby_version: 3.0.1
supported_host_os:
- MacOSX
- Archlinux
ungrouped: {}
Where

- `factory` is a group name used in classifying hosts.
- `localhost` is the alias for your host.
- `ansible_host`, `ansible_connection` and `ansible_python_interpreter` are behavioral inventory parameters used to control how Ansible interacts with your host.
  - `ansible_host` describes the name of the host to connect to. This parameter is redundant in this instance.
  - `ansible_connection` describes the connection type used to connect to the host. In our case this is set to `local`. Another type would be `ssh` if the host were remote.
  - `ansible_python_interpreter` describes the target host's Python path. Python is "batteries included" on OSX and will be found on Arch Linux.
- `vars` describes a number of variables used across the playbooks: `default_delay`, `default_retries`, `ruby_version` and `supported_host_os`.
  - `default_delay` sets the number of seconds to delay between retries.
  - `default_retries` sets the number of times a task will be retried before Ansible gives up.
  - `ruby_version` sets the Ruby version to install, and
  - `supported_host_os` is a list of supported operating systems.
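If you want to see how Ansible parses this inventory before running anything, two quick checks (run from the ansible/ folder) are:

```bash
ansible-inventory -i inventory.yaml --list   # dump the parsed inventory as JSON
ansible -i inventory.yaml factory -m ping    # ping every host in the factory group (just localhost here)
```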
A playbook is composed of one or more plays in an ordered list, where plays are executed in order from top to bottom. Most Ansible modules (also referred to as "task plugins" or "library plugins") check whether the desired state has already been achieved and the playbook will move on without performing any actions once the desired state has been achieved. This is referred to as being idempotent.
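Idempotence is easy to see for yourself with a throwaway ad-hoc command; run the same desired state twice and the second run reports "ok" rather than "changed" (the temporary directory here is just an example):

```bash
ansible localhost -m ansible.builtin.file -a "path=/tmp/idempotence-demo state=directory"  # first run reports "changed"
ansible localhost -m ansible.builtin.file -a "path=/tmp/idempotence-demo state=directory"  # second run reports "ok"; nothing to do
```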
Let's look at the ansible/main.yaml playbook.
---
- name: Install factory dependencies
hosts: factory
tasks:
- name: Echo ansible distribution
ansible.builtin.debug:
msg: "{{ inventory_hostname }} host is running {{ ansible_distribution }}:{{ ansible_distribution_release }} with an IP address if {{ ansible_default_ipv4.address }}"
- name: Fail if OS is not MacOSX or ArchLinux
ansible.builtin.fail:
msg: "{{ ansible_distribution }} - {{ ansible_distribution_release }} is not MacOSX or ArchLinux"
when: ansible_distribution not in supported_host_os
The lines above name the playbook and state the group name (in this case `factory`, correlated with the inventory) this playbook will be applied to, followed by two tasks that will be executed.
The first task, `ansible.builtin.debug`, is used for debugging purposes. The second, `ansible.builtin.fail`, tests whether or not the host is supported.
We can retrieve the documentation for both in the shell using the `ansible-doc` command. For example, if we enter

ansible-doc ansible.builtin.debug

into the shell, it will return
> ANSIBLE.BUILTIN.DEBUG (/Users/nemonik/.local/lib/python3.9/site-packages/ansible/modules/debug.py)
This module prints statements during execution and can be useful for debugging variables or
expressions without necessarily halting the playbook. Useful for debugging together with the
'when:' directive. This module is also supported for Windows targets.
* note: This module has a corresponding action plugin.
OPTIONS (= is mandatory):
- msg
The customized message that is printed. If omitted, prints a generic message.
[Default: Hello world!]
type: str
- var
A variable name to debug.
Mutually exclusive with the `msg' option.
Be aware that this option already runs in Jinja2 context and has an implicit `{{ }}' wrapping,
so you should not be using Jinja2 delimiters unless you are looking for double interpolation.
[Default: (null)]
type: str
- verbosity
A number that controls when the debug is run, if you set to 3 it will only run debug when -vvv
or above.
[Default: 0]
type: int
version_added: 2.1
version_added_collection: ansible.builtin
NOTES:
* This module is also supported for Windows targets.
SEE ALSO:
* Module ansible.builtin.assert
The official documentation on the ansible.builtin.assert module.
https://docs.ansible.com/ansible/2.11/modules/ansible.builtin.assert_module.html
* Module ansible.builtin.fail
The official documentation on the ansible.builtin.fail module.
https://docs.ansible.com/ansible/2.11/modules/ansible.builtin.fail_module.html
AUTHOR: Dag Wieers (@dagwieers), Michael DeHaan
VERSION_ADDED_COLLECTION: ansible.builtin
EXAMPLES:
- name: Print the gateway for each host when defined
ansible.builtin.debug:
msg: System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}
when: ansible_default_ipv4.gateway is defined
- name: Get uptime information
ansible.builtin.shell: /usr/bin/uptime
register: result
- name: Print return information from the previous task
ansible.builtin.debug:
var: result
verbosity: 2
- name: Display all variables/facts known for a host
ansible.builtin.debug:
var: hostvars[inventory_hostname]
verbosity: 4
- name: Prints two lines of messages, but only if there is an environment value set
ansible.builtin.debug:
msg:
- "Provisioning based on YOUR_KEY which is: {{ lookup('env', 'YOUR_KEY') }}"
- "These servers were built using the password of '{{ password_used }}'. Please retain this for later use."
And the playbook continues on to importing and executing each of the following playbooks
- name: When ArchLinux ensure Docker is installed
ansible.builtin.import_playbook: docker.yaml
- name: When ArchLinuc ensure yay AUR helper is installed
ansible.builtin.import_playbook: yay.yaml
- name: Ensure common dependencies are installed
ansible.builtin.import_playbook: common.yaml
- name: Ensure pyenv is installed and configured
ansible.builtin.import_playbook: pyenv.yaml
- name: Ensure sonar-scanner cli is installed and configured
ansible.builtin.import_playbook: sonar-scanner-cli.yaml
- name: Ensure rvm and ruby {{ ruby_version }} is installed
ansible.builtin.import_playbook: ruby.yaml
- name: Ensure InSpec is installed
ansible.builtin.import_playbook: inspec.yaml
- name: Ensure neovim is installed and configured
ansible.builtin.import_playbook: neovim.yaml
- name: Ensure Go is installed and configured
ansible.builtin.import_playbook: go.yaml
- name: Template in shell configs
ansible.builtin.import_playbook: template-shell-configs.yaml
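If you'd like a preview of everything main.yaml will do before running it, ansible-playbook can list the tasks without executing them (a sketch, assuming you run it from the ansible/ folder):

```bash
ansible-playbook -i inventory.yaml main.yaml --list-tasks   # print the plays and tasks without making changes
```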
I'd encourage you to review them all, but let's look at a portion of the first to be imported and executed, the ./ansible/common.yaml playbook
---
- name: Ensure common dependencies are installed
hosts: factory
Again, the playbook is named and states the group name this playbook will be applied to.
tasks:
- name: Set fact for $HOME
ansible.builtin.set_fact:
HOME: "{{ lookup('env', 'HOME') }}"
Followed by the execution of a number of tasks, the first being to create a fact to hold the HOME environment variable.
- name: When MacOSX ensure Homebrew packages are installed
block:
- name: Update homebrew and upgrade all packages
community.general.homebrew:
update_homebrew: yes
upgrade_all: yes
- name: Check if /usr/local/Cellar/bash-completion exists
ansible.builtin.stat:
path: /usr/local/Cellar/bash-completion
register: bash_completion
- name: Ensure bash-completion is not installed, so bash-completion@2 can be installed
ansible.builtin.shell: brew unlink bash-completion
when: bash_completion.stat.exists
- name: Ensure HomeBrew packages are installed
community.general.homebrew:
name:
- bash
- bash-completion@2
- zsh
- zsh-completion
- fish
- vim
- nano
- pwgen
- openssl
- watch
- gettext
- k3d
- helm
- curl
- wget
- git-secrets
- tmux
- yamllint
- jq
- tree
- htop
- kubectl
state: latest
retries: "{{ default_retries }}"
delay: "{{ default_delay }}"
register: result
until: result is succeeded
- name: Get HOMEBREW_PREFIX
block:
- name: Execute brew --prefix
ansible.builtin.shell: brew --prefix
register: brew_prefix
- name: Create brew_fact with stdout of of prior command
ansible.builtin.set_fact:
HOMEBREW_PREFIX: "{{ brew_prefix.stdout }}"
when: ( ansible_distribution == 'MacOSX' )
First, the `when` condition will be evaluated to determine if the host to be "ansible-ized" is running OSX before Ansible runs each of the tasks in the block.

The block collects the following tasks:

- `community.general.homebrew` is used to update the host's installed packages,
- `ansible.builtin.stat` checks and holds in a register whether or not the `/usr/local/Cellar/bash-completion` path exists on the host.
- `ansible.builtin.shell` executes `brew` to unlink `bash-completion`, thereby ensuring the wrong `bash-completion` package is not installed, but only if the path had been found.
- `community.general.homebrew` tries repeatedly to install a list of HomeBrew packages until they're installed or the maximum number of retries is reached.
- A sub-block ends out the run with `ansible.builtin.shell` executing `brew --prefix` and storing the result in a register, and `ansible.builtin.set_fact` holding the standard output (stdout) of the `brew --prefix` command in an Ansible fact, `HOMEBREW_PREFIX`.
The ./ansible/common.yaml playbook continues until completion and then you are returned to the ./ansible/main.yaml to execute the next playbook. I'd encourage you to review each.
If you're on LinkedIn or search many of the job boards, you'll find many employers equate infrastructure-as-code with DevOps. Infrastructure-as-code is a DevOps methodology but not the entirety of DevOps.
Now that we've reviewed the playbook, let's execute it via the Make target `install-dependencies` in the root of the project in our shell
if [[ "$OSTYPE" == "darwin"* ]]; then brew install bash; /usr/local/bin/bash; fi
export PATH="$HOME/.local/bin:/usr/local/bin:$PATH"
cd $HOME/Development/workspace/hands-on-DevOps-gen2
make install-dependencies
NOTES
- Pay attention to the playbook's run as it may stop to ask you for your password.
- The password asked for out of the gate is needed so that Ansible can become root and install system-wide packages and software.
- You may be asked again when installing fonts, so keep an eye out for this.
- The first line runs only if you're on OSX, to install Bash 5. OSX ships with Bash 3.2.57(1)-release.
- Some tasks are long running. I've put debug statements prior to these tasks. Look for them if you think Ansible has frozen. It likely has not.
- How long this takes to run is dependent on the speed of your Internet connection.
The output should resemble
The last bit of output is important
META: ran handlers
META: ran handlers
PLAY RECAP ******************************************************************************************************************************************************************************
localhost : ok=135 changed=65 unreachable=0 failed=0 skipped=34 rescued=0 ignored=1
If `failed` equals something other than `0` then you have an issue to debug. Debugging will require you to review the task that resulted in the failure, likely the last task run. Review the output, determine what playbook you were in, open and review the playbook and the offending task, and then try the equivalent in the command line to debug the issue. The host may be in a state the playbook cannot handle. Perhaps a dependency is missing. Perhaps. Perhaps. Perhaps.
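One way to dig in is to re-run the automation directly with Ansible's verbosity turned up; this assumes the Make target simply wraps the main.yaml playbook, so treat the exact invocation as a sketch:

```bash
cd $HOME/Development/workspace/hands-on-DevOps-gen2/ansible
ansible-playbook -i inventory.yaml main.yaml -vvv   # add --ask-become-pass if it needs your password
```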
iTerm2 must be further configured to benefit from the Nerd Fonts installed by Ansible. We're going to configure iTerm2 to use `Meslo Nerd Font` and the `Solarized Dark` color theme. Optionally, you can select another Nerd Font.
- Open iTerm2's `Preferences`.
- In the `Preferences` window that opens, select `Profiles`.
- In the `Default` profile, select `Text`.
- In the `Text` profile, select `Meslo Nerd Font` from the `Font` panel.
- In the `Preferences` window, select `Colors`.
- Click `Color Presets...` and select `Solarized Dark`.
- Close the `Preferences` window, and restart your terminal window for your changes to take effect.
- Close iTerm2 and restart it.
The Gnome Terminal must be further configured to benefit from the Nerd Fonts installed by Ansible. We're going to configure Terminal to use `Meslo Nerd Font` and the `Solarized Dark` color theme. Optionally, you can select another Nerd Font.
- Open Terminal's `Preferences`.
- In the `Preferences` window that opens, select the `Unnamed` profile.
- In the `Text` panel, check off `Custom font` and select `Meslo Nerd Font`. Size as per your eyesight.
- In the `Colors` panel, uncheck `Use colors from system theme` and select `Solarized dark` from the available built-in themes.
- Close the `Preferences` window.
- Close and restart Gnome Terminal and your Terminal should have updated as per your selections.
The class automation will attempt to configure Bash, Zsh and fish, but let's try something perhaps new. Further information on fish can be found in its documentation, but essentially it provides syntax highlighting, autosuggestions, and tab completion along with some other improvements that in my opinion push it past my prior shell, Zsh.
Let's use fish as our shell
On OSX type
sudo chsh -s $(which fish) $(whoami)
On Arch Linux
chsh -s $(which fish) $(whoami)
sudo reboot
On OSX, simply closing your current terminal and opening a new one should be enough for the change to take effect, but on Arch you appear to have to log out or reboot.
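To double-check the change took effect, after opening a new terminal (or logging back in on Arch) you can confirm fish is now your login shell:

```bash
echo $SHELL      # should print the path to fish, e.g. /usr/bin/fish or /usr/local/bin/fish
which fish       # compare against the path you handed to chsh
```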
NOTE
- Effort was taken to support Bash, Zsh, and fish, but preference was given to fish, so if there are flaws in the course they'll likely be discovered using Bash and Zsh.
If you are going to use the fish shell consider using the tide prompt. To configure
tide configure
NOTE
- A shout out to @IlanCosman for helping me figure out my fish path issue.
The class automation will attempt to configure neovim (nvim), installing junegunn/vim-plug (a plugin manager) and a number of additional plugins to include a language server.
The plugins installed:
- tpope/vim-commentary to comment stuff out; when in nvim's "normal" mode you just type `gcc`.
- junegunn/vim-easy-align to align text when in "visual" mode.
- ctrlpvim/ctrlp.vim to provide a finder.
- preservim/nerdtree to provide a file tree explorer; when in "normal" mode press the `space` key followed by the `n` key.
- Xuyuanp/nerdtree-git-plugin to extend NERDTree to show git status.
- tiagofumo/vim-nerdtree-syntax-highlight to extend NERDTree with extra syntax and highlighting.
- airblade/vim-gitgutter to show in the sign column which lines have been added, modified, or removed when in a git repository.
- tpope/vim-fugitive to provide git command functionality in the editor.
- vim-airline/vim-airline to provide a status/tabline.
- vim-airline/vim-airline-themes to provide themes for the vim-airline status/tabline.
- preservim/nerdcommenter to comment stuff out. I haven't settled on whether I like this plugin or tpope/vim-commentary. To comment out a line in either normal/visual mode type `space` followed by `cc`.
- NLKNguyen/papercolor-theme to provide both light and dark schemes. I've selected the `PaperColor` color scheme later in the ansible/files/init.vim.
- fatih/vim-go to install the official Go development plugin.
- neoclide/coc.nvim to install and configure a language server. I've configured the language server to install a number of extensions:
  - coc-go, a Go language server extension using `gopls`.
  - coc-pyright, a Python3 language server extension.
  - coc-solargraph, a Ruby language server extension using `solargraph`.
  - coc-spell-checker, a basic spell checker that works with camelCase code.
  - coc-json, a JSON language server extension.
  - coc-yaml, a fork of vscode-yaml that provides a YAML language server extension.
  - coc-angular, an Angular language server extension.
  - coc-html, an HTML language server extension.
  - coc-snippets to provide a snippets solution.
  - coc-prettier, a Coc extension to format a number of file types using prettier. To run the prettier command-line interface enter `npx prettier` in the shell.
I'll be honest I use most of these plugins and extensions, some more than others. A few I have yet to fully learn.
When you first start `nvim` on the command line you will be greeted with a number of warning/error messages. This is because the ansible/files/init.vim copied to your ~/.config/nvim/init.vim is pre-configured to use plugins yet to be installed.
You must now install them while in normal mode. Just press `esc` to get past the error messages and then type `:PlugInstall` and junegunn/vim-plug will install the plugins described above. The coc language server should then take over and install its own extensions, but you may have to close the status window (type `:q!`) and then restart `nvim` for the language server to install its extensions. Pressing `:q!` will close Coc's status window.
I would really love to teach you about `vi`, `vim` and `nvim`, but doing so is really outside the scope of this class. I've really been an avid nano user, but capable of using `vi` in a pinch as it is almost always guaranteed to be installed on a Unix-like operating system. Neovim pushed me over the cliff to use it full time, and I'm focused on learning `nvim`. This is why `nano` is aliased to `nvim`, so if you enter `nano` into the shell it will start `nvim` instead. You can override this by typing nano's full path, /usr/local/bin/nano on OSX and /usr/bin/nano on Arch. You can strip the alias out of your shell initialization file or, for a session, map it back using the `alias` command.
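For example, a couple of ways to get the real nano back for the current session (the paths are the ones mentioned above; adjust for your platform):

```bash
/usr/bin/nano somefile.txt    # bypass the alias once by using the full path (Arch)
unalias nano                  # or drop the alias for the current Bash/Zsh session
alias nano=/usr/bin/nano      # or point the alias back at the real binary for the session
# In fish, aliases are functions, so `functions -e nano` removes it instead.
```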
`vi`, `vim` and `nvim` pointers:
- I'd encourage you to read vim's docs.
- Read through the FAQ.
- Give VIM Adventures some play.
- Print out Allison McKnight's cheat sheet or search the Internet for another.
We will be using `nvim` in the class, but I won't know if you're using something else.
So, now that you have the prerequisite dependencies, it is time to move on to spinning up the factory.
The factory tools are executed entirely on a containerized Kubernetes cluster hosted on k3s (Rancher Lab's minimal Kubernetes distribution) running on Docker and created by k3d. Kubernetes is used to orchestrate the life cycles of the long-running tools (e.g., Taiga, GitLab, Drone CI, SonarQube, Heimdall 2). Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Essentially, Kubernetes serves as an operating system for a cluster of computing resources (in the case of k3d these computing resources are themselves containers) and manages the life cycle and discovery of the applications running upon the cluster.
Initially, k3s was billed as a light-weight (its binary, at less than 40 MB, completely implements the Kubernetes API), fully CNCF-certified Kubernetes distribution designed for resource-constrained environments (it can run on a host with as little as 512 MB of RAM), not needing the added steps and dependencies a full Kubernetes cluster would require. Keep in mind memory and compute equate to money when it comes to the cloud, so why burn memory and compute (err, money) when you don't have to. It installs in a fraction of the time it takes to launch a canonical Kubernetes cluster. As k3s has matured it has become just a darn good Kubernetes distribution.
Its canonical source can be found at
Its landing page can be found here
The official documentation can be found here
https://rancher.com/docs/k3s/latest/en/
I've chosen to author the automation for spinning up the factory in GNU Make. GNU Make bills itself as "a tool which controls the generation of executables and other non-source files of a program from the program's source files." Created by Stuart Feldman, Make was introduced in PWB/UNIX and has been around since 1976. Yep, over 45 years ago. Initially, its purpose was to automate software builds. Yeah, automation, one of the core methods of DevOps, has been around quite a long time. I've chosen to use Make since the inception of my class to drive this point home. There's always a few "grey beards" in my class that perk up and smile after hearing it mentioned. Make lends itself well to the task of spinning up the cluster, the tools, etc., as a makefile is essentially a collection of rules. An individual rule in the makefile tells Make how to execute a series of commands. The ./Makefile is found at the root of the repository. As I stated earlier, typically Make is utilized for building code, but because of its ubiquity across Linux and OSX it is often used for a wide variety of tasks. We're going to use it to stand up a Kubernetes cluster and atop that an entire DevOps factory. Maybe this was a wrong decision. Only time will tell.
First let's inspect the Makefile in piecemeal.
# Copyright (C) 2021 Michael Joseph Walsh - All Rights Reserved
# You may use, distribute and modify this code under the
# terms of the the license.
#
# You should have received a copy of the license with
# this file. If not, please email <[email protected]>
The above is the copyright. The BSD 3-clause license allows you nearly unlimited freedom with the course material so long as you include the BSD copyright and license notice. I cannot be held responsible if you damage your host, for example. You may also not use my name in the endorsement of derived products.
.PHONY: all install-dependencies pull-class-images install-k3s-air-gap-image start install start-registry delete-registry start-pullthrough stop-pullthrough uninstall-pullthrough start-cluster delete-cluster patch-coredns install-traefik uninstall-traefik install-gitlab uninstall-gitlab install-drone uninstall-drone install-taiga uninstall-taiga install-sonarqube uninstall-sonarqube install-heimdall uninstall-heimdall install-plantuml uninstall-plantuml decrypt-vault encrypt-vault load-cached-images
Generally, a Makefile is comprised of rules that look like this
target [optionally, additional targets...] : prerequisite [optionally, additional prerequisite]
recipe
...
Each rule begins with a line that defines usually one, sometimes more than one, target followed by a colon and optionally a number of files or targets on which the target depends, followed by a recipe comprised of one or more tab-indented lines. When building source code, the target is a file, but in the instance where you want your makefile to run a series of commands that do not represent physical files on the file system you are executing what Make considers a "phony" target. Phony targets are the name of the recipe. GNU Make provides a built-in target named `.PHONY` where you can make your target a prerequisite of it, thereby declaring your target to be phony. This is what I've done above for all of my targets. The `.PHONY` line could be skipped and the makefile would still work, but including the line makes the makefile more readable once you know what a phony target is.
What follows next are the makefile's rules
all: install-dependencies start install
start: start-pullthrough start-registry install-k3s-air-gap-image pull-class-images start-cluster patch-coredns
install: install-traefik install-gitlab install-drone install-taiga install-sonarqube install-heimdall install-plantuml
uninstall: delete-cluster
install-dependencies:
./install_dependencies.sh
start-pullthrough:
cd pullthrough-registry && ./install.sh
stop-pullthrough:
cd pullthrough-registry && ./stop.sh
uninstall-pullthrough:
cd pullthrough-registry && ./uninstall.sh
start-registry:
./start_registry.sh
delete-registry:
./delete_registry.sh
pull-class-images:
./pull_class_images.sh
install-k3s-air-gap-image:
cd k3s-air-gap-image && ./install.sh
start-cluster:
./start_cluster.sh
delete-cluster:
./delete_cluster.sh
patch-coredns:
cd coredns && ./patch.sh
install-traefik:
cd traefik && ./install.sh
uninstall-traefik:
cd traefik && ./uninstall.sh
install-gitlab:
cd gitlab && ./install.sh
uninstall-gitlab:
cd gitlab && ./uninstall.sh
install-drone:
cd drone && ./install.sh
uninstall-drone:
cd drone && ./uninstall.sh
install-taiga:
cd taiga && ./install.sh
uninstall-taiga:
cd taiga && ./uninstall.sh
install-sonarqube:
cd sonarqube && ./install.sh
uninstall-sonarqube:
cd sonarqube && ./uninstall.sh
install-heimdall:
cd heimdall2 && ./install.sh
uninstall-heimdall:
cd heimdall2 && ./uninstall.sh
install-plantuml:
cd plantuml-server && ./install.sh
uninstall-plantuml:
cd plantuml-server && ./uninstall.sh
load-cached-images:
./load_cached_containers.sh
decrypt-vault:
./decrypt-vault.sh
encrypt-vault:
./encrypt-vault.sh
Earlier we entered `make install-dependencies` in our shell to install all the dependencies via Ansible. Well, we can see above the `install-dependencies` target executes the ./install_dependencies.sh shell script.
In the shell, if you were to enter `make all`, Make would execute the `all` target, which will in turn
- execute the `start` target, which in turn will call
  - the `start-pullthrough` target, which will descend into the ./pullthrough-registry sub-folder and execute the install.sh Bash script
  - the `start-registry` target, which will execute the ./start_registry.sh Bash script
  - the `install-k3s-air-gap-image` target, which will descend into the ./k3s-air-gap-image sub-folder and execute the install.sh Bash script
  - the `pull-class-images` target, which will execute the ./pull_class_images.sh Bash script
  - the `start-cluster` target, which will execute the ./start_cluster.sh script
  - and then finally the `patch-coredns` target, which will descend into the ./coredns sub-folder and execute the patch.sh script
- then execute the `install` target, which will call
  - the `install-traefik` target to descend into the ./traefik sub-folder and execute the install.sh script
  - then the `install-gitlab` target to descend into the ./gitlab sub-folder and execute the install.sh script
  - then the `install-drone` target to descend into the ./drone sub-folder and execute the install.sh script
  - then the `install-taiga` target to descend into the ./taiga sub-folder and execute the install.sh script
  - then the `install-sonarqube` target to descend into the ./sonarqube sub-folder and execute the install.sh script
  - then the `install-heimdall` target to descend into the ./heimdall2 sub-folder and execute the install.sh script
  - and finally, the `install-plantuml` target to descend into the ./plantuml-server sub-folder and execute the install.sh script
The factory will pull a great number of images. Docker permits anonymous free users the ability to pull 100 images from the docker.io container registry per six hours and authenticated free users a total of 200 pulls per six hours. You can sign up for a free authenticated account at https://hub.docker.com/signup.
Back in your shell you can use the docker CLI to log in. The synopsis is `docker login [OPTIONS] [SERVER]`. If you don't provide a `[SERVER]`, it is assumed your intention is to log into Docker's public registry by default.
docker login -u <your username>
In the above example you are providing your username out of the gate.
If during the class you encounter errors where you cannot pull the necessary images, consider paying the 7 dollars for a Pro account (https://www.docker.com/pricing). It is 7 dollars if you pay monthly or 5 dollars/month if you pay 60 dollars for a year up front. With the Pro account you can make 5,000 pulls on docker.io's container registry per day.
The class can be configured to make use of a pull through registry to mitigate the need, but really nothing can be done if your anonymous image requests come from a private network with other anonymous users doing the same. A pull through registry will cache the image you request, so the next time you request the same image the registry will pull the image from the cache vice docker.io.
To enable the pull through registry, edit the ./.env file at the root of the project and enable the pull through registry via
nvim ./.env
then scroll until you see
## pullthrough container registry
pullthrough_registry_enabled=false
and setting `pullthrough_registry_enabled` equal to `true`.
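If you prefer a one-liner to hand-editing, something like the following should flip the flag; this is a sketch that assumes the variable appears exactly once in ./.env as shown above:

```bash
# Flip pullthrough_registry_enabled from false to true in the dotenv file.
# (On OSX's BSD sed use: sed -i '' 's/.../.../' ./.env)
sed -i 's/^pullthrough_registry_enabled=false/pullthrough_registry_enabled=true/' ./.env
grep pullthrough_registry_enabled ./.env   # verify the change
```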
You will then need to configure the Docker daemon to use it.
- Open the Docker Desktop dashboard.
- Select `Settings` (the gear icon on the upper-right).
- Select `Docker Engine`.
- In the box under `Configure the Docker daemon by typing a json Docker daemon configuration file.` add the line

      "registry-mirrors": [ "http://localhost:5001" ],

  so that the configuration resembles

      {
        "registry-mirrors": [ "http://localhost:5001" ],
        "builder": {
          "gc": {
            "defaultKeepStorage": "20GB",
            "enabled": true
          }
        },
        "debug": true,
        "experimental": false
      }
If you are using Arch Linux, edit `/etc/docker/daemon.json` and add the same line. I'll update the Ansible at a later date to do this for you.
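On Arch, that could look like the following sketch; it assumes /etc/docker/daemon.json is otherwise empty, so merge the key into your existing file if you already have one:

```bash
# Point the Docker daemon at the pull through registry and restart it.
sudo tee /etc/docker/daemon.json > /dev/null << 'EOF'
{
  "registry-mirrors": [ "http://localhost:5001" ]
}
EOF
sudo systemctl restart docker

# Once the pull through registry is running, poke its v2 API to confirm it responds.
curl http://localhost:5001/v2/_catalog
```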
The class will also cache all its images into the ./image_cache folder, which is empty when you clone the class. You can re-install these images into your local Docker cache via
~/Development/workspace/hands-on-DevOps-gen2/load_cached_images.sh
K3s will pull the images it needs to run directly from docker.io when using the canonical container image. I've provided a Dockerfile in ./k3s-air-gap-image that will build these images into the container. The K3s project provides all of these images as a tarball.
To use this class:
- I will have provided you the password to decrypt the vault file containing the Let's Encrypt wildcard SSL certificate and private key for the nemonik.com domain, or
- You will need to own a domain for which you can register a certificate and then place the full certificate chain and key into the vault file as I did.
You likely will fall into the latter category, as I don't typically hand out this password, and so you will need to perform the following. If I'm teaching you this class in person I likely will provide the vault password, and so you can skip to the following section on starting the cluster.
If you have your own domain, you can generate a Let's Encrypt wildcard SSL certificate for free using Certbot, provided by the Electronic Frontier Foundation (EFF). If you don't have a domain, how to register a domain is outside the scope of this class. Might I suggest you use dynadot.com. My Refer-a-friend code is `06U759R6b8a9D9H`; you can enter this code upon registering your domain for a $5 account credit.
I've used the term wildcard SSL certificate a couple times now. What is it? Well, a wildcard certificate is a certificate with a wildcard character (*) in the domain name field to permit the certificate to secure multiple subdomains of a base domain. For example, all the tools of the factory are configured by default to exist as subdomains of nemonik.com. GitLab exists at https://gitlab.nemonik.com, Taiga exists at https://taiga.nemonik.com, etc. Access to these applications is reverse proxied by Traefik. Traefik also handles encrypting the HTTP traffic to these tools. Arguably we could create self-signed certificates, but self-signed certificates can be problematic, resulting in your Web browser, curl and wget throwing certificate errors. Let's Encrypt SSL certificates are commonly used in practice, so I figured I might as well cover their creation and use in this class.
First open the dotenv file (./.env) in the neovim editor and edit the following line
domain="nemonik.com"
replacing `nemonik.com` with your domain.
The Electronic Frontier Foundation (EFF) provides instructions for how to use Certbot at https://certbot.eff.org/instructions for many operating systems.
The following is not meant to replace these instructions.
If you are using Arch Linux ensure the `certbot` package is installed
sudo pacman -S certbot
If you are using OSX ensure the certbot Homebrew package is installed
brew install certbot
`certbot` will be installed by the Ansible ./ansible/common.yaml playbook in the future.
The following commands can be used to both create and renew Let's Encrypt SSL certificates.
sudo certbot -d "*.nemonik.com" --server https://acme-v02.api.letsencrypt.org/directory --manual --preferred-challenges dns certonly
replacing `nemonik.com` with your domain.
Certbot will produced the following output:
Please deploy a DNS TXT record under the name
_acme-challenge.nemonik.com with the following value:
a random-string-of-characters
Before continuing, verify the record is deployed.
(This must be set up in addition to the previous challenges; do not remove,
replace, or undo the previous challenge tasks yet. Note that you might be
asked to create multiple distinct TXT records with the same name. This is
permitted by DNS standards.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue
You will now need to add a TXT record containing the random-string-of-characters for the `_acme-challenge` sub-domain with your domain registrar.
Registrars have various ways of doing this. dynadot.com documents how to do this at <dynadot.com>. GoDaddy documents how to do this for their service at https://www.godaddy.com/help/add-a-txt-record-19232. Just use your registrar's search or web search for your registrar's name plus "create a txt record" to learn how to do the same for your registrar.
You can move things along by setting a short time to live.
Do not immediately press `Enter` to continue in Certbot, as it takes time for your new DNS entry to propagate across the Internet.
The best way I found to be sure your change has propagated is to use https://www.whatsmydns.net/#TXT/_acme-challenge.nemonik.com replacing `_acme-challenge.nemonik.com` with the fully qualified domain name of your TXT record. You can refresh your browser to watch as the TXT record propagates across the Internet.
You also can use either the `dig` or `host` command to check on the propagation of your TXT record like so
dig -t txt _acme-challenge.nemonik.com
will output
; <<>> DiG 9.16.20 <<>> -t txt _acme-challenge.nemonik.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28925
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;_acme-challenge.nemonik.com. IN TXT
;; ANSWER SECTION:
_acme-challenge.nemonik.com. 3590 IN TXT "ozbMVFpscNW6p0RpnDJUP9_mWAiXC-IyNDdfEnoKGMA"
;; Query time: 39 msec
;; SERVER: 192.168.86.1#53(192.168.86.1)
;; WHEN: Sun Sep 05 16:30:47 EDT 2021
;; MSG SIZE rcvd: 112
when the TXT record has propogated.
`host` executed like so
host -t txt _acme-challenge.nemonik.com
will return rather simply
_acme-challenge.nemonik.com descriptive text "ozbMVFpscNW6p0RpnDJUP9_mWAiXC-IyNDdfEnoKGMA
If you press `Enter` and Certbot fails, you will have to update your TXT record and try again.
The TXT record is no longer needed once you press Enter and Certbot creates your certificate chain and key.
The certificate chain and key will be placed into /etc/letsencrypt/live. The README file will explain what the files are. I'd encourage you to read it. You will not be able to access the contents of this directory unless you are the root user.
sudo su
cd /etc/letsencrypt/live/
`cd` into the path that starts with your domain. In my case this is `nemonik.com-0002`, so I would
cd nemonik.com-0002
This path also contains a README. I'd encourage you to read it. Its contents are
This directory contains your keys and certificates.
`privkey.pem` : the private key for your certificate.
`fullchain.pem`: the certificate file used in most server software.
`chain.pem` : used for OCSP stapling in Nginx >=1.3.7.
`cert.pem` : will break many server configurations, and should not be used
without reading further documentation (see link below).
WARNING: DO NOT MOVE OR RENAME THESE FILES!
Certbot expects these files to remain in this location in order
to function properly!
We recommend not moving these files. For more information, see the Certbot
User Guide at https://certbot.eff.org/docs/using.html#where-are-my-certificates.
You will need the `fullchain.pem` and `privkey.pem` files.
You will need to base64 encode each of these files like so
cat fullchain.pem | base64 > fullchain.pem.base64
cat privkey.pem | base64 > privkey.pem.base64
Doing so will create two new files containing the base64 encoded contents of the originals.
In another shell, in the root of the project, create a new vault file like so
mv vault vault.origin
nvim vault
Start the file with the following content
#!/usr/bin/env bash
read -d '' traefik_tls_crt << EOF
Copy the base64 encoded contents of `fullchain.pem.base64` directly following, indenting each line 3 characters, like so
#!/usr/bin/env bash
read -d '' traefik_tls_crt << EOF
LS0t...
several lines each indented
...LS0t
Cg==
In your editor add on a new line the following
EOF
In your editor add another line with the contents `read -d '' traefik_tls_key << EOF`, then copy the base64 contents of `privkey.pem.base64` followed by a new line containing `EOF`, so the file now looks like
#!/usr/bin/env bash
read -d '' traefik_tls_crt << EOF
LS0t...
several lines each indented
...LS0t
Cg==
EOF
read -d '' traefik_tls_key << EOF
LS0t...
several lines each indented
...LS0tLS0K
EOF
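If you'd rather not hand-edit the vault, here is a rough sketch that assembles it from the two base64 files created earlier. The paths are assumptions (the base64 files live under /etc/letsencrypt/live/<your domain> and need root to read), so adjust as needed:

```bash
# Assemble ./vault from fullchain.pem.base64 and privkey.pem.base64,
# indenting each base64 line by three spaces as described above.
PROJECT_ROOT="$HOME/Development/workspace/hands-on-DevOps-gen2"
CERT_DIR="/etc/letsencrypt/live/nemonik.com-0002"   # replace with your domain's directory

{
  echo '#!/usr/bin/env bash'
  echo "read -d '' traefik_tls_crt << EOF"
  sudo sed 's/^/   /' "${CERT_DIR}/fullchain.pem.base64"
  echo 'EOF'
  echo "read -d '' traefik_tls_key << EOF"
  sudo sed 's/^/   /' "${CERT_DIR}/privkey.pem.base64"
  echo 'EOF'
} > "${PROJECT_ROOT}/vault"
```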
You can opt not to encrypt the ./vault file, but I'd encourage you to encrypt it like so
export VAULT_PASSWORD=`pwgen -Bsv1 20`
echo $VAULT_PASSWORD
Copy the value of `VAULT_PASSWORD` to your Keychain, BitVault or some other password manager so as to not forget it.
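The encryption itself is handled by the `encrypt-vault` Make target shown earlier (it runs ./encrypt-vault.sh). A minimal usage sketch, assuming the script honors the `VAULT_PASSWORD` environment variable like the other class scripts (otherwise it should prompt you):

```bash
cd $HOME/Development/workspace/hands-on-DevOps-gen2
export VAULT_PASSWORD=`pwgen -Bsv1 20`   # generate a password and stash it in your password manager
make encrypt-vault                        # encrypts ./vault in place via ./encrypt-vault.sh
```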
The contents of the ./vault file are retrieved by the Traefik automation to create the `traefik-cert` Secret Kubernetes resource on the cluster in the `traefik` namespace, so that Traefik can SSL encrypt the traffic to the factory tools.
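Later, once the Traefik install rule has run, you can confirm the secret landed on the cluster; this is just a quick check, not part of the class scripts:

```bash
# The Traefik automation creates the traefik-cert Secret in the traefik namespace.
kubectl get secret traefik-cert -n traefik
```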
You can now proceed to the following section. Note, you will not be accessing the tools on the nemonik.com domain though. You will need to use your domain to access the tools (e.g., https://gitlab.YOUR-DOMAIN, https://traefik.YOUR-DOMAIN).
Starting the cluster involves a number of Make rules as described earlier.
We will utilize at least one container registry for our Kubernetes cluster: a private registry and optionally a pull through registry. We utilize these registries for essentially two reasons
- So that K3s pulls from this registry vice going directly to docker.io, even for our custom container images. If you've enabled and configured your Docker to use the pull through registry, it will pull through this registry before making requests of the docker.io registry.
- To hold our private container images.
For our factory to access either registry it will need to be able to resolve the fully qualified domain name of the registry. For our host to do this we will utilize entries in our host's `hosts` file. Both OSX and Linux operating systems have this file located at `/etc/hosts`.
Edit your `/etc/hosts` with `nvim` like so
sudo nvim /etc/hosts
and add to the end the following, so these domains can be resolved
127.0.0.1 host.k3d.internal
127.0.0.1 k3d-registry.nemonik.com
`127.0.0.1` is your host's loopback address. The first entry, `host.k3d.internal`, is the name the cluster refers to the host as, and `k3d-registry.nemonik.com` is the entry for the private container registry. You will be making additional edits to this file so that your browser can resolve the fully qualified domains of the factory's long running tools.
You can move forward by entering the vault file password each time it is asked for, but since you'll be asked repeatedly, I would suggest setting an environment variable to hold the value
export VAULT_PASSWORD=super-secret-password
If you put a space character before `export`, the environment variable `VAULT_PASSWORD` and its value won't be entered into your shell's history, thereby protecting its value from being plucked.
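Whether the leading space actually keeps the command out of history depends on your shell's settings, so treat this as a sketch to verify for yourself:

```bash
 export VAULT_PASSWORD=super-secret-password   # note the single leading space
# Bash only honors this when HISTCONTROL contains ignorespace (or ignoreboth):
echo $HISTCONTROL
# Zsh needs `setopt HIST_IGNORE_SPACE`; fish skips space-prefixed commands by default.
history | tail -3                              # the export should be absent if the setting is in effect
```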
Then execute the makefile `start` rule
cd $HOME/Development/workspace/hands-on-DevOps-gen2
make start
The output will resemble
In this particular instance I also enabled the pull through container registry and so Make created it, whose output resembled
cd pullthrough-registry && ./install.sh
Setting unsecured variables into current context...
pullthrough registry already exists.
Now running...
Ensure your docker daemon configure file contains:
{...
"registry-mirrors": ["http://host.k3d.internal:5001"],
...}
to use use your pullthrough registry.
./start_registry.sh
Setting unsecured variables into current context...
Creating registry k3d-registry.nemonik.com:5000
FATA[0000] Failed to create registry: A registry node with that name already exists
Ignore the Fail notice. This is okay.
Waiting til k3d-registry.nemonik.com:5000 is running...
Now running.
NOTES
- The Bash scripts will make use of color for informational purposes.
- Yellow is used to notify
- Red is used to warn
- Blue is used to clue you in that the script expects user input
The private container registry will be started. In this case the registry already existed and just needed to be restarted, but if it hadn't it would have been created. Output to create the registry would resemble the following
./start_registry.sh
Setting unsecured variables into current context...
Creating registry k3d-registry.nemonik.com:5000
INFO[0000] Creating node 'k3d-registry.nemonik.com'
INFO[0000] Successfully created registry 'k3d-registry.nemonik.com'
INFO[0000] Starting Node 'k3d-registry.nemonik.com'
INFO[0001] Successfully created registry 'k3d-registry.nemonik.com'
# You can now use the registry like this (example):
# 1. create a new cluster that uses this registry
k3d cluster create --registry-use k3d-registry.nemonik.com:5000
# 2. tag an existing local image to be pushed to the registry
docker tag nginx:latest k3d-registry.nemonik.com:5000/mynginx:v0.1
# 3. push that image to the registry
docker push k3d-registry.nemonik.com:5000/mynginx:v0.1
# 4. run a pod that uses this image
kubectl run mynginx --image k3d-registry.nemonik.com:5000/mynginx:v0.1
Waiting til k3d-registry.nemonik.com:5000 is running...
Now running.
If the registry container is already running or needs to be restarted this will be handled as well.
Following the private registry, Make will build a private K3s container with the air gapped container images, whose output will resemble
cd k3s-air-gap-image && ./install.sh
Setting unsecured variables into current context...
Using the templates/Dockerfile.tpl template to generate the Dockerfile:
Using existing k3s air gap file found at /Users/mjwalsh/Development/workspace/hands-on-DevOps-gen2/k3s-air-gap-image/k3s-airgap-images-amd64.tar.gz
[+] Building 0.7s (8/8) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 293B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for docker.io/rancher/k3s:v1.21.2-k3s1
=> [1/3] FROM docker.io/rancher/k3s:v1.21.2-k3s1@sha256:a467df2b1b49040d18fdd4925a25d36efb891c96fbf682154a55aed3157ea66f
=> [internal] load build context
=> => transferring context: 52B
=> CACHED [2/3] RUN mkdir -p /var/lib/rancher/k3s/agent/images/
=> CACHED [3/3] COPY ./k3s-airgap-images-amd64.tar.gz /var/lib/rancher/k3s/agent/images/k3s-airgap-images-amd64.tar.gz
=> exporting to image
=> => exporting
=> => writing image sha256:e5d9b1b86e5889b33141a726415230d0dfa1e0d6fdef951f512167f39fb9112b
=> => naming to docker.io/nemonik/k3s:v1.21.2-k3s1
The push refers to repository [k3d-registry.nemonik.com:5000/nemonik/k3s]
28ff70a2d474: Layer already exists
0675b5e9d601: Layer already exists
1efd6240933e: Layer already exists
39f7b4c6fb81: Layer already exists
7f52e5437a9f: Layer already exists
v1.21.2-k3s1: digest: sha256:3ce05be5df2e24dcbe3630c7b4bcb27390f5b8e784a0c1f936de8513223d6e90 size: 1356
K3s' canonical container image will pull a number of container images directly from the docker.io registry. The K3s project provides guidance on how to configure K3s in an air gapped environment when executing K3s directly on a host; the instructions also apply when K3s is containerized. The `install-k3s-air-gap-image` rule will retrieve a tarball of the container images K3s needs to run and will build a new container image including the tarball, so as to remove the need for the nodes of the cluster to retrieve these container images on start up.
Then the class images will be pulled from their remote registries, tagged and pushed into the private registry. In this case I already had many of the images in Docker's cache and so I did not need to retrieve them. There is a lot of redundant output in this Make rule not worth copying here.
Make then moves on to executing the `start-cluster` rule, where the ./start_cluster.sh script will pause
Attempting to load secrets from /Users/nemonik/Development/workspace/hands-on-DevOps-gen2/vault...
Enter vault password to decrypt vault to access secured variables in /Users/nemonik/Development/workspace/hands-on-DevOps-gen2/vault:
seeking input from the user. What the script is asking for is the password to decrypt the vault at the root of the project. This file contains the following variables whose values are secret. I will share them with MITRE students, but not others. When unencrypted the file resembles
# Copyright (C) 2021 Michael Joseph Walsh - All Rights Reserved
# You may use, distribute and modify this code under the
# terms of the the license.
#
# You should have received a copy of the license with
# this file. If not, please email <[email protected]>
read -d '' traefik_tls_crt << EOF
LS0tLS1CRUd...
... a whole bunch more lines
Cg==
EOF
read -d '' traefik_tls_key << EOF
LS0tLS1CRUd...
... a whole bunch more lines
EOF
The `traefik_tls_crt` variable holds the Let's Encrypt certificate chain for the wildcard DNS entry (*.nemonik.com) and `traefik_tls_key` holds the private key.
Why are these needed? Well, the cluster's HTTP reverse proxy service, Traefik, will respond to requests received. Each factory tool will register a fully qualified domain name with Traefik; for example, GitLab will register `gitlab.nemonik.com`. Since most modern browsers force the use of HTTPS, a wildcard cert must be configured in Traefik so that a proper certificate is presented to the browser in response, otherwise the browser will choke and warn that it doesn't trust Traefik's default self-signed certificate.
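Later, once the factory is up, one way to check which certificate Traefik presents is with openssl; this is a sketch that assumes your /etc/hosts entries are in place and that you replace nemonik.com with your domain:

```bash
# Ask Traefik for the certificate it serves for gitlab.<your domain> and print the
# subject/issuer; a wildcard cert should show a subject of CN=*.<your domain>.
echo | openssl s_client -connect gitlab.nemonik.com:443 -servername gitlab.nemonik.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```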
Back to `make start-cluster`: if you entered the password or had set it in an environment variable, the output will resemble
./start_cluster.sh
Setting unsecured variables into current context...
Using existing container registry: http://k3d-registry.nemonik.com:5000
Cluster doesn't exist, so created it...
Pulling images and placing into k3d-registry...
Pulling, tagging and pushing nemonik/k3s:v1.21.2-k3s1 into k3d-registry.nemonik.com: container image repository...
All ready have nemonik/k3s:v1.21.2-k3s1 in docker cache.
Error response from daemon: manifest for nemonik/k3s:v1.21.2-k3s1 not found: manifest unknown: manifest unknown
The push refers to repository [k3d-registry.nemonik.com:5000/nemonik/k3s]
28ff70a2d474: Layer already exists
0675b5e9d601: Layer already exists
1efd6240933e: Layer already exists
39f7b4c6fb81: Layer already exists
7f52e5437a9f: Layer already exists
v1.21.2-k3s1: digest: sha256:3ce05be5df2e24dcbe3630c7b4bcb27390f5b8e784a0c1f936de8513223d6e90 size: 1356
nemonik/k3s:v1.21.2-k3s1 already cached in /Users/mjwalsh/Development/workspace/hands-on-DevOps-gen2/image_cache/
Pulling, tagging and pushing rancher/k3d-proxy:v4.4.7 into k3d-registry.nemonik.com: container image repository...
All ready have rancher/k3d-proxy:v4.4.7 in docker cache.
v4.4.7: Pulling from rancher/k3d-proxy
Digest: sha256:025e2a9cbc78b1c7fa40297bbe25e71fad0fc7d7ec9ae8c95c2b21db24648369
Status: Image is up to date for rancher/k3d-proxy:v4.4.7
docker.io/rancher/k3d-proxy:v4.4.7
The push refers to repository [k3d-registry.nemonik.com:5000/rancher/k3d-proxy]
527e006fdb09: Layer already exists
690e52cafac9: Layer already exists
6f47ae38de6a: Layer already exists
c10f66aae549: Layer already exists
4689e8eca613: Layer already exists
3480549413ea: Layer already exists
3c369314e003: Layer already exists
4531e200ac8d: Layer already exists
ed3fe3f2b59f: Layer already exists
b2d5eeeaba3a: Layer already exists
v4.4.7: digest: sha256:bccaf03a96505a74d556751bdf19af959533c984638aa808d7a95dfbb65cf8ce size: 2401
rancher/k3d-proxy:v4.4.7 already cached in /Users/mjwalsh/Development/workspace/hands-on-DevOps-gen2/image_cache/
Pullthrough registry is running. Configuring cluster to use.
k3d cluster create hands-on-devops-class --api-port 6443 -p 80:80@loadbalancer -p 443:443@loadbalancer -p 9000:9000@loadbalancer -p 2022:2022@loadbalancer --k3s-server-arg "--no-deploy=traefik" --registry-use k3d-registry.nemonik.com:5000 --image k3d-registry.nemonik.com:5000/nemonik/k3s:v1.21.2-k3s1 --servers 1 --agents 1 --registry-config ./pullthrough-registry/registries.yaml
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-hands-on-devops-class' (9b7412270c50e411603066b8f5c5ae7326879e802ade1879613387475576678c)
INFO[0000] Created volume 'k3d-hands-on-devops-class-images'
INFO[0000] Container 'k3d-registry.nemonik.com' is already connected to 'k3d-hands-on-devops-class'
INFO[0001] Creating node 'k3d-hands-on-devops-class-server-0'
INFO[0001] Creating node 'k3d-hands-on-devops-class-agent-0'
INFO[0001] Creating LoadBalancer 'k3d-hands-on-devops-class-serverlb'
INFO[0001] Starting cluster 'hands-on-devops-class'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-hands-on-devops-class-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting Node 'k3d-hands-on-devops-class-agent-0'
INFO[0018] Starting helpers...
INFO[0018] Starting Node 'k3d-hands-on-devops-class-serverlb'
INFO[0019] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0021] Successfully added host record to /etc/hosts in 3/3 nodes and to the CoreDNS ConfigMap
INFO[0022] Cluster 'hands-on-devops-class' created successfully!
INFO[0022] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0022] You can now use it like this:
kubectl config use-context k3d-hands-on-devops-class
kubectl cluster-info
In the case of the above run, I had set a `VAULT_PASSWORD` environment variable to hold the password.
Make will then move on to executing the `patch-coredns` rule, descend into the coredns sub-folder and execute the patch.sh script.
The output will resemble
cd coredns && ./patch.sh
Setting unsecured variables into current context...
Pulling images and placing into k3d-registry...
Pulling, tagging and pushing traefik:2.2.8 into k3d-registry.nemonik.com: container image repository...
All ready have traefik:2.2.8 in docker cache.
2.2.8: Pulling from library/traefik
Digest: sha256:f5af5a5ce17fc3e353b507e8acce65d7f28126408a8c92dc3cac246d023dc9e8
Status: Image is up to date for traefik:2.2.8
docker.io/library/traefik:2.2.8
The push refers to repository [k3d-registry.nemonik.com:5000/traefik]
90a7e4076ff6: Layer already exists
a35039a172cc: Layer already exists
4dca0fb1912d: Layer already exists
3e207b409db3: Layer already exists
2.2.8: digest: sha256:2468d73cafe08a8973ac3d4e7d0163c1e86c36c8b1bc1f212fdf88999a799fb5 size: 1157
traefik:2.2.8 already cached in /Users/mjwalsh/Development/workspace/hands-on-DevOps-gen2/image_cache/
Get host ip...
pod/get-host-ip created
pod/get-host-ip condition met
Block waiting for CoreDNS to start responding...
PING host.k3d.internal (192.168.65.2): 56 data bytes
64 bytes from 192.168.65.2: seq=0 ttl=36 time=0.286 ms
--- host.k3d.internal ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.286/0.286/0.286 ms
found
pod "get-host-ip" deleted
Patching DNS in the cluster to resolve application FDQNs using 192.168.65.2 ip...
kubectl patch cm coredns -n kube-system -p='{"data": {"NodeHosts":"172.18.0.3 k3d-hands-on-devops-class-server-0\n172.18.0.4 k3d-hands-on-devops-class-agent-0\n192.168.65.2 host.k3d.internal\n192.168.65.2 gitlab.nemonik.com\n192.168.65.2 drone.nemonik.com\n192.168.65.2 taiga.nemonik.com\n192.168.65.2 sonar.nemonik.com\n192.168.65.2 plantuml.nemonik.com\n192.168.65.2 heimdall.nemonik.com\n192.168.65.2 k3d-registry.nemonik.com\n192.168.65.2 helloworld.nemonik.com "}}'
configmap/coredns patched
Forcing retart of coredns so that the tests can run immediately...
deployment.apps/coredns restarted
Waiting for deployment "coredns" rollout to finish: 0 of 1 updated replicas are available...
deployment "coredns" successfully rolled out
pod/coredns-98b49d8b8-5gx8m condition met
Setting unsecured variables into current context...
Using k3d-registry.nemonik.com:5000/traefik:2.2.8 container to query coreDNS for entries...
Pulling images and placing into k3d-registry...
Pulling, tagging and pushing traefik:2.2.8 into k3d-registry.nemonik.com: container image repository...
All ready have traefik:2.2.8 in docker cache.
2.2.8: Pulling from library/traefik
Digest: sha256:f5af5a5ce17fc3e353b507e8acce65d7f28126408a8c92dc3cac246d023dc9e8
Status: Image is up to date for traefik:2.2.8
docker.io/library/traefik:2.2.8
The push refers to repository [k3d-registry.nemonik.com:5000/traefik]
90a7e4076ff6: Layer already exists
a35039a172cc: Layer already exists
4dca0fb1912d: Layer already exists
3e207b409db3: Layer already exists
2.2.8: digest: sha256:2468d73cafe08a8973ac3d4e7d0163c1e86c36c8b1bc1f212fdf88999a799fb5 size: 1157
traefik:2.2.8 already cached in /Users/mjwalsh/Development/workspace/hands-on-DevOps-gen2/image_cache/
pod/test-coredns created
pod/test-coredns condition met
Block waiting for CoreDNS to start responding...
This may go forever.
PING host.k3d.internal (192.168.65.2): 56 data bytes
64 bytes from 192.168.65.2: seq=0 ttl=36 time=0.313 ms
--- host.k3d.internal ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.313/0.313/0.313 ms
found
Query the DNS server for the FDQNs added...
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: gitlab.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: drone.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: taiga.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: sonar.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: plantuml.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: heimdall.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: k3d-registry.nemonik.com
Address: 192.168.65.2
Server: 10.43.0.10
Address: 10.43.0.10:53
Name: helloworld.nemonik.com
Address: 192.168.65.2
pod "test-coredns" deleted
=======================================================
Your host IP is 192.168.65.2
Writing 192.168.65.2 into /tmp/host_ip
=======================================================
=======================================================
Ensure the following lines are in your /etc/hosts file:
=======================================================
127.0.0.1 host.k3d.internal
127.0.0.1 gitlab.nemonik.com
127.0.0.1 drone.nemonik.com
127.0.0.1 taiga.nemonik.com
127.0.0.1 sonar.nemonik.com
127.0.0.1 plantuml.nemonik.com
127.0.0.1 heimdall.nemonik.com
127.0.0.1 k3d-registry.nemonik.com
127.0.0.1 helloworld.nemonik.com
The purpose of the rule is to add DNS entries for the factory tools into Kubernetes, so the tools can resolve each other. Without this, for example, when you attempt to authenticate into Drone CI and it OAuths off of GitLab, GitLab won't be able to resolve Drone in DNS. None of this would be necessary if we had control of an external DNS server.
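If you want to see what the patch actually wrote, you can dump the patched ConfigMap; this is a quick check, not part of the class scripts:

```bash
# Show the NodeHosts entries patch.sh added to CoreDNS in the kube-system namespace.
kubectl get cm coredns -n kube-system -o yaml
```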
The script ends reminding you to add the following lines to your `/etc/hosts` file
127.0.0.1 gitlab.nemonik.com
127.0.0.1 drone.nemonik.com
127.0.0.1 taiga.nemonik.com
127.0.0.1 sonar.nemonik.com
127.0.0.1 plantuml.nemonik.com
127.0.0.1 heimdall.nemonik.com
127.0.0.1 k3d-registry.nemonik.com
127.0.0.1 helloworld.nemonik.com
You do this by running nvim as root (i.e., `sudo nvim /etc/hosts`) to edit the hosts file and add the lines above.
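If you'd rather not hand-edit the file, a minimal sketch that appends the same entries (replace nemonik.com with your domain if you changed it):

```bash
# Append the factory FQDNs to /etc/hosts so your browser can resolve them locally.
sudo tee -a /etc/hosts > /dev/null << 'EOF'
127.0.0.1 gitlab.nemonik.com
127.0.0.1 drone.nemonik.com
127.0.0.1 taiga.nemonik.com
127.0.0.1 sonar.nemonik.com
127.0.0.1 plantuml.nemonik.com
127.0.0.1 heimdall.nemonik.com
127.0.0.1 k3d-registry.nemonik.com
127.0.0.1 helloworld.nemonik.com
EOF
```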
NOTES
- Your host IP address (`192.168.65.2`) will likely be different for you.
- If you are using your own domain then `nemonik.com` will be replaced with whatever you've provided the `domain` variable in the .env file.
The k3s cluster should now be up and running. Let's verify this by entering into your shell
kubectl get nodes -o wide
Output should resemble
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-hands-on-devops-class-server-0 Ready control-plane,master 11m v1.21.1+k3s1 172.18.0.3 <none> Unknown 5.10.25-linuxkit containerd://1.4.4-k3s2
k3d-hands-on-devops-class-agent-0 Ready <none> 11m v1.21.1+k3s1 172.18.0.4 <none> Unknown 5.10.25-linuxkit containerd://1.4.4-k3s2
The `STATUS` of each node should be `Ready`. Our cluster by default has two nodes: one server node that provides the control-plane and is a master node, whereas the other is an agent node.
Above is the wide output providing more information. You could have just entered `kubectl get nodes` and only gotten back the first five columns worth of information.
Our cluster is also already running pods. Enter the following into your shell
kubectl get pods -A -o wide
Output will resemble
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system local-path-provisioner-5ff76fc89d-nqs99 1/1 Running 0 17m 10.42.1.2 k3d-hands-on-devops-class-agent-0 <none> <none>
kube-system metrics-server-86cbb8457f-f5zrj 1/1 Running 0 17m 10.42.0.3 k3d-hands-on-devops-class-server-0 <none> <none>
kube-system coredns-85546dbd9-t244v 1/1 Running 0 16m 10.42.1.3 k3d-hands-on-devops-class-agent-0 <none> <none>
You'll see the coredns service whose data we patched.
The `-A` option tells `kubectl` to list the pods across all namespaces. The `-o wide` option again returns additional information. The `-o` option can also return output in `yaml`, `json`, etc. If you leave `-o wide` off, you'll just get the first 6 columns of output.
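For instance, `-o jsonpath` lets you pull out just the fields you care about; a small illustration, not something the class scripts depend on:

```bash
# List namespace and pod name only, one pod per line.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'
```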
`docker ps` will show these containers running
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7111e5c82139 rancher/k3d-proxy:v4.4.7 "/bin/sh -c nginx-pr…" 9 minutes ago Up 9 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:2022->2022/tcp, 0.0.0.0:6443->6443/tcp, :::2022->2022/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp k3d-hands-on-devops-class-serverlb
0c69199d68fc k3d-registry.nemonik.com:5000/nemonik/k3s:v1.21.2-k3s1 "/bin/k3s agent" 9 minutes ago Up 9 minutes k3d-hands-on-devops-class-agent-0
373cea67cdb9 k3d-registry.nemonik.com:5000/nemonik/k3s:v1.21.2-k3s1 "/bin/k3s server --n…" 9 minutes ago Up 9 minutes k3d-hands-on-devops-class-server-0
33efc4954421 registry:2 "/entrypoint.sh /etc…" 47 minutes ago Up 47 minutes 0.0.0.0:5000->5000/tcp k3d-registry.nemonik.com
d3b94140c2a7 registry:2 "/entrypoint.sh /etc…" 47 minutes ago Up 47 minutes 0.0.0.0:5001->5000/tcp, :::5001->5000/tcp hands-on-devops-pullthrough-registry
Now that we have a cluster up and running we can install all the long running factory tools (Taiga, GitLab, Drone CI, etc) upon it.
cd $HOME/Development/workspace/hands-on-DevOps-gen2
make install
NOTE
- To save yourself from entering the `VAULT_PASSWORD` repeatedly, set it as an environment variable, `export VAULT_PASSWORD=super-secret-password`, then execute the `make install`.
- If the GitLab install appears stuck doing the following for what seems like forever (it is normal for it to loop doing this for a bit, but not forever)
Still waiting for GitLab to respond to https requests... Still waiting for GitLab to respond to https requests... Still waiting for GitLab to respond to https requests... Still waiting for GitLab to respond to https requests... Still waiting for GitLab to respond to https requests...
  then you didn't update your `/etc/hosts` file with the values provided by the ./coredns/patch.sh script. To fix this, perform the following

      cd $HOME/Development/workspace/hands-on-DevOps-gen2
      make patch-coredns

  and add the entries it tells you to, as covered in the prior section, and then re-run the `make install`.
This is a long running process as each install rule will be executed, triggering a tool's install script. Each install script retrieves the container images related to the factory tool being installed and then executes one or more Helm charts and applies zero or more Kubernetes resource files, followed by possibly additional steps to ensure the desired state of the tool is on the cluster.
In another shell you can watch the tools spin up
watch -n 15 kubectl get pods -A
Whose output will resemble.
Pop some corn; it will be a while. This run was faster than the video as all the container images were cached.
The result is the following factory spun up
PlantUML source for this diagram
The class makes use of two types of tools: those that are long-running (e.g., GitLab, Drone, SonarQube) and those used to perform short-lived individual tasks (e.g., [Makefile](https://en.wikipedia.org/wiki/Make_(software)#Makefile), InSpec, OWASP-ZAP).
This section will describe the long-running tools leaving subsequent sections to describe the latter as you use the short-lived tools.
Taiga is an Open Source project management platform for managing Agile Development projects.
Typically, Agile teams work using a visual task management tool such as a project board, task board, Kanban or Scrum visual management board. These boards can be implemented using a whiteboard or open space on the walls of a room with colored index cards taped to them, or in software. The board is at a minimum segmented into a few columns: To do, In process, and Done, but the board can be tailored. I've personally seen boards for very large projects consume every bit of wall space of a very large cavernous room, but as Lean-Agile has matured, teams have grown larger and more disparate, and tools have emerged to provide a clear view into a project's management to all levels of concern (e.g., developers, managers, product owner, and the customer) answering:
- Are deadlines being achieved?
- What is the work in progress?
- Are team members overloaded?
- How much is complete?
- What's next?
Further, the Lean-Agile Software tools should provide the following capabilities:
- Dividing integration and development effort into multiple projects.
- Defining, allocating, and viewing resources and their workload across each product.
- Defining, maintaining, and prioritizing the accumulation of an individual product's requirements, features or technical tasks which, at a given moment, are known to be necessary and sufficient to complete a project's release.
- Facilitating the selection and assignment of individual requirements to resources, and the tracking of progress for a release.
- Permit collaboration with external third parties.
The 800-pound gorilla in this market segment is JIRA Software. I and some of my co-workers loathe it. It is part of the Atlassian suite, which provides collaboration software for teams with products including JIRA Software, Confluence, Bitbucket, and Stash. Back when Atlassian (stock ticker: TEAM) was trading at 50 dollars it was a good investment. JIRA feels more like a ticketing system in comparison to the others.
NOTE
- Lean-Agile Project Management software's primary purpose is to integrate people and really not much else.
Taiga's documentation can be found at
Its canonical source can be found at
https://github.com/kaleidos-ventures/taiga-front
dedicated to the front-end, and
https://github.com/kaleidos-ventures/taiga-back
dedicated to the back-end.
Taiga up until recently didn't directly offer container images, but that has since changed. You can find them here
https://hub.docker.com/u/taigaio
They also have a project using Docker Compose (a tool for defining and running multi-container applications) to stand up Taiga
https://github.com/kaleidos-ventures/taiga-docker
But this is not used to stand up Taiga in the class. I authored a Helm chart to do this, found here
https://github.com/nemonik/taiga-helm
And the chart is published to my Helm Chart repository
https://nemonik.github.io/helm-charts
Once stood up on the Kubernetes cluster, your instance of Taiga will be reachable by default at https://taiga.nemonik.com.
The default username is
whose password is `password` (Shhhh. Our little secret.).
NOTE
- The URL for the class Taiga will be this unless you changed the value of the `taiga_fdqn` variable in the .env file to something else, generated a Let's Encrypt cert for the wildcard of your domain and entered the values `traefik_tls_crt` and `traefik_tls_key` into the vault at the root of the project. The same goes for the rest of the long running tools.
GitLab is installed on the Kubernetes cluster, where it will be accessible at https://gitlab.nemonik.com.
GitLab Community Edition (CE) is an open source end-to-end software development platform with built-in version control, issue tracking, code review, and CICD.
For Agile teams to collaborate, a configuration management (CM) system is necessary to coordinate the development of new features, changes, and experimentation. Also, a CM system (CMS) provides a history of changes and, thereby, the ability to roll back to a version known to be acceptable.
At a minimum, the following items will be placed under revision control in CM:
- Source code,
- If a database is needed, schema initialization and the migration between versions,
- Text documentation containing
  - a synopsis (i.e., project name, overview, etc.),
  - version description,
  - guidance covering
    - build,
    - unit testing, and
    - installation
  - a contributor enumeration,
  - license and/or ownership declaration with contacts, etc.

A single CMS and the associated workflow (e.g., GitHub Workflow) can serve as the focal point for the entire enterprise, thereby providing centralized version control, if all documentation is authored in a lightweight markup language with plain text formatting syntax (e.g., Markdown, PlantUML).
A CMS must facilitate best practices, not limited to:
- A means for developers to copy and work off a complete repository thereby permitting
  - Private individual work to later be synchronized via exchanging sets of changes (i.e., patches) through a means described as "distributed version control", and
  - Pre-flight build and test of their source code in their own private workspace, so as to minimize the chance of committing broken or untested source code, thereby encouraging
    - The committing of completed source code only.
- Granular commits that communicate the motivation for the commit (i.e., the what and why). For example, for a change these could be:
  - the inclusion of a new feature,
  - a bug fix,
  - the removal of dead code
- Reducing the risk of breaking a build by
  - Utilizing branching to separate different lines of development, and
  - Standardizing on CMS workflows (e.g., GitHub Workflow),
- Making builds self-testing (i.e., ingrain testing) by including unit and integration tests with the source code so that they can be executed by
  - the build automation, and
  - the Continuous Integration service.
- Triggering follow-on activities orchestrated by the Continuous Integration Service.
GitLab's documentation can be found at
Its canonical source can be found at
https://gitlab.com/gitlab-org/gitlab-ce
I'm using Sameer Naik's container image for GitLab built from GitLab CE's source. Up until recently GitLab did not offer a monolithic container image for running GitLab, but now does at https://docs.gitlab.com/ce/install/docker.html. I'm using Sameer's container image simply out of preference, as it has been long maintained and has a solid reputation.
Sameer's container image can be found at
https://hub.docker.com/r/sameersbn/gitlab/
Whose canonical source is located at
https://github.com/sameersbn/docker-gitlab
I've written a Helm chart for it located at
https://github.com/nemonik/sameersbn-gitlab-helm
Once stood up, your instance of GitLab will be reachable by default at https://gitlab.nemonik.com.
You will be using the GitLab's root account to host your repositories versus creating your own, but if you want you can. There is nothing stopping you.
For the purposes of the class, the `root` account's password has been set to the uber-secure `paswword`, so you will be asked to change it to your own 8-character password when using GitLab for the first time.
Drone is essentially a Continuous Delivery system built on container technology.
Drone is distributed as a set of container images. Drone CI can be run with an internal SQLite database, or it can be run with an external database. It also integrates with multiple version control providers (i.e., GitHub, GitLab, BitBucket, Stash, and Gogs). Both the CMS and database are configured using environment variables passed along when the Drone CI container is first run. The `.drone.yml` is authored in a domain specific language (DSL) that is a superset of the docker-compose DSL. The file is used to describe the build with multiple named steps, with each step executed in a separate container having shared disk access to the checked out branch of the source repository. Steps make use of Drone plugins to deploy code, publish artifacts, send a notification, etc. Drone's approach is novel as plugins are really just container images distributed in the typical manner. Each plugin is designed to perform pre-defined tasks and is configured as steps in your pipeline. The containers are executed with read/write/execute access at the root of the source branch, therefore permitting the pipeline to interact with that specific, checked out branch of the source.
Drone and its brethren (e.g., Jenkins, GitLab Runners) are used to facilitate Continuous Integration (CI), a software development practice where members of an Agile team frequently integrate their work in order to detect integration issues as soon as possible. Each integration is orchestrated through a service that essentially assembles a build and runs tests every time a predetermined trigger has been met and then reports with immediate feedback.
I don't use Jenkins unless I have to. Why? I'm simply not a fan. Initially, because its plugin architecture is painful to manage and with prior versions your pipelines existed entirely in the Jenkins tool itself. Later, Jenkins introduced Groovy-based Jenkins Pipelines that are CMed (i.e., placed under configuration management) with your project's source. Nearly every other orchestrator has based its DSL on YAML, and although I love the Groovy language for its power, I don't think it makes for a good orchestration language. Your opinion may differ. I'm okay with that. Really. I am.
There are also SaaS CICD tools, such as Travis CI and Circle CI. These are great, free CICD orchestrators.
My java-stix project hosted on GitHub.com uses both Travis CI and Circle CI as part of its continuous integration.
The Travis CI orchestration contains
language: java
dist: precise
jdk:
- oraclejdk7
before_install:
- chmod +x gradlew
env: GRADLE_OPTS=-Dorg.gradle.daemon=true
env: CI_OPTS=--stacktrace
install: /bin/true
script: "./gradlew -x signArchives"
Whereas, the Circle CI orchestration contains
machine:
  java:
    version: oraclejdk7
  environment:
    GRADLE_OPTS: -Dorg.gradle.daemon=true
    CI_OPTS: --stacktrace --debug
test:
  override:
    - ./gradlew -x signArchives
They are essentially similar, with both requiring the Oracle JDK and using Gradle to build and unit test the code. Both of these services use containers under the covers to run the builds.
In another GitHub-hosted project, java-stix-validator, the Travis CI orchestration contains
language: java
jdk:
- oraclejdk8
before_install:
- chmod +x gradlew
env: CI_OPTS=--stacktrace
install: "/bin/true"
script: "./gradlew build -d"
deploy:
provider: heroku
api_key:
secure: m9Gbt0Oyqtjwyu4Y8CVobNNnj1q5mFt+Ygi2wiDWlf/RunLOj2CE8YAYuRyEAbpCOd1lrmrhmQb8uQAfiydauYBcQE5yyOlyIhNkrLi2m1+we0VeWWr6gxIVz57VuAhfbzoMtvkhmxl/Ey0U+1vI7tYurK0thzFUyQFqZh1wNq6EldIfHxVNxDZbEVtkDzFtK5cmVnPE8HM9xaQmuV7k3NhrvRS4pzN87uvndfFVb0vDhLmg5DulF+PLkdpP9UC5jsAE1HXMBL0cTtsvSHUkIyO7qhLb0RFAzVdRMvn7kEW2Q0ekoK09sPR13VwmfjewzHSNrWIf+rjJx7EzoBzbq5/VmC9nxH1oiGpXxoAG08pJjcQYMSxsa2JZLH8dSIEaMgOFNOxkrAhcqP59xXWZ9WVLCYPSN4atmg4L6etJOzFqfz3jAp40AB4Eu2QU49c60r6BH31Xj8ymjKMKqnlL199qCoqfZtv7FYqOFG3keLeWvL/F7JhmtV+JdvuqVPEvNq2D1b3kdCKk2cw4lmRCwC9hdT2oXTCwhjQvYwSm0sHQ98aeV55FkE7DuH4B+CuzYw4N9K78j3eQtW2Oas1lLCoHSDXgA/4O79RlM8p0nLa3MjdVq5OSIjbcCqhDLBe8nc5ucSpMMjnjNvhAKvcyrc5AbXdIVaLVvE2azMuZJLo=
app: agile-journey-9583
on:
repo: STIXProject/java-stix-validator
The `deploy` section deploys the web application to Heroku, one of the first PaaS (Platform-as-a-Service) offerings. The code was last committed in 2015 and is still running free on Heroku at http://agile-journey-9583.herokuapp.com/#/
More details on Drone are sprinkled across the class. As you can see, I favor being a polyglot when it comes to software development, to include CICD.
Drone's main site is at https://www.drone.io
Its documentation is at https://docs.drone.io
Its plugin market is at http://plugins.drone.io
Drone's canonical source can be found at
https://github.com/drone/drone
Drone is distributed as container images, which can be found respectively at
https://hub.docker.com/r/drone/drone
and
https://hub.docker.com/r/drone/agent
Once stood up, your instance of Drone CI will be reachable by default at https://drone.nemonik.com.
Drone will authenticate you off of GitLab with the default root account and whatever password you set in GitLab.
SonarQube provides the capability to show the health of an application's source code, highlighting issues as they are introduced. SonarQube can be extended by language-specific extensions/plugins to report on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security vulnerabilities.
SonarQube's main site is at https://www.sonarqube.org
Its documentation is at
https://docs.sonarqube.org/display/SONAR/Documentation
SonarQube's canonical source can be found here
https://github.com/SonarSource/sonarqube
I'm using the container image provided at
https://hub.docker.com/_/sonarqube
Once stood up, your instance of SonarQube will be reachable by default at https://sonar.nemonik.com.
To login:

- Open SonarQube (e.g., http://sonar.nemonik.com) in your browser.
- Click `Log in`.
- When the page refreshes, click `Log in with GitLab`.
- Provide your GitLab credentials if asked. The default admin account username and password is GitLab's root account and the password you set.
- Then `Authorize` SonarQube to use your account.
- You will be redirected into SonarQube, authenticated.
The second value in the Agile Manifesto is
Working software over comprehensive documentation
The documentation for this class, this readme.md file, is authored in Markdown, a light-weight markup language. The course's diagrams are authored in PlantUML, a domain-specific language used to author well-formed and human-readable code to render UML diagrams.
For me, I don't need to see the diagrams rendered to follow them. The code alone is sufficient.
PlantUML supports a number of UML diagrams: Sequence, Use Case, Class diagram, Activity diagram, Component, State, Object, Deployment, and Timing. The DSL also supports a number of other non-UML diagrams: Wireframe graphical interface, Specification and Description Language (SDL), Ditaa diagram...
This class makes use of just two types: Activity and Deployment diagrams. The diagrams are especially helpful to visual learners. Each PlantUML diagram's source file exists in the plantuml folder in the root of the project and is rendered by a GitHub workflow into a scalable vector graphic (SVG) that is pushed to the diagrams folder upon changes committed to each diagram's source file.
The PlantUML effort also provides a server for rendering diagrams in PNG, SVG or LaTeX formats. Plugins for Microsoft's Visual Studio Code, Atom, and other editors have been authored to assist in authoring in the PlantUML DSL.
PlantUML's main site and documentation is at https://plantuml.com
PlantUML's canonical source can be found here
https://github.com/plantuml/plantuml
I'm using the container image provided at
https://hub.docker.com/r/plantuml/plantuml-server
The aforementioned workflow is using
https://hub.docker.com/r/think/plantuml
to render the diagrams.
Once stood up, your instance of the PlantUML server will be reachable at the default URL configured by the class automation.
The PlantUML Server stores no user data. You will not be asked to authenticate.
Heimdall 2 is a security results viewer and review tool supporting multiple security results formats, such as: InSpec, SonarQube, OWASP-Zap and Fortify.
Heimdall 2's canonical source can be found here
https://github.com/mitre/heimdall2
I've written a Helm chart to install on Kubernetes whose repository can be found here
https://github.com/nemonik/heimdall2-helm
Once stood up, your instance of the Heimdall 2 server will be reachable by default at https://heimdall.nemonik.com.
To create an account perform the following:

- Open Heimdall 2 in your browser (e.g., https://heimdall.nemonik.com).
- Click `SIGN UP` in the lower right of the pop up; by default this will open https://heimdall.nemonik.com/signup.
- Enter your first and last name (e.g., First Name: `Turd`, Last Name: `Ferguson`).
- Provide an email (e.g., `[email protected]`).
- This being a security application, you're going to be asked to provide a secure password. (You'll be warned: passwords are a minimum of 15 characters in length; must contain at least one special character, number, upper-case letter, and lower-case letter; cannot contain more than three consecutive repeating characters; and cannot contain more than four repeating characters from the same character class.) Provide a secure password (e.g., `sup3rS3cr3tCr3d3nt1@ls!`).
- Then login with the email you provided (e.g., `[email protected]`) and the password you provided (e.g., `sup3rS3cr3tCr3d3nt1@ls!`), and click `Login`.
The Kubernetes cluster and the long running tools are required to be up and running for the following sections.
In this next part, we will create a simple helloworld GoLang project to demonstrate Continuous Integration. GoLang lends itself well to DevOps and underlies almost every new tool you can think of related to DevOps and cloud (e.g., golang/go, docker/docker-ce, kubernetes/kubernetes, openshift/origin, hashicorp/terraform, coreos/etcd, hashicorp/vault, hashicorp/packer, hashicorp/consul, gogits/gogs, drone/drone).
A backlog is essentially your (or your team's) to-do list, a prioritized list of work derived from the road map (e.g., the outline for future product functionality and when new features will be released) and its requirements.
Open Taiga (e.g., https://taiga.nemonik.com) in your web browser.
The default username and its password (`password`) are the class defaults; these will be different if you changed the class configuration.
Complete the following to track your progress in completing the Golang helloworld project:
- Click `Create Project`.
- Select `Kanban`. In a Kanban board work moves from left to right with each column representing a stage within the value stream.
- Give your project a name. For example, `Helloworld` and a description, such as, `My Kanban board for this awesome helloworld app` and then click `CREATE PROJECT`.
- You can skip this step and opt to click `<` to fold `READY`, `USER STORY STATUS` and `ARCHIVED` only after completing step 6. Otherwise, you can edit your Kanban board to just show `NEW`, `IN PROGRESS`, and `DONE` by
  a. On the bottom-left, click the `Settings` gear.
  b. Click `ATTRIBUTES`.
  c. Scroll down to `USER STORY STATUS`.
  d. Hover over `Ready`, click the trash icon to delete and click `ACCEPT`.
  e. Do the same for `Ready for test` and `Archived`.
  f. Click the `KANBAN` icon on the far left. It looks like columns.
- In the `NEW` column select the `Add New bulk` icon that looks like a list and when the page updates cut-and-paste the lines below into the text box and click `SAVE`.

  Create the project's backlog
  Create the project in GitLab
  Setup the project
  Author the application
  Align source code with Go coding standards
  Lint your code
  Build the application
  Run your application
  Author the unit tests
  Automate the build (i.e., write the Makefile)
  Author Drone-based Continuous Integration
Track your progress in Taiga as you work through each section.
- In GitLab (e.g., https://gitlab.nemonik.com), if you need to login the default username is `root` and the password is whatever you set.
- Click on `Projects` in the upper left and select `Create blank project` (e.g., https://gitlab.nemonik.com/projects/new#blank_project).
- When the page loads, enter `helloworld` for the `Project name`.
- Provide an optional `Project description`. Something descriptive, such as, `GoLang helloworld application for the hands-on DevOps class.`.
- Make the application `Public` to save yourself from entering your username and password when cloning.
- Click the blue `Create project` button on the lower left.
The UI will refresh to show you a landing page for the project (e.g., https://gitlab.nemonik.com/root/helloworld)
On your host open a shell and configure your user name and email:
git config --global user.name "root"
git config --global user.email "[email protected]"
The Ansible `golang` playbook will already have configured:

- Your `$GOPATH` and `$GOBIN` environment variables.
- Added `$GOBIN` to your `PATH` environment variable.
- Created a `go` workspace in your home directory (i.e., a `~/go` directory containing `bin`, `pkg`, and `src`).
Go source code is placed in the `src` directory under a namespace (i.e., a unique base path to avoid naming collisions under which all your go code will reside). In open source software development, it is typical to use your github.com account path. Mine is `github.com/nemonik`, so I would create this base path via the following (you do the same, so we are all on the same page)
mkdir -p $GOPATH/src/github.com/nemonik
Now let's create the GoLang `helloworld` project to demonstrate Continuous Integration via
cd $GOPATH/src/github.com/nemonik
git clone https://gitlab.nemonik.com/root/helloworld.git
Output will resemble
Cloning into 'helloworld'...
warning: You appear to have cloned an empty repository.
NOTE

- Ignore the `warning: You appear to have cloned an empty repository.` warning. This is perfectly normal.
Then move into the clone of your repository via
cd helloworld
So that you do not version control certain files in git, create a `.gitignore` file with your editor
nvim .gitignore
with the following contents
# OS-specific
.DS_Store
# reports
coverage.out
golangci-lint.xml
# binary
helloworld
NOTE

- Make sure you pre-pend that dot (`.`) at the start of `.gitignore`. In *NIX, dot-files are hidden files.
- `.gitignore` will not show up if you simply list the file system via the `ls` command, but if you use `ls -a` or `ls --all` it will. Either argument configures `ls` to not ignore entries starting with `.`.
In the project folder (i.e., `~/go/src/github.com/nemonik/helloworld`) start your module using
go mod init
Whose output will resemble
go: creating new go.mod: module github.com/nemonik/helloworld
go: to add module requirements and sums:
go mod tidy
Then create `main.go` in `nvim` with this content:
package main

import "fmt"

func main() {
	fmt.Println(HelloWorld())
}

func HelloWorld() string {
	return "hello world"
}
When you first open the file in `nvim` it will populate the file with
package main

import "fmt"

func main() {
	fmt.Println("vim-go")
}
Just delete or edit it. `vim-go` will automagically format the source code according to Go coding standards upon saving the file and exiting the editor.
Otherwise, you can format source code in the module by entering into the shell
go fmt
Already installed on your host is `golint`. Where `go fmt` reformatted the code to GoLang standards, `golint` prints style mistakes.
To run `golint`, in the root of the `helloworld` project execute
golint
Command line output will be
main.go:9:1: exported function HelloWorld should have comment or be unexported
Fix the error by editing `main.go` to
package main

import "fmt"

func main() {
	fmt.Println(HelloWorld())
}

// HelloWorld returns "hello world"
func HelloWorld() string {
	return "hello world"
}
Run `golint` again and it should return no output, indicating it sees nothing wrong.
Build the project by executing
go build -o helloworld .
Success returns no command line output. What? Did you want a cookie? No cookie for you. This is GoLang's way of doing things. Silence is golden and means things went fine. Otherwise, go back and fix the mistakes in your code.
Now run your application
./helloworld
The command line output will be
hello world
GoLang ships with a built-in `testing` package (https://golang.org/pkg/testing/) for automated unit testing of Go packages. Unit testing is a software development process where the smallest testable components of an application are individually tested for proper operation. Unit testing offers the biggest return for dollars spent in comparison to integration and functional testing.
For more on this topic read Martin Fowler's https://martinfowler.com/bliki/TestPyramid.html, but in short unit testing in comparison to integration and functional testing provides the greatest bang for the buck, followed by integration and then functional testing (i.e., unit testing is the cheapest, most valuable form of testing). Functional testing, where the system is tested against the functional requirements, is by far the most expensive, most brittle, and arguably the least valuable in comparison.
In `nvim` create `main_test.go` with this content:
package main

import (
	"os"
	"testing"
)

func TestMain(m *testing.M) {
	os.Exit(m.Run())
}

func TestHelloWorld(t *testing.T) {
	if HelloWorld() != "hello world" {
		t.Errorf("got %s expected %s", HelloWorld(), "hello world")
	}
}
The test is 16 lines as per
$ cat main_test.go | wc -l
where we pipe the contents of `main_test.go` through the `wc` command-line utility, used to display the number of lines, words, and/or bytes in standard input.
The `helloworld` application itself is just 12 lines of code as per
$ cat main.go | wc -l
Yes, line count is an overly simple metric to weigh, but it should serve to inform you of the obvious: there's a cost to authoring unit tests. This cost is perhaps the number one reason why authoring unit tests gets skipped. Well, that and having the necessary engineering prowess to author them. The same could be said in regard to the automation authored in this course. Please keep this in mind as you work your way through the course material.
Let's execute the unit test by entering
go test -v -cover
The command line returns
=== RUN TestHelloWorld
--- PASS: TestHelloWorld (0.00s)
PASS
coverage: 50.0% of statements
ok github.com/nemonik/helloworld 0.003s
This step and all the preceding follow a DevOps tenet: "Developers are expected to pre-flight new code."
Build automation is a key practice of CI. So, let's make the build reproducible by automating everything we've done this far via authoring a Makefile.
In the root of the project create a `Makefile` and add the following contents
BINARY_NAME=helloworld

.PHONY: all clean fmt lint test build run

all: build

clean:
	go clean
	rm -f $(BINARY_NAME)

fmt:
	go fmt

lint: fmt
	go get golang.org/x/lint/golint
	golint

test: lint
	go test -v -cover ./...

build: test
	go build -o $(BINARY_NAME) -v

run:
	./$(BINARY_NAME)
NOTE

- Each line indentation is a `tab` and not a series of `space bar` characters. Make will fail to execute if these tabs are converted to a series of space characters.
- The first letter of `Makefile` is capitalized.
Save the file and exit your editor.
Okay, let's try out our build automation
make all
The output will resemble
go fmt
go get golang.org/x/lint/golint
go get: added golang.org/x/lint v0.0.0-20210508222113-6edffad5e616
golint
go test -v -cover ./...
=== RUN TestHelloWorld
--- PASS: TestHelloWorld (0.00s)
PASS
coverage: 50.0% of statements
ok github.com/nemonik/helloworld 0.151s coverage: 50.0% of statements
go build -o helloworld -v
CI integrates all of the steps we have worked through to ensure a high quality build into a pipeline, so let's do that.
We're going to author a continuous integration pipeline for our application and execute it on Drone. Drone expects a `.drone.yml` to exist at the root of the project and will execute the pipeline it contains when changes are pushed to GitLab.
A pipeline is broken up into multiple named steps, where each step executes in an ephemeral (i.e., does its job and then poof it is gone) container with shared disk access to the project's workspace. The benefit of this approach is that it relieves you from having to create and maintain slaves to execute your pipelines.
Drone automatically clones your project's repo (Short for "repository.") into a volume (referred to as the workspace) shared by each container (including plugin and service containers).
First, let's retrieve the golang container image and push it into our private container registry
docker pull golang:1.16.5
docker tag golang:1.16.5 k3d-registry.nemonik.com:5000/golang:1.16.5
docker push k3d-registry.nemonik.com:5000/golang:1.16.5
Then in `nvim` create a `.drone.yml` file at the root of the `helloworld` project:
kind: pipeline
type: kubernetes
name: default

steps:
- name: build
  image: k3d-registry.nemonik.com:5000/golang:1.16.5
  commands:
  - make build
- name: run
  image: k3d-registry.nemonik.com:5000/golang:1.16.5
  commands:
  - make run
NOTE

- Make sure you pre-pend that dot (`.`) at the start of `.drone.yml`.
- Like the `.gitignore` file, `.drone.yml` is a hidden file and will not show up if you list the directory contents with `ls` alone. You will need to enter `ls -las`.
The pipeline is authored in YAML like almost all the CI orchestrators out there, except for Jenkins Pipelines, which you author in a Groovy-based DSL.
- `steps:` - defines the list of steps followed to build, test and deploy your code.
- `build` and `run` - define the names of the steps. These are yours to name. Name steps something meaningful as to what the step is orchestrating. Each step is executed serially, in the order defined.
- `image: k3d-registry.nemonik.com:5000/golang:1.16.5` - defines the container image to execute the step. The golang container tagged `1.16.5` will be retrieved from the private container image registry located at `k3d-registry.nemonik.com:5000`. Drone uses Docker images for the build environment, plugins and service containers. Drone spins them up for the execution of the pipeline and when no longer needed they go poof.
- `commands` - defines a collection of terminal commands to be executed. These are all the same commands we executed previously on the command line. If any one of these commands were to fail returning a non-zero exit code, the pipeline will immediately end resulting in a failed build.
- Open Drone (e.g., https://drone.nemonik.com)
- If this is your first time, you will need to click `CONTINUE` on the welcome page.
- Your browser will be redirected to GitLab, where if you're not authenticated you will be asked to provide your credentials. You will be asked to authorize Drone to use your account by clicking the blue `Authorize` button.
- You will be returned back to Drone CI and, if you haven't already completed the registration form, asked to submit your email, full name and company. Provide what you feel comfortable providing and then click `Submit`.
- Drone will then try to sync with GitLab, so wait a bit while the syncing arrows chase each other.
- Then click the `root/helloworld` repo and `ACTIVATE REPOSITORY` to enable Drone CI orchestration for the project.
- Then click the Drone logo in the upper left of the page to return home.
You won't have any builds to start, but when you do the builds will increment starting from 1.
The build colors mean something:

- `Red` - indicates a failed build.
- `Yellow-orange` - is the presently executing build.
- `Green` - is a build that passed.
When a build does start, click on its row to open and monitor it. The UI will update as the build proceeds informing you as to its progress.
To trigger the build, simply commit your code:
git add .
git commit -m "added Drone pipeline"
git push origin master
NOTE

- Since we have not registered an SSH key with GitLab, during the push we will be prompted to enter a `Username` and `Password`.
- By default your `Username` is `root` and your `Password` is whatever you set for GitLab.
- The git command-line client will not display your password as you enter it. (If you'd rather not retype your credentials on every push, see the optional tip below.)
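Optionally, if typing your GitLab username and password on every push gets old, git can cache HTTPS credentials in memory for a while; the timeout below is just an example:

git config --global credential.helper 'cache --timeout=3600'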
Immediately after you enter your GitLab username/password open Drone CI (e.g., https://drone.nemonik.com/root/helloworld) in your browser; if you re-use an existing tab open to this page, refresh the page.
Give it time, but your pipeline should execute.
The execution of this pipeline will follow as so:
- A new build will appear. Click on it.
- Drone will clone your project's repository in a `clone` step.
- It will then execute the `build` and `run` steps in order, each spinning up a `golang:1.16.5` container.
- These steps execute the commands in the same way you executed them yourself:
  a. make lint
  b. make test
  c. make build
  d. make run
The output of the `build` (An arbitrary name. You could use "skippy".) step will resemble:
+ make build
go fmt
go get golang.org/x/lint/golint
go: downloading golang.org/x/lint v0.0.0-20210508222113-6edffad5e616
go: downloading golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7
golint
go test -v -cover ./...
=== RUN TestHelloWorld
--- PASS: TestHelloWorld (0.00s)
PASS
coverage: 50.0% of statements
ok github.com/nemonik/helloworld 0.003s coverage: 50.0% of statements
go build -o helloworld -v
github.com/nemonik/helloworld
The output of the `run` step will resemble:
+ make run
./helloworld
hello world
Our build was successful. Drone CI uses a container's exit code to determine success or failure. A container's non-zero exit code will cause the pipeline to exit immediately.
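You can see the same convention locally: every command exits with a status code, and a non-zero status is what fails a pipeline step. For example (echoing `$?` simply prints the last command's exit status):

make build
echo $?

A `0` means success; anything else would cause the corresponding pipeline step, and therefore the build, to fail.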
That's it. This is essentially CI. Remember, CI stands for "Continuous Integration". Scintillating isn't it?
NOTES

- If your pipeline should fail you can debug it by reviewing the Drone runner's logs via

  kubectl logs -n drone -l app.kubernetes.io/component=drone-runner-kube -f

  Output will look like
time="2021-07-17T20:34:27Z" level=warning msg="Engine: Container start timeout" build.id=1 build.number=1 container=drone-sssx70ptd4ctv6uqpucc error="kubernetes error: container failed to start in timely manner: id=drone-sssx70ptd4ctv6uqpucc" image="k3d-registry.nemonik.com:5000/golang:1.16.5" placeholder="drone/placeholder:1" pod=drone-2mvry5w700zvx3z3l2gw repo.id=1 repo.name=helloworld repo.namespace=root stage.id=1 stage.name=default stage.number=1 step=build step.name=build thread=17 time="2021-07-17T20:43:41Z" level=warning msg="Engine: Container start timeout" build.id=3 build.number=3 container=drone-3akpvbe33pxqhbxrba1a error="kubernetes error: container failed to start in
- You can also re-run the pipeline for a build in `debug` mode via the Drone CI web interface. Click on the hamburger in the upper left (i.e., three dots in a vertical line), select `Debug` and it will re-run the pipeline and then pause at the failed step, so you can open a web-based shell into the running step and debug it. Once Drone CI gets to the failed step it will provide an `OPEN REMOTE SESSION` button to open a shell into the step that failed. Pretty cool.
The `helloworld` project can be viewed completed on GitHub at
https://github.com/nemonik/helloworld-gen2
Like `helloworld`, the `helloworld-web` project is a very simple application that we will use to explore Continuous Delivery. Remember, Continuous Delivery builds upon Continuous Integration. You've accomplished Continuous Integration. Wahoo.
Open Taiga in your web browser (e.g., http://taiga.nemonik.com).
Complete the following to track your progress in completing the helloworld-web project:
- Click `Projects` in the upper-left, then `New Project`.
- Select `Kanban`. A Kanban board shows how work moves from left to right, each column representing a stage within the value stream.
- Give your project a name. For example, `Helloworld-web` and a description, such as, `My Kanban board for this awesome helloworld-web app` and then click `CREATE PROJECT`.
- You can skip this step and opt to click `<` to fold `READY`, `USER STORY STATUS` and `ARCHIVED` only after completing step 6. Otherwise, you can edit your Kanban board to just show `NEW`, `IN PROGRESS`, and `DONE` by
  a. On the bottom-left, click the `Settings` gear.
  b. Click `ATTRIBUTES`.
  c. Scroll down to `USER STORY STATUS`.
  d. Hover over `Ready`, click the trash icon to delete and click `ACCEPT`.
  e. Do the same for `Ready for test` and `Archived`.
  f. Click the `KANBAN` icon on the far left. It looks like columns. And then reload the browser to get the changes to take effect.
- In the `NEW` column select the `Add New bulk` icon that looks like a list and when the page updates cut-and-paste the lines below into the text box and click `SAVE`.

  Create the project's backlog
  Create the project in GitLab
  Setup the project
  Author the helloworld-web application
  Build and run the helloworld-web application
  Run golangci-lint on the helloworld-web application
  Author the unit tests
  Perform static analysis (i.e., sonar-scanner) on the command line
  Automate the build (i.e., write the Makefile)
  Containerize the application
  Run the container
  Push the container image to the private registry
  Configure Drone to execute your CICD pipeline
  Add Static Analysis (sonar) step to your CICD pipeline
  Add the build step to the pipeline
  Add the nemonik/helloworld-web:latest container image publish step to pipeline
  Deploy helloworld-web application to the Kubernetes cluster
  Add a deploy rule to the Makefile
  Add a deploy step to the pipeline
  Add compliance-as-code (inspec) test to the pipeline
Track your progress in Taiga as you work through each section.
- In GitLab (e.g., https://gitlab.nemonik.com) click on the GitLab logo in the upper left.
- Click `Projects` on the far upper-left and click `Create blank project` (e.g., https://gitlab.nemonik.com/projects/new#blank_project).
- Enter `helloworld-web` for the `Project name`. Be careful with the spelling.
- Provide an optional `Project description`. Something descriptive, such as, `GoLang helloworld application for the hands-on DevOps class.`.
- Save yourself a headache, and make the application `Public`.
- Click the `Create project` button on the lower left.
The UI will refresh to show you the landing page for the project (e.g., https://gitlab.nemonik.com/root/helloworld-web).
You'll now clone the new `helloworld-web` GitLab-hosted repo as you did prior for the `helloworld` project.
If you don't already have a shell open, open one now and enter the following
cd ~/go/src/github.com/nemonik
git clone https://gitlab.nemonik.com/root/helloworld-web.git
cd helloworld-web
NOTE

- The URL for the project provided above is the default. You will have to adjust it for your configuration.
- Again, ignore the "warning: You appear to have cloned an empty repository." warning, because of course you did.
- The `git clone` will fail if you did not name your project correctly while in GitLab.
So that you do not version control certain files in git, create a `.gitignore` file with your editor with this content
# OS-specific
.DS_Store
# reports
coverage.out
golangci-lint.xml
inspec_helloworld.json
# binary
helloworld-web
# sonar
.scannerwork/
In the project folder (i.e., `~/go/src/github.com/nemonik/helloworld-web`) start your module using
go mod init
Whose output will resemble
go: creating new go.mod: module github.com/nemonik/helloworld-web
go: to add module requirements and sums:
go mod tidy
Then create `main.go` in `nvim` with this content:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", handler)
	fmt.Print("listening on :3000\n")
	http.ListenAndServe(":3000", logRequest(http.DefaultServeMux))
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello world!\n")
}

func logRequest(handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("%s %s %s\n", r.RemoteAddr, r.Method, r.URL)
		handler.ServeHTTP(w, r)
	})
}
Format the code like you did previously, but if you are using `nvim` as configured through the class automation the source will already be formatted correctly upon save.
go fmt
Build the application
go build -o helloworld-web .
No output means things are peachy. Otherwise fix your mistakes.
Now run
./helloworld-web
Command line output will be
listening on :3000
On OSX, you may be asked to allow `helloworld-web` to accept incoming network connections. Click `Allow`.
To access the running application, either

- Open http://localhost:3000 in a web browser, or
- Enter `curl http://localhost:3000` into the command-line of another terminal.
Both will return:
Hello world!
`./helloworld-web`'s output in the terminal window will resemble
listening on :3000
[::1]:55890 GET /
`ctrl-c` will stop your application.
Okay. So, our code runs, but are there any hidden problems?
In our prior `helloworld` project we used `golint`, but although `golint` is still a "useful engine" it has been deprecated/frozen (i.e., no longer maintained), so we should look for another linter. Earlier versions of this course used the Go Meta Linter, but this linter is also deprecated with the advice to use golangci-lint, so this is what we'll use. Like the Go Meta Linter, golangci-lint provides an extensible list of linters, so we'll have all our bases covered.
The Ansible ansible/go.yaml playbook will have installed `golangci-lint` for you. To install it yourself read https://golangci-lint.run/usage/install/#local-installation.
Now, let us run our linters.
golangci-lint run
`golangci-lint` with no configuration will run all these linters:
deadcode: Finds unused code [fast: false, auto-fix: false]
errcheck: Errcheck is a program for checking for unchecked errors in go programs. These unchecked errors can be critical bugs in some cases [fast: false, auto-fix: false]
gosimple (megacheck): Linter for Go source code that specializes in simplifying a code [fast: false, auto-fix: false]
govet (vet, vetshadow): Vet examines Go source code and reports suspicious constructs, such as Printf calls whose arguments do not align with the format string [fast: false, auto-fix: false]
ineffassign: Detects when assignments to existing variables are not used [fast: true, auto-fix: false]
staticcheck (megacheck): Staticcheck is a go vet on steroids, applying a ton of static analysis checks [fast: false, auto-fix: false]
structcheck: Finds unused struct fields [fast: false, auto-fix: false]
typecheck: Like the front-end of a Go compiler, parses and type-checks Go code [fast: false, auto-fix: false]
unused (megacheck): Checks Go code for unused constants, variables, functions and types [fast: false, auto-fix: false]
varcheck: Finds unused global variables and constants [fast: false, auto-fix: false]
NOTE

- A number of linters are disabled by default. To view these enter `golangci-lint help linters` in your shell. You can also enable specific linters on the command line; see the example below.
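For instance, to run just a couple of specific linters rather than the default set, you could enable them explicitly (an illustrative invocation, not something this class requires):

golangci-lint run --disable-all --enable errcheck --enable govet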
Running plain `golangci-lint run` against our code, after some time it will output something like the following
main.go:11:21: Error return value of `http.ListenAndServe` is not checked (errcheck)
http.ListenAndServe(":3000", logRequest(http.DefaultServeMux))
Oops. Line 11 has problems: `http.ListenAndServe(":3000", logRequest(http.DefaultServeMux))` returns an `err` if it runs into problems. We need to handle the problem by logging the `err` and exiting.
Let's fix the problems by opening `main.go` in our editor, making the following changes to address the concerns, and saving
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", handler)
	fmt.Print("listening on :3000\n")
	log.Fatal(http.ListenAndServe(":3000", logRequest(http.DefaultServeMux)))
}

func handler(w http.ResponseWriter, r *http.Request) {
	_, err := fmt.Fprintf(w, "Hello world!\n")
	if err != nil {
		log.Fatal(err)
	}
}

func logRequest(handler http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Printf("%s %s %s\n", r.RemoteAddr, r.Method, r.URL)
		handler.ServeHTTP(w, r)
	})
}
`vim-go` will correct our formatting.
And then run our linters again
golangci-lint run
And after some time, nothing is returned. Problem solved. If they could all be this easy.
So, our code runs and we've fixed the problems surfaced by our linter.
But like most software written for the government, we don't have any unit tests, so let's fix that.
Create `main_test.go` in `nvim` with this content:
package main

import (
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"
)

func TestLogRequest(t *testing.T) {
	mux := http.NewServeMux()
	mux.Handle("/", http.HandlerFunc(handler))
	l := logRequest(mux)

	// Create an http request
	req, _ := http.NewRequest("GET", "/", nil)

	// Create http.ResponseWriter for test inspection
	recorder := httptest.NewRecorder()

	// Capture Stdout
	rescueStdout := os.Stdout
	r, w, _ := os.Pipe()
	os.Stdout = w

	l.ServeHTTP(recorder, req)

	// Stop capturing Stdout
	w.Close()
	out, _ := ioutil.ReadAll(r)
	os.Stdout = rescueStdout

	// Compare
	if string(out) != " GET /\n" {
		t.Errorf("logRequest didn't log the expected \"GET /\"")
	}
}

func TestHandler(t *testing.T) {
	// Create an http request
	req, _ := http.NewRequest("GET", "/", nil)

	// Create http.ResponseWriter for test inspection
	recorder := httptest.NewRecorder()

	// Call the handler
	handler(recorder, req)

	// Inspect the http.ResponseWriter
	if recorder.Code != http.StatusOK {
		t.Errorf("Server did not return %v", http.StatusOK)
	}

	if recorder.Body.String() != "Hello world!\n" {
		t.Errorf("Body contain \"%v\" instead of expected \"Hello world!\"", recorder.Body.String())
	}
}

func TestMain(m *testing.M) {
	os.Exit(m.Run())
}
Wow. That's a lot more code than the application itself, huh. Maybe this is the reason so few unit tests are written.
Execute the unit test by entering
go test -v -cover
The command line will return something like
=== RUN TestLogRequest
--- PASS: TestLogRequest (0.00s)
=== RUN TestHandler
--- PASS: TestHandler (0.00s)
PASS
coverage: 55.6% of statements
ok github.com/nemonik/helloworld-web 1.044s
Notice, we only scored 55.6% coverage, yet we appear to have a unit test for each of our functions? This is where discernment comes in. Do you battle for 100%, 80%, some other number snatched from the air, or call this a win? Up to you, or really your team.
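If you want to see exactly which statements are and are not covered before making that call, Go's standard tooling can render the coverage profile as HTML in your browser (these are stock `go` commands; the profile file name is just an example):

go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out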
SonarQube provides a static analysis capability to show the health of an application's source code, highlighting issues as they are introduced.
Before you run sonar-scanner you have to commit your code so Sonar knows who to blame, so first head back to your development shell and enter
git add main.go main_test.go
git commit -m "added application code and unit test"
In `nvim` create a `sonar-project.properties` file
sonar.host.url=https://sonar.nemonik.com
sonar.projectKey=helloworld-web
sonar.projectName=helloworld-web
sonar.projectVersion=1.0
sonar.go.golangci-lint.reportPaths=tests/reports/golangci-lint.xml
sonar.go.coverage.reportPaths=tests/reports/coverage.out
sonar.sources=./
sonar.tests=./
sonar.test.inclusions=**/**_test.go
sonar.sources.inclusions=**/**.go
Then start the analysis with
go fmt
mkdir -p tests/reports && touch tests/reports/.gitkeep
golangci-lint run --out-format checkstyle | tee tests/reports/golangci-lint.xml
go test ./... -coverprofile=tests/reports/coverage.out
sonar-scanner
After some time, the output will look like
NOTE

- If you see an error concerned about blame files not being found you didn't first commit your code. Go back and commit your code. You don't need to `push` it. Just commit it. Commit it!!!!
- Also, notice I didn't say `push` your code. Your code hasn't been pushed to GitLab, but has only been committed to your local copy of your repository.
Let me unpack what the above commands are doing

- `go fmt` formats the code as we did earlier.
- `golangci-lint run --out-format checkstyle | tee tests/reports/golangci-lint.xml` gathers the golangci-lint report in checkstyle format.
- `go test ./... -coverprofile=tests/reports/coverage.out` executes your unit tests and generates the coverage.out report.
- `sonar-scanner` then submits the reports. SonarQube will automatically create the project for you with a report (e.g., https://sonar.nemonik.com/dashboard?id=helloworld-web).
In `nvim` create a `Makefile` to ensure the build and the steps leading to it are repeatable.
BINARY_NAME=helloworld-web

.PHONY: all clean fmt lint test sonar build run

all: sonar build

clean:
	go clean
	rm -f $(BINARY_NAME)
	rm -f tests/reports/golangci-lint.xml
	rm -f tests/reports/coverage.out

fmt:
	go fmt

lint: fmt
	mkdir -p tests/reports && touch tests/reports/.gitkeep
	golangci-lint run --out-format checkstyle | tee tests/reports/golangci-lint.xml

test: lint
	mkdir -p tests/reports && touch tests/reports/.gitkeep
	go test ./... -coverprofile=tests/reports/coverage.out

sonar: test
	sonar-scanner

build:
	go build -o $(BINARY_NAME) -v

run:
	./$(BINARY_NAME)
NOTE

- The indents are `tab` characters and not `space` characters, otherwise your `make` will fail to execute. The `nvim` editor will recognize your file is a [Makefile](https://en.wikipedia.org/wiki/Make_(software)#Makefile) and warn you of incorrect indentation.
Test out your Makefile
make all
Output will resemble
go fmt
mkdir -p tests/reports && touch tests/reports/.gitkeep
golangci-lint run --out-format checkstyle | tee tests/reports/golangci-lint.xml
<?xml version="1.0" encoding="UTF-8"?>
<checkstyle version="5.0">
</checkstyle>
mkdir -p tests/reports && touch tests/reports/.gitkeep
go test ./... -coverprofile=tests/reports/coverage.out
ok github.com/nemonik/helloworld-web 0.004s coverage: 55.6% of statements
sonar-scanner
INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /home/student/go/src/github.com/nemonik/helloworld-web/sonar-project.properties
INFO: SonarScanner 4.6.0.2311
INFO: Java 16.0.1 N/A (64-bit)
INFO: Linux 5.12.15-arch1-1 amd64
INFO: User cache: /home/student/.sonar/cache
INFO: Scanner configuration file: /opt/sonar-scanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /home/student/go/src/github.com/nemonik/helloworld-web/sonar-project.properties
INFO: Analyzing on SonarQube server 8.5.1
INFO: Default locale: "en_US", source code encoding: "UTF-8" (analysis is platform dependent)
INFO: Load global settings
INFO: Load global settings (done) | time=134ms
INFO: Server id: EA8D9556-AXq_eICKOsCp9_iMkQ-O
INFO: User cache: /home/student/.sonar/cache
INFO: Load/download plugins
INFO: Load plugins index
INFO: Load plugins index (done) | time=37ms
INFO: Load/download plugins (done) | time=98ms
INFO: Process project properties
INFO: Process project properties (done) | time=6ms
INFO: Execute project builders
INFO: Execute project builders (done) | time=1ms
INFO: Project key: helloworld-web
INFO: Base dir: /home/student/go/src/github.com/nemonik/helloworld-web
INFO: Working dir: /home/student/go/src/github.com/nemonik/helloworld-web/.scannerwork
INFO: Load project settings for component key: 'helloworld-web'
INFO: Load project settings for component key: 'helloworld-web' (done) | time=16ms
INFO: Load quality profiles
INFO: Load quality profiles (done) | time=56ms
INFO: Load active rules
INFO: Load active rules (done) | time=623ms
INFO: Indexing files...
INFO: Project configuration:
INFO: Excluded sources: **/**_test.go
INFO: Included tests: **/**_test.go
INFO: 6 files indexed
INFO: 9 files ignored because of inclusion/exclusion patterns
INFO: 3 files ignored because of scm ignore settings
INFO: Quality profile for go: Sonar way
INFO: ------------- Run sensors on module helloworld-web
INFO: Load metrics repository
INFO: Load metrics repository (done) | time=23ms
INFO: Sensor CSS Rules [cssfamily]
INFO: No CSS, PHP, HTML or VueJS files are found in the project. CSS analysis is skipped.
INFO: Sensor CSS Rules [cssfamily] (done) | time=0ms
INFO: Sensor JaCoCo XML Report Importer [jacoco]
INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=3ms
INFO: Sensor SonarGo [go]
INFO: 1 source files to be analyzed
INFO: Load project repositories
INFO: Load project repositories (done) | time=15ms
INFO: Sensor SonarGo [go] (done) | time=165ms
INFO: Sensor Go Cover sensor for Go coverage [go]
INFO: 1/1 source files have been analyzed
INFO: Load coverage report from '/home/student/go/src/github.com/nemonik/helloworld-web/tests/reports/coverage.out'
INFO: Sensor Go Cover sensor for Go coverage [go] (done) | time=9ms
INFO: Sensor Import of GolangCI-Lint issues [go]
INFO: Importing /home/student/go/src/github.com/nemonik/helloworld-web/tests/reports/golangci-lint.xml
INFO: Sensor Import of GolangCI-Lint issues [go] (done) | time=40ms
INFO: Sensor C# Properties [csharp]
INFO: Sensor C# Properties [csharp] (done) | time=1ms
INFO: Sensor JavaXmlSensor [java]
INFO: Sensor JavaXmlSensor [java] (done) | time=1ms
INFO: Sensor HTML [web]
INFO: Sensor HTML [web] (done) | time=3ms
INFO: Sensor VB.NET Properties [vbnet]
INFO: Sensor VB.NET Properties [vbnet] (done) | time=0ms
INFO: ------------- Run sensors on project
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=1ms
INFO: CPD Executor Calculating CPD for 1 file
INFO: CPD Executor CPD calculation finished (done) | time=5ms
INFO: Analysis report generated in 54ms, dir size=84 KB
INFO: Analysis report compressed in 19ms, zip size=12 KB
INFO: Analysis report uploaded in 63ms
INFO: ANALYSIS SUCCESSFUL, you can browse https://sonar.nemonik.com/dashboard?id=helloworld-web
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at https://sonar.nemonik.com/api/ce/task?id=AXq_uDjqOsCp9_iMkVvo
INFO: Analysis total time: 2.609 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 3.667s
INFO: Final Memory: 14M/80M
INFO: ------------------------------------------------------------------------
go build -o helloworld-web -v
We can build a Docker image for our application on top of the `golang:1.16.5` container image we earlier pushed into our private container registry by creating a `Dockerfile` with the following content
FROM k3d-registry.nemonik.com:5000/golang:1.16.5
MAINTAINER Michael Joseph Walsh <[email protected]>
RUN mkdir /app
ADD helloworld-web /app/
WORKDIR /app
ENTRYPOINT ["/app/helloworld-web"]
EXPOSE 3000
And then build the application and create its Docker image via
make build
docker build -t nemonik/helloworld-web .
NOTE

- Don't miss that last period (`.`) at the end of the line above.
- If you get an error message like `failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount305186654/Dockerfile: no such file or directory` you've misspelled `Dockerfile` or simply didn't create it.
After some time, the command line output will resemble
[+] Building 3.0s (9/9) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 249B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for k3d-registry.nemonik.com:5000/golang:1.16.5
=> [1/4] FROM k3d-registry.nemonik.com:5000/golang:1.16.5
=> [internal] load build context
=> => transferring context: 6.07MB
=> [2/4] RUN mkdir /app
=> [3/4] ADD helloworld-web /app/
=> [4/4] WORKDIR /apps
=> exporting to image
=> => exporting layers
=> => writing image sha256:15b253f9c34c904538f3f6236ff99be5a5b0b7e000cddfddf5577cbe19934622
=> => naming to docker.io/nemonik/helloworld-web
What just happened?

- The `FROM` line instructs Docker to retrieve the `golang:1.16.5` image from the private container image registry running on Docker, which it did, and then use this as the basis of your application's docker image.
- Then the rest of the commands in the `Dockerfile` are executed, laying down layers on top of the `golang:1.16.5` container image, thereby building a new docker image entitled `nemonik/helloworld-web` and tagging it `latest`.
- `docker build` then places the image with the name `nemonik/helloworld-web` in your host's local container image registry so that containers can be created off this image locally.
Check your host's local container image registry via
docker images nemonik/helloworld-web
The command line output will resemble
REPOSITORY TAG IMAGE ID CREATED SIZE
nemonik/helloworld-web latest 15b253f9c34c 5 minutes ago 868MB
You've created an approximately 868MB sized `nemonik/helloworld-web` image tagged `latest`. Kind of fat isn't it? Not "phat" as in cool, but fat as in large.
NOTE:

- The registry will also contain the `k3d-registry.nemonik.com:5000/golang:1.16.5` image on which `nemonik/helloworld-web:latest` is based, so the next time you re-build the image you won't have to wait for `k3d-registry.nemonik.com:5000/golang:1.16.5` to be retrieved.
But this approach doesn't create the smallest, most secure container image. You can accomplish this by instead using Docker's reserved, minimal image `scratch` as the starting point for your `Dockerfile`, like so
FROM scratch
MAINTAINER Michael Joseph Walsh <[email protected]>
WORKDIR /
ADD helloworld-web /
ENTRYPOINT ["/helloworld-web"]
EXPOSE 3000
In order for our Go application to execute in the smallest container image possible, we will disable cgo (which is enabled by default), target the Linux operating system, and build our Go application statically so it includes all of its dependencies, like so
CGO_ENABLED=0 GOOS=linux go build -a -o helloworld-web .
NOTE:

- The `-a` parameter is used to force rebuilding of packages to ensure you have all the dependencies. (A quick way to confirm the resulting binary really is static follows below.)
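Before copying the binary into a `scratch` image you can sanity-check that it really is statically linked, assuming the common `file` and `ldd` utilities are available on your host:

file helloworld-web
ldd helloworld-web

`file` should report a statically linked executable, and `ldd` should report that it is not a dynamic executable.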
We can update the project's Makefile to do the same by
nvim Makefile
Update the top of the Makefile to be
.PHONY: all clean fmt lint test sonar build run docker-build
all: sonar docker-build
Then modify the `build` rule

build:
	CGO_ENABLED=0 GOOS=linux go build -o $(BINARY_NAME) -v
And add the `docker-build` target to the end

docker-build: build
	docker build --no-cache -t nemonik/helloworld-web .
Remember to use `tab` characters vice `space` characters.
And then run in the command line
make docker-build
Whose output will resemble
Execute the `docker images` command again for this new image
docker images nemonik/helloworld-web
And its output will resemble
REPOSITORY TAG IMAGE ID CREATED SIZE
nemonik/helloworld-web latest 2a95b88aaa45 4 minutes ago 6.14MB
The image is a slim 6.14 MB and way more secure.
Spin up a new `nemonik/helloworld-web` container by entering either
- Option 1 - Run the container in the foreground

  docker run -p 3000:3000 --name helloworld-web nemonik/helloworld-web

  and then hit http://localhost:3000 in your web browser, or

- Option 2 - Run the container in the background

  docker run -d -p 3000:3000 --name helloworld-web nemonik/helloworld-web

  The command will output a string of text resembling

  82df2a483612c923c1c6e1ee0f08fdf24dca8f0db66c3fc7ec483e5796c53cc5

  This is the container id for the container you just spun up.
Now hit the same URL in the command-line via
curl http://localhost:3000
Where

- `run` messages Docker you are running a new container
- `-d`, in the second option, runs the container in the background and prints the container ID.
- `-p 3000:3000` publishes the container's port 3000 to the host as port 3000.
- `--name helloworld-web` names the running container
- `nemonik/helloworld-web` states what container image to use.
The command line output for the first option will be
Listening on :3000
172.17.0.1:60178 GET /
For the second option there will be no output written to the screen, but you can see the same output if you run
docker logs <the container id output when started the container> -f
or
docker logs helloworld-web -f
returning
listening on :3000
172.17.0.1:61714 GET /
`ctrl-c` (i.e., press and hold the `ctrl` key while pressing `c`) to exit the logs.
To kill the container
docker rm -f helloworld-web
On your host, push the `nemonik/helloworld-web` container image into the private container registry running in the Kubernetes cluster, so that the cluster can create containers from the image, with the commands
docker tag nemonik/helloworld-web k3d-registry.nemonik.com:5000/nemonik/helloworld-web
docker push k3d-registry.nemonik.com:5000/nemonik/helloworld-web
Command line output will be
Using default tag: latest
The push refers to repository [k3d-registry.nemonik.com:5000/nemonik/helloworld-web]
c55ef32b361b: Pushed
latest: digest: sha256:4306742c9211c49fc7363ba859ec921ff29292aca4678aa15b2fe7f67885fcba size: 528
Update the project's Makefile
nvim Makefile
Update `.PHONY` and replace `docker-build` with `docker-push` in the `all` target line like so:
.PHONY: all clean fmt lint test sonar build run docker-build docker-push
all: sonar docker-push
Below `docker-build` insert the `docker-push` rule

docker-push: docker-build
	docker tag nemonik/helloworld-web k3d-registry.nemonik.com:5000/nemonik/helloworld-web
	docker push k3d-registry.nemonik.com:5000/nemonik/helloworld-web
And then run in the command line
make docker-push
The output will be something like
The container registry image shipped by Docker does not provide a GUI, but we can verify the push by querying the catalog of the private registry through a web browser or the Unix command line tool `curl`, entering into the command line
curl -X GET http://k3d-registry.nemonik.com:5000/v2/_catalog
Returns in the command line
{
"repositories": [
"bitnami/postgresql",
"busybox",
"drone/drone-runner-kube",
"drone/kubernetes-secrets",
"mitre/heimdall2",
"nemonik/drone",
"nemonik/helloworld-web",
"nginx",
"plantuml/plantuml-server",
"postgres",
"rabbitmq",
"redis",
"sameersbn/gitlab",
"sameersbn/postgresql",
"sonarqube",
"taigaio/taiga-back",
"taigaio/taiga-events",
"taigaio/taiga-front",
"taigaio/taiga-protected",
"traefik"
]
}
The pretty print form of this looks like so

curl -s -X GET http://k3d-registry.nemonik.com:5000/v2/_catalog | npx prettier --parser json
{
"repositories": [
"bitnami/postgresql",
"busybox",
"drone/drone-runner-kube",
"drone/kubernetes-secrets",
"golang",
"mitre/heimdall2",
"nemonik/drone",
"nemonik/helloworld-web",
"nginx",
"plantuml/plantuml-server",
"postgres",
"rabbitmq",
"redis",
"sameersbn/gitlab",
"sameersbn/postgresql",
"sonarqube",
"taigaio/taiga-back",
"taigaio/taiga-events",
"taigaio/taiga-front",
"taigaio/taiga-protected",
"traefik"
]
}
`npx` will auto install `prettier` for you.
Quite a few container images we've got there.
To list the tags the private registry holds for the `helloworld-web` container image enter
curl -X GET http://k3d-registry.nemonik.com:5000/v2/nemonik/helloworld-web/tags/list
Returns in the command line
{ "name": "nemonik/helloworld-web", "tags": ["latest"] }
The pretty print of this looks like
{
"name": "nemonik/helloworld-web",
"tags": ["latest"]
}
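Beyond listing tags, the registry's v2 API also lets you fetch a tag's manifest. For example, to retrieve the manifest for the `latest` tag (the `Accept` header requests the Docker v2 manifest format):

curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://k3d-registry.nemonik.com:5000/v2/nemonik/helloworld-web/manifests/latest

The response is JSON describing the image's config and layers; the registry also returns the image digest in the `Docker-Content-Digest` response header if you ask `curl` to show headers (e.g., with `-i`).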
As you did for the purpose of CI (Continuous Integration) of the prior application, you will need to configure Drone to perform CICD (a combination of Continuous Integration and Continuous Delivery) on the `helloworld-web` application.
Complete the following:
- Open Drone CI (e.g., https://drone.nemonik.com/) in your browser and authenticate through GitLab into Drone, if you need to.
- Then select `SYNC` and watch as the arrows chase each other for a bit before the `root/helloworld-web` repository shows up.
- Then click the `root/helloworld-web` repo and `ACTIVATE REPOSITORY` to enable Drone orchestration for the project.
- Then click the Drone logo in the upper left of the page to return home.
So, let's create our pipeline starting with a `sonarqube` step to update SonarQube with code quality scans automatically.
We'll need to build a container image to do this
cd $HOME/Development/workspace
mkdir -p golang-sonarqube-scanner
cd golang-sonarqube-scanner
git init --initial-branch=master
Then create a `Dockerfile` in `nvim`.
FROM k3d-registry.nemonik.com:5000/golang:1.16.5
MAINTAINER Michael Joseph Walsh <[email protected]>
RUN apt-get -y update
RUN apt-get -y install unzip
RUN wget -O /usr/local/sonar-scanner-cli-4.6.2.2472-linux.zip --no-check-certificate --no-cookies https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.6.2.2472-linux.zip; \
unzip /usr/local/sonar-scanner-cli-4.6.2.2472-linux.zip -d /usr/local; \
echo "https://sonar.nemonik.com" > /usr/local/sonar-scanner-4.6.2.2472-linux/conf/sonar-scanner.properties; \
rm /usr/local/sonar-scanner-cli-4.6.2.2472-linux.zip; \
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.41.1; \
go get -u golang.org/x/lint/golint
ENV PATH /usr/local/sonar-scanner-4.6.2.2472-linux/bin:$PATH
ENTRYPOINT ["kubectl"]
CMD ["--help"]
We then build, tag and push this container image into our local registry for use in a pipeline
docker build -t nemonik/golang-sonarqube-scanner:latest .
docker tag nemonik/golang-sonarqube-scanner:latest k3d-registry.nemonik.com:5000/nemonik/golang-sonarqube-scanner:latest
docker push k3d-registry.nemonik.com:5000/nemonik/golang-sonarqube-scanner:latest
The output will resemble
Now that we've built, tagged and pushed the container image into our registry, we can utilize it in a pipeline.
In the shell
cd ~/go/src/github.com/nemonik/helloworld-web
nvim .drone.yml
copy the content below to create our pipeline (`.drone.yml`) in our text editor
kind: pipeline
type: kubernetes
name: default

steps:
- name: sonarqube
  image: k3d-registry.nemonik.com:5000/nemonik/golang-sonarqube-scanner:latest
  commands:
  - export DRONESRC=`pwd`
  - export GOBIN=$GOPATH/bin
  - export PATH="$GOBIN:$PATH"
  - mkdir -p $GOPATH/src/github.com/nemonik
  - cd $GOPATH/src/github.com/nemonik
  - ln -s $DRONESRC helloworld-web
  - cd helloworld-web
  - golangci-lint run --out-format checkstyle > tests/reports/golangci-lint.xml || true
  - go test -v ./... -coverprofile=tests/reports/coverage.out || true
  - sonar-scanner || true
Things to note in the above
- This step uses the nemonik/golang-sonarqube-scanner:latest container image built on top of the golang:1.16.5 image to speed builds along.
- The following commands are a bit of filesystem juggling, so that the scan can be executed
  - export DRONESRC=`pwd`
  - export GOBIN=$GOPATH/bin
  - export PATH="$GOBIN:$PATH"
  - mkdir -p $GOPATH/src/github.com/nemonik
  - cd $GOPATH/src/github.com/nemonik
  - ln -s $DRONESRC helloworld-web
- What follows handles running the scan, absorbing errors as they arise so as to not break the build.
  - cd helloworld-web
  - golangci-lint run --out-format checkstyle > tests/reports/golangci-lint.xml || true
  - go test -v ./... -coverprofile=tests/reports/coverage.out || true
  - sonar-scanner || true
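A quick illustration of the || true idiom used above, in case it is unfamiliar (run it anywhere; it is not part of the pipeline): Drone fails a step when a command exits non-zero, and appending || true absorbs that non-zero exit so the step carries on
false; echo $?          # prints 1 -- on its own this would fail the step
false || true; echo $?  # prints 0 -- the failure is absorbed and the step continues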
To execute your pipeline, push your changes to GitLab
git add .
git commit -m "added sonar step to pipeline"
git push origin master
And then monitor the progress of the root/helloworld-web
repository build (e.g., https://drone.nemonik.com/root/helloworld-web ) in Drone CI.
The pipeline should execute in a few minutes.
Success typically resembles
+ export DRONESRC=/drone/src
+ export GOBIN=$GOPATH/bin
+ export PATH="$GOBIN:$PATH"
+ mkdir -p $GOPATH/src/github.com/nemonik
+ cd $GOPATH/src/github.com/nemonik
+ ln -s $DRONESRC helloworld-web
+ cd helloworld-web
+ golangci-lint run --out-format checkstyle > tests/reports/golangci-lint.xml || true
+ go test -v ./... -coverprofile=tests/reports/coverage.out || true
=== RUN TestLogRequest
--- PASS: TestLogRequest (0.00s)
=== RUN TestHandler
--- PASS: TestHandler (0.00s)
PASS
coverage: 55.6% of statements
ok github.com/nemonik/helloworld-web 0.004s coverage: 55.6% of statements
+ sonar-scanner || true
INFO: Scanner configuration file: /usr/local/sonar-scanner-4.6.2.2472-linux/conf/sonar-scanner.properties
INFO: Project root configuration file: /go/src/github.com/nemonik/helloworld-web/sonar-project.properties
INFO: SonarScanner 4.6.2.2472
INFO: Java 11.0.11 AdoptOpenJDK (64-bit)
INFO: Linux 5.12.15-arch1-1 amd64
INFO: User cache: /root/.sonar/cache
INFO: Scanner configuration file: /usr/local/sonar-scanner-4.6.2.2472-linux/conf/sonar-scanner.properties
INFO: Project root configuration file: /go/src/github.com/nemonik/helloworld-web/sonar-project.properties
INFO: Analyzing on SonarQube server 8.5.1
INFO: Default locale: "en_US", source code encoding: "US-ASCII" (analysis is platform dependent)
INFO: Load global settings
INFO: Load global settings (done) | time=105ms
INFO: Server id: EA8D9556-AXq_eICKOsCp9_iMkQ-O
INFO: User cache: /root/.sonar/cache
INFO: Load/download plugins
INFO: Load plugins index
INFO: Load plugins index (done) | time=40ms
INFO: Load/download plugins (done) | time=1704ms
INFO: Process project properties
INFO: Process project properties (done) | time=8ms
INFO: Execute project builders
INFO: Execute project builders (done) | time=1ms
INFO: Project key: helloworld-web
INFO: Base dir: /go/src/github.com/nemonik/helloworld-web
INFO: Working dir: /go/src/github.com/nemonik/helloworld-web/.scannerwork
INFO: Load project settings for component key: 'helloworld-web'
INFO: Load project settings for component key: 'helloworld-web' (done) | time=14ms
INFO: Load quality profiles
INFO: Load quality profiles (done) | time=55ms
INFO: Auto-configuring with CI 'DroneCI'
INFO: Load active rules
INFO: Load active rules (done) | time=624ms
INFO: Indexing files...
INFO: Project configuration:
INFO: Excluded sources: **/**_test.go
INFO: Included tests: **/**_test.go
INFO: 6 files indexed
INFO: 8 files ignored because of inclusion/exclusion patterns
INFO: 2 files ignored because of scm ignore settings
INFO: Quality profile for go: Sonar way
INFO: ------------- Run sensors on module helloworld-web
INFO: Load metrics repository
INFO: Load metrics repository (done) | time=22ms
INFO: Sensor CSS Rules [cssfamily]
INFO: No CSS, PHP, HTML or VueJS files are found in the project. CSS analysis is skipped.
INFO: Sensor CSS Rules [cssfamily] (done) | time=1ms
INFO: Sensor JaCoCo XML Report Importer [jacoco]
INFO: 'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
INFO: No report imported, no coverage information will be imported by JaCoCo XML Report Importer
INFO: Sensor JaCoCo XML Report Importer [jacoco] (done) | time=2ms
INFO: Sensor SonarGo [go]
INFO: 1 source files to be analyzed
INFO: Load project repositories
INFO: Load project repositories (done) | time=20ms
INFO: Sensor SonarGo [go] (done) | time=178ms
INFO: Sensor Go Cover sensor for Go coverage [go]
INFO: 1/1 source files have been analyzed
INFO: Load coverage report from '/go/src/github.com/nemonik/helloworld-web/tests/reports/coverage.out'
INFO: Sensor Go Cover sensor for Go coverage [go] (done) | time=12ms
INFO: Sensor Import of GolangCI-Lint issues [go]
INFO: Importing /go/src/github.com/nemonik/helloworld-web/tests/reports/golangci-lint.xml
INFO: Sensor Import of GolangCI-Lint issues [go] (done) | time=48ms
INFO: Sensor C# Properties [csharp]
INFO: Sensor C# Properties [csharp] (done) | time=1ms
INFO: Sensor JavaXmlSensor [java]
INFO: Sensor JavaXmlSensor [java] (done) | time=1ms
INFO: Sensor HTML [web]
INFO: Sensor HTML [web] (done) | time=3ms
INFO: Sensor VB.NET Properties [vbnet]
INFO: Sensor VB.NET Properties [vbnet] (done) | time=3ms
INFO: ------------- Run sensors on project
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=1ms
INFO: CPD Executor Calculating CPD for 1 file
INFO: CPD Executor CPD calculation finished (done) | time=7ms
INFO: Analysis report generated in 53ms, dir size=84 KB
INFO: Analysis report compressed in 14ms, zip size=12 KB
INFO: Analysis report uploaded in 21ms
INFO: ANALYSIS SUCCESSFUL, you can browse https://sonar.nemonik.com/dashboard?id=helloworld-web
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at https://sonar.nemonik.com/api/ce/task?id=AXrAARniOsCp9_iMkVvv
INFO: Analysis total time: 2.719 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 6.088s
INFO: Final Memory: 13M/68M
INFO: ------------------------------------------------------------------------
On your host, open SonarQube (e.g., https://sonar.nemonik.com/dashboard?id=helloworld-web) to view the results.
Add a build step to our .drone.yml
- name: build
image: k3d-registry.nemonik.com:5000/golang:1.16.5
commands:
- make build
To execute your pipeline, push your changes to GitLab
git add .
git commit -m "added build step to pipeline"
git push origin master
Open the helloworld-web
repository in Drone CI (e.g., https://drone.nemonik.com/root/helloworld-web) to monitor progress.
Output for build
will resemble
+ make build
CGO_ENABLED=0 GOOS=linux go build -a -o helloworld-web -v
internal/unsafeheader
runtime/internal/sys
runtime/internal/atomic
sync/atomic
unicode/utf8
internal/race
internal/cpu
math/bits
unicode
runtime/internal/math
container/list
crypto/internal/subtle
crypto/subtle
unicode/utf16
vendor/golang.org/x/crypto/cryptobyte/asn1
internal/nettrace
vendor/golang.org/x/crypto/internal/subtle
internal/bytealg
math
runtime
internal/reflectlite
sync
internal/singleflight
math/rand
internal/testlog
errors
sort
internal/oserror
path
io
strconv
vendor/golang.org/x/net/dns/dnsmessage
syscall
bytes
strings
hash
crypto/internal/randutil
crypto/hmac
hash/crc32
vendor/golang.org/x/crypto/hkdf
crypto
reflect
crypto/rc4
vendor/golang.org/x/text/transform
bufio
internal/syscall/execenv
internal/syscall/unix
time
context
io/fs
internal/poll
os
internal/fmtsort
encoding/binary
crypto/sha512
crypto/md5
encoding/base64
crypto/ed25519/internal/edwards25519
crypto/sha1
crypto/sha256
crypto/cipher
fmt
encoding/pem
path/filepath
net
vendor/golang.org/x/crypto/poly1305
vendor/golang.org/x/crypto/chacha20
crypto/des
crypto/aes
io/ioutil
vendor/golang.org/x/sys/cpu
encoding/hex
net/url
compress/flate
math/big
log
vendor/golang.org/x/crypto/curve25519
vendor/golang.org/x/crypto/chacha20poly1305
vendor/golang.org/x/text/unicode/norm
vendor/golang.org/x/text/unicode/bidi
vendor/golang.org/x/net/http2/hpack
mime
mime/quotedprintable
net/http/internal
vendor/golang.org/x/text/secure/bidirule
compress/gzip
vendor/golang.org/x/net/idna
crypto/dsa
encoding/asn1
crypto/elliptic
crypto/rand
crypto/ed25519
crypto/rsa
vendor/golang.org/x/net/http/httpproxy
net/textproto
crypto/x509/pkix
vendor/golang.org/x/crypto/cryptobyte
vendor/golang.org/x/net/http/httpguts
mime/multipart
crypto/ecdsa
crypto/x509
crypto/tls
net/http/httptrace
net/http
github.com/nemonik/helloworld-web
This mirrors what you saw when building in your local development environment.
Add the publish step to your .drone.yml
at the root of the project (e.g., ~/go/src/github.com/nemonik/helloworld-web
), so that the container image is published to the private registry via the pipeline. The publish:
step must be indented the same as the prior build:
step.
- name: publish
image: plugins/docker
settings:
storage_driver: overlay
insecure: true
registry: k3d-registry.nemonik.com:5000
repo: k3d-registry.nemonik.com:5000/nemonik/helloworld-web
force_tag: true
tags:
- latest
This step makes use of plugins/docker
container to publish the nemonik/helloworld-web:latest
container image to the private registry.
To execute your pipeline, push your changes to GitLab
git add .
git commit -m "added publish step to pipeline"
git push origin master
The publish step will resemble:
+ /usr/local/bin/dockerd --data-root /var/lib/docker --host=unix:///var/run/docker.sock -s overlay --insecure-registry k3d-registry.nemonik.com:5000
Registry credentials or Docker config not provided. Guest mode enabled.
+ /usr/local/bin/docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:22:56 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:30:32 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
+ /usr/local/bin/docker info
Client:
Debug Mode: false
WARNING: the overlay storage-driver is deprecated, and will be removed in a future release.
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.8
Storage Driver: overlay
Backing Filesystem: <unknown>
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.25-linuxkit
Operating System: Alpine Linux v3.11 (containerized)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 11.7GiB
Name: drone-8izahbzb7st7t3p47if3
ID: 2DDX:JAVW:WSPE:IG3X:OKZ3:LADJ:PKRP:5Y24:75SZ:CHOY:APEW:BGLW
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
k3d-registry.nemonik.com:5000
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
+ /usr/local/bin/docker build --rm=true -f Dockerfile -t c785abd3ebefc230f16639fdd097cc5749b5a5ba . --pull=true --label org.opencontainers.image.created=2021-07-14T18:55:28Z --label org.opencontainers.image.revision=c785abd3ebefc230f16639fdd097cc5749b5a5ba --label org.opencontainers.image.source=https://gitlab.nemonik.com/root/helloworld-web.git --label org.opencontainers.image.url=https://gitlab.nemonik.com/root/helloworld-web
Sending build context to Docker daemon 9.569MB
Step 1/10 : FROM scratch
--->
Step 2/10 : MAINTAINER Michael Joseph Walsh <[email protected]>
---> Running in 0aaa9e9588b9
Removing intermediate container 0aaa9e9588b9
---> 2ce1cbce7459
Step 3/10 : WORKDIR /
---> Running in 7ce3d8c12dda
Removing intermediate container 7ce3d8c12dda
---> 0ac43f84fbe5
Step 4/10 : ADD helloworld-web /
---> b89429f76ed9
Step 5/10 : ENTRYPOINT ["/helloworld-web"]
---> Running in 59892a4b3d61
Removing intermediate container 59892a4b3d61
---> f323865fc000
Step 6/10 : EXPOSE 3000
---> Running in ac9fae305851
Removing intermediate container ac9fae305851
---> de314d9fd246
Step 7/10 : LABEL org.opencontainers.image.created=2021-07-14T18:55:28Z
---> Running in a815ad6dd55b
Removing intermediate container a815ad6dd55b
---> 3ff4bc34e8c9
Step 8/10 : LABEL org.opencontainers.image.revision=c785abd3ebefc230f16639fdd097cc5749b5a5ba
---> Running in ff848a7c8893
Removing intermediate container ff848a7c8893
---> b49387938b30
Step 9/10 : LABEL org.opencontainers.image.source=https://gitlab.nemonik.com/root/helloworld-web.git
---> Running in 4ab39639728b
Removing intermediate container 4ab39639728b
---> 3d309bc20ca2
Step 10/10 : LABEL org.opencontainers.image.url=https://gitlab.nemonik.com/root/helloworld-web
---> Running in cefed426070b
Removing intermediate container cefed426070b
---> 8045a038e955
Successfully built 8045a038e955
Successfully tagged c785abd3ebefc230f16639fdd097cc5749b5a5ba:latest
+ /usr/local/bin/docker tag c785abd3ebefc230f16639fdd097cc5749b5a5ba k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest
+ /usr/local/bin/docker push k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest
The push refers to repository [k3d-registry.nemonik.com:5000/nemonik/helloworld-web]
d4bb74436830: Preparing
d4bb74436830: Pushed
latest: digest: sha256:b2be86d945694498eb78c1eb3cb239ca3e94b56cac8fc10d0f886dbe086e3ab6 size: 528
+ /usr/local/bin/docker rmi c785abd3ebefc230f16639fdd097cc5749b5a5ba
Untagged: c785abd3ebefc230f16639fdd097cc5749b5a5ba:latest
+ /usr/local/bin/docker system prune -f
Total reclaimed space: 0B
Indicating the publish
step executed successfully.
This step is typically painfully slow if your container image isn't optimized (ours is), as it leverages Docker-in-Docker to perform its tasks.
Let's deploy the helloworld-web
to the cluster. First we need to author the necessary Kubernetes resource files to declare the desired state of the application on the cluster.
Kubernetes accomplishes this declaratively through the creating or applying of Kubernetes resource files.
In your shell
cd ~/go/src/github.com/nemonik/helloworld-web
mkdir kubernetes
cd kubernetes
nvim helloworld-web-namespace.yml
and add the following YAML content
---
apiVersion: v1
kind: Namespace
metadata:
name: helloworld-web
And another file, helloworld-web.yml
with this content
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: helloworld-web
app.kubernetes.io/instance: helloworld-web
app.kubernetes.io/name: helloworld-web
name: helloworld-web
namespace: helloworld-web
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/component: helloworld-web
app.kubernetes.io/instance: helloworld-web
app.kubernetes.io/name: helloworld-web
sessionAffinity: None
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-web
namespace: helloworld-web
labels:
app.kubernetes.io/component: helloworld-web
app.kubernetes.io/instance: helloworld-web
app.kubernetes.io/name: helloworld-web
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/component: helloworld-web
app.kubernetes.io/instance: helloworld-web
app.kubernetes.io/name: helloworld-web
template:
metadata:
labels:
app.kubernetes.io/component: helloworld-web
app.kubernetes.io/instance: helloworld-web
app.kubernetes.io/name: helloworld-web
spec:
containers:
- name: helloworld-web
image: "k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 3000
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
restartPolicy: Always
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls: "true"
name: helloworld-web
namespace: helloworld-web
spec:
rules:
- host: helloworld.nemonik.com
http:
paths:
- backend:
service:
name: helloworld-web
port:
name: http
path: /
pathType: Prefix
The two YAML files describe the Namespace, Service, Deployment and Ingress (summarized below, with a quick kubectl inspection sketch to follow) where
- Namespace provides a scope for other Kubernetes resources.
- Service is a REST object that exposes an application running on a set of Pods as a network service.
- Deployment deploys your Pods declaratively.
- Ingress is an API object that controls external access to the Service.
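If you'd like to see how these resource kinds map onto the running cluster once you've applied the files below (a quick inspection only, not required for the exercise), kubectl can list them per namespace
kubectl get namespace helloworld-web
kubectl get service,deployment,ingress,pods -n helloworld-web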
NOTES
- YAML files typically follow a naming convention where they end in either .yaml or .yml. Pay close attention to what you type. The class utilizes .yml, whereas my muscle memory favors typing .yaml.
Executing
kubectl apply -f helloworld-web-namespace.yml 2> /dev/null
kubectl delete -f helloworld-web.yml 2> /dev/null
kubectl apply -f helloworld-web.yml
kubectl wait --for=condition=ready pod -n helloworld-web -l app.kubernetes.io/component=helloworld-web
Will deploy the helloworld-web
application with the following output
namespace/helloworld-web created
service/helloworld-web created
deployment.apps/helloworld-web created
ingress.networking.k8s.io/helloworld-web created
pod/helloworld-web-6785bbf496-l92qr condition met
Then in a browser open
https://helloworld.nemonik.com
or in your shell enter
curl https://helloworld.nemonik.com
Either way, your deployment will return
Hello world!
Let's add a deploy rule to our Makefile
cd ~/go/src/github.com/nemonik/helloworld-web
nvim Makefile
Update the .PHONY
and all
rule to
.PHONY: all clean fmt lint test sonar build run docker-build docker-push deploy
all: sonar deploy
Then add to the end of the Makefile
deploy: docker-push
kubectl apply -f kubernetes/helloworld-web-namespace.yml 2> /dev/null
kubectl delete -f kubernetes/helloworld-web.yml 2> /dev/null
kubectl apply -f kubernetes/helloworld-web.yml
kubectl wait --for=condition=ready pod -n helloworld-web -l app.kubernetes.io/component=helloworld-web --timeout=180s
Execute the make deploy
make deploy
Output will resemble
Adding a deploy
step to our Drone pipeline to perform the deployment of the nemonik/helloworld-web:latest
container image is a bit more involved.
First, we need to create a service account so that we can automate the deployment.
In your shell
cd ~/go/src/github.com/nemonik/helloworld-web/kubernetes
nvim helloworld-web-service-account.yml
create a file with the following content
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helloworld-web-service-account
namespace: helloworld-web
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: helloworld-web-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: helloworld-web-service-account
namespace: helloworld-web
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: helloworld-web-role
namespace: helloworld-web
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: helloworld-web-rolebinding
namespace: helloworld-web
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: helloworld-web-role
subjects:
- namespace: helloworld-web
kind: ServiceAccount
name: helloworld-web-service-account
Then apply the Kubernetes resources to the cluster by
kubectl apply -f helloworld-web-service-account.yml
Success is rather anticlimactic, but will resemble
serviceaccount/helloworld-web-service-account created
clusterrolebinding.rbac.authorization.k8s.io/helloworld-web-admin created
role.rbac.authorization.k8s.io/helloworld-web-role created
rolebinding.rbac.authorization.k8s.io/helloworld-web-rolebinding created
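If you want to sanity-check that the binding took effect before wiring it into Drone (optional; the answer should come back yes given the cluster-admin binding above), you can impersonate the service account with kubectl
kubectl -n helloworld-web get serviceaccount helloworld-web-service-account
kubectl auth can-i delete deployment -n helloworld-web --as=system:serviceaccount:helloworld-web:helloworld-web-service-account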
But we're not done. More still to do...
NOTE
- In this case we created our service account in the namespace of our application. If you delete this namespace, you delete the service account... and invalidate the secrets you create in Drone CI in the following step.
In order to deploy from our pipeline we will need to add a few secrets to our pipeline to utilize our service account.
To start, we need to get the helloworld-web service account token
set helloworld_web_service_account_token (kubectl get serviceaccount -n helloworld-web --context k3d-hands-on-devops-class helloworld-web-service-account -o jsonpath='{.secrets[0].name}')
kubectl get secret $helloworld_web_service_account_token --context k3d-hands-on-devops-class -n helloworld-web -o jsonpath='{.data.token}' | base64 -d
The above is what one would do in the fish shell. If you decided to stick with a shell you are more familiar with, like Bash or Zsh, the following is what you are looking for
helloworld_web_service_account_token=`kubectl get serviceaccount -n helloworld-web --context k3d-hands-on-devops-class helloworld-web-service-account -o jsonpath='{.secrets[0].name}'`
kubectl get secret $helloworld_web_service_account_token --context k3d-hands-on-devops-class -n helloworld-web -o jsonpath='{.data.token}' | base64 -d
The result returned will be the k8s_token
that resembles
eyJhbGciOiJSUzI1NiIsImtpZCI6Im8wX1JsMUZnSXNnUmY5X1lsSk5KcGh6TWN5S1MxTUJVVUsxNWI0VHgzaWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJoZWxsb3dvcmxkLXdlYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJoZWxsb3dvcmxkLXdlYi1zZXJ2aWNlLWFjY291bnQtdG9rZW4tOHN2enciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiaGVsbG93b3JsZC13ZWItc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzJhY2RjY2QtYmU1Yy00NWU3LTkxZGEtMWZjNzU2ZGUzYjcyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmhlbGxvd29ybGQtd2ViOmhlbGxvd29ybGQtd2ViLXNlcnZpY2UtYWNjb3VudCJ9.GvyRkXOejhakwDF18ZuaTQt2lmwGvEPqDlJ0uubAov34d7XrCyAWujluPKtn1D4C-teQWuTuV52u2Xu3CyvYU3fU3ztz-k8rwA6rM77fPzdhk0yZ1tnkcZG6i_Kcv0p4_B0RAg9MQYz70S_XFYJhjp8-aqRjXuJ76-hfyWxECvghjehXX5tsT19kgPE9QXBotRQlfvWqOVEsE_hlOnjJUI3CKvm-T2faVCgiK00P6osDtjtVGbvHd4oAO1s9iK8aXOFBuxhcFOfTowRsESMcriMHT7enuTcpaaZ-TE55kJFz3LUnH4PCF0Nn4By77KcrJL1ngJTQr2wyMy-jhr0tAA
Then, no matter what shell you are using (e.g., Bash, Zsh, fish), retrieve the k8s_cert
by
kubectl get secret $helloworld_web_service_account_token --context k3d-hands-on-devops-class -n helloworld-web -o jsonpath='{.data.ca\.crt}'
The value returned will resemble
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTWpZek1URTFNRE13SGhjTk1qRXdOekUxTURFeE1UUXpXaGNOTXpFd056RXpNREV4TVRRegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTWpZek1URTFNRE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRUWsyTGJiNkN3Y3FWZnZVTSswU1BxNDU5K1o2c1J2S1NJTFArbFk4S3IKUGhwYmtzaWU0cG5panhyNUlIeTdOQXVuc0hTZGVPbUpaQWRZbHpmRDgyZVdvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUk3WnMzeThBM05XWnNiWkNOTnVQCjhvSG9hOXd3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUtIZ3FvV1NGQU1UcmpZWlEycm00RW9mUDB6NjdwdXEKRGM2YWduRmhWTTlHQWlFQXZmUTFsTERiZ0RaOUZBUE1UWXlaOVRiVlZ5TGRNaVM5S0U5R0JodUhHWXM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
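If you want to double-check that what you just copied really is the cluster's CA certificate and not the token (an easy mix-up the notes below warn about), you can decode it locally; this assumes openssl is available on your host
kubectl get secret $helloworld_web_service_account_token --context k3d-hands-on-devops-class -n helloworld-web -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -enddate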
So, with the k8s_cert
and k8s_token
values above
- Open Drone CI and edit the secrets for
helloworld-web
(e.g., https://drone.nemonik.com/root/helloworld-web/settings). - Click on
Secrets
on the left-hand side. - Click
+ NEW SECRET
and in the Create a New Secret
pop-up enter k8s_cert
for the name and your value from above for the value. In the case above I would enter the following, but your value will be different.
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJlRENDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTWpZek1URTFNRE13SGhjTk1qRXdOekUxTURFeE1UUXpXaGNOTXpFd056RXpNREV4TVRRegpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTWpZek1URTFNRE13V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFRUWsyTGJiNkN3Y3FWZnZVTSswU1BxNDU5K1o2c1J2S1NJTFArbFk4S3IKUGhwYmtzaWU0cG5panhyNUlIeTdOQXVuc0hTZGVPbUpaQWRZbHpmRDgyZVdvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUk3WnMzeThBM05XWnNiWkNOTnVQCjhvSG9hOXd3Q2dZSUtvWkl6ajBFQXdJRFNRQXdSZ0loQUtIZ3FvV1NGQU1UcmpZWlEycm00RW9mUDB6NjdwdXEKRGM2YWduRmhWTTlHQWlFQXZmUTFsTERiZ0RaOUZBUE1UWXlaOVRiVlZ5TGRNaVM5S0U5R0JodUhHWXM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- Click
+ NEW SECRET
and in the Create a New Secret
pop-up enter k8s_token
and enter your value from above. In the case above I would enter the following, but your value will be different.
eyJhbGciOiJSUzI1NiIsImtpZCI6Im8wX1JsMUZnSXNnUmY5X1lsSk5KcGh6TWN5S1MxTUJVVUsxNWI0VHgzaWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJoZWxsb3dvcmxkLXdlYiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJoZWxsb3dvcmxkLXdlYi1zZXJ2aWNlLWFjY291bnQtdG9rZW4tOHN2enciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiaGVsbG93b3JsZC13ZWItc2VydmljZS1hY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzJhY2RjY2QtYmU1Yy00NWU3LTkxZGEtMWZjNzU2ZGUzYjcyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmhlbGxvd29ybGQtd2ViOmhlbGxvd29ybGQtd2ViLXNlcnZpY2UtYWNjb3VudCJ9.GvyRkXOejhakwDF18ZuaTQt2lmwGvEPqDlJ0uubAov34d7XrCyAWujluPKtn1D4C-teQWuTuV52u2Xu3CyvYU3fU3ztz-k8rwA6rM77fPzdhk0yZ1tnkcZG6i_Kcv0p4_B0RAg9MQYz70S_XFYJhjp8-aqRjXuJ76-hfyWxECvghjehXX5tsT19kgPE9QXBotRQlfvWqOVEsE_hlOnjJUI3CKvm-T2faVCgiK00P6osDtjtVGbvHd4oAO1s9iK8aXOFBuxhcFOfTowRsESMcriMHT7enuTcpaaZ-TE55kJFz3LUnH4PCF0Nn4By77KcrJL1ngJTQr2wyMy-jhr0tAA
NOTES
- When copying these values out of your fish shell be mindful not to copy the last character (⏎).
- And don't copy the token into the
k8s_cert
secret and vice versa.
To deploy our helloworld-web application to our cluster we will use a container. Sinlead provides a Drone plugin (i.e., a container) to do this, sinlead/drone-kubectl, which I've patched to build from the latest bitnami/kubectl:1.21.1 container image and to direct its initialization output to /dev/null, so that this unexpected output doesn't break the build.
So, let's retrieve my fork (i.e., when a developer clones a project and starts independent development on it).
Let's change directories in the shell
cd $HOME/Development/workspace
git clone https://github.com/nemonik/drone-kubectl.git
cd drone-kubectl
It is always a good idea to inspect the Dockerfile by
cat Dockerfile
Whose output will resemble
FROM bitnami/kubectl:1.21.1
LABEL maintainer "Michael Joseph Walsh <[email protected]>"
LABEL base-on-the-work-of "Sinlead <[email protected]>"
COPY init-kubectl kubectl /opt/sinlead/kubectl/bin/
USER root
ENV PATH="/opt/sinlead/kubectl/bin:$PATH"
ENTRYPOINT ["kubectl"]
CMD ["--help"]
Sinlead wraps kubectl with a bash script so it can access the secrets we've set for our pipeline, so it is also a good idea to review the init-kubectl and kubectl scripts copied into /opt/sinlead/kubectl/bin/. I've made some slight changes to his work in my fork.
We're going to build this container
docker build -t nemonik/drone-kubectl:1.21.1 .
Then tag it and add it to our private container registry
docker tag nemonik/drone-kubectl:1.21.1 k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1
docker push k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1
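Optionally, you can smoke test the freshly pushed image before using it in the pipeline; since its entrypoint is kubectl, asking for the client version is enough to prove the image runs
docker run --rm k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1 version --client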
Now that the service account has been created, the secrets have been added, and we have the container image we'll use to automate the kubectl
command, let's create the deploy
step.
Let's add a step to our pipeline to deploy our helloworld-web application to our Kubernetes cluster
cd ~/go/src/github.com/nemonik/helloworld-web/
nvim .drone.yml
And add the following step at the bottom
- name: deploy
image: k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1
settings:
kubernetes_cert:
from_secret: k8s_cert
kubernetes_token:
from_secret: k8s_token
commands:
- kubectl delete -f kubernetes/helloworld-web.yml 2> /dev/null
- kubectl apply -f kubernetes/helloworld-web.yml
- kubectl wait --for=condition=ready pod -n helloworld-web -l app.kubernetes.io/component=helloworld-web
To execute your pipeline, push your changes to GitLab
git add .drone.yml
git commit -m "added deploy step to pipeline"
git push origin master
Open the root/helloworld-web
repository pipeline (e.g., https://drone.nemonik.com/root/helloworld-web) in Drone CI to monitor.
The pipeline should complete in a few minutes.
The deploy
step output will resemble
+ kubectl delete -f kubernetes/helloworld-web.yml 2> /dev/null
service "helloworld-web" deleted
deployment.apps "helloworld-web" deleted
ingress.networking.k8s.io "helloworld-web" deleted
+ kubectl apply -f kubernetes/helloworld-web.yml
service/helloworld-web created
deployment.apps/helloworld-web created
ingress.networking.k8s.io/helloworld-web created
+ kubectl wait --for=condition=ready pod -n helloworld-web -l app.kubernetes.io/component=helloworld-web
pod/helloworld-web-6785bbf496-qb45b condition met
pod/helloworld-web-6785bbf496-jhg8v condition met
Then in a browser open
https://helloworld.nemonik.com
or in your shell enter
curl https://helloworld.nemonik.com
Either way, your deployment will return
Hello world!
So, now we have the beginnings of a real CICD pipeline. There are no strings on me... err, you.
NOTES
-
If your
deploy
step fails with default: deploy - Error
you likely skipped building, tagging and pushing the nemonik/drone-kubectl:1.21.1
container image in the prior step. You can debug by looking at the drone-runner-kube
logs. This pod is used to run your pipeline.
kubectl logs -n drone -l app.kubernetes.io/component=drone-runner-kube
Look for something like
time="2021-07-17T21:55:09Z" level=warning msg="Engine: Container start timeout" build.id=4 build.number=4 container=drone-1do79imnrgsthw59fm21 error="kubernetes error: container failed to start in timely manner: id=drone-1do79imnrgsthw59fm21" image="k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1" placeholder="drone/placeholder:1" pod=drone-l3ejd197yd9sghsaqxbp repo.id=1 repo.name=helloworld-web repo.namespace=root stage.id=4 stage.name=default stage.number=1 step=deploy step.name=deploy thread=54
near the end as an indication.
-
You can also rerun your pipeline in debug mode. In your build, click on the hamburger (three dots in a vertical row) found in the upper right side of the page and select
Debug
from the drop down. Your pipeline will re-run up to where it fails and then pause until you open a shell into the running container to debug by clicking OPEN REMOTE SESSION
on the lower-right side of the page. You may have to add executables to your PATH, etc, but you can debug the step to your heart's content.
First let me switch gears into discussing DevSecOps.
The Dev part of DevOps is short for development (i.e., the application developers) and the Ops part is short for, well, "every technology operational stakeholder (e.g., network engineers, administrators, testers, and, why yes, security engineers)." DevSecOps is a specialization of DevOps focused on embedding security thought and collaboration into your team's culture when working an application's software development life cycle (SDLC), often expressed as security-as-code. If you're doing DevOps correctly, you're also inherently performing the methods and repeated practices of DevSecOps. In DevOps, every technology operational stakeholder discipline must be included in the team without needing to fork DevOps to include said discipline. We don't need DevNetOps, nor do we need DevTestOps, nor do we need DevShempOps...
So, let us add some security-as-code to our project.
Authoring InSpec tests permits you to author compliance-as-code, a form of security-as-code, thereby turning compliance, security, and other policy requirements into automated tests.
InSpec has very robust support for both the container and its host; see https://lollyrock.com/posts/inspec-for-docker/ for more information on using InSpec with Docker.
More on InSpec in general can be found here
And these specific sections are relevant to Docker:
https://www.inspec.io/docs/reference/resources/docker_container/
https://www.inspec.io/docs/reference/resources/docker/
https://www.inspec.io/docs/reference/resources/docker_image/
A really good tutorial focused on InSpec can be found here:
http://www.anniehedgie.com/inspec-basics-1
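As a side note on the Docker-focused resources above, InSpec can also point its target directly at a running container, so a profile can be executed against a container rather than the local host; the profile path and container id below are placeholders, and the class exercise that follows targets the Kubernetes deployment via kubectl instead
docker ps --format '{{.ID}}  {{.Image}}'   # find a running container id
inspec exec <profile-directory> -t docker://<container-id>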
Back in your shell at the root of the helloworld-web
project (i.e., ~/go/src/github.com/nemonik/helloworld-web
), we'll initialize an InSpec profile to verify your container's compliance to policy and configuration guidance. Yep, you're "gonna" be a security engineer.
cd ~/go/src/github.com/nemonik/helloworld-web
mkdir -p tests/inspec
cd tests/inspec
inspec init profile helloworld-web
If asked, accept the product license by entering
yes
And the output from InSpec will resemble
─────────────────────────── InSpec Code Generator ───────────────────────────
Creating new profile at /Users/nemonik/go/src/github.com/nemonik/helloworld-web/inspec/helloworld-web
• Creating file README.md
• Creating directory controls
• Creating file controls/example.rb
• Creating file inspec.yml
Then complete the following
cd ~/go/src/github.com/nemonik/helloworld-web/tests/inspec/helloworld-web/controls/
rm example.rb
nvim helloworld-web.rb
Then copy in the contents of the test below
# copyright: 2021, Michael Joseph Walsh
title "k8s helloworld-web deployment tests"
input('namespace', value: 'helloworld-web')
input('helloworld_image', value: 'k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest')
control "helloworld-web-deployment-1" do
impact 1.0
title "Validate helloworld-web ingress"
describe "Ingress host is expected to be 'helloworld.nemonik.com'" do
subject { command("kubectl get ingresses -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.rules[0].host}'") }
its('stdout') { should cmp 'helloworld.nemonik.com' }
end
describe "Ingress is expected to route traffic to 'helloworld-web' service" do
subject { command("kubectl get ingresses -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.rules[0].http.paths[0].backend.service.name}'") }
its('stdout') { should cmp 'helloworld-web' }
end
describe "Ingress is expected to route traffic to service http port" do
subject { command("kubectl get ingresses -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.rules[0].http.paths[0].backend.service.port.name}'") }
its('stdout') { should cmp 'http' }
end
end
control "helloworld-web-deployment-2" do
impact 1.0
title "Validate helloworld-web service"
describe "Service port is expected to be named 'http'" do
subject { command("kubectl get service -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.ports[0].name}'") }
its('stdout') { should cmp "http" }
end
describe "Service is expected to listen on port '80'" do
subject { command("kubectl get service -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.ports[0].port}'") }
its('stdout') { should cmp "80" }
end
describe "The Service port is expected to be a TCP port" do
subject { command("kubectl get service -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.ports[0].protocol}'") }
its('stdout') { should cmp "TCP" }
end
describe "Service port is expected to target port 'http'" do
subject { command("kubectl get service -n " + input('namespace') + " helloworld-web -o=jsonpath='{.spec.ports[0].targetPort}'") }
its('stdout') { should cmp "http" }
end
end
control "helloworld-web-deployment-3" do
impact 1.0
title "Validate helloworld-web pod"
describe "Pod is running container 'k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest'" do
subject { command("kubectl get pods -n " + input('namespace') + " -l app.kubernetes.io/component=helloworld-web -o=jsonpath='{$.items[0].spec.containers[0].image}'") }
its('stdout') { should cmp "k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest" }
end
describe "Pod is running container whose port is 3000" do
subject { command(" kubectl get pods -n " + input('namespace') + " -l app.kubernetes.io/component=helloworld-web -o=jsonpath='{$.items[0].spec.containers[0].ports[0].containerPort}'") }
its('stdout') { should cmp "3000" }
end
describe "Pod is running container whose port is named 'http'" do
subject { command(" kubectl get pods -n " + input('namespace') + " -l app.kubernetes.io/component=helloworld-web -o=jsonpath='{$.items[0].spec.containers[0].ports[0].name}'") }
its('stdout') { should cmp "http" }
end
describe "Pod is running container whose port is named 'http'" do
subject { command(" kubectl get pods -n " + input('namespace') + " -l app.kubernetes.io/component=helloworld-web -o=jsonpath='{$.items[0].spec.containers[0].ports[0].name}'") }
its('stdout') { should cmp "http" }
end
describe "Pod is running container whose port is a TCP port" do
subject { command(" kubectl get pods -n " + input('namespace') + " -l app.kubernetes.io/component=helloworld-web -o=jsonpath='{$.items[0].spec.containers[0].ports[0].protocol}'") }
its('stdout') { should cmp "TCP" }
end
end
control "helloworld-web-deployment-4" do
impact 1.0
title "Validate helloworld-web deployment"
describe "Deployment ensure replica count is 1" do
subject { command("kubectl get deployment -n " + input('namespace') + " helloworld-web -o=jsonpath='{$.status.replicas}'") }
its('stdout') { should cmp "1" }
end
end
We'll execute the InSpec test of the Kubernetes deployment
cd ~/go/src/github.com/nemonik/helloworld-web/tests/inspec/helloworld-web/
inspec exec .
The output will resemble
Profile: InSpec Profile (helloworld-web)
Version: 0.1.0
Target: local://
✔ helloworld-web-deployment-1: Validate helloworld-web ingress
✔ Ingress host is expected to be 'helloworld.nemonik.com' stdout is expected to cmp == "helloworld.nemonik.com"
✔ Ingress is expected to route traffic to 'helloworld-web' service stdout is expected to cmp == "helloworld-web"
✔ Ingress is expected to route traffic to service http port stdout is expected to cmp == "http"
✔ helloworld-web-deployment-2: Validate helloworld-web service
✔ Service port is expected to be named 'http' stdout is expected to cmp == "http"
✔ Service is expected to listen on port '80' stdout is expected to cmp == "80"
✔ The Service port is expected to be a TCP port stdout is expected to cmp == "TCP"
✔ Service port is expected to target port 'http' stdout is expected to cmp == "http"
✔ helloworld-web-deployment-3: Validate helloworld-web pod
✔ Pod is running container 'k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest' stdout is expected to cmp == "k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest"
✔ Pod is running container whose port is 3000 stdout is expected to cmp == "3000"
✔ Pod is running container whose port is named 'http' stdout is expected to cmp == "http"
✔ Pod is running container whose port is named 'http' stdout is expected to cmp == "http"
✔ Pod is running container whose port is a TCP port stdout is expected to cmp == "TCP"
✔ helloworld-web-deployment-4: Validate helloworld-web deployment
✔ Deployment ensure replica count is 1 stdout is expected to cmp == "1"
Profile Summary: 4 successful controls, 0 control failures, 0 controls skipped
Test Summary: 14 successful, 0 failures, 0 skipped
We check each resource (Ingress, Service, and Deployment) declared in the deployment for conformance.
Add an InSpec rule to our Makefile
with
cd ~/go/src/github.com/nemonik/helloworld-web
nvim Makefile
Update the .PHONY
and all
rule
.PHONY: all clean fmt lint test sonar build run docker-build docker-push deploy inspec
all: sonar deploy inspec
And then at the bottom add
inspec:
inspec exec tests/inspec/helloworld-web/. --chef-license=accept-silent
Executing
make inspec
The results resemble
Now that we've preflighted our InSpec test from our host, let's add the test to our pipeline.
We'll build from the nemonik/drone-kubectl:1.21.1
container image we built earlier
Let's change directories and set up our project for this in your shell
cd $HOME/Development/workspace
mkdir drone-inspec
cd drone-inspec
git init --initial-branch=master
We'll create a Dockerfile
in nvim
FROM k3d-registry.nemonik.com:5000/nemonik/drone-kubectl:1.21.1
LABEL maintainer "Michael Joseph Walsh <[email protected]>"
USER root
RUN curl https://omnitruck.chef.io/install.sh | bash -s -- -P inspec
ENV PATH="/opt/sinlead/kubectl/bin:$PATH"
ENTRYPOINT ["inspec"]
CMD ["--help"]
The curl https://omnitruck.chef.io/install.sh | bash -s -- -P inspec
step installs InSpec as per https://docs.chef.io/inspec/install/.
Then build, tag and push the container into our private container registry
docker build -t nemonik/drone-inspec:latest .
docker tag nemonik/drone-inspec:latest k3d-registry.nemonik.com:5000/nemonik/drone-inspec:latest
docker push k3d-registry.nemonik.com:5000/nemonik/drone-inspec:latest
Output will resemble
Head back into our helloworld-web
project (i.e., ~/go/src/github.com/nemonik/helloworld-web/
)
cd ~/go/src/github.com/nemonik/helloworld-web/
nvim .drone.yml
Add the inspec
step at the bottom
- name: inspec
image: k3d-registry.nemonik.com:5000/nemonik/drone-inspec:latest
settings:
kubernetes_cert:
from_secret: k8s_cert
kubernetes_token:
from_secret: k8s_token
commands:
- inspec exec tests/inspec/helloworld-web/. --chef-license=accept-silent
To execute your pipeline, push your changes to GitLab
git add .
git commit -m "added inspec step to pipeline"
git push origin master
Head over to Drone and watch the helloworld-web
repository pipeline execute (e.g., https://drone.nemonik.com/root/helloworld-web).
Output of the inspec
step of the pipeline will resemble
+ cd tests/inspec/helloworld-web
+ inspec exec . --chef-license=accept-silent
Profile: InSpec Profile (helloworld-web)
Version: 0.1.0
Target: local://
 ✔ helloworld-web-deployment-1: Validate helloworld-web ingress
 ✔ Ingress host is expected to be 'helloworld.nemonik.com' stdout is expected to cmp == "helloworld.nemonik.com"
 ✔ Ingress is expected to route traffic to 'helloworld-web' service stdout is expected to cmp == "helloworld-web"
 ✔ Ingress is expected to route traffic to service http port stdout is expected to cmp == "http"
 ✔ helloworld-web-deployment-2: Validate helloworld-web service
 ✔ Service port is expected to be named 'http' stdout is expected to cmp == "http"
 ✔ Service is expected to listen on port '80' stdout is expected to cmp == "80"
 ✔ The Service port is expected to be a TCP port stdout is expected to cmp == "TCP"
 ✔ Service port is expected to target port 'http' stdout is expected to cmp == "http"
 ✔ helloworld-web-deployment-3: Validate helloworld-web pod
 ✔ Pod is running container 'k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest' stdout is expected to cmp == "k3d-registry.nemonik.com:5000/nemonik/helloworld-web:latest"
 ✔ Pod is running container whose port is 3000 stdout is expected to cmp == "3000"
 ✔ Pod is running container whose port is named 'http' stdout is expected to cmp == "http"
 ✔ Pod is running container whose port is named 'http' stdout is expected to cmp == "http"
 ✔ Pod is running container whose port is a TCP port stdout is expected to cmp == "TCP"
 ✔ helloworld-web-deployment-4: Validate helloworld-web deployment
 ✔ Deployment ensure replica count is 1 stdout is expected to cmp == "1"
Profile Summary: 4 successful controls, 0 control failures, 0 controls skipped
Test Summary: 14 successful, 0 failures, 0 skipped
NOTES
-
If your
inspec
step fails with default: inspec - Error
you likely skipped building, tagging and pushing the nemonik/drone-inspec:latest
container image in the prior step. You can debug by looking at the drone-runner-kube
logs. This pod is used to run your pipeline.
kubectl logs -n drone -l app.kubernetes.io/component=drone-runner-kube
Look for something like
time="2021-07-17T22:14:57Z" level=warning msg="Engine: Container start timeout" build.id=5 build.number=5 container=drone-lret11jqiuakvomlap51 error="kubernetes error: container failed to start in timely manner: id=drone-lret11jqiuakvomlap51" image="k3d-registry.nemonik.com:5000/nemonik/drone-inspec:latest" placeholder="drone/placeholder:1" pod=drone-3a4ux4q4mj7z29j78hbr repo.id=1 repo.name=helloworld-web repo.namespace=root stage.id=5 stage.name=default stage.number=1 step=inspec step.name=inspec thread=8
near the end as an indication.
We'll use Heimdall 2 to view the results in a security engineer friendly manner.
cd ~/go/src/github.com/nemonik/helloworld-web
inspec exec --chef-license=accept-silent tests/inspec/helloworld-web/. --reporter json > tests/reports/inspec_helloworld.json
Open Heimdall 2 (e.g., http://heimdall.nemonik.com). You will need to authenticate and will be presented with an upload pane. Make sure LOCAL FILES
is selected, click Choose files to upload
, then browse to inspec_helloworld.json
in the class project (go/src/github.com/nemonik/helloworld-web/tests/reports/inspec_helloworld.json
) and upload to view the results.
The ./supplemental folder holds the results of two other InSpec tests that show off the power of Heimdall 2:
- ./supplemental/k3s-server-sample.json - a compliance scan against a K3s cluster as per the K8s STIG.
- ./supplemental/RHEL7-STIG-scan-sample.json - a compliance scan against a CentOS 7 VM as per the RHEL7 STIG.
Give uploading these into Heimdall 2 a try.
NOTES
- As we are using the Heimdall Enterprise Server 2 edition, we have a REST API available to upload the results from our pipeline. Maybe in a future version of my class, I'll add this.
Although functional testing, where a system is tested against its functional requirements, is by far the most expensive, most brittle and arguably least valuable in comparison to integration and unit testing, it still has its place in testing an application.
In this section, we're going to write an automated functional test for the helloworld-web
application instead of relying on a manual functional test. Why? Because the automated functional test can be repeatedly and reliably executed. The same cannot be said for tests written in English to be processed and executed by humans.
We're going to write our functional tests in Selenium, a portable software-testing framework for web applications. Essentially, Selenium automates web browsers.
More can be found here
You'll need a couple of shells open in your development environment to complete this section.
In a shell retrieve the selenium/standalone-chrome:3.141 container image, tag it, and push it into our private container registry.
docker pull selenium/standalone-chrome:3.141
docker tag selenium/standalone-chrome:3.141 k3d-registry.nemonik.com:5000/selenium/standalone-chrome:3.141
docker push k3d-registry.nemonik.com:5000/selenium/standalone-chrome:3.141
If you're using the fish shell
set host_ip (cat /tmp/host_ip)
If you're using Bash or Zsh shell
export host_ip="$(cat /tmp/host_ip)"
Then regardless of your shell, start the selenium/standalone-chrome:3.141 container
docker run --rm --shm-size=1g -p 4444:4444 --add-host helloworld.nemonik.com:$host_ip --name standalone-chrome k3d-registry.nemonik.com:5000/selenium/standalone-chrome:3.141
This spins up a headless Chrome browser you can programmatically drive, so we can use it to run our automated functional test. The container runs in the foreground so we can watch the log output.
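Before moving on, you can verify the Selenium server inside the container is actually listening by hitting its status endpoint from another shell (a quick check, not part of the class materials); it should return a small JSON document reporting it is ready
curl -s http://localhost:4444/wd/hub/status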
NOTES
- The --add-host helloworld.nemonik.com:$host_ip
parameter is needed, because the container will not know how to resolve the helloworld.nemonik.com
domain. This is not a problem in the cluster as ./coredns/patch.sh patches CoreDNS so that containers running inside the cluster can resolve the domain.
A good start outputs to the command line like so
2021-07-18 13:46:22,105 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2021-07-18 13:46:22,107 INFO supervisord started with pid 8
2021-07-18 13:46:23,111 INFO spawned: 'xvfb' with pid 10
2021-07-18 13:46:23,114 INFO spawned: 'selenium-standalone' with pid 11
13:46:23.574 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
2021-07-18 13:46:23,580 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2021-07-18 13:46:23,580 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
13:46:23.722 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2021-07-18 13:46:23.808:INFO::main: Logging initialized @679ms to org.seleniumhq.jetty9.util.log.StdErrLog
13:46:24.261 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
13:46:24.421 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
Leave this shell running the container and move on...
Open another shell, so we can author and run our automated test.
In this other shell, we'll create a folder to hold our automated functional test like so
cd ~/go/src/github.com/nemonik/helloworld-web/tests/
mkdir selenium
cd selenium
We're going to write our test in Python. Python is already installed on your host, but Ansible automation also installed pyenv as well as Python 3.9.6. To check
pyenv versions
Output will resemble
system
3.9.5
* 3.9.6 (set by /Users/mjwalsh/.pyenv/version)
The one marked with an asterisk (*) is the presently configured Python
python --version
Will verify we'll be using Python 3.9.6
as well.
So, now that that's taken care of, we're going to use Poetry to create our project, manage our dependencies and run our Selenium test.
Perform the following
poetry new func-test-helloworld-web
cd func-test-helloworld-web
Output will resemble
Created package func_test_helloworld_web in func-test-helloworld-web
Indicating Poetry created a func-test-helloworld-web
folder under ~/go/src/github.com/nemonik/helloworld-web/tests/selenium
along with a virtualenv for the project that will resemble this folder structure
func-test-helloworld-web
├── README.rst
├── func_test_helloworld_web
│ └── __init__.py
├── pyproject.toml
└── tests
├── __init__.py
└── test_func_test_helloworld_web.py
Poetry handles retrieving and adding the selenium==3.141
dependency via
poetry add selenium==3.141
Output will resemble
Creating virtualenv functional-pSxRwJgh-py3.9 in /Users/mjwalsh/Library/Caches/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies... (0.2s)
Writing lock file
Package operations: 10 installs, 0 updates, 0 removals
• Installing pyparsing (2.4.7)
• Installing attrs (21.2.0)
• Installing more-itertools (8.8.0)
• Installing packaging (21.0)
• Installing pluggy (0.13.1)
• Installing py (1.10.0)
• Installing urllib3 (1.26.6)
• Installing wcwidth (0.2.5)
• Installing pytest (5.4.3)
• Installing selenium (3.141.0)
Poetry will create a virtualenv for the project, retrieve the dependencies into it and update the project's pyproject.toml
file
[tool.poetry]
name = "func-test-helloworld-web"
version = "0.1.0"
description = ""
authors = ["Administrator <[email protected]>"]
[tool.poetry.dependencies]
python = "^3.9"
selenium = "3.141"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
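If you're curious where Poetry placed the project's virtualenv and what it pulled in (purely informational; your paths will differ), Poetry can report both
poetry env info --path   # prints the virtualenv's location
poetry show              # lists the installed dependencies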
In your shell
cd func_test_helloworld_web/
nvim test.py
And copy the following Python source code into nvim
# Copyright (C) 2021 Michael Joseph Walsh - All Rights Reserved
# You may use, distribute and modify this code under the
# terms of the the license.
#
# You should have received a copy of the license with
# this file. If not, please email <[email protected]>
""" helloworld-web selenium test """
import logging
import unittest
import os
import socket
import time
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
class HelloworldWebTest(unittest.TestCase):
"""helloworld-web selenium test"""
def setUp(self):
"""Executed each time a test runs to setup the browser session"""
selenium_executor_url = (
"http://" + SELENIUM_HOST + ":" + SELENIUM_PORT + "/wd/hub"
)
# Wait til container starts with browser
while True:
try:
print("Trying to connect to %s" % (selenium_executor_url))
selenium_socket = socket.socket()
selenium_socket.connect((SELENIUM_HOST, int(SELENIUM_PORT)))
print("Connected.")
break
except Exception as exception:
print("Failed to connect. Exception is %s" % (exception))
time.sleep(5)
finally:
selenium_socket.close()
self.browser = webdriver.Remote(
command_executor=selenium_executor_url,
desired_capabilities=DesiredCapabilities.CHROME,
)
self.browser.set_window_size(1928, 1288)
self.addCleanup(self.browser.quit)
def test_a_helloworld_web(self):
"""Tests for `Hello world!` to be returned."""
self.browser.get(HELLOWORLD_WEB_URL)
assert "Hello world!" in self.browser.page_source
if __name__ == "__main__":
try:
SELENIUM_HOST = os.environ["SELENIUM_HOST"]
logging.info("SELENIUM_HOST=%s" % SELENIUM_HOST)
except:
raise Exception("SELENIUM_HOST environment variable not set.")
try:
SELENIUM_PORT = os.environ["SELENIUM_PORT"]
logging.info("SELENIUM_PORT=%s" % SELENIUM_PORT)
except:
raise exception("SELENIUM_PORT environment variable not set.")
try:
HELLOWORLD_WEB_URL = os.environ["HELLOWORLD_WEB_URL"]
logging.info("HELLOWORLD_WEB_URL=%s" % HELLOWORLD_WEB_URL)
except:
raise exception("HELLOWORLD_WEB_URL environment variable not set.")
# So that tests are fired in the order declared, as the delete project test is dependent on the
# create project test
unittest.TestLoader.sortTestMethodsUsing = None
unittest.main(
warnings="ignore", # This removes socket warnings from being displayed
failfast=True, # Since subsequent tests are required to have passed
verbosity=2, # See verbose output. see: https://docs.python.org/2/library/unittest.html
)
Save the file and exit. Yep. All this code to execute an automated functional test for the return of the text Hello world!. They're not cheap. Top-of-the-test-pyramid expensive, so use them wisely.
For now, if you're using the fish shell, let us run our test by entering into the command line
set -x SELENIUM_HOST localhost
set -x SELENIUM_PORT 4444
set -x HELLOWORLD_WEB_URL https://helloworld.nemonik.com
poetry run python func_test_helloworld_web/test.py -v
If you're using Bash or Zsh perform the following
export SELENIUM_HOST=localhost
export SELENIUM_PORT=4444
export HELLOWORLD_WEB_URL=https://helloworld.nemonik.com
poetry run python func_test_helloworld_web/test.py -v
Successful command line output in this window will be
Tests for `Hello world!` to be returned. ... Trying to connect to http://localhost:4444/wd/hub
Connected.
ok
----------------------------------------------------------------------
Ran 1 test in 1.188s
OK
The selenium/standalone-chrome:3.141 container running in the other shell will output
16:37:26.608 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "chrome",
"version": ""
}
16:37:26.608 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
Starting ChromeDriver 91.0.4472.101 (af52a90bf87030dd1523486a1cd3ae25c5d76c9b-refs/branch-heads/4472@{#1462}) on port 27517
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1626626246.616][SEVERE]: bind() failed: Cannot assign requested address (99)
16:37:27.059 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
16:37:27.059 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session fd7024d1503fffdd7865d41056e0166f (org.openqa.selenium.chrome.ChromeDriverService)
16:37:27.354 INFO [ActiveSessions$1.onStop] - Removing session fd7024d1503fffdd7865d41056e0166f (org.openqa.selenium.chrome.ChromeDriverService)
In your shell
First pull down, tag and push
docker pull python:3.9.6
docker tag python:3.9.6 k3d-registry.nemonik.com:5000/nemonik/python:3.9.6
docker push k3d-registry.nemonik.com:5000/nemonik/python:3.9.6
Then we'll need to create a container to execute our selenium
step.
cd ~/Development/workspace/
mkdir python-poetry
cd python-poetry
git init --initial-branch=master
nvim Dockerfile
And copy the following into it
FROM k3d-registry.nemonik.com:5000/nemonik/python:3.9.6
LABEL maintainer "Michael Joseph Walsh <[email protected]>"
RUN pip install --upgrade pip && \
pip install poetry
Now to build, tag, and push the container to our private container registry
docker build -t nemonik/poetry:latest .
docker tag nemonik/poetry:latest k3d-registry.nemonik.com:5000/nemonik/poetry:latest
docker push k3d-registry.nemonik.com:5000/nemonik/poetry:latest
Output will resemble
[+] Building 20.7s (6/6) FINISHED
=> [internal] load build definition from Dockerfile
=> => transferring dockerfile: 226B
=> [internal] load .dockerignore
=> => transferring context: 2B
=> [internal] load metadata for k3d-registry.nemonik.com:5000/nemonik/python:3.9.6
=> [1/2] FROM k3d-registry.nemonik.com:5000/nemonik/python:3.9.6
=> [2/2] RUN pip install --upgrade pip && pip install poetry
=> exporting to image
=> => exporting layers
=> => writing image sha256:f738b305093af2bfef3a5e3aae1f568bb77e3d366b3eaa3ad7152d57a7ad3d1a
=> => naming to docker.io/nemonik/poetry:latest
The push refers to repository [k3d-registry.nemonik.com:5000/nemonik/poetry]
9c75e3ccdc52: Pushed
cd6b2a9ae627: Mounted from nemonik/python
84c97f2e3099: Mounted from nemonik/python
b0cb6a43f300: Mounted from nemonik/python
4b4c002ee6ca: Mounted from nemonik/python
cdc9dae211b4: Mounted from nemonik/python
7095af798ace: Mounted from nemonik/python
fe6a4fdbedc0: Mounted from nemonik/python
e4d0e810d54a: Mounted from nemonik/python
4e006334a6fd: Mounted from nemonik/python
latest: digest: sha256:8fc54ec8f4326f7cf521ba05306d01ffe4105d2865a6cbe357f007a6b9df381d size: 2429
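Optionally, before wiring the image into the pipeline, you can sanity-check that Poetry is installed in it by running a throwaway container:
# Run a disposable container from the image we just pushed and print the Poetry version
docker run --rm k3d-registry.nemonik.com:5000/nemonik/poetry:latest poetry --version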
Edit the .drone.yml file at the root of your helloworld-web project and add the following selenium step, the shared_memory volume, and a chrome service.
- name: selenium
  image: k3d-registry.nemonik.com:5000/nemonik/poetry:latest
  commands:
  - cd tests/selenium/func-test-helloworld-web
  - export SELENIUM_HOST=localhost
  - export SELENIUM_PORT=4444
  - export HELLOWORLD_WEB_URL=https://helloworld.nemonik.com
  - poetry install
  - poetry run python func_test_helloworld_web/test.py -v

services:
- name: chrome
  image: k3d-registry.nemonik.com:5000/selenium/standalone-chrome:3.141
  volumes:
  - name: shared_memory
    path: /dev/shm

volumes:
# the empty temp volume the chrome service mounts at /dev/shm so Chrome has enough shared memory
- name: shared_memory
  temp: {}
- name: docker
  host:
    path: /var/run/docker.sock
NOTE

services: is not part of the prior steps, because it is not a step, but an enumeration of services, so be careful when you edit the pipeline. Drone CI uses the services: section to spin up a patched version of k3d-registry.nemonik.com:5000/selenium/standalone-chrome:3.141 exposed with the name chrome.
But before you execute the pipeline, the repository needs to be updated to be a trusted repository, as the service will need access to /var/run/docker.sock.
- Open the helloworld-web repository in Drone CI (e.g., https://drone.nemonik.com/root/helloworld-web/settings).
- Under Project Settings toggle Trusted so that it is enabled (i.e., blue) to enable privileged container settings.
- Click SAVE CHANGES.
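Before committing, it can save you a failed build to check that the edited .drone.yml still parses as YAML. One quick way (assuming Python 3 with the PyYAML package is available on your host) is:
# Parse every document in .drone.yml; any indentation or syntax mistake raises an error here
python3 -c "import yaml; list(yaml.safe_load_all(open('.drone.yml'))); print('YAML OK')"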
Let's kick off the pipeline by committing our code
git add .drone.yml
git commit -m "added selenium step"
git push origin master
Successful output for the selenium step will resemble
+ cd tests/selenium/func-test-helloworld-web
+ export SELENIUM_HOST=localhost
+ export SELENIUM_PORT=4444
+ export HELLOWORLD_WEB_URL=https://helloworld.nemonik.com
+ poetry install
Creating virtualenv func-test-helloworld-web-y9XfTYFJ-py3.9 in /root/.cache/pypoetry/virtualenvs
Installing dependencies from lock file
Package operations: 10 installs, 0 updates, 0 removals
• Installing pyparsing (2.4.7)
• Installing attrs (21.2.0)
• Installing more-itertools (8.8.0)
• Installing packaging (21.0)
• Installing pluggy (0.13.1)
• Installing py (1.10.0)
• Installing urllib3 (1.26.6)
• Installing wcwidth (0.2.5)
• Installing pytest (5.4.3)
• Installing selenium (3.141.0)
Installing the current project: func-test-helloworld-web (0.1.0)
+ poetry run python func_test_helloworld_web/test.py -v
test_a_helloworld_web (__main__.HelloworldWebTest)
Tests for `Hello world!` to be returned. ... ok
----------------------------------------------------------------------
Trying to connect to http://localhost:4444/wd/hub
Connected.
Ran 1 test in 3.226s
OK
If you click on the chrome service for this build, you will see output similar to the pre-flight you executed
2021-07-18 19:06:04,997 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
2021-07-18 19:06:04,998 INFO supervisord started with pid 9
2021-07-18 19:06:06,000 INFO spawned: 'xvfb' with pid 11
2021-07-18 19:06:06,003 INFO spawned: 'selenium-standalone' with pid 12
19:06:06.445 INFO [GridLauncherV3.parse] - Selenium server version: 3.141.59, revision: e82be7d358
2021-07-18 19:06:06,448 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2021-07-18 19:06:06,448 INFO success: selenium-standalone entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
19:06:06.616 INFO [GridLauncherV3.lambda$buildLaunchers$3] - Launching a standalone Selenium Server on port 4444
2021-07-18 19:06:06.687:INFO::main: Logging initialized @662ms to org.seleniumhq.jetty9.util.log.StdErrLog
19:06:07.140 INFO [WebDriverServlet.<init>] - Initialising WebDriverServlet
19:06:07.319 INFO [SeleniumServer.boot] - Selenium Server is up and running on port 4444
19:07:37.541 INFO [ActiveSessionFactory.apply] - Capabilities are: {
"browserName": "chrome",
"version": ""
}
19:07:37.547 INFO [ActiveSessionFactory.lambda$apply$11] - Matched factory org.openqa.selenium.grid.session.remote.ServicedSession$Factory (provider: org.openqa.selenium.chrome.ChromeDriverService)
Starting ChromeDriver 91.0.4472.101 (af52a90bf87030dd1523486a1cd3ae25c5d76c9b-refs/branch-heads/4472@{#1462}) on port 28657
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
19:07:39.901 INFO [ProtocolHandshake.createSession] - Detected dialect: W3C
19:07:39.948 INFO [RemoteSession$Factory.lambda$performHandshake$0] - Started new session 157a1a06b309ecdf759c5a8e8af838c9 (org.openqa.selenium.chrome.ChromeDriverService)
19:07:40.291 INFO [ActiveSessions$1.onStop] - Removing session 157a1a06b309ecdf759c5a8e8af838c9 (org.openqa.selenium.chrome.ChromeDriverService)
NOTE

- If your build fails outright with the message default: linter: untrusted repositories cannot mount host volumes, you have forgotten to enable Trusted for the root/helloworld-web repository in Drone CI. You can go back, do that, and then restart the build by clicking on the hamburger (the icon with three dots in a vertical line) to open a drop down and selecting RESTART.
Dynamic application security testing (DAST) is used to detect security vulnerabilities in an application while it is running, so as to help you remediate these concerns while still in development. Again, this is another example of "thinking about application and infrastructure security from the start."
The OWASP Zed Attack Proxy (ZAP) is one of the world’s most popular free DAST tools, actively maintained by hundreds of international volunteers, so we'll add a step to test the application with it.
We'll pull the owasp/zap2docker-stable:2.8.0 container image from Docker Hub, then tag and push the container image into our private registry
docker pull owasp/zap2docker-stable:2.8.0
docker tag owasp/zap2docker-stable:2.8.0 k3d-registry.nemonik.com:5000/owasp/zap2docker-stable:2.8.0
docker push k3d-registry.nemonik.com:5000/owasp/zap2docker-stable:2.8.0
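Before running the scan, you may want to confirm the application is still up and answering (this assumes the helloworld-web deployment from earlier is running and helloworld.nemonik.com resolves on your host):
# Expect the Hello world! response back from the running application
curl -fsS http://helloworld.nemonik.com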
If you're using the fish shell
set host_ip (cat /tmp/host_ip)
If you're using either Bash or Zsh
export host_ip="$(cat /tmp/host_ip)"
Regardless of your shell, you then run
docker run --rm --add-host helloworld.nemonik.com:$host_ip k3d-registry.nemonik.com:5000/owasp/zap2docker-stable:2.8.0 zap-baseline.py -t http://helloworld.nemonik.com
OWASP ZAP will take some time to run as it works to find security vulnerabilities in our running application, so give it time.
When finished, its output will resemble
2021-07-18 19:56:18,253 Params: ['zap-x.sh', '-daemon', '-port', '56269', '-host', '0.0.0.0', '-config', 'api.disablekey=true', '-config', 'api.addrs.addr.name=.*', '-config', 'api.addrs.addr.regex=true', '-config', 'spider.maxDuration=1', '-addonupdate', '-addoninstall', 'pscanrulesBeta']
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Jul 18, 2021 7:56:22 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 6 URLs
PASS: Cookie No HttpOnly Flag [10010]
PASS: Cookie Without Secure Flag [10011]
PASS: Incomplete or No Cache-control and Pragma HTTP Header Set [10015]
PASS: Web Browser XSS Protection Not Enabled [10016]
PASS: Cross-Domain JavaScript Source File Inclusion [10017]
PASS: Content-Type Header Missing [10019]
PASS: X-Frame-Options Header Scanner [10020]
PASS: X-Content-Type-Options Header Missing [10021]
PASS: Information Disclosure - Debug Error Messages [10023]
PASS: Information Disclosure - Sensitive Information in URL [10024]
PASS: Information Disclosure - Sensitive Information in HTTP Referrer Header [10025]
PASS: HTTP Parameter Override [10026]
PASS: Information Disclosure - Suspicious Comments [10027]
PASS: Open Redirect [10028]
PASS: Cookie Poisoning [10029]
PASS: User Controllable Charset [10030]
PASS: User Controllable HTML Element Attribute (Potential XSS) [10031]
PASS: Viewstate Scanner [10032]
PASS: Directory Browsing [10033]
PASS: Heartbleed OpenSSL Vulnerability (Indicative) [10034]
PASS: Strict-Transport-Security Header Scanner [10035]
PASS: HTTP Server Response Header Scanner [10036]
PASS: Server Leaks Information via "X-Powered-By" HTTP Response Header Field(s) [10037]
PASS: Content Security Policy (CSP) Header Not Set [10038]
PASS: X-Backend-Server Header Information Leak [10039]
PASS: Secure Pages Include Mixed Content [10040]
PASS: HTTP to HTTPS Insecure Transition in Form Post [10041]
PASS: HTTPS to HTTP Insecure Transition in Form Post [10042]
PASS: User Controllable JavaScript Event (XSS) [10043]
PASS: Big Redirect Detected (Potential Sensitive Information Leak) [10044]
PASS: Retrieved from Cache [10050]
PASS: X-ChromeLogger-Data (XCOLD) Header Information Leak [10052]
PASS: Cookie Without SameSite Attribute [10054]
PASS: CSP Scanner [10055]
PASS: X-Debug-Token Information Leak [10056]
PASS: Username Hash Found [10057]
PASS: X-AspNet-Version Response Header Scanner [10061]
PASS: PII Scanner [10062]
PASS: Timestamp Disclosure [10096]
PASS: Hash Disclosure [10097]
PASS: Cross-Domain Misconfiguration [10098]
PASS: Weak Authentication Method [10105]
PASS: Reverse Tabnabbing [10108]
PASS: Absence of Anti-CSRF Tokens [10202]
PASS: Private IP Disclosure [2]
PASS: Session ID in URL Rewrite [3]
PASS: Script Passive Scan Rules [50001]
PASS: Insecure JSF ViewState [90001]
PASS: Charset Mismatch [90011]
PASS: Application Error Disclosure [90022]
PASS: Loosely Scoped Cookie [90033]
FAIL-NEW: 0 FAIL-INPROG: 0 WARN-NEW: 0 WARN-INPROG: 0 INFO: 0 IGNORE: 0 PASS: 51
Great, now let's add another step to our pipeline after the selenium step, but before the services: block.
- name: owasp-zap
  image: k3d-registry.nemonik.com:5000/owasp/zap2docker-stable:2.8.0
  commands:
  - zap-baseline.py -t http://helloworld.nemonik.com || true
NOTE

- Again, add this new step right after the last step, but before the services: block.
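A quick aside on the || true at the end of the command: it keeps this step from ever failing the build, no matter what ZAP reports. If you would rather let genuine findings break the build while tolerating warnings, one possible sketch (assuming the exit-code convention documented for zap-baseline.py, where 0 means clean, 1 means at least one FAIL, and 2 means warnings only) is:
# Swallow only the "warnings, no failures" exit code (2); a FAIL (1) or any
# other error still fails the step
zap-baseline.py -t http://helloworld.nemonik.com || [ $? -eq 2 ]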
To execute your pipeline, push your changes to GitLab
git add .
git commit -m "added owasp-zap step to the pipeline"
git push origin master
Open your root/helloworld-web
repository (e.g., https://drone.nemonik.com/root/helloworld-web) in Drone CI and monitor the progress of the build. The pipeline should execute in a few minutes.
Successful output for this stage resembles the prior output
2021-07-18 19:56:18,253 Params: ['zap-x.sh', '-daemon', '-port', '56269', '-host', '0.0.0.0', '-config', 'api.disablekey=true', '-config', 'api.addrs.addr.name=.*', '-config', 'api.addrs.addr.regex=true', '-config', 'spider.maxDuration=1', '-addonupdate', '-addoninstall', 'pscanrulesBeta']
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Jul 18, 2021 7:56:22 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 6 URLs
PASS: Cookie No HttpOnly Flag [10010]
PASS: Cookie Without Secure Flag [10011]
PASS: Incomplete or No Cache-control and Pragma HTTP Header Set [10015]
PASS: Web Browser XSS Protection Not Enabled [10016]
PASS: Cross-Domain JavaScript Source File Inclusion [10017]
PASS: Content-Type Header Missing [10019]
PASS: X-Frame-Options Header Scanner [10020]
PASS: X-Content-Type-Options Header Missing [10021]
PASS: Information Disclosure - Debug Error Messages [10023]
PASS: Information Disclosure - Sensitive Information in URL [10024]
PASS: Information Disclosure - Sensitive Information in HTTP Referrer Header [10025]
PASS: HTTP Parameter Override [10026]
PASS: Information Disclosure - Suspicious Comments [10027]
PASS: Open Redirect [10028]
PASS: Cookie Poisoning [10029]
PASS: User Controllable Charset [10030]
PASS: User Controllable HTML Element Attribute (Potential XSS) [10031]
PASS: Viewstate Scanner [10032]
PASS: Directory Browsing [10033]
PASS: Heartbleed OpenSSL Vulnerability (Indicative) [10034]
PASS: Strict-Transport-Security Header Scanner [10035]
PASS: HTTP Server Response Header Scanner [10036]
PASS: Server Leaks Information via "X-Powered-By" HTTP Response Header Field(s) [10037]
PASS: Content Security Policy (CSP) Header Not Set [10038]
PASS: X-Backend-Server Header Information Leak [10039]
PASS: Secure Pages Include Mixed Content [10040]
PASS: HTTP to HTTPS Insecure Transition in Form Post [10041]
PASS: HTTPS to HTTP Insecure Transition in Form Post [10042]
PASS: User Controllable JavaScript Event (XSS) [10043]
PASS: Big Redirect Detected (Potential Sensitive Information Leak) [10044]
PASS: Retrieved from Cache [10050]
PASS: X-ChromeLogger-Data (XCOLD) Header Information Leak [10052]
PASS: Cookie Without SameSite Attribute [10054]
PASS: CSP Scanner [10055]
PASS: X-Debug-Token Information Leak [10056]
PASS: Username Hash Found [10057]
PASS: X-AspNet-Version Response Header Scanner [10061]
PASS: PII Scanner [10062]
PASS: Timestamp Disclosure [10096]
PASS: Hash Disclosure [10097]
PASS: Cross-Domain Misconfiguration [10098]
PASS: Weak Authentication Method [10105]
PASS: Reverse Tabnabbing [10108]
PASS: Absence of Anti-CSRF Tokens [10202]
PASS: Private IP Disclosure [2]
PASS: Session ID in URL Rewrite [3]
PASS: Script Passive Scan Rules [50001]
PASS: Insecure JSF ViewState [90001]
PASS: Charset Mismatch [90011]
PASS: Application Error Disclosure [90022]
PASS: Loosely Scoped Cookie [90033]
FAIL-NEW: 0 FAIL-INPROG: 0 WARN-NEW: 0 WARN-INPROG: 0 INFO: 0 IGNORE: 0 PASS: 51
Our application is relatively simple, so it was doubtful anything would be found.
The completed helloworld-web project can be viewed at https://github.com/nemonik/helloworld-web-gen2
This class doesn't cover a number of container application development best practices. That topic is out of scope of the original intention of this course, especially as I'm already cramming several days of material into a one-day course when taught in person, but perhaps in subsequent course updates I'll cover a few of the following as additional topics. The biggest reason why relates to the following sections. Agile and DevOps both exist to deliver features into the hands of users. We're not doing DevOps to do DevOps. If all anyone talks about is DevOps in the absence of the application life cycle, you have a problem. Also, DevOps is very much intertwined with modern cloud-native development.
With that, here are some best practices for containerized application development and operation:
- Follow https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ and http://www.projectatomic.io/docs/docker-image-author-guidance/ guidance.
- Re-use existing upstream images from trusted sources.
- Avoid multiple processes executing in your own container images.
- Clean up temporary files, such as OS package repository caches, when creating your images.
- Avoid running the container’s process as root.
- Harden your Docker configuration as per an InSpec compliance profile, such as https://github.com/mitre/docker-ce-cis-baseline and https://github.com/dev-sec/cis-docker-benchmark or if you are using another container runtime either find one for the runtime or write your own compliance profile.
- Doing the prior item will require you to make use of a notary and private container registry (e.g., https://hub.docker.com/_/registry, https://hub.docker.com/r/sonatype/nexus3). I've written Ansible IaC to deploy Notary, and it was a real pain in the butt to figure out and took me countless hours, because the documentation is, to put it plainly, "Sh!t." It would seem they (i.e., whoever owns Docker Enterprise now) want you to use Docker Enterprise vice getting Notary up and running with Docker yourself.
- Put your application development through a CICD pipeline, like the one in this class, covering whichever of the following apply: code format enforcement, linting, static analysis, build automation, unit testing, compliance-as-code for the container image, automated functional testing, and dynamic analysis.
- Consider adding to your CICD pipelines the execution of vulnerability scanning tools, such as Clair, Docker Bench for Security, OpenSCAP Workbench, Anchore, et cetera. There will be overlap between these and other similar tools. Pick the ones that work best for you: ones with frequent updates and the largest, most vibrant communities around them.
You can now uninstall the cluster and the registries.
make uninstall
make uninstall-pullthrough
make uninstall-registry
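If you'd like to confirm everything was torn down (assuming these Makefile targets remove the k3d cluster and the two registries created earlier), you can ask k3d and Docker what's left:
# Expect empty lists once the uninstall targets have completed
k3d cluster list
k3d registry list
docker ps --filter "name=registry"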
And this ends my class.
This class is a labor of love (i.e., I'm not getting paid to author and maintain it). Please consider buying me a coffee.