Understanding Jenkins Pipelines - Part 1
Hello and welcome back to my blog! In today's post, we're going to explore the celebrated CI/CD automation tool - Jenkins! More specifically, Jenkins Pipelines. If you're relatively new in your career and looking for resources to ramp up, I hope this blog post is enough for you to get started and add Jenkins pipelines to your technical arsenal.
We'll be exploring the following topics:
- What is Jenkins
- What is a Jenkins Pipeline
- The Jenkinsfile
- Jenkins Pipeline Concepts
What is Jenkins?
Jenkins is a comprehensive, open-source automation server that is popular for Continuous Integration and Continuous Delivery (CI/CD) in software development. It builds and tests incremental changes to a codebase as they happen, which lets developers identify and fix bugs early and simplifies build validation. Jenkins is one of the most popular CI/CD tools because it automates the repetitive jobs that crop up throughout a project's life. It can manage the software delivery process across the entire lifecycle: build, documentation, test, package, staging, deployment, static code analysis and much more.
What is a Jenkins Pipeline?
A Jenkins Pipeline is nothing but a series of steps that outlines how your software makes its way from development and version control right through to your users and customers. This process is known as Continuous Delivery.
A pipeline typically consists of a series of steps, grouped into what are more commonly known as stages. You would typically see the following stages defined in a Jenkins CD pipeline:
- Build
- Test
- Stage
- Deploy / Release
Note that these stage names are arbitrary and Jenkins is fully customizable to enable you to run any number of stages and call them what you wish.
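As a quick preview of what such a pipeline looks like in practice, here is a minimal sketch of those four stages written in Jenkins's declarative syntax (the echo commands are placeholders standing in for real build, test and deploy commands):

```groovy
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Compiling and packaging the application...'
            }
        }
        stage('Test') {
            steps {
                echo 'Running the automated test suite...'
            }
        }
        stage('Stage') {
            steps {
                echo 'Deploying to the staging environment...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Releasing to production...'
            }
        }
    }
}
```

Don't worry about the syntax yet; each of these building blocks is explained in the sections below.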
The Jenkinsfile
The definition of the steps involved in a Jenkins pipeline is written into a file called the Jenkinsfile, which can be committed to a project's source control repository. This is the foundation of "Pipeline-as-code": treating the CD pipeline as part of the application, to be versioned and reviewed like any other code.
Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:
- Automatically creates a Pipeline build process for all branches and pull requests.
- Code review/iteration on the Pipeline (along with the remaining source code).
- Audit trail for the Pipeline.
- Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
While the syntax for defining a Pipeline, whether in the web UI or in a Jenkinsfile, is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that file into source control.
A Jenkinsfile can be written using two types of syntax - Declarative and Scripted.
Differences between declarative and scripted pipeline syntax
Declarative pipeline syntax is the newer of the two: it imposes a predefined, stricter structure (a top-level pipeline block containing agent, stages and steps sections), which makes pipelines easier to read, write and validate. Scripted pipeline syntax is a general-purpose Groovy DSL built around a node block, offering more flexibility at the cost of a steeper learning curve. If you'd like to see a worked example, visit: https://www.jenkins.io/doc/book/pipeline/#pipeline-example
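To make the contrast concrete, here is the same trivial one-stage job sketched in both syntaxes. Declarative wraps everything in a pipeline block with a fixed structure, while scripted is plain Groovy inside a node block:

```groovy
// Declarative: opinionated structure, validated by Jenkins
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Hello from Declarative'
            }
        }
    }
}
```

```groovy
// Scripted: general-purpose Groovy inside a node block
node {
    stage('Build') {
        echo 'Hello from Scripted'
    }
}
```

The rest of this post focuses mainly on the declarative syntax, which is the recommended starting point for new pipelines.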
Jenkins Pipeline Concepts
Now that you are aware of the different Jenkinsfile syntaxes, this section explores some fundamental concepts of the pipeline syntax.
Pipeline
A pipeline is a user-defined block that describes the whole delivery process: build, test, deploy and so on. All of the stages and steps are defined within this block. It is the fundamental building block of declarative pipeline syntax.
```groovy
pipeline {
    ...
}
```
Node
A node is a machine that is part of the Jenkins environment and is capable of executing a pipeline. The node block is the fundamental building block of scripted pipeline syntax.
```groovy
node {
}
```
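As a sketch, a small scripted pipeline places its stages inside the node block as ordinary method calls, and any Groovy (variables, loops, try/catch) can be used freely around them:

```groovy
// A minimal scripted pipeline: stages run in order on the
// node (agent) that Jenkins allocates for this block.
node {
    stage('Build') {
        echo 'Building...'
    }
    stage('Test') {
        echo 'Testing...'
    }
}
```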
Agent
The agent directive tells Jenkins where, and with which executor, to run the builds. Because work can be spread across multiple agents, a single Jenkins instance can execute multiple projects in parallel.
A single agent may be defined for a whole Jenkins pipeline, or different agents may be assigned to execute each stage within a pipeline. Some of the most commonly used agent parameters are:
- any: Runs the pipeline/stage on any available agent.
- none: Added at the root of the pipeline, this means there is no global agent for the entire pipeline, and each stage must define its own agent.
- label: Runs the pipeline/stage on an agent with the matching label.
- docker: Uses a Docker container as the execution environment for the pipeline or for a specific stage. For example, you can pull an Ubuntu image and run your commands inside it.
```groovy
pipeline {
    agent {
        docker {
            image 'ubuntu:18.04'
        }
    }
}
```
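Likewise, the label parameter ties execution to agents carrying a matching label. A sketch, assuming your Jenkins instance has agents labelled 'linux' (the label name here is just an example; it depends on how your agents are configured):

```groovy
pipeline {
    // Run only on agents that have been given the 'linux' label
    agent {
        label 'linux'
    }
    stages {
        stage('Build') {
            steps {
                echo 'Running on a linux-labelled agent'
            }
        }
    }
}
```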
Stages
This section contains all of the work that needs to be completed, defined in the form of stages. There may be one or more stages within this directive, and each stage performs a particular task.
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
        }
        stage('Test') {
        }
        stage('QA') {
        }
        stage('Deploy') {
        }
        stage('Monitoring') {
        }
    }
}
```
Steps
Within a stage block, the work is described as a series of steps, which are executed in sequence. A steps directive must contain at least one step.
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Running build phase.'
            }
        }
    }
}
```
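A steps block can also hold several steps, which run top to bottom. Here is a sketch mixing echo with the sh step, which runs a shell command on the agent (the Makefile target here is a made-up example; substitute your own build command):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Starting the build...'
                // 'sh' runs a shell command on a Unix-like agent
                sh 'make build'   // hypothetical Makefile target
                echo 'Build finished.'
            }
        }
    }
}
```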
And that's it for this blog post! I will explore how to create your own pipeline script in part 2 of this series. ☺️