Continuous Integration in 10 Minutes With Gitlab CI and Slack
One of our core practices in Application Services is the practice of continuous integration.
Continuous Integration (abbreviated CI) is an automated process of incorporating changes into a shared codebase, including the necessary steps to ensure that the changes are being incorporated successfully. Continuous Integration brings confidence, reproducibility, and consistency to the process of collaborating on a codebase as a team. The benefits of CI are so important, in fact, that the presence of a CI process is one of the established benchmarks for assessing the health of a team's technical processes within Application Services.
The essential steps of CI
The steps involved in the continuous integration process vary based on team and project needs. Generally speaking, however, there are 4 "essential steps":
- Accessing the codebase (e.g., cloning the repository)
- Building
- Testing
- Notifying the team of success or failure
Additional steps may include: automatic merging into a master or release branch, creation of build assets or artifacts, performance testing, or even a subsequent continuous delivery step (abbreviated CD), whereby the successfully incorporated changes from the CI phase are deployed to a production (or production-like) environment.
Though often an integral part of the automation process (and, indeed, half of the ubiquitous CI/CD abbreviation), there are limitations that certain teams may have that prevent continuous delivery from being part of their automation process. As such, continuous delivery is out of scope for now.
Steps 1-3 typically involve a series of scripts that run on a command line and, as such, require a computer somewhere in order to execute. There are a variety of options to facilitate this, including virtual machines, cloud computing instances, and of course dedicated physical computers (the ubiquitous "Jenkins box" provisioned by many in-person Application Services teams in pre-pandemic times). In the case of a physical computer, the fourth step of the CI process, team notification, can be as simple as a big red "failure" output on a widescreen monitor. In a virtual working environment, however, step 4 must leverage virtual-friendly communication tools to be successful.
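To make those steps concrete, here is a minimal sketch of the kind of script a CI machine might run on each new commit. Every name and URL in it is a placeholder rather than a value from this tutorial, and the build/test commands assume a Node.js project like the one used later on:

#!/usr/bin/env bash
# Minimal sketch of the four essential CI steps as shell commands.
# All names and URLs below are placeholders.

# 1. Access the codebase
git clone https://gitlab.example.com/your-team/your-project.git
cd your-project

# 2. Build and 3. test; for a Node.js project, installing dependencies
#    doubles as the build step
if npm install && npm test; then
  STATUS="passed"
else
  STATUS="failed"
fi

# 4. Notify the team of the result. Stubbed out here with an echo;
#    this could be email, a dashboard, or the Slack webhook covered below.
echo "CI run ${STATUS} for your-project"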
What Gitlab CI has to offer
While Gitlab does a wonderful job of storing project codebases, it also offers built-in Continuous Integration functionality, eponymously named Gitlab CI.
Gitlab CI offers a robust system for configuring CI pipelines, which run a customizable set of jobs. In this article, we'll walk through the configuration involved to add a Gitlab CI pipeline that supports three out of the four "Essential Steps" of the CI process listed above.
To complete the final step, we'll add a Slack Integration for a robust, remote-friendly notification mechanism. All of the configuration required for these tools really does come together in less than 10 minutes!
A small disclaimer: one key piece of time-saving infrastructure that we're leveraging with Gitlab CI is called a shared runner. To execute steps 1-3 above, Gitlab utilizes a utility known as a runner, which executes CI pipeline steps in a configurable environment. There are four options:
- Installing Gitlab's runner software on a dedicated VM or physical computer (a specific runner, manual setup)
- Using a Docker image via Kubernetes (a specific runner, with automatic setup)
- Using a shared runner, an environment shared across multiple projects, maintained at the Gitlab account level (at the time of writing, WWT maintains 4 shared runners)
- Using a group runner (available for projects that are part of a Gitlab group)
For projects hosted in Application Services' gitlab.asynchrony.com Gitlab instance, this shared runner option is ready out-of-the-box. For everyone else, there may be additional configuration required that could take more than 10 minutes altogether.
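If you do need to provision your own runner (option 1 above), the setup is done with Gitlab's gitlab-runner tool. As a rough sketch only: the URL and registration token below are placeholders, and newer Gitlab versions have shifted to a different token flow, so treat this as an outline of the classic registration rather than a copy-paste recipe.

# Install the gitlab-runner package for your platform first (per Gitlab's
# runner installation docs), then register it against your instance.
# The URL and registration token are placeholders.
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_REGISTRATION_TOKEN \
  --executor docker \
  --docker-image node:14.9.0 \
  --tag-list docker \
  --description "project-ci-runner"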
10 minutes to a fully functional CI Pipeline
This tutorial will walk through the process of "turning on" Gitlab CI for an existing project repository hosted on the Application Services Gitlab instance (gitlab.asynchrony.com), leveraging a shared runner (option 3 above).
After configuration, the subsequent sections will cover setup of the four commonly-used steps mentioned above (accessing the codebase, building, testing, notifying of success/failure).
By the end of the tutorial, we'll have the first iteration of a new CI pipeline, which can be iterated upon with additional steps or configuration based on your team requirements.
Our "existing project" for this tutorial is a small Node.js repo with a single test file: https://gitlab.asynchrony.com/beau.davenport/gitlab-ci-in-10-minutes
Let's set the stopwatch and get started!
0:00-1:49 — Accessing the codebase, building, testing
To start: clone the test repo.
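Assuming the standard clone URL for the repo linked above:

git clone https://gitlab.asynchrony.com/beau.davenport/gitlab-ci-in-10-minutes.git
cd gitlab-ci-in-10-minutes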
Then, create a new file in the root of the project called .gitlab-ci.yml. Paste in the following content:
unit_tests:
  tags:
    - docker
  image: node:14.9.0
  stage: test
  only:
    - master
  script:
    - npm install
    - npm test
This file contains almost all of the necessary configuration for Gitlab CI to run. Let's break down the parts:
The .gitlab-ci.yml file currently contains one job, unit_tests (more can always be added). Its keys are:
- tags specifies which shared runner to look for. As mentioned above, shared runners are maintained at the account level; our gitlab.asynchrony.com account has 4 shared runners, all tagged with the docker tag. Gitlab CI will find the first available shared runner with this matching tag and use it to pull and run the specified image.
- image specifies that the job should run in a docker container, and indicates which image to use.
- stage indicates which stage of the pipeline the job belongs to.
- only specifies a "whitelist" of git branches; only these branches will trigger a CI pipeline run.
- script is the "meat and potatoes" of the job: a list of shell commands to execute in sequence. Any error encountered in these commands will fail the job.
Commit this new file and push up the commit. Gitlab will detect this configuration file and use it automatically for running our first Continuous Integration pipeline.
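For example, assuming the default branch is master (to match the only configuration above):

git add .gitlab-ci.yml
git commit -m "Add Gitlab CI pipeline configuration"
git push origin master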
You can see the results of the now-configured pipeline run in Gitlab's dashboard here: https://gitlab.asynchrony.com/beau.davenport/gitlab-ci-in-10-minutes/-/pipelines. There will be a list of runs with a Status of passed or failed. Click on this Status to see the pipeline details screen for that pipeline run.
From the pipeline details screen, you can see more about the individual run. Note: this details screen is also navigable from the commit itself, which will be linked in our Slack integration (see below). Any jobs associated with the pipeline appear here, so we see the one job we have configured. Clicking on that job navigates to the job details screen.
The job details screen has additional information about the particular job, but perhaps most importantly, it provides terminal output. Here, we can see the actual results of the script configuration from the .gitlab-ci.yml file.
Having verified the output of our CI run, we can see that Gitlab was able to access the repo, build the project, and run our tests: 3 out of 4 essential CI steps in less than two minutes. Not bad!
1:49-3:59 — Add Slack integration to the project
The fourth step in our basic Continuous Integration pipeline is notification. At a minimum, the team would want to know if the CI pipeline failed, and for what reason. If the team is already using Slack on a regular basis, a Slack Integration makes a fantastic alert tool.
Slack channels (and individual users!) have an incoming webhook that can be used to send a new message to the channel from a third party. Once configured, Gitlab CI will be able to send a new message to the channel associated with the webhook URL. This message can be customized, but by default will include information about the job that failed (pipeline step 2), the last commit checked out (pipeline step 1), as well as pipeline and user identification.
Gitlab also links each pipeline run to its commit, so from the notification you can follow the commit through to the CI dashboard and see terminal output and other details.
To set this up, create a new incoming webhook for your team's channel from Slack's app directory (https://wwt.slack.com/apps/new/A0F7XDUAZ-incoming-webhooks). Slack will generate a webhook URL for that channel, for example: https://hooks.slack.com/services/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. Copy this URL into the project's Slack notifications integration in Gitlab (under Settings > Integrations) and select which events, such as failed pipelines, should post a message.
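You can also sanity-check the webhook at any time by posting a test message to it directly, substituting the URL Slack generated for your channel:

curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Test message from the CI tutorial"}' \
  https://hooks.slack.com/services/YOUR/WEBHOOK/URL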
3:59-6:45 — Fail the build and send a Slack message
To see a "failed build" in action, we can push up an intentionally broken commit. In the case of our test repo, changing the test assertion to always fail is an easy way to do it.
With all of the proper configuration in place, once pushed up, this commit should go through all four steps of our pipeline and leave a trail to follow for fixing.
- Latest code with broken commit is checked out
- Pipeline runs our job, installing dependencies and building
- Tests run, our single test fails - job and pipeline then fail
- Slack message is sent indicating failure, with a link to follow. Following the link to the commit, and then to the job, we see in the script output that the tests failed on our broken assertion. Once this is fixed and a new commit is pushed up, the pipeline should return to green.
6:45-??? — Next steps
So, where to go from here? Some possible next steps:
- Add an integration/UAT test step
- Add a performance test step
- Add a linting/formatting validation step (see the sketch after this list)
- Customize notification formatting and settings
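As one illustration of where the configuration could go next, here is a sketch of a follow-up .gitlab-ci.yml that splits the pipeline into explicit stages and adds a lint job ahead of the unit tests. It assumes a lint script is defined in the project's package.json, which the tutorial repo does not yet have:

stages:
  - lint
  - test

lint:
  tags:
    - docker
  image: node:14.9.0
  stage: lint
  only:
    - master
  script:
    - npm install
    - npm run lint

unit_tests:
  tags:
    - docker
  image: node:14.9.0
  stage: test
  only:
    - master
  script:
    - npm install
    - npm test

The same pattern extends to integration tests, performance tests, or artifact creation: add a job, assign it to a stage, and let the pipeline grow one step at a time.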