Andrew Garner edited this page Nov 17, 2019 · 2 revisions

Background to Jengu

Company X had multiple global product teams; each team had hundreds of developers working on several hundred micro-services and microliths in total. Each component had its own git repository. Where automated integration tests and system tests existed, they were often in their own repositories.

There were thousands of Jenkins jobs across multiple Jenkins masters, handling builds of (Java) snapshots, releases and hotfixes. Most jobs were handcrafted freestyle jobs, with many inconsistencies as a result. Making a single build-process change across so many jobs at once was challenging, as was troubleshooting job problems given the limited retention of job history. Even with the SCM Sync Configuration plugin installed to view past changes, it did not explain why changes were made, it did not catch jobs which should have had changes applied but didn't, and there was not the same up-front PR process associated with creating jobs as there is with code. As Jez Humble notes in the book Continuous Delivery, committing configuration to version control after it has effectively been deployed is an anti-pattern.

Most projects used a common toolset including Gradle and Cucumber, and broadly followed the same pattern for CI/CD: build, unit test, and publish to an artifact repository. Following a git-flow branching strategy, and depending on whether the target infrastructure was dynamic, there would be subsequent groups of deployments and further tests executed, ultimately leading to a production deployment. Some projects using cloud
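The common build/unit-test/publish pattern described above could be sketched as a declarative pipeline roughly like the following; the Gradle task names and report path are illustrative assumptions, not taken from the actual shared library:

```groovy
// Hypothetical sketch of the common CI pattern: build, unit test, publish
pipeline {
    agent any
    stages {
        stage('Build & Unit Test') {
            steps {
                sh './gradlew clean build'   // Gradle was the common build tool
            }
        }
        stage('Publish') {
            steps {
                sh './gradlew publish'       // push artifacts to the artifact repository
            }
        }
    }
    post {
        always {
            // collect unit test results regardless of build outcome
            junit '**/build/test-results/test/*.xml'
        }
    }
}
```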

A handful of people were responsible for building CI/CD Pipelines for all the projects, and efficient reuse of Pipeline code was essential.

Initial development

Multi-branch jobs were used to build and test feature branches as well as the develop, hotfix and release branches. The entry point used in each Jenkinsfile would check the current branch name and call the appropriate pipeline, which was also defined in the shared library. A mass of possible configuration options was replaced with opinionated defaults, in a Spring Boot style, which could be overridden to support each project's specific requirements while limiting other aspects to recommended behaviours.
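The branch-dispatching entry point might look roughly like this. The library name and the `snapshotPipeline`/`releasePipeline` steps are hypothetical illustrations of the pattern, not the actual API; in practice the dispatch logic itself lived in the shared library rather than in each Jenkinsfile:

```groovy
// Jenkinsfile — hypothetical sketch of the branch-dispatching entry point
@Library('shared-pipelines') _

def branch = env.BRANCH_NAME

if (branch == 'develop' || branch.startsWith('feature/')) {
    // opinionated defaults apply when no options are passed
    snapshotPipeline()
} else if (branch.startsWith('release/') || branch.startsWith('hotfix/')) {
    // a project-specific override of one default, Spring Boot style
    releasePipeline(deployTargets: ['qa', 'staging'])
} else {
    error "No pipeline defined for branch ${branch}"
}
```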

Growing pains

As the shared library grew in popularity and adoption across the consuming teams, a few problems became apparent, none of which were a surprise. Changes to the shared library code would often break builds for the library's consumers because all the testing was manual. Coming from nearly two decades of software development, I searched for solutions to the question "How can I run automated tests for this?". For several reasons we decided against Jenkins Pipeline Unit. One reason was that changes like method renames were slipping through our code review, and that framework would not have caught them either. Our team also consisted of many multi-skilled DevOps Engineers, but almost none had a software development background. We needed a solution with which any engineer who could write Jenkins DSL for pipelines and shared libraries could also write the tests and debug test failures, with minimal additional training.

Unsatisfied with anything pre-existing we could find, I wrote the beginnings of this library, which ran a series of test cases against the shared library code from a Jenkinsfile. This was functional, but troubleshooting errors from the job console log required some knowledge, because the code would sometimes print error messages to the console log when testing an error scenario.

With my background in Java, I wrote code to use the standard JUnit annotations and to build JUnit-compatible XML reports, which Jenkins could consume to display the results of a test run more usefully. Annotations on the test methods allowed discrete single-purpose test cases to be run, allowed the whole test suite to be executed with test failures logged rather than aborting the run, and allowed test classes to be split over multiple files which mirrored the files under test in the vars folder.
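The annotation-driven style described above might look roughly like this; the class, method and step names are illustrative assumptions, not Jengu's actual API:

```groovy
// Hypothetical test class mirroring a file under test, vars/buildVersion.groovy
import org.junit.Test

class BuildVersionTest {

    @Test
    void bumpsPatchVersionByDefault() {
        // a failure here is logged to the JUnit-compatible XML report
        // and the rest of the suite still runs
        assert buildVersion.bump('1.2.3') == '1.2.4'
    }

    @Test
    void rejectsMalformedVersions() {
        // an error scenario, asserted on directly rather than
        // printing an error message to the console log
        try {
            buildVersion.bump('not-a-version')
            assert false : 'expected an exception'
        } catch (IllegalArgumentException expected) {
            // pass
        }
    }
}
```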

The solution, or part of it

Jengu is a hybrid, with aspects of unit and integration tests in one. It requires that the tests are executed on a Jenkins instance, and it depends on certain plugins being installed and certain configuration being in place, such as white-listed methods. This is acceptable, and even desirable, in the case of multiple Jenkins masters. Each Jenkins master would execute the tests, and a pass on any given Jenkins master should mean the shared library code will execute as expected on that master. New Jenkins masters were being created on a regular basis. Once the shared library tests were completing and passing on a new Jenkins master, this hybrid unit and integration test framework meant that, where the test coverage was good, we could be confident that consumers would be able to run the shared library pipelines successfully. Where successful execution of a test case depended on (the method under test, which depended on) some configuration on the Jenkins master, this acted as an integration test for us.

Enhancements

While this framework is pretty reasonable at testing the lower-level classes and methods we wrote, all the way up to individual stages, it wasn't so straightforward to test entire pipelines. We used dummy projects which we could build, deploy, publish, branch, release and hotfix whenever we needed to. We also had a wrapper job which would kick off each branch of a multi-branch job for several dummy projects, each with a different configuration, and report the roll-up status of all the jobs it triggered.
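A wrapper job of the kind described above can be sketched with the standard Pipeline `build` step; the dummy project names and branch list are hypothetical:

```groovy
// Hypothetical wrapper pipeline: trigger each branch of several dummy
// multi-branch projects and roll up the overall result
def dummyJobs = [
    'dummy-static-infra/develop',
    'dummy-dynamic-infra/develop',
    'dummy-dynamic-infra/release%2F1.0',  // branch names are URL-encoded in job paths
]

def failures = []
dummyJobs.each { jobPath ->
    // propagate: false so one failure doesn't abort the rest of the suite
    def downstream = build(job: jobPath, propagate: false, wait: true)
    if (downstream.result != 'SUCCESS') {
        failures << jobPath
    }
}

if (failures) {
    error "Dummy project builds failed: ${failures.join(', ')}"
}
```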

What we didn't have was anything to automate pushing commits to trigger builds, bumping semantic versions or creating new branches, all of which would have increased the coverage and functional testing of this particular shared library. This is one of several areas where improvements can still be made. PRs are always welcome :)
