Continuous Integration with GitLab

Posted by Balázs Varkoly on December 5, 2016

Recently Team Alfonzó, with help from the Build Team, took the next step in adopting a new build infrastructure. We wanted to move away from Jenkins and the TOPdesk DockerHub registry, towards a more distributed infrastructure. Our Implementation Wizard project gave us the opportunity we were looking for to start making use of GitLab’s CI and Registry features.

How does GitLab compare to a Jenkins pipeline script?

If you have ever written a pipeline script for Jenkins, you will probably find GitLab’s solution more refined and aesthetically more pleasing. Here – instead of Jenkins’ Groovy-based DSL – you write a yaml file in which you list the build stages and specify the scripts that should run at each stage. The yaml syntax gives you a sensible structure while preserving the freedom you need to configure the build. Unfortunately, there is no way to try out scripts without committing and pushing to the code base, so be aware of this before unwittingly flooding the change history with experimental CI changes.
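As a rough sketch, a minimal `.gitlab-ci.yml` might look like the following (the stage names, image, and scripts are illustrative placeholders, not our actual configuration):

```yaml
# Stages run in the order listed; jobs within a stage run in parallel.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: maven:3-jdk-8      # hypothetical build image
  script:
    - mvn package

test:
  stage: test
  image: maven:3-jdk-8
  script:
    - mvn verify

deploy:
  stage: deploy
  script:
    - ./deploy.sh           # placeholder deploy script
```

Each job declares the stage it belongs to and the shell commands to run; the optional `image` key picks the Docker image the job runs in.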

Bypassing docker-in-docker difficulties

Once you have your yaml file, you will need a registered GitLab Runner to actually perform the build. At Team Alfonzó we currently have our own dedicated runner, itself running in Docker. At a later stage it may be more resource-friendly for multiple teams to use shared runners.
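For reference, running and registering such a runner boils down to something like this (the GitLab URL and registration token are placeholders):

```shell
# Start GitLab Runner itself as a Docker container,
# persisting its configuration on the host.
docker run -d --name gitlab-runner --restart always \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

# Register the runner with the GitLab instance, using the
# Docker executor so build stages run in containers.
docker exec -it gitlab-runner gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor docker \
  --docker-image docker:latest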

We also discovered a quirk when our runner, itself running in one Docker container, wanted to run the build stages in another Docker container.

If you start searching for this topic you will probably end up at docker-in-docker (dind), which unfortunately doesn’t work for us at the moment. To bypass it, we configured both containers – the container for the runner and the “stage container”, which the runner uses to perform the build stages – to use the same Unix socket as an entry point to the Docker API. We applied the workaround by mounting the runner’s Docker socket into the stage container.
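In the runner’s configuration this socket-sharing workaround comes down to a single volume mount, roughly like this excerpt (the image name is a placeholder):

```toml
# /etc/gitlab-runner/config.toml (excerpt)
# Mounting the host's Docker socket into every stage container means
# `docker` commands inside a stage talk to the same daemon as the runner,
# instead of needing a nested docker-in-docker daemon.
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```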

Even with the hijack above, we weren’t able to get the Docker containers in which the build stages are performed to accept data volumes from outside. This made deployment to Kubernetes pretty difficult, since the stage container lacked the file containing the deployment details. Luckily, we were able to bypass this missing functionality using kubectl – Kubernetes’ CLI tool – which can accept data from standard input.
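The stdin trick looks roughly like this – `kubectl` treats `-f -` as “read the manifest from standard input”, so no file needs to reach the stage container (`generate-deployment.sh` is a placeholder for however the manifest is produced):

```shell
# Produce the deployment manifest inside the stage container and pipe it
# straight to kubectl instead of mounting it in as a volume.
./generate-deployment.sh | kubectl apply -f -
```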

Hosting Docker Images with GitLab registry

GitLab also provides a registry, which can be used to host the Docker image built from your project. We found this a more convenient way to host our images than DockerHub. The latter was a bit cumbersome, since only a few people had admin rights to create and modify registries. What’s more, the privileges were not fine-grained enough for our needs: we found we had either too much or too little access to work efficiently. With GitLab, the only disadvantage we could see is that it doesn’t support multiple images for one project. This issue also affects other projects, and the GitLab team has already been notified.
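Pushing to the project’s registry uses the standard Docker workflow; the registry host and project path below are placeholders for your own instance:

```shell
# Authenticate against the GitLab registry, then build and push the
# project image under the registry path that belongs to the project.
docker login registry.gitlab.example.com
docker build -t registry.gitlab.example.com/alfonzo/implementation-wizard .
docker push registry.gitlab.example.com/alfonzo/implementation-wizard
```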


All in all, we are happy with GitLab’s CI and Registry as an alternative build infrastructure. We see GitLab as a good answer to the currently high workload on Jenkins, and a suitable choice for projects aiming to use the new architecture.
