BuildAndTest


Einstein Toolkit Jenkins documentation

(The details in this page were last checked on 25-Jan-2018.)

The Einstein Toolkit is regularly built and tested using Jenkins. The Jenkins instance is publicly available at https://build-test.barrywardell.net/. Jenkins uses a master/slave setup: the build master runs the web interface and handles all configuration, while the actual build jobs (which consume significant CPU resources) run on "build nodes".

build-test.barrywardell.net is a virtual machine running on the NCSA Nebula system (http://nebula.ncsa.illinois.edu).

The build node is also a VM in the NCSA Nebula system. The VM runs Ubuntu, but that installation exists only to run Docker.

On the build node (buildslave6), port 22 provides ssh access to the base Ubuntu installation.

The standard ET build environment is a Docker container running an ssh server, mapped to host port 2024. The container is built from https://hub.docker.com/r/ianhinder/et-jenkins-slave, using the ubuntu-16.04 tag.
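
For reference, such a container can be started with a docker run command along these lines (a minimal sketch; the container name and restart policy are assumptions, and the exact invocation used on the build node lives in the ncsajenkins scripts described below):

   # Pull the ET build environment image and start it in the background,
   # exposing the container's sshd (port 22) on host port 2024.
   # "et-slave" and "--restart unless-stopped" are illustrative choices.
   docker pull ianhinder/et-jenkins-slave:ubuntu-16.04
   docker run -d --name et-slave --restart unless-stopped \
      -p 2024:22 ianhinder/et-jenkins-slave:ubuntu-16.04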

Jenkins job definition

The source code repository for the Einstein Toolkit is https://bitbucket.org/einsteintoolkit/einsteintoolkit. This is a git "super-repository" which contains a submodule for each component repository of the toolkit. It is updated by an NCSA Nebula VM instance called "git", which scans each of the ET repositories and updates the corresponding submodule in the einsteintoolkit repository to the latest version. It also maintains git-svn mirrors of each of the remaining SVN repositories. From Jenkins' point of view, the job simply clones the einsteintoolkit repository recursively and builds the resulting ET.
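
A checkout equivalent to the one Jenkins performs can be obtained with a recursive clone:

   # Clone the super-repository along with all component submodules
   git clone --recursive https://bitbucket.org/einsteintoolkit/einsteintoolkit.git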

The EinsteinToolkit Jenkins job is configured at https://build-test.barrywardell.net/job/EinsteinToolkit/configure.

In the "Build" section are the commands used to run the build and tests on the build node. At the time of writing, these enable various thorns, run some submodule commands, and clone the CactusJenkins repository https://bitbucket.org/ianhinder/cactusjenkins (master branch) into the workspace (cloned fresh on every build). They then run the build-cactus script from CactusJenkins with einsteintoolkit.th to compile the ET, followed by the test-cactus script to run the tests.
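
The build step therefore amounts to something like the following (a sketch, not the exact job configuration; the thorn selection and submodule commands are elided, and the script arguments are assumptions):

   # Fetch the build/test scripts afresh on every build
   rm -rf cactusjenkins
   git clone -b master https://bitbucket.org/ianhinder/cactusjenkins
   # Compile the ET against the einsteintoolkit.th thornlist, then run the test suite
   ./cactusjenkins/build-cactus einsteintoolkit.th
   ./cactusjenkins/test-cactus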

build-cactus: The build-cactus script (https://bitbucket.org/ianhinder/cactusjenkins/src/master/build-cactus) cannot determine the current machine from the hostname because it runs in an anonymous Docker container. It therefore probes for the OS (by looking for various signature files) and selects the optionlist accordingly. It currently detects CentOS and Ubuntu, and falls back to Debian on other Debian-like systems; these use the centos.cfg, ubuntu.cfg and debian.cfg optionlists, respectively. On a standard Jenkins build node it therefore uses ubuntu.cfg. It uses generic.sub as the submission script and build.run from the CactusJenkins repository as the run script.
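
The detection logic is along these lines (a simplified sketch of the approach; the signature files checked by the actual script may differ):

   # Choose an optionlist by probing for OS signature files
   if [ -e /etc/centos-release ]; then
      optionlist=centos.cfg
   elif grep -qi ubuntu /etc/os-release 2>/dev/null; then
      optionlist=ubuntu.cfg
   elif [ -e /etc/debian_version ]; then
      optionlist=debian.cfg
   fi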

test-cactus: This script writes the test results to a testresults.xml file in JUnit format, which is picked up by the Jenkins JUnit plugin and presented in the web interface.
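
For reference, JUnit-format results look roughly like the following (an illustrative fragment with made-up test names, not actual output):

   <testsuite name="EinsteinToolkit" tests="2" failures="1">
     <testcase classname="tests" name="test_A"/>
     <testcase classname="tests" name="test_B">
       <failure message="maximum difference above tolerance"/>
     </testcase>
   </testsuite>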

Jenkins build node

Creation

To create a new build node, create a new instance in Nebula with the following settings (an equivalent command-line invocation is sketched after the list):

  • m1.medium
  • Boot from image (creates a new volume)
  • Delete on terminate
  • Image name: Ubuntu 16.04
  • Device size: 40 GB
  • Memory size: ?? (RH: the OS X slaves have 2 GB per VM, so providing much more may not be useful)
  • Networking: default2
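
If the OpenStack command-line client is configured for Nebula, roughly the same instance can be created from a shell (a sketch; flag availability depends on the client version, and the instance name buildslaveN is illustrative):

   # Create the build-node VM; delete-on-terminate for the boot volume may
   # need to be set in the dashboard, depending on the client version.
   openstack server create --flavor m1.medium \
      --image "Ubuntu 16.04" --boot-from-volume 40 \
      --network default2 buildslaveN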

ssh to the instance, and initialise it according to the instructions in the README of https://bitbucket.org/ianhinder/ncsajenkins.

At the time of writing, this sets up an Ubuntu 16.04 Docker container from the Docker Hub repository ianhinder/et-jenkins-slave and starts the container in the background, configuring it to start on boot.

Updating

To change the environment in which the ET is built, edit the Dockerfile of the build slave. It is stored in the repository https://bitbucket.org/ianhinder/et-jenkins-slave/branch/ubuntu-16.04 (note the branch). Once the Dockerfile has been updated and pushed back to Bitbucket, Docker Hub will automatically rebuild the image. You then need to restart the build slave with the new image:

   ssh -oProxyCommand="ssh -W %h:%p <username>@build.barrywardell.net" ubuntu@192.168.0.36 "cd ncsajenkins; ./update.sh"

This will stop and remove the container, pull the latest image from Docker Hub, and recreate the container. (If any jobs happen to be currently running on that slave, they will be aborted.)
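
In outline, the update script does something like the following (a sketch based on the behaviour just described, not the actual script contents; the container name is illustrative):

   # Stop and remove the existing build-slave container (aborting any running jobs)
   docker stop et-slave && docker rm et-slave
   # Pull the image that Docker Hub rebuilt from the updated Dockerfile
   docker pull ianhinder/et-jenkins-slave:ubuntu-16.04
   # Recreate the container with the same port mapping as before
   docker run -d --name et-slave --restart unless-stopped \
      -p 2024:22 ianhinder/et-jenkins-slave:ubuntu-16.04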

Jenkins should then automatically reconnect to the new container.
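
The double-hop ssh invocation above can also be stored in an ~/.ssh/config entry, making later logins shorter (the Host aliases are illustrative; 192.168.0.36 is the build slave's internal address from the command above):

   Host et-gateway
      HostName build.barrywardell.net
      User <username>

   Host et-buildslave
      HostName 192.168.0.36
      User ubuntu
      ProxyCommand ssh -W %h:%p et-gateway

With this in place, the update becomes:

   ssh et-buildslave "cd ncsajenkins; ./update.sh"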

Login node

To allow people to reproduce Jenkins test failures in an environment as close as possible to the Jenkins build node, there is another Nebula instance, called "login", which ET members can log in to. It is accessible via ssh public key at login.barrywardell.net; contact Ian Hinder for access.
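
Once your key has been installed, logging in is an ordinary ssh connection:

   ssh <username>@login.barrywardell.net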

Repositories

There is, unfortunately, a proliferation of repositories involved in this system. Here we try to clarify them, to avoid confusion: