Automated testing

Cactus has a test suite mechanism designed to detect when committed code breaks existing functionality (regressions). The mechanism consists of a set of tests, each of which either passes or fails. We would like a system that runs these tests regularly, together with an interface showing when tests that previously passed have started to fail.
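
As a concrete starting point, the sketch below shows how a nightly driver might kick off the test suite for one configuration and capture its log. This is a sketch, not the Toolkit's actual infrastructure: the checkout path and the configuration name "sim" are placeholders, and since the bare make target is interactive, a real nightly setup would drive it through a non-interactive wrapper (for example Simfactory's --testsuite mode).

    import datetime
    import pathlib
    import subprocess

    CACTUS_DIR = pathlib.Path("/path/to/Cactus")  # placeholder checkout location
    CONFIG = "sim"                                # placeholder configuration name

    # Kick off the test suite for one configuration and keep the full log
    # so that a later pass can parse the per-test results out of it.
    logfile = CACTUS_DIR / f"testsuite-{datetime.date.today()}.log"
    with open(logfile, "w") as log:
        subprocess.run(
            ["make", f"{CONFIG}-testsuite"],
            cwd=CACTUS_DIR,
            stdout=log,
            stderr=subprocess.STDOUT,
            check=False,  # failing tests should not abort the nightly driver
        )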

Requirements

Essential:

  • The tests need to run on multiple machines, in particular on all our important development and production systems
  • Run the Cactus test suites regularly (every night)
  • It should be possible to "kick off" a test run manually on a source tree with local modifications, so that changes can be tested before they are committed
  • It should be possible to run (or re-run) only individual tests (for debugging)
  • Test on both one process and two processes, with several OpenMP settings (see the parameter-sweep sketch after this list)
  • Parse the output and identify which tests passed and which tests failed (see the parsing sketch after this list)
  • Present the results in a form which is very easy to interpret
  • It should be possible to see easily which tests are newly failing (the parsing sketch after this list also flags these)
  • It should be possible to see what has changed in the code since the last passing test
  • Run on all the major production clusters (this means using their queue systems)
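
For the one- and two-process requirement, a driver could sweep the combinations and run the suite once per setting, as sketched below under stated assumptions: OMP_NUM_THREADS is the standard OpenMP control, while CCTK_TESTSUITE_RUN_PROCESSORS is assumed to be the variable the local test harness consults for the process count. On a production cluster each invocation would additionally be wrapped in a queue submission rather than run directly.

    import itertools
    import os
    import subprocess

    CACTUS_DIR = "/path/to/Cactus"  # placeholder checkout location
    CONFIG = "sim"                  # placeholder configuration name

    # Run the suite for every required combination: one and two processes,
    # each with several OpenMP thread counts.
    for nprocs, nthreads in itertools.product((1, 2), (1, 2, 4)):
        env = dict(os.environ,
                   CCTK_TESTSUITE_RUN_PROCESSORS=str(nprocs),
                   OMP_NUM_THREADS=str(nthreads))
        logname = f"testsuite-p{nprocs}-t{nthreads}.log"
        with open(logname, "w") as log:
            subprocess.run(["make", f"{CONFIG}-testsuite"],
                           cwd=CACTUS_DIR, env=env,
                           stdout=log, stderr=subprocess.STDOUT,
                           check=False)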
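
Parsing the resulting logs and flagging newly failing tests could then look like the following sketch. The per-test line format assumed here ("Test <thorn>/<name>: PASS") is illustrative only; the regular expression would need adjusting to what the harness actually prints.

    import pathlib
    import re
    import sys

    def parse_results(path):
        """Map each test name to PASS or FAIL.  The assumed line format
        'Test <thorn>/<name>: PASS' is illustrative; adapt the pattern
        to the local harness's actual output."""
        results = {}
        pattern = re.compile(r"Test\s+(\S+):\s+(PASS|FAIL)")
        for line in pathlib.Path(path).read_text().splitlines():
            match = pattern.search(line)
            if match:
                results[match.group(1)] = match.group(2)
        return results

    # Usage: compare.py previous.log current.log
    previous = parse_results(sys.argv[1])
    current = parse_results(sys.argv[2])

    # A test is newly failing if it fails now but passed in the last run.
    newly_failing = sorted(test for test, status in current.items()
                           if status == "FAIL" and previous.get(test) == "PASS")
    for test in newly_failing:
        print("NEWLY FAILING:", test)
    sys.exit(1 if newly_failing else 0)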

Optional:

  • Allow people to set up the testing mechanism on their own machines for their own private codes

Additional notes

  • What allocation would we use for these tests? Would the computing centres be happy for us to spend service units (SUs) on this?