Cactus has a test suite mechanism designed to identify when committed code breaks existing functionality (regressions). This mechanism consists of a set of tests which either pass or fail. We would like a system that runs these tests regularly, together with an interface that makes it easy to see when tests that previously passed start to fail.
==Requirements==
Essential:
* The tests need to run on multiple machines, in particular on all our important development and production systems
* Run the Cactus test suites regularly (every night); see the runner sketch after this list
* It should be possible to "kick off" the tests manually using a source tree with local modifications, so that changes can be tested before committing
* It should be possible to run (or re-run) only individual tests (for debugging)
* Test on both one process and two processes, with several OpenMP settings
* Parse the output and identify which tests passed and which failed (see the parsing sketch after this list)
* Present the results in a form which is very easy to interpret
* It should be possible to see easily which tests are newly failing
* It should be possible to see what has changed in the code since the last passing test
* Run on all the major production clusters (this means using their queue systems; see the submission sketch after this list)
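
Since we settled on Jenkins as the test suite driver, a nightly Jenkins job could simply invoke a small wrapper script that runs the suites over the required process/OpenMP matrix. The following is a minimal sketch under stated assumptions: the <code>TEST_COMMAND</code> and the <code>NPROCS</code> environment variable are placeholders, not the actual Cactus invocation, and would need to be replaced with whatever the local configuration uses.

<syntaxhighlight lang="python">
#!/usr/bin/env python
"""Minimal sketch of a nightly test-matrix runner, to be invoked by the
Jenkins driver (or cron). The test command below is a placeholder."""

import itertools
import os
import subprocess

# Placeholder: substitute the real Cactus test-suite invocation here.
TEST_COMMAND = ["make", "sim-testsuite", "PROMPT=no"]

# Requirement: one and two processes, with several OpenMP settings.
PROCESS_COUNTS = [1, 2]
OMP_THREAD_COUNTS = [1, 2, 4]

def run_matrix(logdir="test-logs"):
    """Run the test suites once per (processes, threads) combination,
    capturing each run's output in its own log file."""
    os.makedirs(logdir, exist_ok=True)
    for nprocs, nthreads in itertools.product(PROCESS_COUNTS,
                                              OMP_THREAD_COUNTS):
        env = dict(os.environ)
        env["OMP_NUM_THREADS"] = str(nthreads)
        env["NPROCS"] = str(nprocs)  # assumed: consumed by the test command
        logfile = os.path.join(logdir, f"run-np{nprocs}-omp{nthreads}.log")
        with open(logfile, "w") as log:
            subprocess.run(TEST_COMMAND, env=env,
                           stdout=log, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    run_matrix()
</syntaxhighlight>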
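Parsing the logs and spotting newly failing tests then needs only a few lines. The log format assumed below (one "Test &lt;name&gt;: PASSED/FAILED" line per test) is an illustration only; the regular expression would have to be adapted to the real test-suite summary output.

<syntaxhighlight lang="python">
import re

# Assumed log format, one line per test, e.g. "Test WaveToy/gaussian: FAILED".
# Adapt this pattern to the actual test-suite summary output.
RESULT_RE = re.compile(r"^Test\s+(\S+):\s+(PASSED|FAILED)\s*$", re.MULTILINE)

def parse_results(logtext):
    """Map each test name to True (passed) or False (failed)."""
    return {name: status == "PASSED"
            for name, status in RESULT_RE.findall(logtext)}

def newly_failing(previous, current):
    """Tests that passed in the previous run but fail now: exactly the
    regressions the interface should highlight."""
    return sorted(name for name, ok in current.items()
                  if not ok and previous.get(name, False))
</syntaxhighlight>

A nightly job would store each run's parsed results (e.g. as JSON) and compare consecutive runs with <code>newly_failing</code>; combined with the commit range between the two runs, this also answers "what has changed in the code since the last passing test".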
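Running on the production clusters means going through their batch schedulers rather than executing the tests directly. The sketch below assumes a SLURM system and a hypothetical wrapper script name (<code>run_test_matrix.py</code>); clusters with PBS or other queue systems would need the equivalent directives, and the account value ties in with the allocation question under Additional notes.

<syntaxhighlight lang="python">
import subprocess
import textwrap

def submit_test_job(account, ntasks=2, walltime="01:00:00"):
    """Submit the nightly test run through the batch system (assumption:
    the cluster uses SLURM; other queue systems need their own directives)."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #SBATCH --job-name=cactus-testsuite
        #SBATCH --account={account}
        #SBATCH --ntasks={ntasks}
        #SBATCH --time={walltime}
        python run_test_matrix.py  # hypothetical wrapper from the sketch above
    """)
    # sbatch accepts the job script on standard input
    subprocess.run(["sbatch"], input=script, text=True, check=True)
</syntaxhighlight>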
Optional:
* Allow people to implement the testing mechanism themselves on their own machines for their own private codes
==Additional notes==
* What allocation would we use for these tests? Would the computing centres be happy for us to be using SUs for this?