How to Review a Patch
This page describes what we want to achieve by having many of our small patches to the Einstein Toolkit reviewed before they are accepted, because the discussion surrounding these patches sometimes goes in strange directions. If the review is for a new module to be included, then our quality requirements apply. You may also consider consulting the Software Sustainability Institute's software evaluation guide.
Why Code Review?
When contributing a small new feature or a modification to the Einstein Toolkit, we use Trac to review changes before they are applied to the code. This review serves several purposes:
- Some of the basic infrastructure code is used by many people, and we want a second pair of eyes to look at the code and catch errors and omissions early. Some problems occur only on a few machines, and an experienced user looking at the code may notice something that would otherwise go undetected for a long time.
- There exists a loosely-defined strategic plan for the Einstein Toolkit and its components, and we want to make sure that the individual contributions go roughly in the right direction, or (more importantly) don't accidentally undo a feature that is important for us.
- Exposing changes to the public before they are applied is a good way to make people think before they commit, thus helping avoid sloppy mistakes.
It follows that contributions to code that is new or actively developed don't really need to be reviewed. Similarly, "obvious" things should be corrected without review. If a modification concerns only a few lines of code, it doesn't need much discussion, since it can easily be undone if things go wrong.
How to Review
When a contribution or a patch is out for review, we ask you to help, in particular by addressing the following questions:
- Is there something obviously wrong with the code? Is there a tricky corner case in C/C++/Fortran/Perl/Bash that this code triggers, or would it break on some of the stranger supercomputers we use? Does the contribution contain a "d'oh!" kind of error? Is there a possible weird and unexpected interaction with other components of the ET, in particular if these have "well-known bugs"?
- Is this contribution a step in the wrong direction for the overall ET? (In many cases this question is not relevant.) Does it somehow impede MHD, radiation transport, or increased parallel scalability? Does it invite people to make errors that are difficult to detect? Does it introduce a lot of complicated code that isn't really necessary?
In most cases it is not necessary to apply the patch or run the code for this review. This is a review, not a test -- reviewing a paper doesn't require repeating the calculations presented in it (except in rare cases), only understanding them.
What to Avoid
A contribution or a patch is not a duty that is delivered by a petitioner to us, the gatekeepers of the sanctuary. A contribution is a present, a gift to be cherished; it should be treated as a valuable and precious item. We only want to make sure there is no poison in there; apart from this, the simplest of gifts is still a gift that we are happy to receive.
For example, if the discussion is heading towards one of the topics below, the big picture has been lost, and the review process is often no longer useful:
- Checking whether the patch applies cleanly. Especially if a patch has been sitting on the shelf for a few months, the code may have changed, or the patch may depend on another patch. We can assume that people with commit rights will be able to make small modifications while applying a patch, will check whether the code builds, and will run the test suite if necessary. And if something breaks, it can be corrected later, or even undone.
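A dry run of the kind a committer might do while applying a patch can be sketched in a few shell commands. This is a generic illustration using git in a throwaway repository; the file and patch names are invented, and the ET's actual repositories and tooling may differ:

```shell
set -e

# Set up a tiny throwaway repository to stand in for the real tree.
rm -rf /tmp/patch-demo && mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
git init -q demo && cd demo
printf 'hello\n' > greeting.txt
git add greeting.txt
git -c user.name=demo -c user.email=demo@example.org commit -qm "initial"

# The contributor's patch, as it might arrive attached to a Trac ticket.
cat > fix.patch <<'EOF'
--- a/greeting.txt
+++ b/greeting.txt
@@ -1 +1 @@
-hello
+hello, world
EOF

git apply --check fix.patch   # dry run: does it still apply cleanly?
git apply fix.patch           # apply; small conflicts can be fixed by hand
cat greeting.txt
```

The `--check` dry run is cheap, which is precisely why it belongs to the committer's routine rather than to the review discussion.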
- Running the code to perform a quick check whether the patch actually implements the feature it advertises. We can assume that the contributor describes the changes to the best of his/her knowledge, and we should trust this description. On the other hand, discovering corner cases where things may fail usually requires examining the code, not just running it.
- Reminding the contributor to update the documentation or add a test case. These go without saying, and if the contributor forgets, he/she can be reminded afterwards. (Or someone else may step in and provide these.)
- Rejecting a contribution only because there is a better way of implementing it. It is ok to reject a patch if it is badly written, or if there is a better implementation already underway. But rejecting a contribution simply because it would be cleaner to do things differently, without volunteering the time to work on such a better alternative, is unfair. Presumably, once the better alternative arrives (if it ever arrives), the current implementation can be removed again. ("A bird in the hand is worth two in the bush.")
- Asking the contributor to split up the patch into multiple, inter-dependent contributions, unless this is truly necessary. There is much to be said for providing contributions in small, self-contained units, and for not mixing unrelated contributions. However, there is also a limit to this, because each contribution needs to be prepared, tested, and reviewed (and reviewing is currently a bottleneck). If two related features were developed together, then they should be reviewed together.
- Small, nit-picking comments along the lines of "you should have done it this way, but now that you didn't, we're accepting your contribution anyway". The ET is not perfect (and never will be), our contributions are not perfect either, and nit-picking while reviewing does not create the kind of generous atmosphere where newcomers feel invited to contribute.