ET Workshop 2015/new developments

From Einstein Toolkit Documentation
  1. Managing and reproducing data
    1. postprocessing
    2. visualization
    3. simulation management, simfactory
      • Ian Hinder describes situation of simfactory2 and work on simfactory3 (Hinder, Wardell, Schnetter).
      • Ian Hawke mentions the possibility of other workflow management systems that exist and have a wide user base.
      • desire to include management of simulation data
    4. Reproducibility (Ian Hawke)
  2. Performance in optimization and usability
    1. AMR, scaling, adaptiveness
      • reduce focus on home grown solution for GR only
      • discuss benefits of Chombo and GRChombo. Ian Hawke mentions bad experience with these frameworks in relativity.
    2. Usability
      • more examples, better documentation (hypothetical "science with ET", Carpet)
      • scientific programmers
  3. Code correctness
    1. Cactus-aware correctness testing framework, ideally with a set of simulation and analysis tests; this may be much more heavyweight than the testsuite.
    2. HPC correctness test
    3. Updating private codes to agree with ET developments
  4. Community practices
    1. backwards compatibility: strict compatibility hurts usefulness
    2. Cactus may have been too conservative in maintaining compatibility
    3. If things broke, we were not good about announcing this or providing useful error messages at runtime
    4. hard to provide such information at runtime. Need a method to deprecate code and parameters with escalating warnings/errors/aborts as the deprecated feature becomes older.
  5. Physics modules
    1. better interfaces, evolution agnostic analysis, metadata
    2. adopt standards (preferably public ones, or from neighbouring fields)
    3. initial data: provide more? Better documentation for initial data thorns?
    4. GRHydro development:
      • cleanup
      • coordinate
    5. more standards for hydro
      • provide metadata with ID thorns
      • agree exactly on what is provided
      • now there are multiple hydro codes that are public
  6. ET maintenance
    1. tickets (weekly telecon?)
  7. computer time for infrastructure development in Europe
    • PRACE only gives preparatory access to test on the given machine, not to develop
    • PRACE only funds big allocations; smaller ones go through national agencies (CINECA offers class C allocations for this)
  8. Usability
    1. higher level "Einstein Toolkit" user guide (see #1804)
    2. documentation wanted, not just code but also on how to do things
    3. want some high level documentation
    4. lack of complete documentation. Some parts are well documented (Cactus flesh) but newer features are mostly undocumented, for example the tags (Ian: maybe for each release, we should identify the new features that might require documentation and write something brief for each of them)
    5. larger set of gallery examples
    6. suggestion to also have a correctness checking framework
    7. non-working examples are included in the toolkit. Example parfiles should be commented to make them easier to understand. (see #641)
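The escalating deprecation warnings proposed under "Community practices" could look roughly like this. A minimal Python sketch, assuming made-up function names, thresholds, and messages; nothing here is Cactus API:

```python
import datetime
import warnings

def deprecation_action(name, deprecated_on, today,
                       warn_days=180, abort_days=540):
    """Hypothetical escalating-deprecation check: warn while the
    deprecation is recent, print an error later, and abort once the
    feature is considered removed. All thresholds are illustrative."""
    age = (today - deprecated_on).days
    if age < warn_days:
        warnings.warn(f"parameter '{name}' is deprecated", DeprecationWarning)
        return "warn"
    if age < abort_days:
        print(f"ERROR: parameter '{name}' has been deprecated "
              f"for {age} days and will be removed soon")
        return "error"
    raise RuntimeError(f"parameter '{name}' has been removed")
```

The point of the escalation is that users get a long grace period with a visible message before a run actually refuses to start.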

Adaptive mesh refinement and Carpet

Currently two knowledgeable maintainers: Erik Schnetter and Roland Haas

Concerning the question about scaling and fine-grained AMR: Carpet already supports this, though it may well be less efficient than other codes. A continuous Runge-Kutta method for buffer-zone data may work better than Adams-Bashforth. The estimate for Adams-Bashforth is not fully conclusive (pen and paper), putting it within 50% of the speed of RK4. Current status: it is feasible but not yet tested in a big simulation (it is in the code and only requires some changes to enable more time levels).

Ian and Wolfgang state that running without buffer zones is impossible for accuracy reasons. Continuous RK4 seems a cleaner solution; its advantage is that one needs to store fewer time levels. In principle we would try both AB and continuous RK. The AB work stalled since there is no time for a real-world test and a physics problem.
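The "continuous RK4" idea is essentially dense output: reuse the endpoint values and slopes of one RK4 step to interpolate buffer-zone data at intermediate times, instead of keeping several past time levels. A minimal sketch for a scalar ODE, illustrative only and not Carpet code:

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2)
    k4 = f(t + h, y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

def hermite_dense(f, t, y0, y1, h, theta):
    """Cubic Hermite interpolant over [t, t+h], theta in [0, 1].
    Uses only the endpoint values and slopes, giving third-order
    accurate data at intermediate times without extra time levels."""
    f0, f1 = f(t, y0), f(t + h, y1)
    return ((1 + 2*theta)*(1 - theta)**2 * y0
            + theta**2*(3 - 2*theta) * y1
            + h*theta*(1 - theta)**2 * f0
            - h*theta**2*(1 - theta) * f1)
```

For y' = y the midpoint value from the interpolant agrees with exp(h/2) to well within the step's truncation error, which is the property the buffer-zone filling would rely on.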

For more fine-grained AMR, Wolfgang suggests having a test case with refinement-level boundaries inside the NS. Ian Hawke reports that a refinement-level boundary worked fine if the whole box was contained in the star; the worst behaviour occurred when the buffer zones intersected the star's surface.

Methods to reduce the buffer zone width right now:

  • change the numerical scheme in the buffer zones; this requires the user code to know which points are buffer zones
  • this, however, is limited in how much one can save

For fine-grained AMR one can start with Carpet's AMRToy example. Gradient-based indicators can be troublesome; a better method may be self-shadowing, which was tried in the distant past in Carpet but never fully implemented. The method relies on computing an error estimate between the coarse and fine levels just before restriction into the coarse grid happens. Ian Hawke remembers that it was less trivial than expected. If this is used to compute an error estimate on the fine grid, then one needs some calls to C++ to transfer the difference from the restricted region into the fine grid.
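As a toy version of such an error indicator, one can compare the same differential operator evaluated at spacings dx and 2·dx: where the two disagree, the truncation error is large and the point is a refinement candidate. This Richardson-style NumPy sketch stands in for Carpet's actual restriction machinery, and the threshold is invented:

```python
import numpy as np

def shadow_indicator(u, dx):
    """Truncation-error estimate: difference between the centred second
    derivative at spacing dx and the same operator at spacing 2*dx.
    Large values mark under-resolved regions."""
    d2_fine = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2         # points 1..n-2
    d2_coarse = (u[4:] - 2*u[2:-2] + u[:-4]) / (2*dx)**2   # points 2..n-3
    return np.abs(d2_fine[1:-1] - d2_coarse)               # points 2..n-3

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
smooth = x**2                     # fully resolved: indicator near zero
steep = np.tanh(50*(x - 0.5))     # steep feature around x = 0.5
refine = shadow_indicator(steep, dx) > 1.0   # hypothetical threshold
```

A gradient-based indicator would flag the whole steep region indiscriminately; this estimate instead tracks where the discretization itself is failing, which is closer in spirit to the self-shadowing idea.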

Using a third-party AMR infrastructure runs into issues with the different data structures used by different infrastructures: one would need to hook up both the AMR and the Cactus-specific infrastructure (memory management, scheduler, reduction operations, interpolators). Specific question: replacing CarpetLib by BoxLib. This would be easier since CarpetLib is designed in a similar way to BoxLib, though it is not something that can be done in an afternoon. Erik says that strategically this would be good and was tried a couple of times, e.g. with Enzo. It is not clear in advance whether CarpetLib is currently performant enough for fine-grained AMR; we may expect that it is not, and that the replacement is actually better. So the replacement (BoxLib) would need to be tested with something of similar complexity to the BSSN equations and fourth-order stencils. BoxLib has examples for the wave equation on its website.
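For reference, the kind of fourth-order stencil such a benchmark would exercise looks like this in 1D. A NumPy sketch of the operator only, not BoxLib code:

```python
import numpy as np

def laplacian4(u, dx):
    """Fourth-order centred second derivative in 1D:
    (-u[i-2] + 16 u[i-1] - 30 u[i] + 16 u[i+1] - u[i+2]) / (12 dx^2).
    Needs two ghost points on each side, like the BSSN stencils."""
    return (-u[4:] + 16*u[3:-1] - 30*u[2:-2] + 16*u[1:-3] - u[:-4]) / (12*dx**2)

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
u = np.sin(2*np.pi*x)
lap = laplacian4(u, dx)   # approximates -(2*pi)^2 * sin(2*pi*x)
```

The two-point-wide stencil halo is exactly what makes ghost-zone and buffer-zone handling expensive, so any CarpetLib replacement has to be measured with stencils of at least this width.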

For load balancing issues in GRHydro (and possibly also IllinoisGRMHD) we would like to have a cost function to show imbalance in the grid since different points may take very different amounts of work.
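A per-point cost function could be as simple as weighting points by whether they run the full con2prim. The weights and the atmosphere cutoff below are invented for illustration; they are not values from GRHydro:

```python
import numpy as np

def point_costs(rho, rho_atmo=1e-10, hydro_weight=8.0):
    """Hypothetical per-point cost model: points above the atmosphere
    density run the full con2prim and cost more than atmosphere or
    vacuum points. Weights are made up for illustration."""
    return np.where(rho > rho_atmo, hydro_weight, 1.0)

def imbalance(costs_per_rank):
    """Load imbalance: max rank cost over mean rank cost (1.0 = perfect)."""
    return max(costs_per_rank) / (sum(costs_per_rank) / len(costs_per_rank))
```

Feeding such per-point costs into the domain decomposition, instead of splitting by point count alone, is what would let the load balancer see that a box inside the star is much more expensive than a vacuum box of the same size.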

Comparing codes, Wolfgang finds that his code seems to spend less time in con2prim than GRHydro; David found that his code is a bit slower than BAM but scales a bit better.

Decided to have one ET workshop per year in Europe. Will likely have next one in summer in Italy.