ET Workshop 2015/new developments

From Einstein Toolkit Documentation
Revision as of 08:58, 13 August 2015 by Rhaas
  1. Managing and reproducing data
    1. postprocessing
    2. visualization
    3. simulation management, simfactory
      • Ian Hinder describes situation of simfactory2 and work on simfactory3 (Hinder, Wardell, Schnetter).
      • Ian Hawke mentions the possibility of other workflow management systems that exist and have a wide user base.
      • desire to include management of simulation data
    4. Reproducibility (Ian Hawke)
  2. Performance in optimization and usability
    1. AMR, scaling, adaptiveness
      • reduce focus on a home-grown, GR-only solution
      • discuss benefits of Chombo and GRChombo. Ian Hawke mentions bad experience with these frameworks in relativity.
    2. Usability
      • more examples, better documentation (hypothetical "science with ET", Carpet)
      • scientific programmers
  3. Code correctness
    1. Cactus-aware correctness testing framework, ideally with a set of simulation and analysis tests; this may be much more heavyweight than the testsuite.
    2. HPC correctness test
    3. Updating private codes to agree with ET developments
  4. Community practices
    1. backwards compatibility. Strict compatibility hurts usefulness.
    2. Cactus may have been too conservative in maintaining compatibility
    3. If things broke, we were not good about announcing this or providing useful error messages at runtime
    4. hard to provide runtime information or code. Need a method to deprecate code and parameters with escalating warnings/errors/aborts as the deprecated feature becomes older.
  5. Physics modules
    1. better interfaces, evolution agnostic analysis, metadata
    2. adopt standards (preferably public ones, or from neighbouring fields)
    3. initial data: provide more? Better documentation for initial data thorns?
    4. GRHydro development:
      • cleanup
      • coordinate development
    5. more standards for hydro
      • provide metadata with ID thorns
      • agree exactly on what is provided
      • now there are multiple hydro codes that are public
  6. ET maintenance
    1. tickets (weekly telecon?)
  7. computer time for infrastructure development in Europe
    • PRACE only gives preparatory access to test on the given machine, not to develop
    • PRACE only funds big allocations; smaller ones go through national agencies (CINECA offers class C allocations for this)
  8. Usability
    1. documentation wanted, not just code but also on how to do things
    2. larger set of gallery examples
    3. lack of complete documentation. Some parts are well documented (the Cactus flesh) but newer features are mostly undocumented, for example the tags
    4. want some high level documentation
    5. suggestion to also have a correctness checking framework
    6. non-working examples are included in the toolkit. Example parfiles should be commented to make them easier to understand.
    7. higher level "Einstein Toolkit" user guide.

Adaptive mesh refinement and Carpet

Currently two knowledgeable maintainers: Erik Schnetter and Roland Haas

Concerning the question about scaling and fine-grained AMR: Carpet already supports this, though it may well be less efficient than other codes. A continuous Runge-Kutta method for buffer zone data may work better than Adams-Bashforth. The (pen-and-paper) estimate for Adams-Bashforth is not fully conclusive, but puts it within 50% of the speed of RK4. The current status is that it is feasible but not yet tested in a big simulation (it is in the code and only requires some changes to enable more time levels).

Ian and Wolfgang state that running without buffer zones is impossible for accuracy reasons. Continuous RK4 seems a cleaner solution; its advantage is that one needs to store fewer time levels. In principle we would try both Adams-Bashforth and continuous RK. The Adams-Bashforth work stalled since there is no time for a real-world test and a physics problem.
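The appeal of a continuous ("dense output") Runge-Kutta method is that one interpolant per step can supply buffer-zone data at any intermediate time, instead of storing several past time levels. A minimal generic sketch of the idea, assuming a classical RK4 step with a third-order cubic-Hermite dense output built from the endpoint values and derivatives; this is not Carpet's actual implementation, and the function name is hypothetical:

```python
import numpy as np

def rk4_step_with_dense_output(f, t, y, h):
    """One classical RK4 step from t to t+h.

    Returns the new state y_new and an interpolant u(theta), valid for
    theta in [0, 1] (i.e. times t + theta*h), built as a cubic Hermite
    polynomial from (y, f(t, y)) and (y_new, f(t+h, y_new)).
    Sketch only: a real driver would evaluate u at the buffer-zone
    substep times of the finer level."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y_new = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    f_new = f(t + h, y_new)

    def u(theta):
        # Standard cubic Hermite basis on [0, 1]; exact at the endpoints.
        h00 = 2 * theta**3 - 3 * theta**2 + 1
        h10 = theta**3 - 2 * theta**2 + theta
        h01 = -2 * theta**3 + 3 * theta**2
        h11 = theta**3 - theta**2
        return h00 * y + h10 * h * k1 + h01 * y_new + h11 * h * f_new

    return y_new, u
```

For y' = y the interpolant reproduces exp(t) to the interpolant's third-order accuracy at mid-step, which is the kind of substep data buffer zones need.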

For more fine-grained AMR, Wolfgang suggests having a test case with refinement level boundaries inside the NS. Ian Hawke reports that a refinement level boundary worked fine if the whole box was contained in the star; the worst behaviour was when the buffer zones intersected the star's surface.

Methods to reduce the buffer zone width right now:

  • change the numerical scheme in the buffer zones; this requires the user code to know which points are buffer zones
  • this is, however, limited in how much one can save
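The first bullet amounts to the user code receiving a mask of buffer-zone points and applying a cheaper (lower-order) stencil there. A toy 1D sketch of that idea, assuming a hypothetical mask interface (the driver would supply `buffer_mask`); this is an illustration, not Carpet's API:

```python
import numpy as np

def deriv_mixed_order(u, dx, buffer_mask):
    """First derivative of u on a uniform grid:
    4th-order centred stencil in the bulk, cheaper 2nd-order centred
    stencil at points flagged as buffer-zone points, and one-sided
    2nd-order stencils at the outer edges."""
    n = len(u)
    du = np.empty_like(u)
    for i in range(2, n - 2):
        if buffer_mask[i]:
            # 2nd-order centred difference in buffer zones
            du[i] = (u[i + 1] - u[i - 1]) / (2 * dx)
        else:
            # 4th-order centred difference in the bulk
            du[i] = (-u[i + 2] + 8 * u[i + 1] - 8 * u[i - 1] + u[i - 2]) / (12 * dx)
    # one-sided 2nd-order stencils at the physical boundaries
    du[0] = (-3 * u[0] + 4 * u[1] - u[2]) / (2 * dx)
    du[1] = (u[2] - u[0]) / (2 * dx)
    du[-2] = (u[-1] - u[-3]) / (2 * dx)
    du[-1] = (3 * u[-1] - 4 * u[-2] + u[-3]) / (2 * dx)
    return du
```

Both stencils are exact for quadratics, so for u = x² the result is 2x everywhere regardless of the mask; the saving per buffer point is the narrower stencil, which is why the gain is limited.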

For fine-grained AMR one can start with Carpet's AMRToy example. Gradient-based indicators can be troublesome; a better method may be self-shadowing, which was tried in Carpet in the distant past but is not currently implemented. The method relies on computing an error estimate between the coarse and fine levels just before restriction into the coarse grid happens. Ian Hawke remembers that it was less trivial than expected.
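The self-shadowing idea described above can be sketched in one dimension: just before restriction would overwrite the coarse data, compare the coarse-grid solution against the fine-grid solution injected onto the coarse points, and flag cells where they disagree. A minimal sketch, assuming a refinement factor of 2, vertex-centred grids covering the same region, and a hypothetical helper name:

```python
import numpy as np

def self_shadowing_flags(coarse, fine, tol):
    """Flag coarse-grid points for refinement via self-shadowing.

    'coarse' is the solution evolved on the coarse grid, 'fine' the
    solution on the overlapping fine grid (refinement factor 2,
    vertex-centred).  The difference between the coarse solution and
    the fine solution restricted (by injection) onto the coarse points
    serves as the local error estimate."""
    fine_on_coarse = fine[::2]            # injection restriction, factor 2
    err = np.abs(fine_on_coarse - coarse)
    return err > tol                      # boolean refinement flags
```

In a real driver the flags would then be clustered into new refined boxes; as the notes say, the practical details (buffer zones, timing relative to restriction) made this less trivial than expected.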