Revision as of 15:42, 8 December 2021

The following is the work done by S. Cupp on the CarpetX framework.

== CarpetX Interpolator ==

* Added the interface connecting Cactus's existing interpolation system to the CarpetX interpolator (a sketch of a call through this interface follows this list)
* Extended the AHFinder interpolation test to compare the results of directly calling CarpetX's interpolator with the results of calling the new interface
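
A minimal, hedged sketch of a call through the existing Cactus interpolation interface (CCTK_InterpGridArrays) that the new CarpetX interface serves. The grid variable, operator name, and point coordinates below are illustrative placeholders, not part of the work described above.

<pre>
#include "cctk.h"
#include "util_Table.h"

/* Sketch only: interpolate one grid function at two points through the
   generic Cactus interface; with the new interface in place, this request
   is forwarded to CarpetX's interpolator.  Variable and operator names
   are illustrative. */
void example_interpolate(const cGH *cctkGH)
{
  const CCTK_REAL x[2] = {0.0, 1.0}, y[2] = {0.0, 0.0}, z[2] = {0.0, 0.0};
  const void *coords[3] = {x, y, z};

  CCTK_REAL result[2];
  void *output[1] = {result};
  const CCTK_INT output_types[1] = {CCTK_VARIABLE_REAL};
  const CCTK_INT input_vars[1] = {CCTK_VarIndex("ADMBase::alp")};

  const int operator_handle =
      CCTK_InterpHandle("Lagrange polynomial interpolation"); /* operator name is illustrative */
  const int coord_handle = CCTK_CoordSystemHandle("cart3d");
  const int table_handle = Util_TableCreateFromString("order=3");

  const int ierr = CCTK_InterpGridArrays(cctkGH, 3, operator_handle, table_handle,
                                         coord_handle, 2, CCTK_VARIABLE_REAL, coords,
                                         1, input_vars, 1, output_types, output);
  if (ierr < 0)
    CCTK_VWarn(CCTK_WARN_ALERT, __LINE__, __FILE__, CCTK_THORNSTRING,
               "CCTK_InterpGridArrays failed with error code %d", ierr);

  Util_TableDestroy(table_handle);
}
</pre>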

== CarpetX Arrays ==

* Added support for distrib=const arrays in CarpetX (an example declaration follows this list)
* Combined the structures for arrays and scalars into a single struct
* Extended all scalar code to handle both scalars and arrays
* Added a test in the TestArray thorn to verify that array data is allocated correctly and behaves as expected when accessed
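
For illustration only, this is roughly what a distrib=const array declaration looks like in a thorn's interface.ccl; the group and variable names are hypothetical, not taken from the actual TestArray thorn.

<pre>
# Hypothetical interface.ccl declaration: a 1D grid array with
# DISTRIB=CONSTANT, i.e. every process holds a full copy of the data.
CCTK_REAL test_arrays TYPE=array DIM=1 SIZE=10 DISTRIB=constant
{
  test_array
} "Example distrib=const grid array (names are illustrative)"
</pre>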

== CarpetX DynamicData ==

* Overloaded the DynamicData function for use with CarpetX (a sketch of querying it from thorn code follows this list)
* Added the necessary storage in the array group data to provide the dynamic data when requested
* Added a test in the TestArray thorn to verify that dynamic data for grid functions, scalars, and arrays returns the correct data
* This test also serves as a basic test of the read/write declarations, since all three variable types are both written and read
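
As a hedged sketch of what the dynamic-data query looks like from thorn code, the snippet below calls the flesh routine CCTK_GroupDynamicData(), which CarpetX now serves; the group name is a placeholder, not necessarily the one used in the TestArray test.

<pre>
#include "cctk.h"

/* Sketch: query the per-process layout of an array group through the
   generic Cactus interface.  The group name is illustrative. */
void example_dynamic_data(const cGH *cctkGH)
{
  cGroupDynamicData data;
  const int group = CCTK_GroupIndex("TestArray::test_arrays");
  if (group >= 0 && CCTK_GroupDynamicData(cctkGH, group, &data) >= 0)
    CCTK_VInfo(CCTK_THORNSTRING,
               "dim=%d  local size=%d  ghost zones=%d  active time levels=%d",
               data.dim, data.lsh[0], data.nghostzones[0],
               data.activetimelevels);
}
</pre>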

== Multipole/qc0 ==

* Multipole has been incorporated into the cactusamrex repository
* The test produces data that matches the old code (be sure to use the same interpolator)
* The qc0 simulation is quite slow, but has been improved by switching to WeylScal4. Still, the Toolkit paper needs a simulation time of ~300M to capture the full merger, so we need to reach a simulation speed that makes this viable before we can compare to previous simulations. We can simulate ~200M, but it takes an entire week on Melete, a 40-core machine. During testing, Weyl was replaced with WeylScal4 because the compiler was failing to optimize Weyl's code, resulting in significantly longer runtimes; this can be seen in the data from simulations on Spine. Tiling causes a minor slowdown here, since tiling only provides benefits with OpenMP, which is not active in these runs. In addition, some areas of the code likely still use explicit loops, which results in the same data being computed multiple times when tiling is enabled.
* A 200M simulation was run on Melete. Plots of the data show a well-behaved waveform.
[[File:Psi4.png|thumb|Plot of the l=2, m=2 mode output for Psi4 at r=10. This data is from the 200M simulation run on Melete.]]
* Recent testing of just the synchronization speeds (branched off a more recent master) showed similar speeds, so I ran my previous comparison test with Minkowski initial data again using the newer master. The timing data from those runs is shown in the table below. The runs are much closer than before, so a change in master has fixed much of the speed issue. CarpetX still takes 30-50% more time than Carpet, but this is far better than before.
{| class="wikitable"
|+ 0.5M simulations on Melete (MPI-only, no tiling)
! Number of MPI processes !! Carpet runtime !! CarpetX runtime
|-
| 5 || 1800.0 || 2423.9
|-
| 10 || 986.3 || 1388.2
|-
| 20 || 567.7 || 835.1
|-
| 40 || 358.8 || 527.4
|}
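
For reference, the per-row ratios in the table above are 2423.9/1800.0 ≈ 1.35, 1388.2/986.3 ≈ 1.41, 835.1/567.7 ≈ 1.47, and 527.4/358.8 ≈ 1.47, consistent with the 30-50% overhead quoted above.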

== SyncTestX thorn ==

* Wrote the SyncTestX thorn for comparing the communication overhead of Carpet and CarpetX
* The thorn contains parameter files (and thornlists) for a weak scaling test
* Ran the weak scaling test on LSU's Deep Bayou. The runs were scaled from 1 to 4 nodes (48 to 192 processors), and the number of threads per process was varied from 1 to 8 (see the note on the process/thread layout below). Except for 8 threads, CarpetX outperformed Carpet in these tests. The data is shown in the included plots, along with a plot of the CarpetX time divided by the Carpet time, which clearly shows that Carpet performed better only for the runs with 8 threads (on 1, 2, and 4 nodes) and 4 threads (on 1 node).
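
(Since these runs use 48 cores per node, a run with t threads per process corresponds to 48/t MPI processes per node, e.g. 48 processes per node for the 1-thread runs and 6 for the 8-thread runs.)
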
[[File:Psi4.png|thumb|Plot of the l=2, m=2 mode output for Psi4 at r=10. This data is from the 200M simulation run on Melete.]]
[[File:CarpetX SyncComparison1.png|thumb|Comparison of the runtimes for a weak scaling test of Carpet and CarpetX using the SyncTestX thorn. The test scales from 48 to 192 processors; this plot shows the results for runs with 1 and 2 threads. CarpetX outperforms Carpet in all of these runs.]]

== Open issues/bugs ==

* Storage being always-on in CarpetX results in attempted regridding, etc., that causes validity failures. This is caused by the expected number of time levels not matching the actual number of time levels.
* If there are too few cells, an unclear error appears that boils down to "assertion 'bc=bcrec' fails". This happens because the cells are so large relative to the number of points that multiple physical boundaries fall within a single cell. Identifying this situation and generating a clearer error message is recommended. Alternatively, the code could be altered to allow such runs to work, though that may not be worthwhile if it takes much effort or time.
* CarpetX seems to use substantial resources for the qc0 test, and we are unsure why. The amount of memory is several times higher than Roland's estimate for the given grid size. Eric has stated that this is due to the compiler failing to optimize the Weyl code.
* The validity checking is incorrect for periodic boundary conditions. These boundary conditions are handled by AMReX, so no boundary conditions are applied at the Cactus level, and because it is all internal to AMReX, CarpetX never marks the boundaries as valid. To fix this, any time the ghost zones are set to valid, the boundaries should also be set to valid, but only when periodic boundary conditions are in use (a sketch of the intended rule follows this list). Once this bug is fixed, the Weyl schedule.ccl should be reviewed, as it contains hacks to bypass this bug.
* A strange error occurred while working on the gauge wave test. The error was triggered by assert(bc == bcrec) in the prolongate_3d_rf2 function of CarpetX/src/prolongate_3d_rf2.cxx. We determined that running with --oversubscribe triggers this error, but it is unclear why. It does not occur with the TwoPunctures initial data and only started after switching to the gauge wave initial data. It also causes a significant slowdown and uses a large amount of memory. Roland thinks this issue could be caused by an error estimator for regridding, but that does not explain why the assert triggers.
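
A minimal sketch of the proposed periodic-boundary fix described above, with purely illustrative names; this is not CarpetX's actual validity bookkeeping, only the intended rule.

<pre>
// Illustrative stand-in for CarpetX's per-variable validity flags.
struct Validity {
  bool interior = false;
  bool ghosts = false;
  bool boundaries = false;
};

// Proposed rule: when the ghost zones become valid and all boundaries are
// periodic (AMReX fills them internally, so no Cactus-level boundary routine
// ever marks them), the physical boundaries should be marked valid as well.
void mark_ghosts_valid(Validity &v, bool all_boundaries_periodic) {
  v.ghosts = true;
  if (all_boundaries_periodic)
    v.boundaries = true;
}
</pre>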

== Open tasks/improvements ==

* SymmetryInterpolate isn't hooked up yet, but commented-out code provides a starting point
* Currently, CarpetX and Cactus both have parameters for interpolation order. For example, qc0.par has to set both CarpetX::interpolation_order = 3 and Multipole::interpolator_pars = "order=3". These should be condensed into a single parameter. Since individual thorns set their own interpolation order, different orders can presumably be chosen for different thorns; if different variables use different interpolation orders, the current implementation would break. Instead of using its own parameter, CarpetX should work with the existing infrastructure.
* CarpetX's interpolate function doesn't return error codes, but historically the interpolator has provided them. The new interpolate function should incorporate the old error codes to fully reproduce the functionality of the old interpolator.
* The interpolation interface should print out error codes from TableGetIntArray() (a sketch follows this list).
* Distributed arrays are still not supported. It is unclear (at least to me) where these are used. However, if they are needed, CarpetX will need to be extended to support them.
* The DynamicData test revealed a bug in the scalar validity code. We should therefore consider whether we need more tests that specifically exercise the valid/invalid handling for the various types of variables, for example tests validating the poison routine, the NaN checker, etc.
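
As a hedged sketch of the error-code reporting suggested above (assuming the call in question is Util_TableGetIntArray() from the flesh table API), the snippet below reports negative return values; the table key and array size are illustrative.

<pre>
#include "cctk.h"
#include "util_Table.h"

/* Sketch: read an integer array from the interpolator parameter table and
   report the error code if the call fails.  Key and sizes are illustrative. */
void example_table_query(int param_table_handle)
{
  CCTK_INT operand_indices[10];
  const int n = Util_TableGetIntArray(param_table_handle, 10,
                                      operand_indices, "operand_indices");
  if (n < 0)
    CCTK_VWarn(CCTK_WARN_ALERT, __LINE__, __FILE__, CCTK_THORNSTRING,
               "Util_TableGetIntArray(\"operand_indices\") returned error code %d", n);
}
</pre>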

== Closed issues/bugs ==

* R. Haas resolved the issue with how CarpetX handled the difference between cell- and vertex-centered grid functions from old thorns. Incorrect default settings caused issues with interpolation, looping, etc. For example, TwoPunctures and Multipole required a hack where loops needed to run to cctk_lsh[#]+1 instead of cctk_lsh[#]. Now that CarpetX/Cactus handles this data properly, old thorns should no longer need such hacks to loop over grid functions (a standard loop is sketched below).
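
For illustration, a standard Cactus loop over the locally owned points using the usual cctk_lsh bounds, which now works without the +1 hack mentioned above; the grid function gf is a hypothetical placeholder.

<pre>
#include "cctk.h"
#include "cctk_Arguments.h"

/* Sketch: loop over all local points with the standard cctk_lsh bounds.
   "gf" stands in for a grid function declared in the thorn's interface.ccl. */
void example_loop(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;

  for (int k = 0; k < cctk_lsh[2]; ++k)
    for (int j = 0; j < cctk_lsh[1]; ++j)
      for (int i = 0; i < cctk_lsh[0]; ++i) {
        const int idx = CCTK_GFINDEX3D(cctkGH, i, j, k);
        gf[idx] = 0.0;  /* placeholder operation on the placeholder grid function */
      }
}
</pre>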