Visualization of simulation results


(There are some recipes for visualizing certain quantities at Visualization recipes.)

At present, everyone producing quick-and-dirty, overview-like visualizations of their running and completed simulations seems to have their own scripts/tools, usually doing just barely the specific task they were designed for. It would be beneficial to have a set of common tools helping with at least some parts of this process: a) retrieving parts of the output files, and b) producing some overview of the state of a simulation. The task in question is not to create high-quality plots for, e.g., publications, but rather to provide a monitoring/debugging kind of overview.

To get this started, everyone interested is asked to briefly describe below what they currently do in this respect.

Frank Löffler
rsync (smaller) files by hand; use gnuplot/ygraph/VisIt to look at current results, often with scripts that plot multiple quantities with matplotlib and generate web pages from the results
I would like to see support for obtaining relevant files easily (simfactory comes to mind)
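A minimal Python sketch of this kind of workflow (the host, paths, file names and column layout below are placeholders, not Frank's actual scripts): rsync only the small ASCII output, plot each file with matplotlib, and write a simple HTML overview page.

import os
import subprocess

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

remote = "user@cluster:simulations/mysim/output-0000/mysim"   # placeholder
workdir = os.path.expanduser("~/simdata/mysim")
os.makedirs(workdir, exist_ok=True)

# Fetch only directories and small ASCII output; skip everything else.
subprocess.check_call([
    "rsync", "-av",
    "--include=*/", "--include=*.asc", "--exclude=*",
    remote + "/", workdir + "/",
])

# Plot each scalar/norm file and collect the images for an overview page.
images = []
for fname in sorted(os.listdir(workdir)):
    if not fname.endswith(".asc"):
        continue
    data = np.loadtxt(os.path.join(workdir, fname))
    plt.figure()
    plt.plot(data[:, 1], data[:, 2])   # CarpetIOScalar: iteration, time, value
    plt.xlabel("time")
    plt.title(fname)
    png = fname.replace(".asc", ".png")
    plt.savefig(os.path.join(workdir, png))
    plt.close()
    images.append(png)

# Minimal HTML overview page showing all plots.
with open(os.path.join(workdir, "index.html"), "w") as f:
    f.write("<html><body><h1>mysim overview</h1>\n")
    for png in images:
        f.write('<p><img src="%s" width="400"></p>\n' % png)
    f.write("</body></html>\n")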
Tanja Bode
Current: A collection of bash/python/gnuplot/ygraph/VisIt scripts automatically generates a variety of plots and builds an internal webpage summarizing the most interesting ones. Our script for VisIt animations has been generalized to take a command-line description of the quantity to be plotted, to maximize its flexibility.
Interests: I would like to see support for dynamically generated HTML summaries of a run and its status, perhaps by specifying a few basic system properties (0-2 BHs, with/without hydro, presence of a non-BH compact object) to select subsets of standard plots and a few animations. Having more flexibility in the animation choices on top of this, as we have locally, would be useful. Having these functions on a cluster would be a plus, but is not necessary.
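As a rough illustration of such property-driven selection of standard plots (the property names, plot file names and HTML layout are invented for this sketch; this is not Tanja's actual system):

# Hypothetical sketch: select a subset of standard plots from a few basic
# system properties and write them into one HTML summary page.
# All property names and plot file names are placeholders.

def select_plots(n_bhs, has_hydro, has_ns):
    """Return the list of standard plot names appropriate for this system."""
    plots = ["run_speed.png", "memory.png", "constraints.png"]
    if n_bhs >= 1:
        plots += ["bh_masses.png", "bh_spins.png"]
    if n_bhs >= 2:
        plots += ["trajectories.png", "separation.png", "psi4_l2m2.png"]
    if has_hydro or has_ns:
        plots += ["rho_max.png", "rho_xy_plane.png"]
    return plots

def write_summary(title, plots, filename="summary.html"):
    with open(filename, "w") as f:
        f.write("<html><body><h1>%s</h1>\n" % title)
        for p in plots:
            f.write('<p><img src="%s" width="400"></p>\n' % p)
        f.write("</body></html>\n")

if __name__ == "__main__":
    # Example: a BBH run without matter.
    write_summary("BBH test run",
                  select_plots(n_bhs=2, has_hydro=False, has_ns=False))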
Roland Haas
Same method that Tanja uses (shared scripts/script elements). Interests are similar: a modular system to quickly generate an overview page would be nice.
Ian Hinder
I have a script called "getsim", invoked as "getsim <cluster> <simulation1> <simulation2> ...". It rsyncs the simulation directory into my ~/Simulations folder on my laptop, excluding all files expected to be large, such as 1D, 2D and 3D HDF5 output, the Cactus executable, etc. I often modify the script to change what is excluded or included; it would be nice to have different "presets" so that you could say you want the 2D data now, or the output of a particular thorn which you don't normally sync. It would be very nice for this functionality to be implemented in simfactory, since simfactory already knows how to ssh to remote machines, using gsissh and trampolines if necessary; currently this is hard-coded into my script for the machines I use.

Once I have the simulation(s) on my laptop, I use a Mathematica package called SimulationTools, written by Barry Wardell and me. It provides a functional interface to the data which transparently merges output from different restarts, and can read data from several different thorns, depending on what is available. The package also reads Carpet HDF5 data and does the required component-merging etc. so that you can analyse the resulting data in Mathematica; this supports 1D, 2D and 3D data, but is essentially dimension-agnostic. I now use this instead of VisIt for all my visualisation needs. It has a component called SimView which displays a panel summarising a BBH simulation, including run speed and memory usage as a function of coordinate time, BH trajectories, separation, waveforms, etc. SimulationTools is coupled to a C replacement for Mathematica's built-in HDF5 reader, which we have found to be very slow and buggy. SimulationTools is available under the GPL, is developed on BitBucket, and would be a good candidate for inclusion in the Einstein Toolkit at some point.
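A minimal Python sketch of such a getsim-style wrapper with exclusion presets (the patterns, remote directory layout and preset names are assumptions for illustration, not the actual getsim script; gsissh/trampoline handling is omitted):

# Hypothetical getsim-like wrapper: rsync a simulation directory from a
# cluster, excluding large output by default, with optional "presets" that
# re-include selected data.  Patterns and paths are placeholders.
import argparse
import os
import subprocess

# Default exclusions: large HDF5 output and the Cactus executable.
EXCLUDE = ["*.h5", "cactus_*"]

# Presets selectively re-include things that are normally skipped.
PRESETS = {
    "default": [],
    "2d":      ["--include=*.xy.h5", "--include=*.xz.h5", "--include=*.yz.h5"],
    "psi4":    ["--include=mp_psi4*"],
}

def getsim(cluster, simulation, preset="default"):
    dest = os.path.expanduser(os.path.join("~/Simulations", simulation))
    os.makedirs(dest, exist_ok=True)
    cmd = ["rsync", "-av"]
    cmd += PRESETS[preset]                        # includes take precedence
    cmd += ["--exclude=%s" % pat for pat in EXCLUDE]
    cmd += ["%s:Simulations/%s/" % (cluster, simulation), dest + "/"]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    p = argparse.ArgumentParser()
    p.add_argument("cluster")
    p.add_argument("simulations", nargs="+")
    p.add_argument("--preset", default="default", choices=sorted(PRESETS))
    args = p.parse_args()
    for sim in args.simulations:
        getsim(args.cluster, sim, args.preset)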
Erik Schnetter
I use gnuplot, together with bash and awk, to postprocess data. For quick looks I work on the remote machine; for in-depth looks I rsync the data to my local machine and run my scripts there. I usually end up writing a shell script or makefile for each project that runs rsync, awk, gnuplot, etc. automatically, so that I can update my graphs with a single command when the data change. Sometimes I try to use VisIt, in particular to find out where NaNs are on a grid; this often fails because there is something wrong with the VisIt installation or its dependencies.
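Erik's pipeline itself is bash/awk/gnuplot; purely as a rough Python analogue of the awk postprocessing step (the file name and column meanings are placeholders), one might extract and transform columns from a Carpet ASCII norm file like this before handing them to a plotter:

# Rough Python analogue of an awk column-extraction step: pull two columns
# out of a Carpet ASCII norm file, compute a derived quantity, and write it
# back out for plotting.  File names and columns are placeholders.
import numpy as np

# CarpetIOScalar norm output: columns are iteration, time, value.
data = np.loadtxt("hydrobase-rho.maximum.asc")

time = data[:, 1]
log_rho_max = np.log10(data[:, 2])    # example derived quantity

np.savetxt("rho_max_log.dat", np.column_stack([time, log_rho_max]),
           header="time log10(rho_max)")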
Bruno Mundim
I usually output as much as I possibly can (0D, 1D, 2D and 3D) for the initial data and scp these data to my desktop. If I am happy with it, I submit the next run and wait until it is finished to look at the data again, mostly 0D and 2D slices; I don't output 3D data sets anymore. The 0D slices are usually in ASCII format. I run a set of scripts to combine the sets from previous checkpoints, and then use a set of SuperMongo (http://www.astro.princeton.edu/~rhl/sm/) scripts to generate a bunch of PostScript files with the quantities I am interested in. This is my quick way to look at 0D data: bring the data in, combine the sets, plot with SuperMongo. When I want to do a bit more analysis, I parse the ASCII output, piping it through awk in order to send just two columns of data to xvs (http://cactuscode.org/documentation/visualization/DataVaultXVS/). This software is quite nice for all sorts of analysis: derivatives, convergence tests, zooming in/out, merging data sets, animations, etc. Besides, xvs can run on your desktop and have the data piped to it remotely from your HPC resource, either in real time or as an SDF (or HDF) file. To be continued...
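A small Python sketch of the "combine the sets from previous checkpoints" step (the directory layout and column meanings are assumptions; the SuperMongo and xvs parts are not reproduced): concatenate the per-segment ASCII files while dropping the overlap that a checkpoint/recovery run typically reproduces.

# Hypothetical sketch: combine 0D ASCII output from several restart
# segments into one monotonic time series.  Paths are placeholders.
import glob
import numpy as np

segments = sorted(glob.glob("output-????/mysim/rho.maximum.asc"))

pieces = []
last_time = -np.inf
for fname in segments:
    data = np.atleast_2d(np.loadtxt(fname))
    # keep only rows after the last time already covered (column 1 = time)
    data = data[data[:, 1] > last_time]
    if data.size:
        pieces.append(data)
        last_time = data[-1, 1]

combined = np.vstack(pieces)
np.savetxt("rho.maximum.combined.asc", combined)

# Two columns (time, value) could then be piped on to xvs or plotted directly.
np.savetxt("rho_max_tv.dat", combined[:, 1:3])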
Peter Diener
For quick and dirty looks at the data I use gnuplot (in combination with awk and paste where necessary) and ygraph, usually on the remote machine. For more in-depth analysis (such as anything that requires time derivatives, integrals, interpolation and/or Fourier transforms, which AFAIK cannot be done in gnuplot), I transfer the necessary data to my laptop or workstation and write simple Mathematica scripts on a case-by-case basis. I have looked a little bit at Ian's Mathematica scripts, but didn't spend enough time with them to learn the internal data structures well enough to write my own modules when something I needed was not present. For 3D visualization I still like to use OpenDX, even though it is no longer developed; I find the interface much nicer and easier to use than VisIt's.
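Peter's actual scripts are Mathematica; as a rough Python/NumPy analogue of the kind of operation mentioned (the file name and column layout are placeholders), a time derivative and Fourier transform of a time series could look like this:

# Rough Python analogue of the case-by-case analysis described above (the
# original scripts are Mathematica).  Reads a placeholder ASCII time series
# and computes a numerical time derivative and an FFT.
import numpy as np

data = np.loadtxt("psi4_l2m2_r100.asc")   # placeholder: columns time, re, im
t = data[:, 0]
psi = data[:, 1] + 1j * data[:, 2]

dpsi_dt = np.gradient(psi, t)             # numerical time derivative

dt = t[1] - t[0]                          # assumes uniform sampling
freq = np.fft.fftfreq(len(t), d=dt)
spectrum = np.fft.fft(psi)

# e.g. frequency of the largest spectral amplitude
print("peak frequency:", freq[np.argmax(np.abs(spectrum))])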
Jonah Miller
For quick and dirty looks at the data I use IPython or IPython notebooks. I prefer to read HDF5 output, since I don't have to parse it myself. For more detailed analysis, I copy the data over to my laptop and write Python data analysis scripts on a case-by-case basis. For 3D visualization, I use yt. I often have to do a fair amount of work to feed the data into yt, however, since automatic readers for yt are still preliminary.
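As an illustration of the kind of quick HDF5 look and manual feeding of data into yt described above (the file name, dataset ordering and attribute conventions are assumptions about CarpetIOHDF5 output, and the yt part uses the generic uniform-grid loader rather than any Carpet-specific frontend):

# Hypothetical quick look at Carpet HDF5 output with h5py, then handing one
# uniform block to yt by hand.  Names, ordering and attributes are assumed.
import h5py
import numpy as np
import yt

with h5py.File("rho.xyz.h5", "r") as f:
    # Carpet writes one dataset per variable/iteration/refinement component
    names = [n for n in f if n != "Parameters and Global Attributes"]
    print("\n".join(names[:10]))

    dset = f[names[0]]
    rho = dset[()].T                          # assume x is the fastest index
    origin = np.array(dset.attrs["origin"])   # lower corner (x, y, z)
    delta = np.array(dset.attrs["delta"])     # grid spacing (x, y, z)

# Load the block into yt as a uniform grid to get slices, projections, etc.
bbox = np.array([origin, origin + delta * np.array(rho.shape)]).T
ds = yt.load_uniform_grid({"density": rho}, rho.shape, bbox=bbox)
slc = yt.SlicePlot(ds, "z", ("stream", "density"))
slc.save("rho_slice.png")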