Visualization of simulation results

From Einstein Toolkit Documentation
Revision as of 13:45, 14 February 2011 by Rhaas (talk | contribs) (add Roland Haas)

The current state of producing quick-and-dirty, overview-style visualizations of running and completed simulations seems to be that everyone has their own scripts and tools, usually doing just barely the specific task they were designed for. It would be beneficial to have a set of common tools helping with at least some parts of this process: a) retrieving the relevant files, and b) producing an overview of the state of a simulation. The task in question is not to create high-quality plots for, e.g., publications, but rather a monitoring/debugging kind of overview.

To get this started, everyone interested is asked to briefly describe below what they currently do in this respect.

Frank Löffler
rsync (smaller) files by hand; use gnuplot/ygraph/VisIt to look at current results, often with scripts, e.g. for plotting multiple quantities with gnuplot.
I would like to see support for obtaining the relevant files easily (simfactory comes to mind), and some kind of tool generating a short overview of the state of a simulation. An HTML page would probably not be a bad idea, and ideally this should also be able to run on (most) production machines, so that copying the actual data files would not be necessary.
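Such an overview page could start very small. The sketch below scans a simulation directory for scalar ASCII output and writes a one-page HTML summary; the file layout, column meanings, and names here are assumptions for illustration, not part of any existing tool mentioned above.

```python
# Minimal sketch of an HTML run-overview generator.  Assumes scalar output
# lives in whitespace-separated *.asc files with '#' comment lines; these
# conventions are hypothetical, adapt them to your own output.
import glob
import html
import os

def write_overview(simdir, out="overview.html"):
    """Summarize every *.asc file in simdir into a single HTML table."""
    rows = []
    for path in sorted(glob.glob(os.path.join(simdir, "*.asc"))):
        with open(path) as f:
            data = [l for l in f if l.strip() and not l.startswith("#")]
        last = data[-1].split() if data else []
        rows.append((os.path.basename(path), len(data),
                     last[0] if last else "-", last[-1] if last else "-"))
    with open(out, "w") as f:
        f.write("<html><body><h1>%s</h1><table border='1'>\n"
                % html.escape(simdir))
        f.write("<tr><th>file</th><th>rows</th>"
                "<th>last time</th><th>last value</th></tr>\n")
        for name, n, t, v in rows:
            f.write("<tr><td>%s</td><td>%d</td><td>%s</td><td>%s</td></tr>\n"
                    % (html.escape(name), n, html.escape(t), html.escape(v)))
        f.write("</table></body></html>\n")
    return out
```

Since it only needs Python and the plain ASCII output, a script like this could run directly on a production machine, so the data files would not have to be copied first.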
Tanja Bode
Current: A collection of bash/python/gnuplot/ygraph/VisIt scripts automatically generates a variety of interesting plots and builds an internal webpage summarizing the most interesting ones. Our script for VisIt animations has been generalized to take a command-line description of the quantity to be plotted, so its flexibility is maximized.
Interests: I would like to see support for dynamically generated HTML summaries of a run and its status, perhaps by specifying certain basic system properties (0–2 BHs, with/without hydro, presence of a non-BH compact object) to select from subsets of standard plots and a few animations. Having more flexibility in the animation choices on top of this, as we have locally, would be useful. Having these tools run on a cluster would be a plus, but is not necessary.
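The property-based selection described above can be sketched as a simple table mapping each standard plot to a predicate on the system properties. All property and plot names below are made up for illustration; they are not an existing interface.

```python
# Illustrative sketch: choose standard plot sets from basic system
# properties (number of BHs, hydro on/off).  Names are hypothetical.
STANDARD_PLOTS = {
    "bh-trajectories": lambda p: p.get("num_bhs", 0) >= 1,
    "separation":      lambda p: p.get("num_bhs", 0) == 2,
    "rho-max":         lambda p: p.get("hydro", False),
    "waveform":        lambda p: True,   # always of interest
}

def select_plots(properties):
    """Return the sorted names of all plots enabled by these properties."""
    return sorted(name for name, wanted in STANDARD_PLOTS.items()
                  if wanted(properties))
```

A summary-page generator could then loop over `select_plots(...)` and include only the plots and animations relevant to that class of simulation.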
Roland Haas
Same method that Tanja uses (shared scripts/script elements). My interests are similar: a modular system to quickly generate an overview page would be nice.
Ian Hinder
I have a script called "getsim" which is called as "getsim <cluster> <simulation1> <simulation2> ...". This performs an rsync of the simulation directory into my ~/Simulations folder on my laptop. It excludes all files expected to be large, such as 1D, 2D and 3D HDF5 output, the Cactus executable, etc. I often modify the script to change what is excluded or included; it would be nice to have different "presets" so that you could say you wanted the 2D data now, or you wanted output from a particular thorn which you don't normally sync.

It would be very nice for this functionality to be implemented in simfactory, since simfactory already knows how to ssh to remote machines, using gsissh and trampolines if necessary. Currently this is hard-coded into my script for the machines I use.

Once I have the simulation(s) on my laptop, I use a Mathematica package called "NRMMA" that I have written for analysing NR data. It provides a functional interface to the data which deals transparently with merging output from different restarts, and can read data from several different thorns, depending on what is available. The package also supports reading Carpet HDF5 data and doing the required component-merging etc. so that you can do analysis on the resulting data in Mathematica. This supports 1D, 2D and 3D data, but is essentially dimension-agnostic. I now use this instead of VisIt for all my visualisation needs. It has a component called SimView which displays a panel summarising a BBH simulation, including run speed and memory usage as a function of coordinate time, BH trajectories, separation, waveforms, etc. NRMMA is coupled to a C replacement for the Mathematica HDF5 reader, which we have found to be very slow and buggy. I plan to make this package public at some point in the future, but it needs some tidying up before that happens.
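The "presets" idea could be prototyped as a small wrapper that maps each preset name to a list of rsync exclude patterns. The preset names, patterns, and remote directory layout below are purely illustrative, not the actual getsim or simfactory behaviour.

```python
# Sketch of exclude "presets" for a getsim-like sync tool.  Each preset
# is a list of rsync --exclude patterns; all names here are hypothetical.
PRESETS = {
    "default": ["*.h5", "cactus_*"],   # skip large HDF5 output, executables
    "with-2d": ["*.xyz.h5", "cactus_*"],  # let 2D data through, skip 3D
    "all": [],                         # sync everything
}

def rsync_command(cluster, simulation, preset="default",
                  destdir="~/Simulations"):
    """Build (but do not run) the rsync command line for one simulation."""
    cmd = ["rsync", "-av"]
    for pattern in PRESETS[preset]:
        cmd += ["--exclude", pattern]
    cmd.append("%s:simulations/%s" % (cluster, simulation))
    cmd.append(destdir)
    return cmd
```

A tool built into simfactory could construct the remote address the same way it already does for ssh, instead of hard-coding it per machine as above.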
Erik Schnetter
I use gnuplot, together with bash and awk, to postprocess data. For quick looks I work on the remote machine; for in-depth looks I rsync the data to my local machine and run my scripts there. I usually end up writing a shell script or makefile for each project that runs rsync, awk, gnuplot, etc. automatically, so that I can update my graphs with a single command if the data change. Sometimes I try to use VisIt, in particular to find out where NaNs are on a grid. This often fails because something is wrong with the VisIt installation or its dependencies.
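For the specific task of locating NaNs, a heavyweight tool like VisIt is not strictly needed when the data are available as ASCII output. The sketch below scans whitespace-separated data lines and reports where NaNs occur; the file conventions ('#' comments, one record per line) are assumptions, not a fixed standard.

```python
# Sketch: find NaNs in whitespace-separated ASCII output without VisIt.
# Assumes '#' comment lines and one record per line (hypothetical layout).
import math

def find_nans(lines):
    """Yield (line_number, column_index) for every NaN in the data lines."""
    for i, line in enumerate(lines, start=1):
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        for j, token in enumerate(line.split()):
            try:
                if math.isnan(float(token)):
                    yield (i, j)
            except ValueError:
                pass  # non-numeric token, e.g. a label; skip it
```

Run over a file with `find_nans(open("output.asc"))`, this gives the line and column of each NaN, which can then be mapped back to grid coordinates by whatever column convention the output uses.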