<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Noncct+jmiller</id>
	<title>Einstein Toolkit Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Noncct+jmiller"/>
	<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/Special:Contributions/Noncct_jmiller"/>
	<updated>2026-05-13T16:16:57Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Visualization_of_simulation_results&amp;diff=4045</id>
		<title>Visualization of simulation results</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Visualization_of_simulation_results&amp;diff=4045"/>
		<updated>2015-08-14T15:53:33Z</updated>

		<summary type="html">&lt;p&gt;Noncct jmiller: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The current state of producing quick-and-dirty, overview-like visualizations from running and completed simulations seems to be that everyone has their own scripts/tools, usually doing just barely the specific task they were designed for. It would be beneficial to have a set of common tools helping with at least some parts of this process: a) retrieving parts of the files, and b) producing some overview of the state of a simulation. The task in question is not to create high-quality plots for e.g. publications, but rather a monitoring/debugging kind of overview.&lt;br /&gt;
&lt;br /&gt;
To get this started, everyone interested is asked to briefly describe below what they currently do in that respect.&lt;br /&gt;
&lt;br /&gt;
;Frank Löffler: rsync (smaller) files by hand; use gnuplot/ygraph/VisIt to look at current results, often with scripts that e.g. plot multiple quantities with matplotlib and generate web pages from the results&lt;br /&gt;
:               I would like to see support for obtaining the relevant files easily (simfactory comes to mind)&lt;br /&gt;
;Tanja Bode&lt;br /&gt;
:&amp;#039;&amp;#039;Current&amp;#039;&amp;#039;: A collection of bash/python/gnuplot/ygraph/VisIt scripts automatically generates a variety of interesting plots and builds an internal webpage summarizing the most interesting ones.  Our script for VisIt animations has been generalized to take a command-line description of the quantity to be plotted, to maximize its flexibility.&lt;br /&gt;
:&amp;#039;&amp;#039;Interests&amp;#039;&amp;#039;: I would like to see support for dynamically generated HTML summaries of a run and its status, perhaps by specifying certain basic system properties (0-2 BHs, with/without hydro, presence of a non-BH compact object) to select from subsets of standard plots and a few animations.  Having more flexibility in the animation choices on top of this, as we have locally, would be useful. Having these work on a cluster would be a plus, but is not necessary.&lt;br /&gt;
;Roland Haas: same method that Tanja uses (shared scripts/script elements). Interests are similar, a modular system to quickly generate an overview page would be nice.&lt;br /&gt;
;Ian Hinder: I have a script called &amp;quot;getsim&amp;quot; which is called as &amp;quot;getsim &amp;lt;cluster&amp;gt; &amp;lt;simulation1&amp;gt; &amp;lt;simulation2&amp;gt; ...&amp;quot;.  It performs an rsync of the simulation directory into my ~/Simulations folder on my laptop, excluding all files expected to be large, such as 1D, 2D and 3D HDF5 output, the Cactus executable, etc.  I often modify the script to change what is excluded or included; it would be nice to have different &amp;quot;presets&amp;quot; so that you could say you want the 2D data now, or output from a particular thorn which you don&amp;#039;t normally sync.  It would be very nice for this functionality to be implemented in simfactory, since simfactory already knows how to ssh to remote machines, using gsissh and trampolines if necessary; currently this is hard-coded into my script for the machines I use.  Once I have the simulation(s) on my laptop, I use a Mathematica package called [http://simulationtools.org SimulationTools] written by Barry Wardell and me.  It provides a functional interface to the data which deals transparently with merging output from different restarts, and can read data from several different thorns, depending on what is available.  The package also supports reading Carpet HDF5 data and doing the required component-merging etc., so that you can do analysis on the resulting data in Mathematica.  It supports 1D, 2D and 3D data, but is essentially dimension-agnostic.  I now use this instead of VisIt for all my visualisation needs.  It has a component called SimView which displays a panel summarising a BBH simulation, including run speed and memory usage as a function of coordinate time, BH trajectories, separation, waveforms, etc.  SimulationTools is coupled to a C replacement for Mathematica&amp;#039;s built-in HDF5 reader, which we have found to be very slow and buggy.
SimulationTools is available under the GPL and is developed on BitBucket, and would be a good candidate for inclusion in the Einstein Toolkit at some point.&lt;br /&gt;
&lt;br /&gt;
;Erik Schnetter: I use gnuplot, together with bash and awk, to postprocess data. For quick looks I work on the remote machine; for in-depth looks I rsync the data to my local machine and run my scripts there. I usually end up writing a shell script or makefile for each project that runs rsync, awk, gnuplot, etc. automatically, so that I can update my graphs with a single command if the data change. Sometimes I try to use VisIt, in particular to find out where NaNs are on a grid. This often fails because there is something wrong with the VisIt installation or its dependencies.&lt;br /&gt;
&lt;br /&gt;
;Bruno Mundim: I usually output as much as I possibly can (0D, 1D, 2D and 3D) for the initial data and scp these data to my desktop. If I am happy with that, I send out the next run and wait until it is finished to look into the data again, mostly 0D and 2D slices; I don&amp;#039;t output 3D data sets anymore. The 0D slices are usually in ASCII format. I run a set of scripts to combine the sets from previous checkpoints. I then use a set of supermongo (http://www.astro.princeton.edu/~rhl/sm/) scripts to generate a bunch of postscript files with the quantities I am interested in. This is my quick way to look into 0D data: bring the data in, combine the sets, plot with supermongo. When I want to do a bit more analysis, I parse the ASCII output, piping it through awk in order to send just two columns of data to xvs (http://cactuscode.org/documentation/visualization/DataVaultXVS/). This software is quite nice for all sorts of analysis: derivatives, convergence tests, zoom in/out, merging data sets, animations, etc. Besides, xvs can run on your desktop and have the data piped to it remotely from your HPC resource, either in real time or via an SDF (or HDF) file. To be continued...&lt;br /&gt;
&lt;br /&gt;
;Peter Diener: For quick and dirty looks at the data I use gnuplot (in combination with awk and paste where necessary) and ygraph, usually on the remote machine. For more in-depth analysis (such as anything that requires taking time derivatives, integrals, interpolation and/or Fourier transforms, which AFAIK cannot be done in gnuplot) I transfer the necessary data to my laptop or workstation and write simple scripts in Mathematica on a case-by-case basis. I have looked a little at Ian&amp;#039;s Mathematica scripts but didn&amp;#039;t spend enough time to learn the internal data structures well enough to write my own modules when something I needed was not present. For 3D visualization I still like to use OpenDX, even though it&amp;#039;s no longer developed. I find the interface much nicer and easier to use than VisIt&amp;#039;s.&lt;br /&gt;
&lt;br /&gt;
;Jonah Miller: For quick and dirty looks at the data I use IPython or IPython notebooks. I prefer to read HDF5 output, since I don&amp;#039;t have to parse it. For more detailed analysis, I copy the data over to my laptop and write Python data-analysis scripts on a case-by-case basis. For 3D visualization, I use [https://docs.einsteintoolkit.org/et-docs/Analysis_and_post-processing#yt yt]. I often have to do a fair amount of work to feed the data into yt, however, since automatic readers for yt are preliminary.&lt;/div&gt;</summary>
		<author><name>Noncct jmiller</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Analysis_and_post-processing&amp;diff=4042</id>
		<title>Analysis and post-processing</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Analysis_and_post-processing&amp;diff=4042"/>
		<updated>2015-08-14T15:32:10Z</updated>

		<summary type="html">&lt;p&gt;Noncct jmiller: /* yt */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;(page under construction)&lt;br /&gt;
&lt;br /&gt;
This page collects information on existing tools which can be used to analyse data produced with the Einstein Toolkit. See also [[Visualization of simulation results]].&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
== SimulationTools ==&lt;br /&gt;
&lt;br /&gt;
[[Image:datavisualisation.png|right]]&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! Homepage&lt;br /&gt;
| [http://simulationtools.org simulationtools.org]&lt;br /&gt;
|-&lt;br /&gt;
! Authors&lt;br /&gt;
| Ian Hinder and Barry Wardell&lt;br /&gt;
|-&lt;br /&gt;
! Licence&lt;br /&gt;
| GPLv2&lt;br /&gt;
|-&lt;br /&gt;
!Requirements&lt;br /&gt;
| Mathematica (proprietary)&lt;br /&gt;
|-&lt;br /&gt;
!Other info&lt;br /&gt;
| [http://simulationtools.org/Documentation/English/Tutorials/SimulationTools.html Documentation], [https://build.barrywardell.net/view/All/job/SimulationTools/ unit tests] (automatic on commit)&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
SimulationTools provides a functional interface to simulation data which deals transparently with merging output from different segments, and can read data from several different thorns, depending on what is available.  The package supports reading Carpet HDF5 data and doing the required component-merging etc., so that you can do analysis on the resulting data in Mathematica.  It supports 1D, 2D and 3D data, but is essentially dimension-agnostic.  It has a component called SimView which displays a panel summarising a BBH simulation, including run speed and memory usage as a function of coordinate time, BH trajectories, separation, waveforms, etc.  SimulationTools is coupled to a C++ replacement for Mathematica&amp;#039;s built-in HDF5 reader.&lt;br /&gt;
&lt;br /&gt;
SimulationTools is available under the GPL and is developed on BitBucket, and would be a good candidate for inclusion in the Einstein Toolkit at some point.&lt;br /&gt;
&lt;br /&gt;
A more detailed overview of the [http://simulationtools.org/features.shtml features in SimulationTools] is available.&lt;br /&gt;
&lt;br /&gt;
== PostCactus ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! Homepage&lt;br /&gt;
| [https://bitbucket.org/DrWhat/pycactuset bitbucket.org/DrWhat/pycactuset]&lt;br /&gt;
|-&lt;br /&gt;
! Authors&lt;br /&gt;
| Wolfgang Kastaun&lt;br /&gt;
|-&lt;br /&gt;
! Licence&lt;br /&gt;
| GPLv3&lt;br /&gt;
|-&lt;br /&gt;
!Requirements&lt;br /&gt;
| Python (HDF5, PyTables, H5Py, NumPy, SciPy)&lt;br /&gt;
|-&lt;br /&gt;
!Other info&lt;br /&gt;
| Documentation&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This package contains modules to read and represent various CACTUS&lt;br /&gt;
data formats in Python, and some utilities for data analysis.&lt;br /&gt;
In detail,&lt;br /&gt;
&lt;br /&gt;
* simdir is an abstraction of one or more CACTUS output directories, allowing access to data of various types by variable name, and seamlessly combining data split over several directories.&lt;br /&gt;
* cactus_grid_h5 reads 1D, 2D and 3D data from CACTUS HDF5 datasets.&lt;br /&gt;
* cactus_grid_omni uses whatever data source it finds, e.g. cutting a 3D file to get xy-plane data.&lt;br /&gt;
* grid_data represents simple and mesh-refined datasets, common arithmetic operations on them, as well as interpolation.&lt;br /&gt;
* cactus_scalars reads CACTUS 0D/Reductions ASCII files representing timeseries.&lt;br /&gt;
* cactus_gwsignal reads gravitational-wave data.&lt;br /&gt;
* cactus_multipoles reads multipole decomposition data (ASCII).&lt;br /&gt;
* timeseries represents timeseries and provides resampling and numerical differentiation.&lt;br /&gt;
* fourier_util performs FFT on timeseries and searches for peaks.&lt;br /&gt;
* unitconv performs unit conversion with focus on geometric units, including predefined CACTUS, PIZZA, and CGS systems.&lt;br /&gt;
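To illustrate the kind of conversion a unit-conversion module like unitconv handles (this sketch does not use PostCactus's actual API, which is not reproduced here; the function name is hypothetical), here is the arithmetic for converting a time in geometric units (G = c = 1, mass unit the solar mass) to seconds in plain Python:

```python
# Minimal sketch of geometric-unit time conversion (G = c = 1, mass unit M_sun).
# This is NOT PostCactus's unitconv API, only an illustration of the arithmetic.

G     = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
C     = 2.99792458e8     # speed of light, m/s
M_SUN = 1.98892e30       # solar mass, kg

def geom_time_to_seconds(t_geom, mass_msun=1.0):
    """Convert a time in geometric units to seconds, for a system of
    total mass mass_msun solar masses: t[s] = t_geom * G*M / c^3."""
    return t_geom * G * mass_msun * M_SUN / C**3

# One geometric time unit for a solar-mass system is roughly 4.93 microseconds.
```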
&lt;br /&gt;
== yt ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! Homepage&lt;br /&gt;
| [http://yt-project.org/ yt project]&lt;br /&gt;
|-&lt;br /&gt;
! Authors&lt;br /&gt;
| [http://yt-project.org/about.html many]&lt;br /&gt;
|-&lt;br /&gt;
! Licence&lt;br /&gt;
| BSD 3-clause license&lt;br /&gt;
|-&lt;br /&gt;
!Requirements&lt;br /&gt;
| [http://bitbucket.org/yt_analysis/yt/raw/stable/doc/install_script.sh all in one install script]&lt;br /&gt;
|-&lt;br /&gt;
!Other info&lt;br /&gt;
| [http://yt-project.org/docs/3.2/ Documentation]&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
yt is mostly a visualization toolkit with basic support for reading Cactus data, but it can also do some data manipulation. The current version of the Einstein Toolkit frontend is available in [https://bitbucket.org/xarthisius/yt Kacper Krowalik&amp;#039;s fork] of yt. It is a work in progress and highly experimental, so many datasets will not be read in correctly. To obtain it:&lt;br /&gt;
&lt;br /&gt;
 hg clone ssh://hg@bitbucket.org/xarthisius/yt&lt;br /&gt;
 cd yt&lt;br /&gt;
 hg bookmark cactus&lt;br /&gt;
 python setup.py build_ext -i&lt;br /&gt;
&lt;br /&gt;
If this works, yt is installed. Now just add $(pwd) to your PYTHONPATH.&lt;br /&gt;
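Equivalently, instead of exporting PYTHONPATH in the shell, the checkout can be prepended to Python's module search path per script (the checkout location below is hypothetical; use wherever you cloned yt):

```python
# Make a local yt checkout importable without touching the PYTHONPATH
# environment variable. The path is an assumed example location.
import os
import sys

yt_checkout = os.path.expanduser("~/yt")   # hypothetical clone location
sys.path.insert(0, yt_checkout)            # searched before installed packages

# import yt   # would now pick up the checkout, once it has been built
```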
&lt;br /&gt;
Some example scripts for using and extending yt (and using the Einstein Toolkit frontend) can be found [https://bitbucket.org/Yurlungur/yt-examples here].&lt;br /&gt;
&lt;br /&gt;
== PyCactus ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:left;&amp;quot;&lt;br /&gt;
! Homepage&lt;br /&gt;
| [https://bitbucket.org/knarrff/pycactus https://bitbucket.org/knarrff/pycactus]&lt;br /&gt;
|-&lt;br /&gt;
! Authors&lt;br /&gt;
| Roberto De Pietri, Francesco Maione, Frank Löffler&lt;br /&gt;
|-&lt;br /&gt;
! Licence&lt;br /&gt;
| GPL&lt;br /&gt;
|-&lt;br /&gt;
!Requirements&lt;br /&gt;
| Python&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
PyCactus is a set of tools currently developed primarily to read Carpet ASCII and HDF5 data (especially 0D/1D) for use in Python and plotting with matplotlib. Documentation is essentially non-existent at this point, but the scripts are not that long and should, for the most part, be understandable.&lt;/div&gt;</summary>
		<author><name>Noncct jmiller</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Yt&amp;diff=3906</id>
		<title>Yt</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Yt&amp;diff=3906"/>
		<updated>2015-05-29T12:20:15Z</updated>

		<summary type="html">&lt;p&gt;Noncct jmiller: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is the wiki page for developing the yt frontend for the Einstein Toolkit.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
We have an almost working frontend.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Challenges ==&lt;br /&gt;
&lt;br /&gt;
* yt supports cell-centred data only.&lt;br /&gt;
** To correct this, we first implement a fake subgrid, filled by restriction.&lt;br /&gt;
** Later, Matt will add better support for true vertex-centred data.&lt;br /&gt;
* Multiblock is not yet supported, but yt&amp;#039;s coordinate handler will be able to make it work.&lt;br /&gt;
* Discontinuous Galerkin methods are not supported; support will come with hexahedral mesh support.&lt;br /&gt;
** High-order finite element support is coming, and then discontinuous Galerkin methods will work beautifully.&lt;br /&gt;
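The &amp;quot;fake subgrid filled by restriction&amp;quot; idea above can be sketched in 1D: averaging each pair of adjacent vertex values yields one cell-centred value. This is purely an illustration of the technique, not the frontend's actual code:

```python
# Illustrative 1D sketch of turning vertex-centred data into cell-centred
# data by averaging adjacent vertices. NOT the actual yt frontend code.

def vertex_to_cell_1d(vertex_values):
    """Average each pair of adjacent vertex values to get one cell value.
    N vertices produce N-1 cells."""
    return [0.5 * (a + b) for a, b in zip(vertex_values, vertex_values[1:])]

# Four vertices give three cells, each centred between two vertices.
```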
&lt;br /&gt;
== Shopping List ==&lt;br /&gt;
&lt;br /&gt;
Kacper, can you please write down what information you wish the Einstein Toolkit output format included?&lt;br /&gt;
&lt;br /&gt;
== Simulation List ==&lt;br /&gt;
&lt;br /&gt;
We have the following simulations available to test against:&lt;br /&gt;
&lt;br /&gt;
* static_tov, a static neutron star with fixed mesh refinement&lt;br /&gt;
* ml_wavetoy, a series of simulations of a Gaussian pulse with refinement at the centre and periodic boundary conditions&lt;br /&gt;
* excited_tov_Lev0-i0000, a perturbed neutron star with adaptive mesh refinement and a seven-patch multiblock infrastructure.&lt;br /&gt;
* bssn_dg4_sixpatches_kerrschild, a black hole with excision using discontinuous Galerkin methods&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== TODO ==&lt;br /&gt;
&lt;br /&gt;
* After it is working, Jonah will add comments to the frontend to preserve, for future generations, the knowledge of Carpet required to design it, gleaned during the workshop at the NCSA.&lt;br /&gt;
* Other TODOS?&lt;/div&gt;</summary>
		<author><name>Noncct jmiller</name></author>
		
	</entry>
</feed>