<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Noncct+barry</id>
	<title>Einstein Toolkit Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Noncct+barry"/>
	<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/Special:Contributions/Noncct_barry"/>
	<updated>2026-05-02T14:34:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2135</id>
		<title>Adding requirements to the Cactus scheduler</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2135"/>
		<updated>2011-01-25T14:04:31Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: /* Simple Test Case */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Problem Outline==&lt;br /&gt;
&lt;br /&gt;
One of the most complex aspects of programming with Cactus is currently writing schedule.ccl files for new routines, in particular when mesh refinement is used. The basic problem is that it is very difficult to ensure that routines are executed in the correct order, i.e. that all grid variables which a routine requires are actually calculated beforehand. It is also difficult to ensure that boundary conditions (and synchronisation and symmetry boundaries) are applied when needed, in particular after regridding.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule consists of several independent &amp;quot;parts&amp;quot;: there are schedule bins defined by the flesh, there are schedule groups defined by infrastructure thorns (e.g. MoL or HydroBase), and there is the recursive Berger-Oliger algorithm traversing the bins, implemented in Carpet. It is difficult for the end user to see which groups are executed when, on which refinement level, and in which order.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule offers &amp;quot;before&amp;quot; and &amp;quot;after&amp;quot; clauses to ensure a partial ordering between routines. Unfortunately, this ordering applies only to routines within the same schedule group and the same schedule bin and refinement level. It is not possible to ensure a particular order between routines in different schedule groups or schedule bins, and it is very complex to ensure that a routine is executed e.g. after another routine has been executed on all refinement levels.&lt;br /&gt;
&lt;br /&gt;
There is one example setup that illustrates this problem. When setting up initial conditions for a hydrodynamics evolution, one may e.g. want to first set up a neutron star, then calculate its maximum density, and then set the atmosphere to a value depending on this maximum density. Making this possible in Cactus required introducing a new schedule bin &amp;quot;postpostinitial&amp;quot; to the flesh, and requires careful arrangement of the schedule groups defined by ADMBase and HydroBase. Even now, it is probably not possible to verify at run time that these actions occur in the correct order.&lt;br /&gt;
&lt;br /&gt;
==Suggested Solution==&lt;br /&gt;
&lt;br /&gt;
To resolve this issue, and to generally simplify the way in which schedule.ccl files are designed and written, the following was suggested:&lt;br /&gt;
&lt;br /&gt;
* Each scheduled routine declares which grid variables it reads and which grid variables it writes&lt;br /&gt;
* Since most routines write only parts of grid variables, the routine would also specify which part it reads/writes, e.g. the interior, outer boundary, symmetry boundary, etc.&lt;br /&gt;
* This allows the Cactus scheduler in a first step to validate the schedule and detect cases where a required variable has not been defined, or where a variable is calculated multiple times or synchronized multiple times&lt;br /&gt;
* In a second step this will also allow the Cactus scheduler to completely derive the schedule from these declarations. This may even make it possible to execute routines in parallel if they are independent. Even SYNC statements can be automatically derived, and schedule groups would not be necessary any more.&lt;br /&gt;
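The derivation sketched in the last point is essentially a topological sort of a dependency graph built from the declarations. A minimal sketch in Python (the helper name, data layout, and routine names are illustrative, not part of Cactus):

```python
# Sketch (hypothetical helper, not Cactus API): derive an execution order
# from per-routine READS/WRITES declarations via Kahn's algorithm.
from collections import deque

def derive_schedule(routines):
    """routines: dict mapping routine name to a dict with 'reads' and
    'writes' sets of grid-variable names. Returns a valid execution order."""
    # A routine that reads a variable depends on every routine writing it.
    writers = {}
    for name, decl in routines.items():
        for var in decl["writes"]:
            writers.setdefault(var, set()).add(name)
    deps = {name: set() for name in routines}
    for name, decl in routines.items():
        for var in decl["reads"]:
            deps[name].update(writers.get(var, set()))
        deps[name].discard(name)  # self-modifying routines need extra rules
    # Kahn's algorithm: repeatedly emit routines with no unmet dependencies.
    indegree = {name: len(d) for name, d in deps.items()}
    ready = deque(sorted(n for n, k in indegree.items() if k == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m, d in deps.items():
            if n in d:
                indegree[m] -= 1
                if indegree[m] == 0:
                    ready.append(m)
    if len(order) != len(routines):
        raise ValueError("cyclic dependencies; schedule cannot be derived")
    return order
```

Routines with no dependency path between them end up unordered, which is exactly the freedom a scheduler could exploit to run them in parallel.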
&lt;br /&gt;
One particular issue arises with routines which modify a variable, e.g. imposing the constraint that &amp;lt;math&amp;gt;\tilde A^i_i=0&amp;lt;/math&amp;gt;. These routines read and write the same variable, and it is thus not immediately clear why they should be executed or in which order they should be executed.&lt;br /&gt;
&lt;br /&gt;
One possibility to resolve this would be to add a tag to variables, declaring that such a routine &amp;quot;reads Aij:original&amp;quot; and &amp;quot;writes Aij:constraints-enforced&amp;quot;. Every other routine accessing these variables would then also need to declare whether it reads or writes the original Aij or the Aij with constraints enforced. This is also the drawback of this mechanism: it would create unwanted dependencies between thorns.&lt;br /&gt;
&lt;br /&gt;
Another possibility would be to use the existing BEFORE/AFTER mechanism for those cases where it is not possible to define proper data dependencies through variables alone. This would make it very easy to insert a function which modifies e.g. Aij at any place, without making other thorns depend on the presence of this inserted routine.&lt;br /&gt;
&lt;br /&gt;
Another issue arises with loops in the schedule. These are currently used mostly by MoL for the sub-timesteps. There is currently no good idea for handling this. Logically, such a loop can be seen as a nested schedule tree, so it should be possible to treat it in the same way as the complete tree.&lt;br /&gt;
&lt;br /&gt;
==Current State==&lt;br /&gt;
&lt;br /&gt;
There is a patch &amp;lt;https://wiki.einsteintoolkit.org/et-docs/images/d/de/requirements.diff&amp;gt; to the flesh that allows adding REQUIRES and PROVIDES clauses to the schedule block for every routine. (&amp;#039;&amp;#039;&amp;#039;Update:&amp;#039;&amp;#039;&amp;#039; this patch has been applied to the flesh branch, please use the branch.) These clauses can be arbitrary strings (there is no syntax checking done), and they are stored in the schedule database of the flesh and are ignored by default. There is a suggestion to rename these clauses to READS and WRITES; this has not yet been done.&lt;br /&gt;
&lt;br /&gt;
The component list of the project can be found at the following URL, which includes the branch of the flesh, examples and other project files:&lt;br /&gt;
&lt;br /&gt;
 https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th&lt;br /&gt;
&lt;br /&gt;
Carpet has a file Requirements.cc that detects the presence of these clauses and performs rudimentary checks. These checks are probably useless in their current form.&lt;br /&gt;
&lt;br /&gt;
==Next Steps==&lt;br /&gt;
&lt;br /&gt;
To bring this project further, we need to define what the &amp;quot;reads&amp;quot; and &amp;quot;writes&amp;quot; clauses should look like. As mentioned above, it is insufficient to list only grid variables there, since most routines access only parts of grid variables. &amp;#039;&amp;#039;&amp;#039;Ian Hinder&amp;#039;&amp;#039;&amp;#039; volunteered to come up with an initial plan for what kinds of &amp;quot;parts&amp;quot; there should be (e.g. interior, outer boundary, symmetry boundary, ghost zone, etc.). These parts are driver-dependent, which means we have to come up with a way to tell the flesh about the parts and their relations (what is part of what).&lt;br /&gt;
&lt;br /&gt;
A simple example would look like the following (syntax arbitrary):&lt;br /&gt;
&lt;br /&gt;
 INTERIOR ⊂ DOMAIN&lt;br /&gt;
 BOUNDARY ⊂ DOMAIN&lt;br /&gt;
 INTERIOR ∩ BOUNDARY = ∅&lt;br /&gt;
 INTERIOR ∪ BOUNDARY = DOMAIN&lt;br /&gt;
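These relations can be checked mechanically. A minimal sketch in Python, modelling the parts of a one-dimensional grid as sets of point indices (the grid size and boundary width are arbitrary, purely for illustration):

```python
# Sketch: model the parts of a 1-D grid of N points as sets of point
# indices and verify the part relations stated above. Purely illustrative.
N = 10          # points in the grid (arbitrary)
WIDTH = 2       # boundary width (arbitrary)

DOMAIN = set(range(N))
BOUNDARY = set(range(WIDTH)).union(range(N - WIDTH, N))
INTERIOR = DOMAIN.difference(BOUNDARY)

assert INTERIOR.issubset(DOMAIN)
assert BOUNDARY.issubset(DOMAIN)
assert INTERIOR.isdisjoint(BOUNDARY)       # INTERIOR ∩ BOUNDARY = ∅
assert INTERIOR.union(BOUNDARY) == DOMAIN  # INTERIOR ∪ BOUNDARY = DOMAIN
```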
&lt;br /&gt;
==Defining parts of grid functions==&lt;br /&gt;
&lt;br /&gt;
Application thorns typically write to either the interior of the grid (for example, those points which can be updated using finite differencing) or to the physical outer boundary (for applying user-supplied boundary conditions).  Other types of points are those on symmetry boundaries, interprocessor boundaries and mesh refinement boundaries, which an application thorn should never need to write to.  Symmetry thorns would write to symmetry boundaries, and the driver would write to interprocessor and mesh refinement boundaries.&lt;br /&gt;
&lt;br /&gt;
Consider a single local grid component.  It is a cuboidal set of points.  According to Cactus, each of the six faces of the component is an interprocessor boundary (including refinement boundaries), a symmetry boundary, or a physical boundary.  Each face can be only one of these.  Each face has a boundary width.  Points on edges and corners are associated with multiple faces, and are considered physical boundary points if they are not part of a symmetry or interprocessor boundary.  Hence, the physical boundary points are exactly those which absolutely have to be updated, as they are not updated by any other mechanism.&lt;br /&gt;
&lt;br /&gt;
A typical application thorn only needs to be concerned with interior and physical boundary points.  We can divide the points in a component into the categories:&lt;br /&gt;
&lt;br /&gt;
* Interior;&lt;br /&gt;
* PhysicalBoundary;&lt;br /&gt;
* SymmetryBoundary;&lt;br /&gt;
* InterprocessorBoundary;&lt;br /&gt;
* RefinementBoundary.&lt;br /&gt;
&lt;br /&gt;
Most scheduled application functions need to read their variables from everywhere on the grid, and some write variables everywhere on the grid.  We can use READS and WRITES lines in a schedule block to specify the variables and locations that each scheduled function reads from and writes to.  Each line would be a space-separated list (we should think of a mechanism to allow new-lines) of variables or groups (qualified with an implementation name if outside the current implementation).  To specify which part of the grid is being read or written, we could have &amp;quot;part&amp;quot; keywords in curly brackets after the grid function or group name.  If omitted, the default would be Everywhere. (FrankL: Shouldn&amp;#039;t we make the default for reading everywhere, but for writing only the interior? This is what most thorns do. IanH: I agree that most thorns do this, but we have to weigh that against the confusion of having two different defaults.)&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
&lt;br /&gt;
For example,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
schedule TwoPunctures AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	WRITES: ADMBase::metric ADMBase::curv ADMBase::lapse&lt;br /&gt;
} &amp;quot;Create puncture black hole initial data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBase AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv ADMBase::lapse ADMBase::shift&lt;br /&gt;
	WRITES: ML_log_confac ML_metric ML_trace_curv ML_curv ML_shift &lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBase&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBaseGamma AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric&lt;br /&gt;
	WRITES: ML_Gamma{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBaseGamma&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RHS1 in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{Interior} ML_metric_rhs{Interior} ML_trace_curv_rhs{Interior} ML_curv_rhs{Interior} ML_Gamma_rhs{Interior} ADMBase::dtlapse{Interior} ML_shift_rhs{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_RHS1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RadiativeRHSBoundary in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{PhysicalBoundary} ML_metric_rhs{PhysicalBoundary} ML_trace_curv_rhs{PhysicalBoundary} ML_curv_rhs{PhysicalBoundary} ML_Gamma_rhs{PhysicalBoundary} ADMBase::dtlapse{PhysicalBoundary} ML_shift_rhs{PhysicalBoundary}&lt;br /&gt;
} &amp;quot;ML_BSSN_RadiativeRHSBoundary&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_enforce in MoL_PostStep&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_metric ML_curv&lt;br /&gt;
	WRITES: ML_curv&lt;br /&gt;
} &amp;quot;ML_BSSN_enforce&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule psis_calc_4th AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv&lt;br /&gt;
	WRITES: Psi4r{Interior} Psi4i{Interior}&lt;br /&gt;
} &amp;quot;psis_calc_4th&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule Multipole_Calc AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: Psi4r Psi4i&lt;br /&gt;
} &amp;quot;Multipole_Calc&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It might be useful to extend the syntax so that a single part specification can apply to all variables in a READS or WRITES line, since reading or writing the same part of the grid for all listed variables will be the usual case.&lt;br /&gt;
&lt;br /&gt;
==Interaction with MoL==&lt;br /&gt;
&lt;br /&gt;
MoL is the time integrator that takes grid functions on the previous time level as input and produces new values for the grid functions on the current time level as output. It requires routines that calculate the RHS and/or apply boundary conditions to the evolved grid functions.&lt;br /&gt;
&lt;br /&gt;
Integrating MoL with the mechanism provided above faces several difficulties:&lt;br /&gt;
* The set of evolved grid functions is not defined in the schedule.ccl; it is instead defined via function calls at run time. One approach would be to define call-back functions that MoL has to provide, so that the scheduler can access this information.&lt;br /&gt;
* It is a priori not clear whether MoL evolves only the interior or also the boundary of grid functions. This can even be different for different grid functions. We can probably safely assume that MoL does not evolve ghost zones or symmetry zones (although this is technically also not defined).&lt;br /&gt;
* MoL integrates in time in a WHILE loop implemented in the scheduler. The WHILE condition depends on the particular time integrator that is chosen.&lt;br /&gt;
&lt;br /&gt;
To simplify things, I suggest that we leave MoL unmodified and treat it as a black box. MoL needs to specify (e.g. via callback functions) which variables are integrated in time, and which region of these variables is integrated. The input to MoL is then the past time level of these variables, and the output of MoL is the current time level of these variables.&lt;br /&gt;
&lt;br /&gt;
Further, there is one special bin (or group) very similar to the existing MoL_RHS. In this bin, initially the current time level of these variables is defined (MoL needs to ensure this). At the end of this bin, the RHS grid functions need to be defined (MoL requires this). This is equivalent to a WRITES and READS statement.&lt;br /&gt;
&lt;br /&gt;
Since it is then known which regions of which variables MoL accesses (reads/writes), the scheduler can handle the rest and can schedule all other required routines, such as boundary conditions. For example, if MoL provides (&amp;quot;writes&amp;quot;) the interior of the state vector at the beginning of the RHS bin, and there is a routine which reads the whole domain of the state vector and writes the interior of the RHS, then the scheduler can easily deduce that the corresponding boundary condition routine must be called.&lt;br /&gt;
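The deduction in this example amounts to a set difference between what is provided and what is required. A minimal sketch in Python (the helper name and region names are illustrative, not Cactus API):

```python
# Sketch: given what MoL provides at the start of the RHS bin and what a
# routine requires, deduce which regions still need to be filled in by
# other routines (e.g. boundary conditions). Names are illustrative.
def missing_regions(provided, required):
    """provided/required: dicts mapping variable name to the set of region
    names already valid / needed. Returns the regions still to be filled."""
    gaps = {}
    for var, regions in required.items():
        gap = regions.difference(provided.get(var, set()))
        if gap:
            gaps[var] = gap
    return gaps

# MoL writes the interior of the state vector at the start of the RHS bin;
# the RHS routine reads the whole domain (interior plus boundary):
provided = {"ML_BSSN": {"Interior"}}
required = {"ML_BSSN": {"Interior", "Boundary"}}
print(missing_regions(provided, required))  # {'ML_BSSN': {'Boundary'}}
```

The non-empty gap for the boundary region is what tells the scheduler that a boundary condition routine has to run before the RHS routine.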
&lt;br /&gt;
===Example===&lt;br /&gt;
&lt;br /&gt;
MoL provides a call-back function that specifies the READS and WRITES declarations for MoL altogether and for MoL_RHS:&lt;br /&gt;
* MoL READS ML_BSSN{Interior, previous-timelevel}&lt;br /&gt;
* MoL WRITES ML_BSSN{Interior, current-timelevel}&lt;br /&gt;
* MoL_RHS WRITES ML_BSSN{Interior}&lt;br /&gt;
* MoL_RHS READS ML_BSSN_RHS{Interior}&lt;br /&gt;
&lt;br /&gt;
The declarations for MoL_RHS are understood as describing what is present in the beginning and what is required at the end of this bin.&lt;br /&gt;
&lt;br /&gt;
Of course, the programmer could also decide that certain evolved variables are integrated all over the domain, not just the interior.&lt;br /&gt;
&lt;br /&gt;
The application would then provide (at least) the following routines:&lt;br /&gt;
&lt;br /&gt;
RHS: READS ML_BSSN{All}, WRITES ML_BSSN_RHS{Interior}&lt;br /&gt;
BC: READS ML_BSSN{Interior}, WRITES ML_BSSN{Boundary}&lt;br /&gt;
&lt;br /&gt;
We can easily extend this example to include conversion to ADMBase if e.g. another RHS routine requires them.&lt;br /&gt;
&lt;br /&gt;
Synchronisation and symmetry boundaries would also be applied automatically. (There is a slight complication regarding whether &amp;quot;Boundary&amp;quot; includes ghost zones or not – grid points on the edge or in the corner of grid functions can be both an outer boundary and a ghost zone, and one needs to be clear whether these are included or not. However, this is a detail that can be solved later.)&lt;br /&gt;
&lt;br /&gt;
=== Simple Test Case ===&lt;br /&gt;
Since current schedules, even for WaveToy, are already very complex, we have a test code with a very simple schedule. It is implemented in the WaveToySimple thorn (https://svn.cactuscode.org/projects/NewSchedule/WaveToySimple/trunk/). To get the test code working, check out Cactus using this thornlist: https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th then apply the patch [[Image:requirements2.diff]] and compile. Simple parameter files are provided in arrangements/NewSchedule/WaveToySimple/par.&lt;br /&gt;
&lt;br /&gt;
The requirements part of the schedule looks as follows:&lt;br /&gt;
&lt;br /&gt;
* WaveToy_InitialData&lt;br /&gt;
:  PROVIDES: scalarevolve scalarevolve_p&lt;br /&gt;
* WaveToy_Evolution&lt;br /&gt;
:  REQUIRES: scalarevolve_p scalarevolve_p_p[Interior]&lt;br /&gt;
:  PROVIDES: scalarevolve[Interior]&lt;br /&gt;
* WaveToy_Boundaries&lt;br /&gt;
:  PROVIDES: scalarevolve[PhysicalBoundary]&lt;br /&gt;
* WaveToy_Analysis&lt;br /&gt;
:  REQUIRES: scalarevolve&lt;br /&gt;
:  PROVIDES: scalaranalysis&lt;br /&gt;
&lt;br /&gt;
There are some issues encountered with this schedule:&lt;br /&gt;
* Curly brackets do not work for specifying parts of the grid as they confuse the parser. Square brackets were used instead.&lt;br /&gt;
* It&amp;#039;s not clear how to refer to past time levels. The _p syntax was used, but that isn&amp;#039;t accepted by the schedule checker.&lt;br /&gt;
* WaveToy_Analysis requires scalarevolve, but the schedule checker does not recognize it as being provided (because the provides are in a separate schedule bin?).&lt;br /&gt;
* READS and WRITES seem more appropriate than REQUIRES and PROVIDES.&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2134</id>
		<title>Adding requirements to the Cactus scheduler</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2134"/>
		<updated>2011-01-25T14:02:14Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: /* Simple Test Case */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Problem Outline==&lt;br /&gt;
&lt;br /&gt;
One of the currently most complex aspects of programming with Cactus is writing schedule.ccl files for new routines, in particular if mesh refinement is used. The basic problem is that it is very difficult to ensure that routines are executed in the correct order, i.e. that all grid variables which are required for a routine are actually calculated beforehand. It is also difficult to ensure that boundary conditions (and synchronisation and symmetry boundaries) are applied when needed, in particular after regridding.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule consists of several independent &amp;quot;parts&amp;quot;: There are schedule bins defined by the flesh, there are schedule groups defined by infrastructure thorns (e.g. MoL or HydroBase), and there is the recursive Berger-Oliger algorithm traversing the bins implemented in Carpet. It is for the end-user difficult to see which groups are executed when and on what refinement level, and in which order this occurs.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule offers &amp;quot;before&amp;quot; and &amp;quot;after&amp;quot; clauses to ensure a partial ordering between routines. Unfortunately, this ordering applies only to routines within the same schedule group and the same schedule bin and refinement level. It is not possible to ensure a particular order between routines in different schedule groups or schedule bins, and it is very complex to ensure that a routine is executed e.g. after another routine has been executed on all refinement levels.&lt;br /&gt;
&lt;br /&gt;
There is one example setup that illustrates this problem. When setting up initial conditions for a hydrodynamics evolution, one may e.g. want to first set up a neutron star, then calculate its maximum density, and then set up the atmosphere to a value depending on this maximum density. Making this possible in Cactus required introducing a new schedule bin &amp;quot;postpostinitial&amp;quot; to the flesh, and requires careful arrangement of schedule groups defined by ADMBase and HydroBase. Even now that this is possible, it is probably not possible to ensure at run time that these actions occur in a correct order.&lt;br /&gt;
&lt;br /&gt;
==Suggested Solution==&lt;br /&gt;
&lt;br /&gt;
To resolve this issue, and to generally simplify the way in which schedule.ccl files are designed and written, the following was suggested:&lt;br /&gt;
&lt;br /&gt;
* Each scheduled routine declares which grid variables it reads and which grid variables it writes&lt;br /&gt;
* Since most routines write only parts of grid variables, the routine would also specify which part it reads/writes, e.g. the interior, outer boundary, symmetry boundary, etc.&lt;br /&gt;
* This allows the Cactus scheduler in a first step to validate the schedule and detect cases where a required variable has not been defined, or where a variable is calculated multiple times or synchronized multiple times&lt;br /&gt;
* In a second step this will also allow the Cactus scheduler to completely derive the schedule from these declarations. This may even make it possible to execute routines in parallel if they are independent. Even SYNC statements can be automatically derived, and schedule groups would not be necessary any more.&lt;br /&gt;
&lt;br /&gt;
One particular issue arises with routines which modify a variable, e.g. imposing the constraint that &amp;lt;math&amp;gt;\tilde A^i_i=0&amp;lt;/math&amp;gt;. These routines read and write the same variable, and it is thus not immediately clear why they should be executed or in which order they should be executed.&lt;br /&gt;
&lt;br /&gt;
One possibility to resolve this would be to add a tag to variables, declaring that this routine &amp;quot;reads Aij:original&amp;quot; and writes &amp;quot;Aij:constraints-enforced&amp;quot;. Each other routine accessing this variables would then also need to declare whether it reads or writes the original Aij or the Aij with constraints enforced. This is also the problem of this mechanism: it would create unwanted thorn-dependencies.&lt;br /&gt;
&lt;br /&gt;
Another possibility would be to use the existing BEFORE/AFTER mechanism for those cases where it is generally not possible to define proper data dependencies through variables alone. This would make it very easy to add a function which modifies e.g. Aij easily at any place, without making other thorns depend on the presence of this inserted routine.&lt;br /&gt;
&lt;br /&gt;
Another issue arises with loops in the schedule. This is currently mostly used by MoL for the sub-timesteps. There is currently no good idea for handling this. Logically those loops can be seen as a nested schedule tree: it should be possible to do the same as for the complete tree.&lt;br /&gt;
&lt;br /&gt;
==Current State==&lt;br /&gt;
&lt;br /&gt;
There is a patch &amp;lt;https://wiki.einsteintoolkit.org/et-docs/images/d/de/requirements.diff&amp;gt; to the flesh that allows adding REQUIRES and PROVIDES clauses to the schedule block for every routine. (&amp;#039;&amp;#039;&amp;#039;Update:&amp;#039;&amp;#039;&amp;#039; this patch has been applied to the flesh branch, please use the branch.) These clauses can be arbitrary strings (there is no syntax checking done), and they are stored in the schedule database of the flesh and are ignored by default. There is a suggestion to rename these clauses to READS and WRITES; this has not yet been done.&lt;br /&gt;
&lt;br /&gt;
The component list of the project can be found at the following URL, which includes the branch of the flesh, examples and other project files:&lt;br /&gt;
&lt;br /&gt;
 https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th&lt;br /&gt;
&lt;br /&gt;
Carpet has a file Requirements.cc that detects the presence of these clauses and performs rudimentary checks. These checks are probably useless in their current form.&lt;br /&gt;
&lt;br /&gt;
==Next Steps==&lt;br /&gt;
&lt;br /&gt;
To bring this project further, we need to define how the &amp;quot;reads&amp;quot; and &amp;quot;writes&amp;quot; clauses should look like. As mentioned above, it is insufficient to list only grid variables there, since most routines access only parts of grid variables. &amp;#039;&amp;#039;&amp;#039;Ian Hinder&amp;#039;&amp;#039;&amp;#039; volunteered to come up with an initial plan for what kinds of &amp;quot;parts&amp;quot; there should be (e.g. interior, outer boundary, symmetry boundary, ghost zone, etc.). Those parts are driver-dependent, which means we have to come up with a way to tell the flesh about those parts and their connections (what is part of what).&lt;br /&gt;
&lt;br /&gt;
A simple example would look like the following (syntax arbitrary):&lt;br /&gt;
&lt;br /&gt;
 INTERIOR ∈ DOMAIN&lt;br /&gt;
 BOUNDARY ∈ DOMAIN&lt;br /&gt;
 INTERIOR ∩ BOUNDARY = ∅&lt;br /&gt;
 INTERIOR ∪ BOUNDARY = DOMAIN&lt;br /&gt;
&lt;br /&gt;
==Defining parts of grid functions==&lt;br /&gt;
&lt;br /&gt;
Application thorns typically write to either the interior of the grid (for example, those points which can be updated using finite differencing) or to the physical outer boundary (for applying user-supplied boundary conditions).  Other types of points are those on symmetry boundaries, interprocessor boundaries and mesh refinement boundaries, which an application thorn should never need to write to.  Symmetry thorns would write to symmetry boundaries, and the driver would write to interprocessor and mesh refinement boundaries.&lt;br /&gt;
&lt;br /&gt;
Consider a single local grid component.  It is a cuboidal set of points.  According to Cactus, each of the 6 faces of the component is either an interprocessor boundary (including refinement boundaries) or a symmetry boundary, or a physical boundary.  Each face can be only one of these.  Each face has a boundary width.  Points on edges and corners are associated with multiple faces, and are considered physical boundary points if they are not part of a symmetry or interprocessor boundary.  Hence, physical boundary points are only those which absolutely have to be updated, as they are not updated by any other mechanism.&lt;br /&gt;
&lt;br /&gt;
A typical application thorn only needs to be concerned with interior and physical boundary points.  We can divide the points in a component into the categories:&lt;br /&gt;
&lt;br /&gt;
* Interior;&lt;br /&gt;
* PhysicalBoundary;&lt;br /&gt;
* SymmetryBoundary;&lt;br /&gt;
* InterprocessorBoundary;&lt;br /&gt;
* RefinementBoundary.&lt;br /&gt;
&lt;br /&gt;
Most scheduled application functions need to read their variables from everywhere on the grid, and some write variables to everywhere on the grid.  We can use READS and WRITES lines in a schedule block to specify the variables and locations that each scheduled function reads from and writes to.  Each line would be a space-separated (we should think of a mechanism to allow new-lines) list of variables or groups (qualified with an implementation name if outside the current implementation).  To specify which part of the grid was being read or written, we could have &amp;quot;part&amp;quot; keywords in curly brackets after the grid function or group name.  If omitted, the default would be Everywhere. (FrankL: Shouldn&amp;#039;t we make the default for reading everywhere, but for writing only the interior? This is what most thorns do. IanH: I agree that most thorns do this, but we have to weigh that against the confusion of having two different defaults.)&lt;br /&gt;
&lt;br /&gt;
==Examples==&lt;br /&gt;
&lt;br /&gt;
For example,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SCHEDULE TwoPunctures AT Initial &lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	WRITES: ADMBase::metric ADMBase::curv ADMBase::lapse&lt;br /&gt;
} &amp;quot;Create puncture black hole initial data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBase AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv ADMBase::lapse ADMBase::shift&lt;br /&gt;
	WRITES: ML_log_confac ML_metric ML_trace_curv ML_curv ML_shift &lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBase&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBaseGamma AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric&lt;br /&gt;
	WRITES: ML_Gamma{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBaseGamma&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RHS1 in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{Interior} ML_metric_rhs{Interior} ML_trace_curv_rhs{Interior} ML_curv_rhs{Interior} ML_Gamma_rhs{Interior} ADMBase::dtlapse{Interior} ML_shift_rhs{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_RHS1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RadiativeRHSBoundary in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{PhysicalBoundary} ML_metric_rhs{PhysicalBoundary} ML_trace_curv_rhs{PhysicalBoundary} ML_curv_rhs{PhysicalBoundary} ML_Gamma_rhs{PhysicalBoundary} ADMBase::dtlapse{PhysicalBoundary} ML_shift_rhs{PhysicalBoundary}&lt;br /&gt;
} &amp;quot;ML_BSSN_RHS1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_enforce in MoL_PostStep&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_metric ML_curv&lt;br /&gt;
	WRITES: ML_curv&lt;br /&gt;
} &amp;quot;ML_BSSN_enforce&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule psis_calc_4th AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv&lt;br /&gt;
	WRITES: Psi4r{Interior} Psi4i{Interior}&lt;br /&gt;
} &amp;quot;psis_calc_4th&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule Multipole_Calc AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: Psi4r Psi4i&lt;br /&gt;
} &amp;quot;psis_calc_4th&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It might be useful to modify the syntax to say that variables are all read and all written from and to the same parts of the grid, as that will be the usual case.&lt;br /&gt;
&lt;br /&gt;
==Interaction with MoL==&lt;br /&gt;
&lt;br /&gt;
MoL is the time integrator that takes grid functions on the previous time level as input and produces new values for the grid functions on the current time level as output. It requires routines that calculate the RHS and/or apply boundary conditions to the evolved grid functions.&lt;br /&gt;
&lt;br /&gt;
Integrating MoL with the mechanism provided above faces several difficulties:&lt;br /&gt;
* The set of evolved grid functions is not defined in the schedule.ccl; it is instead defined via function calls at run time. One approach would be to define call-back functions that MoL has to provide, so that the scheduler can access this information.&lt;br /&gt;
* It is a priori not clear whether MoL evolves only the interior or also the boundary of grid functions. This can even be different for different grid functions. We can probably safely assume that MoL does not evolve ghost zones or symmetry zones (although this is technically also not defined).&lt;br /&gt;
* MoL integrates in time in a WHILE loop implemented in the scheduler. The WHILE condition depends on the particular time integrator that is chosen.&lt;br /&gt;
&lt;br /&gt;
To simplify things, I suggest that we leave MoL unmodified and treat it as a black box. MoL needs to specify (e.g. via callback functions) which variables are integrated in time, and which region of these variables is integrated. The input to MoL is then the past time level of these variables, and the output of MoL is the current time level of these variables.&lt;br /&gt;
&lt;br /&gt;
Further, there is one special bin (or group) very similar to the existing MoL_RHS. In this bin, initially the current time level of these variables is defined (MoL needs to ensure this). At the end of this bin, the RHS grid functions need to be defined (MoL requires this). This is equivalent to a WRITES and READS statement.&lt;br /&gt;
&lt;br /&gt;
Since it is now known which regions of which variables MoL accesses (reads/writes), the scheduler can do the remainder and can schedule all other required routines, such as e.g. boundary conditions. For example, if MoL provides (&amp;quot;writes&amp;quot;) in the beginning of the RHS bin the interior of the state vector, and there is a routine which reads the whole domain of the state vector and writes the interior of the RHS, then the scheduler can easily deduce that the corresponding boundary condition routine must be called.&lt;br /&gt;
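&lt;br /&gt;
This kind of deduction can be sketched in a few lines of Python. The sketch below is purely illustrative (the region names, routine declarations, and the greedy strategy are invented for this example and are not Cactus code); it shows how region-annotated READS/WRITES declarations alone are enough to conclude that the boundary routine must run before the RHS routine:&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch only: hypothetical region names and declarations.
DOMAIN = {"Interior", "Boundary"}

# Each routine maps variables to the set of grid regions it reads/writes.
routines = {
    "RHS": ({"state": DOMAIN}, {"rhs": {"Interior"}}),
    "BC":  ({"state": {"Interior"}}, {"state": {"Boundary"}}),
}

def order(available, goal, routines):
    """Schedule routines until the goal (variable, regions) is valid.
    'available' maps each variable to the regions that are currently valid."""
    var, regions = goal
    schedule = []
    while not regions.issubset(available.get(var, set())):
        progress = False
        for name, (reads, writes) in routines.items():
            if name in schedule:
                continue
            # A routine may run once every region it reads is valid.
            if all(r.issubset(available.get(v, set())) for v, r in reads.items()):
                for v, r in writes.items():
                    available.setdefault(v, set()).update(r)
                schedule.append(name)
                progress = True
        if not progress:
            raise RuntimeError("schedule cannot be satisfied")
    return schedule

# MoL provides the interior of the state vector at the start of the RHS
# bin and requires the interior of the RHS at its end:
print(order({"state": {"Interior"}}, ("rhs", {"Interior"}), routines))
# prints ['BC', 'RHS']
```

A real scheduler would also have to handle time levels and multiple grid components, but the core inference is this region-coverage check.&lt;br /&gt;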
&lt;br /&gt;
===Example===&lt;br /&gt;
&lt;br /&gt;
MoL provides a call-back function that specifies the READS and WRITES declarations for MoL altogether and for MoL_RHS:&lt;br /&gt;
* MoL READS ML_BSSN{Interior, previous-timelevel}&lt;br /&gt;
* MoL WRITES ML_BSSN{Interior, current-timelevel}&lt;br /&gt;
* MoL_RHS WRITES ML_BSSN{Interior}&lt;br /&gt;
* MoL_RHS READS ML_BSSN_RHS{Interior}&lt;br /&gt;
&lt;br /&gt;
The declarations for MoL_RHS are understood as describing what is present in the beginning and what is required at the end of this bin.&lt;br /&gt;
&lt;br /&gt;
Of course, the programmer could also decide that certain evolved variables are integrated all over the domain, not just the interior.&lt;br /&gt;
&lt;br /&gt;
The application would then provide (at least) the following routines:&lt;br /&gt;
&lt;br /&gt;
RHS: READS ML_BSSN{All}, WRITES ML_BSSN_RHS{Interior}&lt;br /&gt;
BC: READS ML_BSSN{Interior}, WRITES ML_BSSN{Boundary}&lt;br /&gt;
&lt;br /&gt;
We can easily extend this example to include a conversion to the ADMBase variables if, e.g., another RHS routine requires them.&lt;br /&gt;
&lt;br /&gt;
Synchronisation and symmetry boundaries would also be applied automatically. (There is a slight complication regarding whether &amp;quot;Boundary&amp;quot; includes ghost zones or not – grid points on the edge or in the corner of grid functions can be both an outer boundary and a ghost zone, and one needs to be clear whether these are included or not. However, this is a detail that can be solved later.)&lt;br /&gt;
&lt;br /&gt;
=== Simple Test Case ===&lt;br /&gt;
Since current schedules, even for WaveToy, are already very complex, we have a test code with a very simple schedule. This is implemented in the WaveToySimple thorn (https://svn.cactuscode.org/projects/NewSchedule/WaveToySimple/trunk/). To get the test code working, check out Cactus using this thornlist: https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th then apply the patch [[Image:requirements2.diff]] and compile. Simple parameter files are provided in arrangements/NewSchedule/WaveToySimple/par.&lt;br /&gt;
&lt;br /&gt;
The requirements part of the schedule looks as follows:&lt;br /&gt;
&lt;br /&gt;
* WaveToy_InitialData&lt;br /&gt;
  PROVIDES: scalarevolve scalarevolve_p&lt;br /&gt;
* WaveToy_Evolution&lt;br /&gt;
  REQUIRES: scalarevolve_p scalarevolve_p_p[Interior]&lt;br /&gt;
  PROVIDES: scalarevolve[Interior]&lt;br /&gt;
* WaveToy_Boundaries&lt;br /&gt;
  PROVIDES: scalarevolve[PhysicalBoundary]&lt;br /&gt;
* WaveToy_Analysis&lt;br /&gt;
  REQUIRES: scalarevolve&lt;br /&gt;
  PROVIDES: scalaranalysis&lt;br /&gt;
&lt;br /&gt;
There are some issues encountered with this schedule:&lt;br /&gt;
* Curly brackets do not work for specifying parts of the grid as they confuse the parser. Square brackets were used instead.&lt;br /&gt;
* It&amp;#039;s not clear how to refer to past time levels. The _p syntax was used, but that isn&amp;#039;t accepted by the schedule checker.&lt;br /&gt;
* WaveToy_Analysis requires scalarevolve, but the schedule checker does not recognize it as being provided (because the provides are in a separate schedule bin?).&lt;br /&gt;
* READS and WRITES seem more appropriate than REQUIRES and PROVIDES.&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2133</id>
		<title>Adding requirements to the Cactus scheduler</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Adding_requirements_to_the_Cactus_scheduler&amp;diff=2133"/>
		<updated>2011-01-23T15:50:48Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Problem Outline==&lt;br /&gt;
&lt;br /&gt;
One of the currently most complex aspects of programming with Cactus is writing schedule.ccl files for new routines, in particular if mesh refinement is used. The basic problem is that it is very difficult to ensure that routines are executed in the correct order, i.e. that all grid variables which are required for a routine are actually calculated beforehand. It is also difficult to ensure that boundary conditions (and synchronisation and symmetry boundaries) are applied when needed, in particular after regridding.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule consists of several independent &amp;quot;parts&amp;quot;: There are schedule bins defined by the flesh, there are schedule groups defined by infrastructure thorns (e.g. MoL or HydroBase), and there is the recursive Berger-Oliger algorithm traversing the bins implemented in Carpet. It is for the end-user difficult to see which groups are executed when and on what refinement level, and in which order this occurs.&lt;br /&gt;
&lt;br /&gt;
The Cactus schedule offers &amp;quot;before&amp;quot; and &amp;quot;after&amp;quot; clauses to ensure a partial ordering between routines. Unfortunately, this ordering applies only to routines within the same schedule group and the same schedule bin and refinement level. It is not possible to ensure a particular order between routines in different schedule groups or schedule bins, and it is very complex to ensure that a routine is executed e.g. after another routine has been executed on all refinement levels.&lt;br /&gt;
&lt;br /&gt;
There is one example setup that illustrates this problem. When setting up initial conditions for a hydrodynamics evolution, one may e.g. want to first set up a neutron star, then calculate its maximum density, and then set up the atmosphere to a value depending on this maximum density. Making this possible in Cactus required introducing a new schedule bin &amp;quot;postpostinitial&amp;quot; to the flesh, and requires careful arrangement of schedule groups defined by ADMBase and HydroBase. Even now that this is possible, it is probably not possible to ensure at run time that these actions occur in a correct order.&lt;br /&gt;
&lt;br /&gt;
==Suggested Solution==&lt;br /&gt;
&lt;br /&gt;
To resolve this issue, and to generally simplify the way in which schedule.ccl files are designed and written, the following was suggested:&lt;br /&gt;
&lt;br /&gt;
* Each scheduled routine declares which grid variables it reads and which grid variables it writes&lt;br /&gt;
* Since most routines write only parts of grid variables, the routine would also specify which part it reads/writes, e.g. the interior, outer boundary, symmetry boundary, etc.&lt;br /&gt;
* This allows the Cactus scheduler in a first step to validate the schedule and detect cases where a required variable has not been defined, or where a variable is calculated multiple times or synchronized multiple times&lt;br /&gt;
* In a second step this will also allow the Cactus scheduler to completely derive the schedule from these declarations. This may even make it possible to execute routines in parallel if they are independent. Even SYNC statements can be automatically derived, and schedule groups would not be necessary any more.&lt;br /&gt;
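&lt;br /&gt;
The first validation step amounts to a single pass over a proposed schedule. A minimal sketch (hypothetical, not the actual flesh code; the routine and variable names are invented, and a real check would also track grid regions and time levels):&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of the validation step: scan a proposed schedule in
# execution order and check that every grid variable is written before it
# is read, and written only once.

def validate(schedule):
    """schedule: list of (routine, reads, writes) in execution order."""
    errors = []
    defined = set()
    for name, reads, writes in schedule:
        for v in reads:
            if v not in defined:
                errors.append(f"{name} reads {v} before it is written")
        for v in writes:
            if v in defined:
                errors.append(f"{name} writes {v} a second time")
            defined.add(v)
    return errors

good = [("InitialData", [], ["phi"]), ("Analysis", ["phi"], ["norm"])]
bad = [("Analysis", ["phi"], ["norm"])]
print(validate(good))  # prints []
print(validate(bad))   # prints one read-before-write error
```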
&lt;br /&gt;
One particular issue arises with routines which modify a variable, e.g. imposing the constraint that &amp;lt;math&amp;gt;\tilde A^i_i=0&amp;lt;/math&amp;gt;. These routines read and write the same variable, and it is thus not immediately clear why they should be executed or in which order they should be executed.&lt;br /&gt;
&lt;br /&gt;
One possibility to resolve this would be to add a tag to variables, declaring that this routine &amp;quot;reads Aij:original&amp;quot; and &amp;quot;writes Aij:constraints-enforced&amp;quot;. Every other routine accessing this variable would then also need to declare whether it reads or writes the original Aij or the Aij with constraints enforced. This is also the problem with this mechanism: it would create unwanted thorn dependencies.&lt;br /&gt;
&lt;br /&gt;
Another possibility would be to use the existing BEFORE/AFTER mechanism for those cases where it is generally not possible to define proper data dependencies through variables alone. This would make it very easy to insert a function which modifies e.g. Aij at any place, without making other thorns depend on the presence of this inserted routine.&lt;br /&gt;
&lt;br /&gt;
Another issue arises with loops in the schedule. This is currently mostly used by MoL for the sub-timesteps. There is currently no good idea for handling this. Logically those loops can be seen as a nested schedule tree: it should be possible to do the same as for the complete tree.&lt;br /&gt;
&lt;br /&gt;
==Current State==&lt;br /&gt;
&lt;br /&gt;
There is a patch &amp;lt;https://wiki.einsteintoolkit.org/et-docs/images/d/de/requirements.diff&amp;gt; to the flesh that allows adding REQUIRES and PROVIDES clauses to the schedule block for every routine. (&amp;#039;&amp;#039;&amp;#039;Update:&amp;#039;&amp;#039;&amp;#039; this patch has been applied to the flesh branch, please use the branch.) These clauses can be arbitrary strings (there is no syntax checking done), and they are stored in the schedule database of the flesh and are ignored by default. There is a suggestion to rename these clauses to READS and WRITES; this has not yet been done.&lt;br /&gt;
&lt;br /&gt;
The component list of the project can be found at the following URL, which includes the branch of the flesh, examples and other project files:&lt;br /&gt;
&lt;br /&gt;
 https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th&lt;br /&gt;
&lt;br /&gt;
Carpet has a file Requirements.cc that detects the presence of these clauses and performs rudimentary checks. These checks are probably useless in their current form.&lt;br /&gt;
&lt;br /&gt;
==Next Steps==&lt;br /&gt;
&lt;br /&gt;
To bring this project further, we need to define what the &amp;quot;reads&amp;quot; and &amp;quot;writes&amp;quot; clauses should look like. As mentioned above, it is insufficient to list only grid variables there, since most routines access only parts of grid variables. &amp;#039;&amp;#039;&amp;#039;Ian Hinder&amp;#039;&amp;#039;&amp;#039; volunteered to come up with an initial plan for what kinds of &amp;quot;parts&amp;quot; there should be (e.g. interior, outer boundary, symmetry boundary, ghost zone, etc.). Those parts are driver-dependent, which means we have to come up with a way to tell the flesh about those parts and their connections (what is part of what).&lt;br /&gt;
&lt;br /&gt;
A simple example would look like the following (syntax arbitrary):&lt;br /&gt;
&lt;br /&gt;
 INTERIOR ∈ DOMAIN&lt;br /&gt;
 BOUNDARY ∈ DOMAIN&lt;br /&gt;
 INTERIOR ∩ BOUNDARY = ∅&lt;br /&gt;
 INTERIOR ∪ BOUNDARY = DOMAIN&lt;br /&gt;
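&lt;br /&gt;
Relations of this kind are machine-checkable once the driver declares its parts. As a toy illustration (with made-up index tuples standing in for grid points), Python sets can verify such a declaration:&lt;br /&gt;
&lt;br /&gt;
```python
# Toy illustration: verify the part declarations with Python sets, using
# made-up (i, j) index tuples as stand-ins for grid points on a 5x5 grid.
DOMAIN = {(i, j) for i in range(5) for j in range(5)}
INTERIOR = {(i, j) for i in range(1, 4) for j in range(1, 4)}
BOUNDARY = DOMAIN - INTERIOR

assert INTERIOR.issubset(DOMAIN)      # INTERIOR is part of DOMAIN
assert BOUNDARY.issubset(DOMAIN)      # BOUNDARY is part of DOMAIN
assert INTERIOR.isdisjoint(BOUNDARY)  # INTERIOR and BOUNDARY do not overlap
assert INTERIOR | BOUNDARY == DOMAIN  # together they cover DOMAIN
print("declaration is consistent")
```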
&lt;br /&gt;
==Defining parts of grid functions==&lt;br /&gt;
&lt;br /&gt;
Application thorns typically write to either the interior of the grid (for example, those points which can be updated using finite differencing) or to the physical outer boundary (for applying user-supplied boundary conditions).  Other types of points are those on symmetry boundaries, interprocessor boundaries and mesh refinement boundaries, which an application thorn should never need to write to.  Symmetry thorns would write to symmetry boundaries, and the driver would write to interprocessor and mesh refinement boundaries.&lt;br /&gt;
&lt;br /&gt;
Consider a single local grid component.  It is a cuboidal set of points.  According to Cactus, each of the 6 faces of the component is either an interprocessor boundary (including refinement boundaries) or a symmetry boundary, or a physical boundary.  Each face can be only one of these.  Each face has a boundary width.  Points on edges and corners are associated with multiple faces, and are considered physical boundary points if they are not part of a symmetry or interprocessor boundary.  Hence, physical boundary points are only those which absolutely have to be updated, as they are not updated by any other mechanism.&lt;br /&gt;
&lt;br /&gt;
A typical application thorn only needs to be concerned with interior and physical boundary points.  We can divide the points in a component into the categories:&lt;br /&gt;
&lt;br /&gt;
* Interior;&lt;br /&gt;
* PhysicalBoundary;&lt;br /&gt;
* SymmetryBoundary;&lt;br /&gt;
* InterprocessorBoundary;&lt;br /&gt;
* RefinementBoundary.&lt;br /&gt;
&lt;br /&gt;
Most scheduled application functions need to read their variables from everywhere on the grid, and some write variables to everywhere on the grid.  We can use READS and WRITES lines in a schedule block to specify the variables and locations that each scheduled function reads from and writes to.  Each line would be a space-separated (we should think of a mechanism to allow new-lines) list of variables or groups (qualified with an implementation name if outside the current implementation).  To specify which part of the grid was being read or written, we could have &amp;quot;part&amp;quot; keywords in curly brackets after the grid function or group name.  If omitted, the default would be Everywhere. (FrankL: Shouldn&amp;#039;t we make the default for reading everywhere, but for writing only the interior? This is what most thorns do. IanH: I agree that most thorns do this, but we have to weigh that against the confusion of having two different defaults.)&lt;br /&gt;
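&lt;br /&gt;
Parsing such a list entry into a (variable, part) pair with an Everywhere default is straightforward; a sketch (the curly-bracket syntax itself is still only the proposal above, not implemented):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of parsing a READS/WRITES entry into a (variable, part) pair.
# The {Part} syntax is the hypothetical proposal described above.
import re

def parse_entry(entry, default="Everywhere"):
    m = re.fullmatch(r"([\w:]+)(?:\{(\w+)\})?", entry)
    if m is None:
        raise ValueError(f"bad entry: {entry}")
    return (m.group(1), m.group(2) or default)

print(parse_entry("ML_Gamma{Interior}"))  # ('ML_Gamma', 'Interior')
print(parse_entry("ADMBase::metric"))     # ('ADMBase::metric', 'Everywhere')
```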
&lt;br /&gt;
==Examples==&lt;br /&gt;
&lt;br /&gt;
For example,&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SCHEDULE TwoPunctures AT Initial &lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	WRITES: ADMBase::metric ADMBase::curv ADMBase::lapse&lt;br /&gt;
} &amp;quot;Create puncture black hole initial data&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBase AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv ADMBase::lapse ADMBase::shift&lt;br /&gt;
	WRITES: ML_log_confac ML_metric ML_trace_curv ML_curv ML_shift &lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBase&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_convertFromADMBaseGamma AT Initial&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric&lt;br /&gt;
	WRITES: ML_Gamma{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_convertFromADMBaseGamma&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RHS1 in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{Interior} ML_metric_rhs{Interior} ML_trace_curv_rhs{Interior} ML_curv_rhs{Interior} ML_Gamma_rhs{Interior} ADMBase::dtlapse{Interior} ML_shift_rhs{Interior}&lt;br /&gt;
} &amp;quot;ML_BSSN_RHS1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_RadiativeRHSBoundary in MoL_CalcRHS&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_log_confac ML_metric ML_trace_curv ML_curv ML_Gamma ADMBase::lapse ML_shift&lt;br /&gt;
	WRITES: ML_log_confac_rhs{PhysicalBoundary} ML_metric_rhs{PhysicalBoundary} ML_trace_curv_rhs{PhysicalBoundary} ML_curv_rhs{PhysicalBoundary} ML_Gamma_rhs{PhysicalBoundary} ADMBase::dtlapse{PhysicalBoundary} ML_shift_rhs{PhysicalBoundary}&lt;br /&gt;
} &amp;quot;ML_BSSN_RadiativeRHSBoundary&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule ML_BSSN_enforce in MoL_PostStep&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ML_metric ML_curv&lt;br /&gt;
	WRITES: ML_curv&lt;br /&gt;
} &amp;quot;ML_BSSN_enforce&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule psis_calc_4th AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: ADMBase::metric ADMBase::curv&lt;br /&gt;
	WRITES: Psi4r{Interior} Psi4i{Interior}&lt;br /&gt;
} &amp;quot;psis_calc_4th&amp;quot;&lt;br /&gt;
&lt;br /&gt;
schedule Multipole_Calc AT Analysis&lt;br /&gt;
{&lt;br /&gt;
	LANG: C&lt;br /&gt;
	READS: Psi4r Psi4i&lt;br /&gt;
} &amp;quot;Multipole_Calc&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It might be useful to extend the syntax so that a routine can declare that all of its variables are read from, and written to, the same parts of the grid, since that will be the usual case.&lt;br /&gt;
&lt;br /&gt;
==Interaction with MoL==&lt;br /&gt;
&lt;br /&gt;
MoL is the time integrator that takes grid functions on the previous time level as input and produces new values for the grid functions on the current time level as output. It requires routines that calculate the RHS and/or apply boundary conditions to the evolved grid functions.&lt;br /&gt;
&lt;br /&gt;
Integrating MoL with the mechanism provided above faces several difficulties:&lt;br /&gt;
* The set of evolved grid functions is not defined in the schedule.ccl; it is instead defined via function calls at run time. One approach would be to define call-back functions that MoL has to provide, so that the scheduler can access this information.&lt;br /&gt;
* It is a priori not clear whether MoL evolves only the interior or also the boundary of grid functions. This can even be different for different grid functions. We can probably safely assume that MoL does not evolve ghost zones or symmetry zones (although this is technically also not defined).&lt;br /&gt;
* MoL integrates in time in a WHILE loop implemented in the scheduler. The WHILE condition depends on the particular time integrator that is chosen.&lt;br /&gt;
&lt;br /&gt;
To simplify things, I suggest that we leave MoL unmodified and treat it as a black box. MoL needs to specify (e.g. via callback functions) which variables are integrated in time, and which region of these variables is integrated. The input to MoL is then the past time level of these variables, and the output of MoL is the current time level of these variables.&lt;br /&gt;
&lt;br /&gt;
Further, there is one special bin (or group) very similar to the existing MoL_RHS. In this bin, initially the current time level of these variables is defined (MoL needs to ensure this). At the end of this bin, the RHS grid functions need to be defined (MoL requires this). This is equivalent to a WRITES and READS statement.&lt;br /&gt;
&lt;br /&gt;
Since it is now known which regions of which variables MoL accesses (reads/writes), the scheduler can do the remainder and can schedule all other required routines, such as e.g. boundary conditions. For example, if MoL provides (&amp;quot;writes&amp;quot;) in the beginning of the RHS bin the interior of the state vector, and there is a routine which reads the whole domain of the state vector and writes the interior of the RHS, then the scheduler can easily deduce that the corresponding boundary condition routine must be called.&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
&lt;br /&gt;
MoL provides a call-back function that specifies the READS and WRITES declarations for MoL altogether and for MoL_RHS:&lt;br /&gt;
* MoL READS ML_BSSN{Interior, previous-timelevel}&lt;br /&gt;
* MoL WRITES ML_BSSN{Interior, current-timelevel}&lt;br /&gt;
* MoL_RHS WRITES ML_BSSN{Interior}&lt;br /&gt;
* MoL_RHS READS ML_BSSN_RHS{Interior}&lt;br /&gt;
&lt;br /&gt;
The declarations for MoL_RHS are understood as describing what is present in the beginning and what is required at the end of this bin.&lt;br /&gt;
&lt;br /&gt;
Of course, the programmer could also decide that certain evolved variables are integrated all over the domain, not just the interior.&lt;br /&gt;
&lt;br /&gt;
The application would then provide (at least) the following routines:&lt;br /&gt;
&lt;br /&gt;
RHS: READS ML_BSSN{All}, WRITES ML_BSSN_RHS{Interior}&lt;br /&gt;
BC: READS ML_BSSN{Interior}, WRITES ML_BSSN{Boundary}&lt;br /&gt;
&lt;br /&gt;
We can easily extend this example to include a conversion to the ADMBase variables if, e.g., another RHS routine requires them.&lt;br /&gt;
&lt;br /&gt;
Synchronisation and symmetry boundaries would also be applied automatically. (There is a slight complication regarding whether &amp;quot;Boundary&amp;quot; includes ghost zones or not – grid points on the edge or in the corner of grid functions can be both an outer boundary and a ghost zone, and one needs to be clear whether these are included or not. However, this is a detail that can be solved later.)&lt;br /&gt;
&lt;br /&gt;
=== Simple Test Case ===&lt;br /&gt;
Since current schedules, even for WaveToy, are already very complex, we have a test code with a very simple schedule. This is implemented in the WaveToySimple thorn (https://svn.cactuscode.org/projects/NewSchedule/WaveToySimple/trunk/). To get the test code working, check out Cactus using this thornlist: https://svn.cactuscode.org/projects/NewSchedule/NewSchedule/NewSchedule.th then apply the patches [[Image:requirements2.diff]] and [[Image:WaveToySimple.patch]] and compile. Simple parameter files are provided in arrangements/NewSchedule/WaveToySimple/par.&lt;br /&gt;
&lt;br /&gt;
The requirements part of the schedule looks as follows:&lt;br /&gt;
&lt;br /&gt;
* WaveToy_InitialData&lt;br /&gt;
  PROVIDES: scalarevolve scalarevolve_p&lt;br /&gt;
* WaveToy_Evolution&lt;br /&gt;
  REQUIRES: scalarevolve_p scalarevolve_p_p[Interior]&lt;br /&gt;
  PROVIDES: scalarevolve[Interior]&lt;br /&gt;
* WaveToy_Boundaries&lt;br /&gt;
  PROVIDES: scalarevolve[PhysicalBoundary]&lt;br /&gt;
* WaveToy_Analysis&lt;br /&gt;
  REQUIRES: scalarevolve&lt;br /&gt;
  PROVIDES: scalaranalysis&lt;br /&gt;
&lt;br /&gt;
There are some issues encountered with this schedule:&lt;br /&gt;
* Curly brackets do not work for specifying parts of the grid as they confuse the parser. Square brackets were used instead.&lt;br /&gt;
* It&amp;#039;s not clear how to refer to past time levels. The _p syntax was used, but that isn&amp;#039;t accepted by the schedule checker.&lt;br /&gt;
* WaveToy_Analysis requires scalarevolve, but the schedule checker does not recognize it as being provided (because the provides are in a separate schedule bin?).&lt;br /&gt;
* READS and WRITES seem more appropriate than REQUIRES and PROVIDES.&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=File:WaveToySimple.patch&amp;diff=2132</id>
		<title>File:WaveToySimple.patch</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=File:WaveToySimple.patch&amp;diff=2132"/>
		<updated>2011-01-23T14:33:21Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: Patch to the WaveToySimple thorn.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Patch to the WaveToySimple thorn.&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=File:requirements2.diff&amp;diff=2131</id>
		<title>File:requirements2.diff</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=File:requirements2.diff&amp;diff=2131"/>
		<updated>2011-01-23T14:23:55Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: Uncommitted part of the requirements.diff patch&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Uncommitted part of the requirements.diff patch&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=ConfiguringMacOSX&amp;diff=1720</id>
		<title>ConfiguringMacOSX</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=ConfiguringMacOSX&amp;diff=1720"/>
		<updated>2010-09-23T21:53:34Z</updated>

		<summary type="html">&lt;p&gt;Noncct barry: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Configuring Mac OS X for Cactus =&lt;br /&gt;
&lt;br /&gt;
This document explains how to set up a computer running Mac OS X for compiling and running Cactus simulations. These instructions specifically apply to OS X 10.6 (Snow Leopard). Other versions should be similar.&lt;br /&gt;
&lt;br /&gt;
== Step 1: Install Xcode ==&lt;br /&gt;
&lt;br /&gt;
Download and install the latest version of Xcode from Apple. The version &amp;quot;Xcode for Mac-only Development&amp;quot; is sufficient.&lt;br /&gt;
&lt;br /&gt;
== Step 2: Install MacPorts ==&lt;br /&gt;
&lt;br /&gt;
MacPorts allows you to install extra libraries and tools not included with OS X. Download and install the MacPorts release that matches your version of OS X.&lt;br /&gt;
&lt;br /&gt;
== Step 3: Install git ==&lt;br /&gt;
&lt;br /&gt;
git can be easily installed using MacPorts:&lt;br /&gt;
 sudo port install git-core&lt;br /&gt;
&lt;br /&gt;
== Step 4: Install gnuplot ==&lt;br /&gt;
&lt;br /&gt;
gnuplot can be easily installed using MacPorts:&lt;br /&gt;
 sudo port install gnuplot&lt;br /&gt;
&lt;br /&gt;
== Step 5: Install GSL ==&lt;br /&gt;
&lt;br /&gt;
GSL can be easily installed using MacPorts:&lt;br /&gt;
 sudo port install gsl&lt;br /&gt;
&lt;br /&gt;
== Step 6: Install Visit ==&lt;br /&gt;
&lt;br /&gt;
Visit is used for visualizing the 3D data produced in simulations. Download the &amp;quot;Mac OS X - Intel&amp;quot; executable and &amp;quot;Visit install script&amp;quot; from here. Make the install script executable:&lt;br /&gt;
 chmod +x visit-install&lt;br /&gt;
then execute it to install Visit:&lt;br /&gt;
 sudo ./visit-install 1.12.1 darwin-i386 /usr/local/visit&lt;br /&gt;
&lt;br /&gt;
where 1.12.1 is the version of Visit you downloaded. Add /usr/local/visit/bin to your $PATH so that you can run Visit by just typing visit on the command line.&lt;br /&gt;
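&lt;br /&gt;
For example, one could append the following line to ~/.profile (assuming the install prefix used above):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical addition to ~/.profile; adjust if you chose another prefix.
export PATH=/usr/local/visit/bin:$PATH
```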
&lt;br /&gt;
== Step 7: Install HDF5 library ==&lt;br /&gt;
&lt;br /&gt;
The HDF5 format can be used for outputting data from your simulations. Both Visit and Mathematica support reading data in HDF5 format. HDF5 can be installed by MacPorts using the command&lt;br /&gt;
 sudo port install hdf5-18&lt;br /&gt;
&lt;br /&gt;
== Step 8: Install Visit Carpet plugin ==&lt;br /&gt;
&lt;br /&gt;
To effectively read the data output by Carpet, Visit requires a plugin. The source code for the plugin may be checked out from the Cactus CVS repository (using password &amp;#039;anon&amp;#039;):&lt;br /&gt;
 cvs -d :pserver:cvs_anon@cvs.cactuscode.org:/cactus login&lt;br /&gt;
 cvs -d :pserver:cvs_anon@cvs.cactuscode.org:/cactus checkout VizTools/visitCarpetHDF5&lt;br /&gt;
&lt;br /&gt;
The plugin requires the HDF5 library, but cannot use the previously installed version for two reasons:&lt;br /&gt;
&lt;br /&gt;
* It must be the same version of the HDF5 library that is used by Visit. In the case of Visit version 1.12.x, HDF5 library version 1.8.1 is required.&lt;br /&gt;
* The current version of Visit is compiled as 32-bit (i386). However, Snow Leopard compiles as 64-bit by default (x86_64).&lt;br /&gt;
&lt;br /&gt;
For this reason, it is best to compile a usable version of HDF5 by hand:&lt;br /&gt;
&lt;br /&gt;
* Download the source for HDF5 1.8.1 into VizTools/visitCarpetHDF5.&lt;br /&gt;
* Extract the source&lt;br /&gt;
 cd VizTools/visitCarpetHDF5&lt;br /&gt;
 tar zxvf hdf5-1.8.1.tar.gz&lt;br /&gt;
&lt;br /&gt;
* Compile this version of HDF5:&lt;br /&gt;
 cd hdf5-1.8.1&lt;br /&gt;
 CFLAGS=&amp;quot;-arch i386&amp;quot; CXXFLAGS=&amp;quot;-arch i386&amp;quot; LDFLAGS=&amp;quot;-arch i386&amp;quot; ./configure --host=i386 --enable-cxx --enable-production --enable-static&lt;br /&gt;
 make&lt;br /&gt;
 make install&lt;br /&gt;
 cd ../&lt;br /&gt;
&lt;br /&gt;
We can now compile the Visit plugin using this version of the hdf5 library. Before doing so, we need to make a small change to one of the files installed by Visit. Edit /usr/local/visit/1.12.1/darwin-i386/include/make-variables and remove the -Wno-long-double from the line starting with PY_CXXFLAGS. Now, compile and install the plugin:&lt;br /&gt;
 ./install&lt;br /&gt;
&lt;br /&gt;
In the window that appears, select the Makefile tab. Ensure that CXXFLAGS includes -I./hdf5-1.8.1/hdf5/include -arch i386 and that LDFLAGS includes -L./hdf5-1.8.1/hdf5/lib -arch i386, then save and quit. The plugin should now compile and install to a location where it can be found by Visit.&lt;br /&gt;
&lt;br /&gt;
== Step 9: Other libraries ==&lt;br /&gt;
&lt;br /&gt;
There are some other libraries which may be useful and are easily installed using MacPorts (gcc44 is important as it provides gfortran):&lt;br /&gt;
 sudo port install fftw fftw-3 zlib szip openssl gcc44&lt;br /&gt;
&lt;br /&gt;
== Step 10: Configure Cactus ==&lt;br /&gt;
&lt;br /&gt;
In order for Cactus to find all of the installed libraries, make sure that your option list looks like the following (this is based on the version included in simfactory):&lt;br /&gt;
&lt;br /&gt;
 # macbook-gcc&lt;br /&gt;
 &lt;br /&gt;
 # Whenever this version string changes, the application is configured&lt;br /&gt;
 # and rebuilt from scratch&lt;br /&gt;
 VERSION = 2009-11-25&lt;br /&gt;
 &lt;br /&gt;
 CPP = cpp&lt;br /&gt;
 FPP = cpp&lt;br /&gt;
 CC  = gcc&lt;br /&gt;
 CXX = g++&lt;br /&gt;
 F77 = gfortran-mp-4.3&lt;br /&gt;
 F90 = gfortran-mp-4.3&lt;br /&gt;
 &lt;br /&gt;
 # -fmudflapth does not work with current gcc 4.2.0&lt;br /&gt;
 # -march=prescott and -march=core2 lead to an ICE&lt;br /&gt;
 # -march=native prevents undefined references to ___sync_fetch_and_add_4&lt;br /&gt;
 # -malign-double may lead to crashes in Fortran I/O&lt;br /&gt;
 CPPFLAGS = -DMPICH_IGNORE_CXX_SEEK&lt;br /&gt;
 FPPFLAGS = -traditional&lt;br /&gt;
 CFLAGS   = -g3 -fshow-column -mmacosx-version-min=10.5 -m128bit-long-double -std=gnu99&lt;br /&gt;
 CXXFLAGS = -g3 -fshow-column -mmacosx-version-min=10.5 -m128bit-long-double -I/opt/local/include&lt;br /&gt;
 F77FLAGS = -g3 -fshow-column -mmacosx-version-min=10.5 -m128bit-long-double -fcray-pointer&lt;br /&gt;
 F90FLAGS = -g3 -fshow-column -mmacosx-version-min=10.5 -m128bit-long-double -fcray-pointer&lt;br /&gt;
 &lt;br /&gt;
 LDFLAGS = /System/Library/Frameworks/vecLib.framework/vecLib -L/opt/local/lib/gcc43 -lgfortran&lt;br /&gt;
 &lt;br /&gt;
 C_LINE_DIRECTIVES = yes&lt;br /&gt;
 F_LINE_DIRECTIVES = yes&lt;br /&gt;
 &lt;br /&gt;
 REAL16_KIND = 10&lt;br /&gt;
 &lt;br /&gt;
 DEBUG           = no&lt;br /&gt;
 CPP_DEBUG_FLAGS = -DCARPET_DEBUG&lt;br /&gt;
 FPP_DEBUG_FLAGS = -DCARPET_DEBUG&lt;br /&gt;
 C_DEBUG_FLAGS   = -fbounds-check -ftrapv -fstack-protector-all&lt;br /&gt;
 CXX_DEBUG_FLAGS = -fbounds-check -ftrapv -fstack-protector-all&lt;br /&gt;
 F77_DEBUG_FLAGS = -fbounds-check -ftrapv -fstack-protector-all&lt;br /&gt;
 F90_DEBUG_FLAGS = -fbounds-check -ftrapv -fstack-protector-all&lt;br /&gt;
 &lt;br /&gt;
 # Changing ANSI C semantics:&lt;br /&gt;
 # -funsafe-loop-optimizations -ffast-math-errno -fassociative-math&lt;br /&gt;
 # Graphite optimisations are not implemented:&lt;br /&gt;
 # -floop-interchange -floop-strip-mine -floop-block&lt;br /&gt;
 OPTIMISE           = yes&lt;br /&gt;
 CPP_OPTIMISE_FLAGS = # -DCARPET_OPTIMISE -DNDEBUG&lt;br /&gt;
 FPP_OPTIMISE_FLAGS = # -DCARPET_OPTIMISE -DNDEBUG&lt;br /&gt;
 C_OPTIMISE_FLAGS   = -O2&lt;br /&gt;
 CXX_OPTIMISE_FLAGS = -O2&lt;br /&gt;
 F77_OPTIMISE_FLAGS = -O2&lt;br /&gt;
 F90_OPTIMISE_FLAGS = -O2&lt;br /&gt;
 &lt;br /&gt;
 PROFILE           = no&lt;br /&gt;
 CPP_PROFILE_FLAGS =&lt;br /&gt;
 FPP_PROFILE_FLAGS =&lt;br /&gt;
 C_PROFILE_FLAGS   = -pg&lt;br /&gt;
 CXX_PROFILE_FLAGS = -pg&lt;br /&gt;
 F77_PROFILE_FLAGS = -pg&lt;br /&gt;
 F90_PROFILE_FLAGS = -pg&lt;br /&gt;
 &lt;br /&gt;
 # -Wuninitialized is not supported without -O&lt;br /&gt;
 WARN           = no&lt;br /&gt;
 #CPP_WARN_FLAGS = -Wall&lt;br /&gt;
 #FPP_WARN_FLAGS = -Wall&lt;br /&gt;
 #C_WARN_FLAGS   = -Wall&lt;br /&gt;
 #CXX_WARN_FLAGS = -Wall&lt;br /&gt;
 #F77_WARN_FLAGS = -Wall&lt;br /&gt;
 #F90_WARN_FLAGS = -Wall&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 BLAS_DIR  = /System/Library/Frameworks/vecLib.framework&lt;br /&gt;
 BLAS_LIBS = gfortran&lt;br /&gt;
 &lt;br /&gt;
 FFTW_DIR  = /opt/local&lt;br /&gt;
 FFTW_LIBS = drfftw dfftw m&lt;br /&gt;
 &lt;br /&gt;
 #CURL_DIR = /opt/local&lt;br /&gt;
 &lt;br /&gt;
 #FLICKCURL_DIR = /Users/eschnett/flickcurl-1.10&lt;br /&gt;
 &lt;br /&gt;
 GSL     = yes&lt;br /&gt;
 GSL_DIR = /opt/local&lt;br /&gt;
 &lt;br /&gt;
 HDF5      = yes&lt;br /&gt;
 HDF5_DIR  = /opt/local&lt;br /&gt;
 LIBSZ_DIR = /opt/local&lt;br /&gt;
 &lt;br /&gt;
 LAPACK      = yes&lt;br /&gt;
 LAPACK_DIR  = /System/Library/Frameworks/vecLib.framework&lt;br /&gt;
 LAPACK_LIBS =&lt;br /&gt;
 &lt;br /&gt;
 MPI              = OpenMPI&lt;br /&gt;
 MPI_LIBS         = mpi&lt;br /&gt;
 #OPENMPI_DIR      = /opt/local/lib/openmpi&lt;br /&gt;
 #OPENMPI_INC_DIR  = /opt/local/include&lt;br /&gt;
 #OPENMPI_LIB_DIR  = /opt/local/lib&lt;br /&gt;
 &lt;br /&gt;
 OPENMP           = yes&lt;br /&gt;
 CPP_OPENMP_FLAGS = -fopenmp&lt;br /&gt;
 FPP_OPENMP_FLAGS = -fopenmp&lt;br /&gt;
 C_OPENMP_FLAGS   = -fopenmp&lt;br /&gt;
 CXX_OPENMP_FLAGS = -fopenmp&lt;br /&gt;
 F77_OPENMP_FLAGS = -fopenmp&lt;br /&gt;
 F90_OPENMP_FLAGS = -fopenmp&lt;br /&gt;
 &lt;br /&gt;
 #PETSC           = yes&lt;br /&gt;
 #PETSC_DIR       = /opt/local/lib/petsc &lt;br /&gt;
 #PETSC_ARCH      = macx&lt;br /&gt;
 #PETSC_ARCH_LIBS = X11   mpich   gfortran   dl   pthread&lt;br /&gt;
 &lt;br /&gt;
 PTHREADS = yes&lt;br /&gt;
 &lt;br /&gt;
 SSL_DIR = /opt/local&lt;br /&gt;
 &lt;br /&gt;
 X_LIB_DIR = /usr/X11R6/lib&lt;br /&gt;
&lt;br /&gt;
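Assuming you save the option list above to a file, say macbook-gcc.cfg (the file name and the configuration name &amp;quot;sim&amp;quot; here are just examples), you can then create a Cactus configuration that uses it from the top-level Cactus directory:&lt;br /&gt;
 make sim-config options=macbook-gcc.cfg&lt;br /&gt;
&lt;br /&gt;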
== Step 11: Install Globus Tools (gsissh, GridFTP, myproxy, etc.) ==&lt;br /&gt;
&lt;br /&gt;
In order to access any of the TeraGrid machines (e.g. Kraken), you will first need to install the Globus Tools. These must currently be compiled from source as follows:&lt;br /&gt;
&lt;br /&gt;
* Download the source package.&lt;br /&gt;
* Extract the files:&lt;br /&gt;
 tar jxvf gt5.0.0-all-source-installer.tar.bz2&lt;br /&gt;
* cd to the directory where you extracted the files, then build gsi-ssh, GridFTP, and myproxy:&lt;br /&gt;
 ./configure --prefix=/usr/local&lt;br /&gt;
 sudo make gsi-myproxy gsi-openssh gridftp&lt;br /&gt;
 sudo make install&lt;br /&gt;
* Add the following lines to your ~/.bash_profile, then run source ~/.bash_profile:&lt;br /&gt;
 export GLOBUS_LOCATION=/usr/local&lt;br /&gt;
 export MYPROXY_SERVER=myproxy.teragrid.org&lt;br /&gt;
 export MYPROXY_SERVER_PORT=7514&lt;br /&gt;
 source $GLOBUS_LOCATION/etc/globus-user-env.sh&lt;/div&gt;</summary>
		<author><name>Noncct barry</name></author>
		
	</entry>
</feed>