<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eschnett</id>
	<title>Einstein Toolkit Documentation - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://docs.einsteintoolkit.org/et-docs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Eschnett"/>
	<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/Special:Contributions/Eschnett"/>
	<updated>2026-04-17T09:53:46Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.0</generator>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=CarpetX_Transition&amp;diff=8166</id>
		<title>CarpetX Transition</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=CarpetX_Transition&amp;diff=8166"/>
		<updated>2023-04-17T13:45:41Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: List new thorns&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Transitioning to CarpetX&lt;br /&gt;
* Reviewers&lt;br /&gt;
** Steve Brandt&lt;br /&gt;
** Roland Haas&lt;br /&gt;
** Zach Etienne&lt;br /&gt;
* We should have documentation&lt;br /&gt;
** Documentation should be versioned along with the code&lt;br /&gt;
** How to build (standard images)&lt;br /&gt;
** How to write loops&lt;br /&gt;
** How to make use of flux&lt;br /&gt;
** How to do time integration&lt;br /&gt;
** How to write boundary conditions&lt;br /&gt;
** How to specify things in CCL files&lt;br /&gt;
** How to get mesh refinement&lt;br /&gt;
** How to analyze results from openPMD/SILO&lt;br /&gt;
** How to interpolate data&lt;br /&gt;
* Simfactory should remember the machine, and any configs using it should use that machine&lt;br /&gt;
* We should have tutorials covering the above&lt;br /&gt;
* We should have gallery examples&lt;br /&gt;
** GW150914&lt;br /&gt;
** TOV star&lt;br /&gt;
** and more, about 10 things&lt;br /&gt;
* What thorns need to be migrated over?&lt;br /&gt;
** Algorithms: Algo, Arith, CarpetX, Coordinates, Derivs, ErrorEstimator, Loop, ODESolvers, PDESolvers&lt;br /&gt;
** Physics: ADMBase, HydroBase, TmunuBase&lt;br /&gt;
** Examples: Poisson2, WaveToyX&lt;br /&gt;
** Tests: TestArrayGroup, TestInterpolate, TestMultiPatch, TestNorms, TestODESolvers, TestODESolvers2, TestOutput, TestProlongate, TestSymmetries&lt;br /&gt;
** There are also a few new ExternalLibraries thorns, currently owned by Roland: ADIOS2, AMReX, NSIMD, openPMD, Silo, ssht, yaml_cpp&lt;br /&gt;
* Do test cases cover the functionality?&lt;br /&gt;
* The reviewers need to look at the code&lt;br /&gt;
* A timeline for the review&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=ET_hackathon&amp;diff=7690</id>
		<title>ET hackathon</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=ET_hackathon&amp;diff=7690"/>
		<updated>2022-02-02T17:34:43Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* List of topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== First online hackathon 2022-02-02 ==&lt;br /&gt;
&lt;br /&gt;
participants: Roland H., Steve B., Bill G., Leo Werneck (please add yourself)&lt;br /&gt;
&lt;br /&gt;
==== When ====&lt;br /&gt;
&lt;br /&gt;
February 2nd, 2022 11am CST to end of day&lt;br /&gt;
&lt;br /&gt;
==== Where ====&lt;br /&gt;
Join via Zoom: [https://illinois.zoom.us/j/87664874736?pwd=YzFmb2l3aHhiY1ROSHpEVzNWd1NIUT09 Zoom link].&lt;br /&gt;
&lt;br /&gt;
You will need to use a Zoom account; there will be a waiting room.&lt;br /&gt;
&lt;br /&gt;
==== Mentors ====&lt;br /&gt;
&lt;br /&gt;
* Steve R. Brandt&lt;br /&gt;
* Peter Diener&lt;br /&gt;
* Roland Haas&lt;br /&gt;
* Erik Schnetter&lt;br /&gt;
* Leo Werneck&lt;br /&gt;
* TBD&lt;br /&gt;
&lt;br /&gt;
==== List of topics ====&lt;br /&gt;
&lt;br /&gt;
Most can be found using a [https://bitbucket.org/einsteintoolkit/tickets/issues?status=new&amp;amp;status=open&amp;amp;q=hackathon search query]&lt;br /&gt;
&lt;br /&gt;
Please claim the ones you like in the respective tickets.&lt;br /&gt;
&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2070/shallow-checkouts-do-not-work-with #2070: shallow checkouts do not work with branches]&lt;br /&gt;
* (done – Erik Schnetter) [https://bitbucket.org/einsteintoolkit/tickets/issues/2364/avoid-opening-and-closing-hdf5-files #2364: avoid opening and closing HDF5 files multiple times during output]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2587/update-list-of-et-related-lectures #2587: update list of ET related lectures]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/1437/missing-option-for-deterministic-noise-in #1437: Missing option for deterministic noise in CactusNumerical/Noise]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2332/piraha-does-not-allow-variable-expansion #2332: piraha does not allow variable expansion in ActiveThorns]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2529/flang-error-in-cactus-src-util #2529: Flang error in Cactus/src/util/PointerTo.F90]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2541/some-predefined-schedule-bins-missing-from #2541: some predefined schedule bins missing from documentation]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2518/retire-bluegene-q-support-in-thorn-vectors #2518: retire BlueGene/Q support in thorn vectors]&lt;br /&gt;
* Bill Gabella [https://bitbucket.org/einsteintoolkit/tickets/issues/2531/remove-non-piraha-parser-from-flesh #2531: remove non piraha parser from Flesh]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2254/let-cactus-thorns-register-citation #2254: let Cactus thorns register citation requests at runtime]&lt;br /&gt;
* Leo Werneck [https://bitbucket.org/einsteintoolkit/tickets/issues/2469/none-of-the-thorns-in #2469: None of the thorns in WVUThorns_Diagnostics have documentation]&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=ET_hackathon&amp;diff=7687</id>
		<title>ET hackathon</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=ET_hackathon&amp;diff=7687"/>
		<updated>2022-02-02T17:24:22Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* List of topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== First online hackathon 2022-02-02 ==&lt;br /&gt;
&lt;br /&gt;
participants: Roland H., Steve B., Bill G., Leo Werneck (please add yourself)&lt;br /&gt;
&lt;br /&gt;
==== When ====&lt;br /&gt;
&lt;br /&gt;
February 2nd, 2022 11am CST to end of day&lt;br /&gt;
&lt;br /&gt;
==== Where ====&lt;br /&gt;
Join via Zoom: [https://illinois.zoom.us/j/87664874736?pwd=YzFmb2l3aHhiY1ROSHpEVzNWd1NIUT09 Zoom link].&lt;br /&gt;
&lt;br /&gt;
You will need to use a Zoom account; there will be a waiting room.&lt;br /&gt;
&lt;br /&gt;
==== Mentors ====&lt;br /&gt;
&lt;br /&gt;
* Steve R. Brandt&lt;br /&gt;
* Peter Diener&lt;br /&gt;
* Roland Haas&lt;br /&gt;
* Erik Schnetter&lt;br /&gt;
* Leo Werneck&lt;br /&gt;
* TBD&lt;br /&gt;
&lt;br /&gt;
==== List of topics ====&lt;br /&gt;
&lt;br /&gt;
Most can be found using a [https://bitbucket.org/einsteintoolkit/tickets/issues?status=new&amp;amp;status=open&amp;amp;q=hackathon search query]&lt;br /&gt;
&lt;br /&gt;
Please claim the ones you like in the respective tickets.&lt;br /&gt;
&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2070/shallow-checkouts-do-not-work-with #2070: shallow checkouts do not work with branches]&lt;br /&gt;
* (Erik Schnetter) [https://bitbucket.org/einsteintoolkit/tickets/issues/2364/avoid-opening-and-closing-hdf5-files #2364: avoid opening and closing HDF5 files multiple times during output]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2587/update-list-of-et-related-lectures #2587: update list of ET related lectures]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/1437/missing-option-for-deterministic-noise-in #1437: Missing option for deterministic noise in CactusNumerical/Noise]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2332/piraha-does-not-allow-variable-expansion #2332: piraha does not allow variable expansion in ActiveThorns]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2529/flang-error-in-cactus-src-util #2529: Flang error in Cactus/src/util/PointerTo.F90]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2541/some-predefined-schedule-bins-missing-from #2541: some predefined schedule bins missing from documentation]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2518/retire-bluegene-q-support-in-thorn-vectors #2518: retire BlueGene/Q support in thorn vectors]&lt;br /&gt;
* Bill Gabella [https://bitbucket.org/einsteintoolkit/tickets/issues/2531/remove-non-piraha-parser-from-flesh #2531: remove non piraha parser from Flesh]&lt;br /&gt;
* [https://bitbucket.org/einsteintoolkit/tickets/issues/2254/let-cactus-thorns-register-citation #2254: let Cactus thorns register citation requests at runtime]&lt;br /&gt;
* Leo Werneck [https://bitbucket.org/einsteintoolkit/tickets/issues/2469/none-of-the-thorns-in #2469: None of the thorns in WVUThorns_Diagnostics have documentation]&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Working_Group_on_Performance_Optimization&amp;diff=5142</id>
		<title>Working Group on Performance Optimization</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Working_Group_on_Performance_Optimization&amp;diff=5142"/>
		<updated>2018-03-23T17:43:10Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Milestones */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Organization ==&lt;br /&gt;
&lt;br /&gt;
Type: Working group&lt;br /&gt;
&lt;br /&gt;
=== Leads ===&lt;br /&gt;
* Roland Haas&lt;br /&gt;
* Erik Schnetter&lt;br /&gt;
&lt;br /&gt;
=== Initial Members ===&lt;br /&gt;
* Roland Haas&lt;br /&gt;
* Erik Schnetter&lt;br /&gt;
* Zach Etienne&lt;br /&gt;
&lt;br /&gt;
=== Funding ===&lt;br /&gt;
* NSF OAC-1550514&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
 &lt;br /&gt;
=== Activities ===&lt;br /&gt;
The working group engages in researching, developing, implementing, and promoting performance optimization for codes included in the Einstein Toolkit. This includes optimizations for currently supported architectures in the Einstein Toolkit (for example, CPUs and GPUs) as well as new architectures that are not yet well supported (e.g., Intel Xeon Phi accelerators, modern GPUs).&lt;br /&gt;
&lt;br /&gt;
The group interacts with [Data_Dependant_Task_Scheduler] to coordinate optimization efforts.&lt;br /&gt;
&lt;br /&gt;
The group defines the targets of interest and meets regularly via online media as well as in person in small workshops to push forward specific optimization projects.&lt;br /&gt;
&lt;br /&gt;
=== Members ===&lt;br /&gt;
We welcome new members to the working group! If you are working on performance optimization in some way (e.g. supporting accelerators, SIMD vectorization, new AMR schemes, improving convergence, fine-tuning parameters), then we look forward to hearing from you. We expect that this working group will help us share experience and expertise, and will allow us to have technical discussions that might be outside the range of general interest.&lt;br /&gt;
&lt;br /&gt;
=== Milestones ===&lt;br /&gt;
# review existing optimization efforts currently in private branches: &amp;lt;DEADLINE&amp;gt; to be added by bracketed persons by 2018-04&lt;br /&gt;
## [Erik] Carpet/eschnett/funhpc&lt;br /&gt;
### by 2018-05-31: review of features, decision which features to include into Cactus, extract features into new branch, start discussion on ET mailing list&lt;br /&gt;
## [Ian] CactusNumerical/ianhinder/rkprol &lt;br /&gt;
## [Roland, Erik] CactusExamples/eschnett/hydro&lt;br /&gt;
## [Zach] NRPy+, a “Kranc-like”, but Python/SymPy-based code capable of creating the mathematical “guts” of ETK thorns (as C code, supporting AVX256/AVX512 intrinsics). ([https://bitbucket.org/zach_etienne/nrpy public git repo], [https://arxiv.org/abs/1712.07658 code announcement paper]). NRPy+ already provides&lt;br /&gt;
### RHSs for SphericalBSSN thorn ([https://arxiv.org/abs/1802.09625 code announcement paper]), and &lt;br /&gt;
### an ETK GRMHD initial data thorn (magnetized BH accretion disk) ([https://bitbucket.org/zach_etienne/nrpy/src/f479feedb090de80630ea1b49d01492e2b293e05/ETK_thorns/FishboneMoncrief_ET_thorn/?at=master public git repo])&lt;br /&gt;
# import identified optimization efforts into master branches: &amp;lt;DEADLINE&amp;gt; date TBD by 2018-04&lt;br /&gt;
# review the discussion in &amp;quot;Breakout Discussion on Scalability&amp;quot; in [https://docs.google.com/document/d/1u4-EgQM3DngPa0QfPoHZGVJy69jDrMvbmxOgxFmaOfg/edit Notes from ET 2017 meeting at NCSA]: 2018-04 (next call)&lt;br /&gt;
# advertise efforts and bring in more developers: 2018-04 (next call)&lt;br /&gt;
&lt;br /&gt;
=== Deliverables ===&lt;br /&gt;
# the identified optimization options listed above&lt;br /&gt;
# graphs and data to back up the observed performance improvements&lt;br /&gt;
# code to include in the Einstein Toolkit&lt;br /&gt;
&lt;br /&gt;
== Engagement ==&lt;br /&gt;
The working group communicates via personal email, the Einstein Toolkit User&amp;#039;s mailing list, and through periodic video-conferences.&lt;br /&gt;
&lt;br /&gt;
Persons interested in joining the working group who are themselves working on performance optimization are encouraged to contact the leads at [mailto:rhaas@illinois.edu rhaas@illinois.edu] or [mailto:eschnetter@perimeterinstitute.ca eschnetter@perimeterinstitute.ca] for instructions.&lt;br /&gt;
&lt;br /&gt;
=== Agenda and minutes of calls ===&lt;br /&gt;
We keep all files in a shared [https://drive.google.com/drive/folders/1TEAYTb3pSE3NfQXAgvcCjKciRONQwuYU?usp=sharing Google drive folder] which is publicly readable but only editable by group members.&lt;br /&gt;
&lt;br /&gt;
* Kick-off call on [https://docs.google.com/document/d/1XrGVQn8BgODLys2BDGhoVXSJuiWcMMXtjYnhY_PKMN4/edit?usp=sharing Friday March 23rd] which is currently world-writable.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=5025</id>
		<title>Meeting agenda</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=5025"/>
		<updated>2018-01-29T02:13:20Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&amp;lt;includeonly&amp;gt;&amp;lt;ul&amp;gt;&amp;lt;li&amp;gt;&amp;lt;/includeonly&amp;gt;The link for the call is: https://bluejeans.com/194244555.&lt;br /&gt;
&amp;lt;includeonly&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/includeonly&amp;gt;&amp;lt;includeonly&amp;gt;&amp;lt;li&amp;gt;Today&amp;#039;s [[meeting_agenda|meeting agenda]] is on this wiki too.&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&amp;lt;/includeonly&amp;gt;&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Connection Details =&lt;br /&gt;
&lt;br /&gt;
We are using Blue Jeans to host the Einstein Toolkit meetings. The meeting is sponsored by the Perimeter Institute.&lt;br /&gt;
&lt;br /&gt;
* Meeting Title: Einstein Toolkit&lt;br /&gt;
* Meeting Time: Every Monday at 10:00 EST / 2 hrs&lt;br /&gt;
* Connect via a web browser (recommended): https://bluejeans.com/194244555&lt;br /&gt;
* Dial in via phone (if you cannot use a web browser):&lt;br /&gt;
*# +1 866 826 8602 (Canada/USA) +1 647 427 3061 (Global Numbers)&lt;br /&gt;
*# Enter Meeting ID: 256 841 6361&lt;br /&gt;
*# Press # &lt;br /&gt;
* Want to test your video connection? https://bluejeans.com/111&lt;br /&gt;
&lt;br /&gt;
= Meeting Agenda =&lt;br /&gt;
&lt;br /&gt;
When adding a topic, please add your &amp;#039;&amp;#039;&amp;#039;name&amp;#039;&amp;#039;&amp;#039; next to the item you are proposing.&lt;br /&gt;
Meetings are every Monday at 9:00 AM US Central time.&lt;br /&gt;
&lt;br /&gt;
== 2018-01-29 ==&lt;br /&gt;
&lt;br /&gt;
* [EB] [HW] Announce cosmoparticle mailing list and invite subscriptions.&lt;br /&gt;
* [RH] [YZ] Failing tests in PITTNullCode when using --fast-math&lt;br /&gt;
* [RH] status of ET testsuites for release&lt;br /&gt;
* [RH] Hydro_RNSID review results&lt;br /&gt;
&lt;br /&gt;
== 2018-01-22 ==&lt;br /&gt;
&lt;br /&gt;
* [ES] OpenMP scalability&lt;br /&gt;
* [MZ &amp;amp; HW] Adding [https://bitbucket.org/canuda/lean_public Lean public] and [https://bitbucket.org/canuda/proca Proca] thorns to the Toolkit&lt;br /&gt;
* Tickets to be looked at for the upcoming ET_2018_02 release: [https://trac.einsteintoolkit.org/query?status=%21closed&amp;amp;milestone=ET_2018_02&amp;amp;or&amp;amp;status=%21closed&amp;amp;priority=critical&amp;amp;component=%21Mojave&amp;amp;or&amp;amp;status=%21closed&amp;amp;priority=blocker&amp;amp;component=%21Mojave&amp;amp;col=id&amp;amp;col=summary&amp;amp;col=milestone&amp;amp;col=priority&amp;amp;col=status&amp;amp;col=owner&amp;amp;col=type&amp;amp;col=component&amp;amp;col=reporter&amp;amp;order=priority ET_2018_02 tickets]&lt;br /&gt;
&lt;br /&gt;
Minutes are [http://lists.einsteintoolkit.org/pipermail/users/2018-January/006013.html here]&lt;br /&gt;
&lt;br /&gt;
== 2018-01-15 ==&lt;br /&gt;
&lt;br /&gt;
* [RH] ET release planning&lt;br /&gt;
** date for release&lt;br /&gt;
** thorns to include&lt;br /&gt;
** items past inclusion deadline&lt;br /&gt;
** status of review requests&lt;br /&gt;
*** RNSID [https://bitbucket.org/einsteintoolkit/einsteininitialdata/pull-requests/3/new-initial-data-thorn-for-grhydro/diff pull request]&lt;br /&gt;
*** Giraffe [https://bitbucket.org/zach_etienne/wvuthorns/pull-requests/1/giraffe-review/diff pull request]&lt;br /&gt;
&lt;br /&gt;
* [RH] ET US workshop at GT&lt;br /&gt;
** dates&lt;br /&gt;
** program&lt;br /&gt;
** committees&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2018-January/005998.html here]&lt;br /&gt;
&lt;br /&gt;
== 2018-01-08 ==&lt;br /&gt;
&lt;br /&gt;
* standing item: unanswered emails on list&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2018-January/005990.html here]&lt;br /&gt;
&lt;br /&gt;
== 2017-12-04 ==&lt;br /&gt;
* [EB] [HV] Discussion on Cosmo/Particle WG [https://docs.google.com/document/d/1tN5m74VPJ18V9KKCXcGjD4yECsraoqQLy34B-EX0azc]&lt;br /&gt;
* [RH] ET tutorial machine moved to JetStream at TACC but still through NDS&lt;br /&gt;
* standing item: unanswered emails on list&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-December/005951.html here]&lt;br /&gt;
&lt;br /&gt;
== 2017-11-27 ==&lt;br /&gt;
* [EB] [HV] Review of Cosmo/Particle WG [https://docs.google.com/document/d/1tN5m74VPJ18V9KKCXcGjD4yECsraoqQLy34B-EX0azc]&lt;br /&gt;
* [BG] Review of &amp;quot;Matter Codes&amp;quot; WG [https://docs.google.com/document/d/1i9a5pCqHLVGumLOqy5VEsBMaU42p9AtW_is3Inz7FlA/edit]&lt;br /&gt;
* [MZ] [BG] European ET workshop 2018&lt;br /&gt;
* [SB] Tutorial for new users https://www.einsteintoolkit.nationaldataservice.org&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-November/005927.html here]&lt;br /&gt;
&lt;br /&gt;
== 2017-11-20 ==&lt;br /&gt;
* [RH] [SB] ET tutorial using jupyter: https://www.einsteintoolkit.nationaldataservice.org&lt;br /&gt;
* [RH] update mailing list to mailman 3.1&lt;br /&gt;
* [RH] RNSID review&lt;br /&gt;
* [GDA] Next ET Workshop&lt;br /&gt;
* [GDA] Review of ET Facebook page, who else should post?&lt;br /&gt;
* [GDA/ZE] First review of new WG: https://docs.google.com/document/d/1qiNeVgPlleg6Mjq9WI3iS--IA5o1FbpywAcq1Do_9xI/edit &lt;br /&gt;
* [GDA] Review of working groups&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-November/005905.html here]&lt;br /&gt;
&lt;br /&gt;
== 2017-11-06 ==&lt;br /&gt;
* [RH] status update on the ET tutorial machine [http://lists.einsteintoolkit.org/pipermail/users/2017-November/005861.html]&lt;br /&gt;
* [RH] status update on the ET Jenkins server and failing tests&lt;br /&gt;
* [RH] working groups [https://docs.einsteintoolkit.org/et-docs/Working_List_of_ET_Groups]&lt;br /&gt;
* [RH] rework tutorials using methods of software carpentry [https://software-carpentry.org/]&lt;br /&gt;
* standing item: unanswered emails on list&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-November/005869.html here].&lt;br /&gt;
&lt;br /&gt;
== 2017-10-30 ==&lt;br /&gt;
* [GA] Discussion of the EU ET Workshop -- what worked? what could be improved? where do we archive the google doc and photos etc? planning for next meeting?&lt;br /&gt;
* [GA] Working groups: proposed template and groups discussed at the workshop: look at first two pages of https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs/edit&lt;br /&gt;
* [IH] Volunteers to answer unanswered questions on the ET list:&lt;br /&gt;
** [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005829.html Is there any document of TwoPuncture&amp;#039;s code?]&lt;br /&gt;
** [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005792.html About the thorn Einstein exact]&lt;br /&gt;
* [HW] code optimization using pop-coe services (https://pop-coe.eu/services)&lt;br /&gt;
* [GA] Potential NSF funding opportunity: https://www.nsf.gov/pubs/2018/nsf18505/nsf18505.htm?WT.mc_id=USNSF_25&amp;amp;WT.mc_ev=click&lt;br /&gt;
* [MZ] [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005835.html Change the behaviour of &amp;quot;copy&amp;quot; in CarpetLib/src/operators.hh?]&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005842.html here].&lt;br /&gt;
&lt;br /&gt;
== 2017-10-23 ==&lt;br /&gt;
* [ES] Introduce Nix https://nixos.org package manager for building Cactus dependencies&lt;br /&gt;
* [EB] Discuss the new ET video channel and how to incorporate it in einsteintoolkit.org&lt;br /&gt;
* [GA] Help with code optimization from Prace (email from Helvi)&lt;br /&gt;
* [HW] ET &amp;quot;starting body&amp;quot; for new users&lt;br /&gt;
* [GA] Discussion of the EU ET Workshop -- what worked? what could be improved? where do we archive the google doc and photos etc?&lt;br /&gt;
* [GA] Working groups: proposed template and groups discussed at the workshop: look at first two pages of https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs/edit&lt;br /&gt;
* [GA] Trailer from EdFest: https://www.dropbox.com/s/qiyebml11nqr3r0/EdFest%20Trailer%202017.mp4?dl=0&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005830.html here].&lt;br /&gt;
&lt;br /&gt;
== 2017-10-16 ==&lt;br /&gt;
&lt;br /&gt;
[This will likely overlap with a LIGO/Virgo announcement]&lt;br /&gt;
&lt;br /&gt;
== 2017-10-09 ==&lt;br /&gt;
&lt;br /&gt;
no meeting&lt;br /&gt;
&lt;br /&gt;
== 02-Oct 2017 ==&lt;br /&gt;
* [RH] update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* [RH] thorns for inclusion: GiRaFFE, RNSID. Review status?&lt;br /&gt;
* [RH] Jenkins status update&lt;br /&gt;
* [RH] Comet lustre data corruption update&lt;br /&gt;
* [RH] Next ET release date&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005787.html here].&lt;br /&gt;
&lt;br /&gt;
== 25-Sep 2017 ==&lt;br /&gt;
&lt;br /&gt;
* The cactusmaint@cactuscode.org email address is the one all users are directed to on cactuscode.org, and it seems to not work. Do we want to fix it, or just steer users to users@einsteintoolkit.org?&lt;br /&gt;
* Please send to Gabrielle Allen any pictures or movies that can be used for Edfest. A first version of the movie is here: https://www.dropbox.com/s/mpmcwmw6ewdklkf/EdFest.mp4?dl=0&lt;br /&gt;
* First working group added to wiki, along with policies, procedures -- suggest that the first meeting of each month includes a review of milestones. I didn&amp;#039;t get to work on the others from the SI2 proposal this week :(.&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005780.html here].&lt;br /&gt;
&lt;br /&gt;
== 18-Sep 2017 ==&lt;br /&gt;
* Progress on new tutorial machine setup&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005772.html here].&lt;br /&gt;
&lt;br /&gt;
== 11-Sep-2017 ==&lt;br /&gt;
*  Demo of the RunView tool by Anna Neshyba and David Koppelman. This tool uses PAPI and CactusTimers to generate a graphical output (html) of Cactus performance.&lt;br /&gt;
*  Is it time for a new release of Cactus? The lack of new releases led at least one online user to conclude that Cactus was a dead project. Perhaps we should have a new Cactus release with each release of the ET?&lt;br /&gt;
*  Creation of a popular book about the science done by the Einstein Toolkit (Steven) -- I was thinking about ways to improve the visibility of the platform and attract new users and/or funding. Coincidentally, I&amp;#039;ve recently had some experience figuring out how to publish books on various platforms. These days it&amp;#039;s fairly straightforward to publish books with no cost as paperback and ebook, or even hardcover. I was thinking that many of the people working on the ET could write a chapter, and I could handle the publishing. One can configure any level of profit for such endeavors, and I would like to suggest zero, since there are probably legal issues with the toolkit actually making money from such a thing. Does this sound like something people would be interested in?&lt;br /&gt;
* switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005769.html here].&lt;br /&gt;
&lt;br /&gt;
== 28-Aug-2017 ==&lt;br /&gt;
* Status update on KNL&lt;br /&gt;
* update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* status of servers at LSU&lt;br /&gt;
** continued use of svn&lt;br /&gt;
** switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 21-Aug-2017 ==&lt;br /&gt;
* Progress on tutorial machine at NCSA&lt;br /&gt;
* Comments/discussion on updates to the Working Group Document (including procedures) (Gabrielle) https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 14-Aug-2017 ==&lt;br /&gt;
* Report on ET workshop survey (Nasir Eisty)&lt;br /&gt;
* NSF SI2 Project posted (Gabrielle Allen) https://figshare.com/articles/Einstein_Toolkit_Community_Integration_and_Data_Exploration/5306587&lt;br /&gt;
* Template for working groups: https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs/edit?usp=sharing&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** RNSID&lt;br /&gt;
&lt;br /&gt;
== 07-Aug-2017 ==&lt;br /&gt;
&lt;br /&gt;
* Report from summer school and workshop:&lt;br /&gt;
** Summer school recordings etc. are here: https://docs.einsteintoolkit.org/et-docs/NCSAETK2017/setup&lt;br /&gt;
* Introduce working groups&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** GiRaFFE&lt;br /&gt;
** RNSID&lt;br /&gt;
* DataVault updates&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005706.html&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=4885</id>
		<title>Meeting agenda</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=4885"/>
		<updated>2017-10-12T19:18:16Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The link for the call is: https://hangouts.google.com/hangouts/_/stevenrbrandt.com/etcall. &amp;#039;&amp;#039;&amp;#039;After connecting, you will need to wait for the moderator to add you to the call&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&amp;lt;includeonly&amp;gt;Today&amp;#039;s [[meeting_agenda|meeting agenda]] is on this wiki too.&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
When adding a topic, please add your &amp;#039;&amp;#039;&amp;#039;name&amp;#039;&amp;#039;&amp;#039; next to the item you are proposing.&lt;br /&gt;
&lt;br /&gt;
== 2017-10-23 ==&lt;br /&gt;
* [ES] Introduce Nix https://nixos.org package manager for building Cactus dependencies&lt;br /&gt;
&lt;br /&gt;
== 2017-10-16 ==&lt;br /&gt;
&lt;br /&gt;
[This will likely overlap with a LIGO/Virgo announcement]&lt;br /&gt;
&lt;br /&gt;
== 2017-10-09 ==&lt;br /&gt;
&lt;br /&gt;
no meeting&lt;br /&gt;
&lt;br /&gt;
== 02-Oct 2017 ==&lt;br /&gt;
* [RH] update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* [RH] thorns for inclusion: GiRaFFE, RNSID. Review status?&lt;br /&gt;
* [RH] Jenkins status update&lt;br /&gt;
* [RH] Comet lustre data corruption update&lt;br /&gt;
* [RH] Next ET release date&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005787.html here].&lt;br /&gt;
&lt;br /&gt;
== 25-Sep 2017 ==&lt;br /&gt;
&lt;br /&gt;
* The cactusmaint@cactuscode.org email address is the one all users are directed to on cactuscode.org, and it seems to not work. Do we want to fix it, or just steer users to users@einsteintoolkit.org?&lt;br /&gt;
* Please send to Gabrielle Allen any pictures or movies that can be used for Edfest. A first version of the movie is here: https://www.dropbox.com/s/mpmcwmw6ewdklkf/EdFest.mp4?dl=0&lt;br /&gt;
* First working group added to wiki, along with policies, procedures -- suggest that the first meeting of each month includes a review of milestones. I didn&amp;#039;t get to work on the others from the SI2 proposal this week :(.&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005780.html here].&lt;br /&gt;
&lt;br /&gt;
== 18-Sep 2017 ==&lt;br /&gt;
* Progress on new tutorial machine setup&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005772.html here].&lt;br /&gt;
&lt;br /&gt;
== 11-Sep-2017 ==&lt;br /&gt;
*  Demo of the RunView tool by Anna Neshyba and David Koppelman. This tool uses PAPI and CactusTimers to generate a graphical output (html) of Cactus performance.&lt;br /&gt;
*  Is it time for a new release of Cactus? The lack of new releases led at least one online user to conclude that Cactus was a dead project. Perhaps we should have a new Cactus release with each release of the ET?&lt;br /&gt;
*  Creation of a popular book about the science done by the Einstein Toolkit (Steven) -- I was thinking about ways to improve the visibility of the platform and attract new users and/or funding. Coincidentally, I&amp;#039;ve recently had some experience figuring out how to publish books on various platforms. These days it&amp;#039;s fairly straightforward to publish books with no cost as paperback and ebook, or even hardcover. I was thinking that many of the people working on the ET could write a chapter, and I could handle the publishing. One can configure any level of profit for such endeavors, and I would like to suggest zero, since there are probably legal issues with the toolkit actually making money from such a thing. Does this sound like something people would be interested in?&lt;br /&gt;
* switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005769.html here].&lt;br /&gt;
&lt;br /&gt;
== 28-Aug-2017 ==&lt;br /&gt;
* Status update on KNL&lt;br /&gt;
* update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* status of servers at LSU&lt;br /&gt;
** continued use of svn&lt;br /&gt;
** switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 21-Aug-2017 ==&lt;br /&gt;
* Progress on tutorial machine at NCSA&lt;br /&gt;
* Comments/discussion on updates to the Working Group Document (including procedures) (Gabrielle) https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 14-Aug-2017 ==&lt;br /&gt;
* Report on ET workshop survey (Nasir Eisty)&lt;br /&gt;
* NSF SI2 Project posted (Gabrielle Allen) https://figshare.com/articles/Einstein_Toolkit_Community_Integration_and_Data_Exploration/5306587&lt;br /&gt;
* Template for working groups: https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs/edit?usp=sharing&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** RNSID&lt;br /&gt;
&lt;br /&gt;
== 07-Aug-2017 ==&lt;br /&gt;
&lt;br /&gt;
* Report from summer school and workshop:&lt;br /&gt;
** Summer school recordings etc. are here: https://docs.einsteintoolkit.org/et-docs/NCSAETK2017/setup&lt;br /&gt;
* Introduce working groups&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** GiRAFFE&lt;br /&gt;
** RNSID&lt;br /&gt;
* DataVault updates&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005706.html&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=4884</id>
		<title>Meeting agenda</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Meeting_agenda&amp;diff=4884"/>
		<updated>2017-10-12T19:17:24Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;onlyinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The link for the call is: https://hangouts.google.com/hangouts/_/stevenrbrandt.com/etcall. &amp;#039;&amp;#039;&amp;#039;After connecting, you will need to wait for the moderator to add you to the call&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&amp;lt;includeonly&amp;gt;Today&amp;#039;s [[meeting_agenda|meeting agenda]] is on this wiki too.&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;/onlyinclude&amp;gt;&lt;br /&gt;
When adding a topic, please add your &amp;#039;&amp;#039;&amp;#039;name&amp;#039;&amp;#039;&amp;#039; next to the item you are proposing.&lt;br /&gt;
&lt;br /&gt;
== 2017-10-16 ==&lt;br /&gt;
&lt;br /&gt;
[This will likely overlap with a LIGO/Virgo announcement]&lt;br /&gt;
&lt;br /&gt;
== 2017-10-09 ==&lt;br /&gt;
&lt;br /&gt;
no meeting&lt;br /&gt;
&lt;br /&gt;
== 02-Oct 2017 ==&lt;br /&gt;
* [RH] update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* [RH] thorns for inclusion: GiRaFFE, RNSID. Review status?&lt;br /&gt;
* [RH] Jenkins status update&lt;br /&gt;
* [RH] Comet lustre data corruption update&lt;br /&gt;
* [RH] Next ET release date&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-October/005787.html here].&lt;br /&gt;
&lt;br /&gt;
== 25-Sep 2017 ==&lt;br /&gt;
&lt;br /&gt;
* The cactusmaint@cactuscode.org email address is the one all users are directed to on cactuscode.org, and it seems to not work. Do we want to fix it, or just steer users to users@einsteintoolkit.org?&lt;br /&gt;
* Please send to Gabrielle Allen any pictures or movies that can be used for EdFest. A first version of the movie is here: https://www.dropbox.com/s/mpmcwmw6ewdklkf/EdFest.mp4?dl=0&lt;br /&gt;
* First working group added to wiki, along with policies, procedures -- suggest that the first meeting of each month includes a review of milestones. I didn&amp;#039;t get to work on the others from the SI2 proposal this week :(.&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005780.html here].&lt;br /&gt;
&lt;br /&gt;
== 18-Sep 2017 ==&lt;br /&gt;
* Progress on new tutorial machine setup&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005772.html here].&lt;br /&gt;
&lt;br /&gt;
== 11-Sep-2017 ==&lt;br /&gt;
*  Demo of the RunView tool by Anna Neshyba and David Koppelman. This tool uses PAPI and CactusTimers to generate a graphical output (html) of Cactus performance.&lt;br /&gt;
*  Is it time for a new release of Cactus? The lack of new releases led at least one online user to conclude that Cactus was a dead project. Perhaps we should have a new Cactus release with each release of the ET?&lt;br /&gt;
*  Creation of a popular book about the science done by the Einstein Toolkit (Steven) -- I was thinking about ways to improve the visibility of the platform and attract new users and/or funding. Coincidentally, I&amp;#039;ve recently had some experience figuring out how to publish books on various platforms. These days it&amp;#039;s fairly straightforward to publish books with no cost as paperback and ebook, or even hardcover. I was thinking that many of the people working on the ET could write a chapter, and I could handle the publishing. One can configure any level of profit for such endeavors, and I would like to suggest zero, since there are probably legal issues with the toolkit actually making money from such a thing. Does this sound like something people would be interested in?&lt;br /&gt;
* switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
&lt;br /&gt;
Meeting minutes are [http://lists.einsteintoolkit.org/pipermail/users/2017-September/005769.html here].&lt;br /&gt;
&lt;br /&gt;
== 28-Aug-2017 ==&lt;br /&gt;
* Status update on KNL&lt;br /&gt;
* update on Euro ET workshop in Mallorca, Oct 11-14, http://grg.uib.es/EinsteinToolkit2017/&lt;br /&gt;
* status of servers at LSU&lt;br /&gt;
** continued use of svn&lt;br /&gt;
** switch ExternalLibraries over to github: svn checkout https://github.com/rhaas80/GSL&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 21-Aug-2017 ==&lt;br /&gt;
* Progress on tutorial machine at NCSA&lt;br /&gt;
* Comments/discussion on updates to the Working Group Document (including procedures) (Gabrielle) https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005739.html&lt;br /&gt;
&lt;br /&gt;
== 14-Aug-2017 ==&lt;br /&gt;
* Report on ET workshop survey (Nasir Eisty)&lt;br /&gt;
* NSF SI2 Project posted (Gabrielle Allen) https://figshare.com/articles/Einstein_Toolkit_Community_Integration_and_Data_Exploration/5306587&lt;br /&gt;
* Template for working groups: https://docs.google.com/document/d/1j12IEY8kH6R33qfXpD5BaDDXLEYdNceEqf9crLupsPs/edit?usp=sharing&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** RNSID&lt;br /&gt;
&lt;br /&gt;
== 07-Aug-2017 ==&lt;br /&gt;
&lt;br /&gt;
* Report from summer school and workshop:&lt;br /&gt;
** Summer school recordings etc. are here: https://docs.einsteintoolkit.org/et-docs/NCSAETK2017/setup&lt;br /&gt;
* Introduce working groups&lt;br /&gt;
* Discuss new code contributions&lt;br /&gt;
** GiRAFFE&lt;br /&gt;
** RNSID&lt;br /&gt;
* DataVault updates&lt;br /&gt;
Minutes: http://lists.einsteintoolkit.org/pipermail/users/2017-August/005706.html&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Main_Page&amp;diff=4805</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Main_Page&amp;diff=4805"/>
		<updated>2017-08-21T14:39:41Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Weekly Users Call */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Instructions==&lt;br /&gt;
&lt;br /&gt;
Documentation for the [http://www.einsteintoolkit.org Einstein Toolkit] is a community effort and everyone is encouraged to contribute towards these pages. You need to login to the wiki to edit the documentation, to do this either&lt;br /&gt;
&lt;br /&gt;
* if you already have a personal LDAP login with the CCT use this &lt;br /&gt;
* create your own login/password for the wiki (select local domain)&lt;br /&gt;
&lt;br /&gt;
If you would like to make major changes to these pages please discuss first on the users@einsteintoolkit.org mail list. &lt;br /&gt;
&lt;br /&gt;
Thanks for your contributions!&lt;br /&gt;
&lt;br /&gt;
== Weekly Users Call ==&lt;br /&gt;
&lt;br /&gt;
The link for the call is: https://hangouts.google.com/hangouts/_/stevenrbrandt.com/etcall . &amp;#039;&amp;#039;&amp;#039;After connecting, you will need to wait for the moderator to add you to the call.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Today&amp;#039;s [[meeting agenda]] is on this wiki too.&lt;br /&gt;
&lt;br /&gt;
==Einstein Toolkit workshop information==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Past ===&lt;br /&gt;
&lt;br /&gt;
* [[ET Workshop 2017 at NCSA| ET Workshop Summer 2017, NCSA]]&lt;br /&gt;
* [[2017 MHD Workshop| (non-ET) MHD workshop, 2017]]&lt;br /&gt;
* [[ET Workshop 2016| ET Workshop Summer 2016, Trento]]&lt;br /&gt;
* [[ET Workshop 2015| ET Workshop Summer 2015, Stockholm]]&lt;br /&gt;
* [[ET_Workshop Summer 2013]] (shows both workshops)&lt;br /&gt;
** [[ET_Workshop Summer 2013 (New Users Workshop)]]&lt;br /&gt;
** [[ET_Workshop Summer 2013 (Developers Workshop)]]&lt;br /&gt;
* [[ET_Workshop Fall 2012]]&lt;br /&gt;
* [[ET_Workshop Spring 2012]]&lt;br /&gt;
* [[ET_Workshop Fall 2011]]&lt;br /&gt;
* [[ET_Workshop Spring 2011]]&lt;br /&gt;
* [http://ccrg.rit.edu/~carpet/index.php/Main_Page Carpet Developer Workshop Summer 2010]&lt;br /&gt;
&lt;br /&gt;
==Einstein Toolkit Seminars==&lt;br /&gt;
&lt;br /&gt;
It would be nice to organize a semi-regular series of talks on topics of general interest. Speakers should include&lt;br /&gt;
not only people from within the Einstein Toolkit community. The ET web page already contains a (currently quite short)&lt;br /&gt;
[http://einsteintoolkit.org/seminars/ list], and now it is time to propose upcoming talks. To record and&lt;br /&gt;
store proposals, please use the wiki page [[Einstein Toolkit Seminar Proposals]].&lt;br /&gt;
&lt;br /&gt;
==Release planning==&lt;br /&gt;
* [[Release Details]] - Plan, status and checklist for the upcoming release&lt;br /&gt;
* [[Release coordination]] - Discussion of the upcoming release&lt;br /&gt;
* [[Release Process]] - Description of the process that we go through for each release&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
&lt;br /&gt;
* [[Tutorial for New Users]]&lt;br /&gt;
* [[Simplified Tutorial for New Users]] (requires Debian, Linux Mint, Fedora or Mac OSX)&lt;br /&gt;
* [[Getting Started for Cactus Experts]]&lt;br /&gt;
* [[Thorns_we_know_of|Non-Einstein-Toolkit thorns]]&lt;br /&gt;
* [[Einstein Toolkit standards]]: ADMBase, HydroBase, SphericalSurface&lt;br /&gt;
* [[Supported Machines]], [[Configuring a new machine]]&lt;br /&gt;
* [[Machines]] - Notes for specific machines&lt;br /&gt;
* [[Using Eclipse / Mojave]]&lt;br /&gt;
* [[Running Cactus On Windows]]&lt;br /&gt;
* Adding your own analysis method&lt;br /&gt;
* Adding your own [[adding initial data|initial data]]&lt;br /&gt;
* Adding a [[adding a test case|test case]]&lt;br /&gt;
* CCE [http://ccrg.rit.edu/~yosef/cce.html tutorial]&lt;br /&gt;
* [[Visualizing magnetic field lines]]&lt;br /&gt;
* [[Using the multi-model mechanism in Carpet]]&lt;br /&gt;
* [[Working with git]]&lt;br /&gt;
* [[FAQ]]&lt;br /&gt;
* [[Compiling the Einstein Toolkit]]&lt;br /&gt;
* [[Analysis and post-processing]]&lt;br /&gt;
* [[Editing the Einstein Toolkit website]]&lt;br /&gt;
* [[Licences]]&lt;br /&gt;
&lt;br /&gt;
==Regression Test Results==&lt;br /&gt;
You can run [[Simulation_Factory_Advanced_Tutorial#Test_suites | regression tests using SimFactory]].&lt;br /&gt;
&lt;br /&gt;
Results from automated tests are available at https://build.barrywardell.net.&lt;br /&gt;
&lt;br /&gt;
==Performance Test Results==&lt;br /&gt;
* [[Single-node benchmark results]] for the Einstein Toolkit&lt;br /&gt;
* TODO: Scalability benchmark results for the Einstein Toolkit&lt;br /&gt;
&lt;br /&gt;
==Projects==&lt;br /&gt;
&lt;br /&gt;
* [[Adding requirements to the Cactus scheduler]]&lt;br /&gt;
* [[Visualization of simulation results]]&lt;br /&gt;
* [[Automated testing]]&lt;br /&gt;
* [[Vectorization]] Improving code performance by using the CPU&amp;#039;s vector instructions&lt;br /&gt;
* [[Padding]] Improving code performance by optimizing cache access&lt;br /&gt;
* [[Making Cactus Installable]] like any other Ubuntu package&lt;br /&gt;
* [[Test suite results are unwieldy]]&lt;br /&gt;
* [[Summer student projects]]&lt;br /&gt;
* [[Improving the treatment of external libraries]]&lt;br /&gt;
* [[Piraha Parser Discussion]]&lt;br /&gt;
* [[Version control]]&lt;br /&gt;
* [[Rewrite McLachlan]]&lt;br /&gt;
* [[Fixing examples]]&lt;br /&gt;
* [[A new I/O file format]]&lt;br /&gt;
* [[Adding Llama to the ET]]&lt;br /&gt;
* [[Remote Mini-Workshop Series]]&lt;br /&gt;
* [[Running Cactus on Knights Landing]]&lt;br /&gt;
* [[Improving the new user experience]]&lt;br /&gt;
&lt;br /&gt;
==Maintainers==&lt;br /&gt;
&lt;br /&gt;
* [[Organization and Responsibilities]]&lt;br /&gt;
* [[How to Review a Patch]]&lt;br /&gt;
* [[Policies to retire functionality]]&lt;br /&gt;
* [[Preparing a Patch for Review]]&lt;br /&gt;
* [http://einsteintoolkit.org/release-info/parse_testsuite_results.php Release Testsuite Status], [https://build.barrywardell.net/job/EinsteinToolkit/ Trunk Testsuite Status]&lt;br /&gt;
* [[Standard Emails]]&lt;br /&gt;
* [[MHD implementation details and discussions]]&lt;br /&gt;
* [[Carpet Wish List]]&lt;br /&gt;
* [[Editing the website]]&lt;br /&gt;
* [[Usage poll]]&lt;br /&gt;
* [[Repository transition]]&lt;br /&gt;
* [[Tickets]]&lt;br /&gt;
&lt;br /&gt;
== Getting Started with Wikis ==&lt;br /&gt;
&lt;br /&gt;
* Consult the [http://meta.wikimedia.org/wiki/Help:Contents User&amp;#039;s Guide] for information on using the wiki software.&lt;br /&gt;
* [http://www.mediawiki.org/wiki/Help:Configuration_settings Configuration settings list]&lt;br /&gt;
* [http://www.mediawiki.org/wiki/Help:FAQ MediaWiki FAQ]&lt;br /&gt;
* [http://mail.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4415</id>
		<title>2017 MHD Workshop</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4415"/>
		<updated>2017-02-03T11:07:45Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* FunHPC parallelization progress */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Workshop in New York City and at Columbia University.&lt;br /&gt;
&lt;br /&gt;
== Chelsea Venue ==&lt;br /&gt;
&lt;br /&gt;
* On the first afternoon, we&amp;#039;ll meet off-campus to get the workshop started.&lt;br /&gt;
* We will start at 14:00.&lt;br /&gt;
* We have a conference room for 8 booked under Christian D. Ott, California Institute of Technology at &amp;lt;br&amp;gt;Select Office Suites Chelsea&amp;lt;br&amp;gt;116 W 23rd St, New York, NY 10011&amp;lt;br&amp;gt;https://goo.gl/maps/WDBNQpwadfE2 &amp;lt;br&amp;gt;From Columbia, the best way to get there is to take Subway Line 1 to 23rd Street Station.&lt;br /&gt;
&lt;br /&gt;
== Columbia Venue==&lt;br /&gt;
&lt;br /&gt;
* The workshop will take place in the Conference Room of the Center for Theoretical Physics at Columbia. The Conference Room (Room 907) is located on the 9th floor of the Physics building (Pupin Laboratories).&lt;br /&gt;
* Street address: 538 West 120th Street, New York, NY 10027&lt;br /&gt;
* Note that the main entrance is on the Campus Level (5th floor, south side) as indicated here:&lt;br /&gt;
[https://www.google.com/maps/place/40°48&amp;#039;35.9%22N+73°57&amp;#039;41.2%22W/@40.809983,-73.9619872,19z/data=!3m1!4b1!4m5!3m4!1s0x0:0x0!8m2!3d40.809982!4d-73.96144 map]&lt;br /&gt;
* You can enter Campus at 116th/Broadway. There is also the possibility of entering Pupin Hall from 120th/Broadway. In this case, enter through the North West Corner Building and take the stairs/escalators up to the Campus level, exit the building and enter Pupin through the main entrance.&lt;br /&gt;
* The conference room has a board and a big presentation screen -&amp;gt; There is the possibility of giving a black board talk or a slide presentation, or a combination of both.&lt;br /&gt;
&lt;br /&gt;
==Time and Agenda==&lt;br /&gt;
&lt;br /&gt;
Monday&amp;lt;br&amp;gt;&lt;br /&gt;
2:00 — 2:10: Welcome&amp;lt;br&amp;gt;&lt;br /&gt;
2:10 — 2:35: Philipp to talk on [[2017_MHD_Workshop/fluxCT|flux CT]]&amp;lt;br&amp;gt;&lt;br /&gt;
2:35 — 3:00: Daniel to talk on [[2017_MHD_Workshop/vector_potential|vector potential]] and [[2017_MHD_Workshop/MHD_con2prim|MHD con2prim]]&amp;lt;br&amp;gt;&lt;br /&gt;
3:00 — 3:25: &amp;#039;&amp;#039;(Propose switch with Christian&amp;#039;s talk)&amp;#039;&amp;#039; David to talk on DG and [[2017_MHD_Workshop/Zelmani_M1|Zelmani M1]]&amp;lt;br&amp;gt;&lt;br /&gt;
3:25 — 3:55: Coffee break&amp;lt;br&amp;gt;&lt;br /&gt;
3:55 — 4:20: &amp;#039;&amp;#039;(Propose switch with David&amp;#039;s talk)&amp;#039;&amp;#039; Christian to talk on [[2017_MHD_Workshop/Zelmani_M1|M1 work]]/results&amp;lt;br&amp;gt;&lt;br /&gt;
4:20 — 4:45: Erik: [[2017_MHD_Workshop/FunHPC|FunHPC]] — current state, toward AMR, toward KNL&amp;lt;br&amp;gt;&lt;br /&gt;
4:45 — 5:10: Roland: [[2017_MHD_Workshop/FunHPC|HydroFunToy]] — code design and capability&amp;lt;br&amp;gt;&lt;br /&gt;
5:10 — 5:45: Closing discussion and roadmap for the week&lt;br /&gt;
&lt;br /&gt;
Tuesday to Thursday&amp;lt;br&amp;gt;&lt;br /&gt;
9:00 am till late: “workshop” -&amp;gt; work on topics identified on Monday&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Friday&amp;lt;br&amp;gt;&lt;br /&gt;
9:00 — 11:00: Wrap it up&amp;lt;br&amp;gt;&lt;br /&gt;
11:00 — 12:30: Summary of the workshop and defining future directions/coordinate future work&amp;lt;br&amp;gt;&lt;br /&gt;
12:30: lunch/end of workshop&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* You are free to arrive earlier on Monday and leave later on Friday; we have the room booked for the entire day.&lt;br /&gt;
* Let me know if this tentative agenda conflicts with any of your travel plans; we can reschedule things.&lt;br /&gt;
&lt;br /&gt;
==Talks==&lt;br /&gt;
&lt;br /&gt;
* The talks on Monday afternoon are meant to be informal kick-off talks to start the discussion in a well-defined way and to identify the topics/aspects that we want to work on in the following days (or that we want to keep in mind while working on other aspects). I’d suggest that we keep them fairly short (no more than ~20-25 mins including some discussion), just trying to provide a basic introduction that brings everyone onto the same page, addressing basic questions such as:&lt;br /&gt;
** What is the analytic formulation (basic intro to the equations — if applicable)?&lt;br /&gt;
** What is the current status, or what have people been working on so far?&lt;br /&gt;
** What are the short &amp;amp; long-term goals? What do we need to implement/what needs to be done to move forward?&lt;br /&gt;
** What do you think could be the specific goals for the week?&lt;br /&gt;
&lt;br /&gt;
* Please check and update the title of the talk you volunteered for (see above)&lt;br /&gt;
&lt;br /&gt;
* Feel free to add some notes/thoughts/ideas regarding your talk on the wiki&lt;br /&gt;
&lt;br /&gt;
== Knights Landing on Stampede ==&lt;br /&gt;
&lt;br /&gt;
* Stampede manual: https://portal.tacc.utexas.edu/user-guides/stampede&lt;br /&gt;
* Login node: login-knl1.stampede.tacc.utexas.edu&lt;br /&gt;
* option list (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.cfg&lt;br /&gt;
* environment (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.env&lt;br /&gt;
* submission script (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.sub&lt;br /&gt;
&lt;br /&gt;
David Radice&amp;#039;s simfactory replacement: https://bitbucket.org/dradice/batchtools&lt;br /&gt;
&lt;br /&gt;
== flux-CT implementation progress ==&lt;br /&gt;
&lt;br /&gt;
* Identified which equations to solve and that there is indeed a Riemann problem to solve for the electric field&lt;br /&gt;
* This is not a Riemann problem for the state vector (which is the magnetic field), but the electric field is used to update the magnetic field&lt;br /&gt;
* The Riemann problem for the electric field requires reconstruction of the velocity and magnetic field to the cell edge, requiring a double reconstruction for the velocity&lt;br /&gt;
* Started implementation for that and bookkeeping is going to be tedious but otherwise implementation should be straightforward&lt;br /&gt;
* Need to come up with good test cases to make sure this is implemented correctly&lt;br /&gt;
&lt;br /&gt;
==FunHPC parallelization progress==&lt;br /&gt;
&lt;br /&gt;
* Continued to make FunHPC work, as alternative to OpenMP&lt;br /&gt;
* Improve performance by taking cache alignment into account&lt;br /&gt;
** Many internal modifications to Cactus and Carpet&lt;br /&gt;
* Benchmarking&lt;br /&gt;
** Need to measure performance details automatically, make available to users&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4414</id>
		<title>2017 MHD Workshop</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4414"/>
		<updated>2017-02-03T11:04:39Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* flux-CT implementation progress */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Workshop in New York City and at Columbia University.&lt;br /&gt;
&lt;br /&gt;
== Chelsea Venue ==&lt;br /&gt;
&lt;br /&gt;
* On the first afternoon, we&amp;#039;ll meet off-campus to get the workshop started.&lt;br /&gt;
* We will start at 14:00.&lt;br /&gt;
* We have a conference room for 8 booked under Christian D. Ott, California Institute of Technology at &amp;lt;br&amp;gt;Select Office Suites Chelsea&amp;lt;br&amp;gt;116 W 23rd St, New York, NY 10011&amp;lt;br&amp;gt;https://goo.gl/maps/WDBNQpwadfE2 &amp;lt;br&amp;gt;From Columbia, the best way to get there is to take Subway Line 1 to 23rd Street Station.&lt;br /&gt;
&lt;br /&gt;
== Columbia Venue==&lt;br /&gt;
&lt;br /&gt;
* The workshop will take place in the Conference Room of the Center for Theoretical Physics at Columbia. The Conference Room (Room 907) is located on the 9th floor of the Physics building (Pupin Laboratories).&lt;br /&gt;
* Street address: 538 West 120th Street, New York, NY 10027&lt;br /&gt;
* Note that the main entrance is on the Campus Level (5th floor, south side) as indicated here:&lt;br /&gt;
[https://www.google.com/maps/place/40°48&amp;#039;35.9%22N+73°57&amp;#039;41.2%22W/@40.809983,-73.9619872,19z/data=!3m1!4b1!4m5!3m4!1s0x0:0x0!8m2!3d40.809982!4d-73.96144 map]&lt;br /&gt;
* You can enter Campus at 116th/Broadway. There is also the possibility of entering Pupin Hall from 120th/Broadway. In this case, enter through the North West Corner Building and take the stairs/escalators up to the Campus level, exit the building and enter Pupin through the main entrance.&lt;br /&gt;
* The conference room has a board and a big presentation screen -&amp;gt; There is the possibility of giving a black board talk or a slide presentation, or a combination of both.&lt;br /&gt;
&lt;br /&gt;
==Time and Agenda==&lt;br /&gt;
&lt;br /&gt;
Monday&amp;lt;br&amp;gt;&lt;br /&gt;
2:00 — 2:10: Welcome&amp;lt;br&amp;gt;&lt;br /&gt;
2:10 — 2:35: Philipp to talk on [[2017_MHD_Workshop/fluxCT|flux CT]]&amp;lt;br&amp;gt;&lt;br /&gt;
2:35 — 3:00: Daniel to talk on [[2017_MHD_Workshop/vector_potential|vector potential]] and [[2017_MHD_Workshop/MHD_con2prim|MHD con2prim]]&amp;lt;br&amp;gt;&lt;br /&gt;
3:00 — 3:25: &amp;#039;&amp;#039;(Propose switch with Christian&amp;#039;s talk)&amp;#039;&amp;#039; David to talk on DG and [[2017_MHD_Workshop/Zelmani_M1|Zelmani M1]]&amp;lt;br&amp;gt;&lt;br /&gt;
3:25 — 3:55: Coffee break&amp;lt;br&amp;gt;&lt;br /&gt;
3:55 — 4:20: &amp;#039;&amp;#039;(Propose switch with David&amp;#039;s talk)&amp;#039;&amp;#039; Christian to talk on [[2017_MHD_Workshop/Zelmani_M1|M1 work]]/results&amp;lt;br&amp;gt;&lt;br /&gt;
4:20 — 4:45: Erik: [[2017_MHD_Workshop/FunHPC|FunHPC]] — current state, toward AMR, toward KNL&amp;lt;br&amp;gt;&lt;br /&gt;
4:45 — 5:10: Roland: [[2017_MHD_Workshop/FunHPC|HydroFunToy]] — code design and capability&amp;lt;br&amp;gt;&lt;br /&gt;
5:10 — 5:45: Closing discussion and roadmap for the week&lt;br /&gt;
&lt;br /&gt;
Tuesday to Thursday&amp;lt;br&amp;gt;&lt;br /&gt;
9:00 am till late: “workshop” -&amp;gt; work on topics identified on Monday&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Friday&amp;lt;br&amp;gt;&lt;br /&gt;
9:00 — 11:00: Wrap it up&amp;lt;br&amp;gt;&lt;br /&gt;
11:00 — 12:30: Summary of the workshop and defining future directions/coordinate future work&amp;lt;br&amp;gt;&lt;br /&gt;
12:30: lunch/end of workshop&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* You are free to arrive earlier on Monday and leave later on Friday; we have the room booked for the entire day.&lt;br /&gt;
* Let me know if this tentative agenda conflicts with any of your travel plans; we can reschedule things.&lt;br /&gt;
&lt;br /&gt;
==Talks==&lt;br /&gt;
&lt;br /&gt;
* The talks on Monday afternoon are meant to be informal kick-off talks to start the discussion in a well-defined way and to identify the topics/aspects that we want to work on in the following days (or that we want to keep in mind while working on other aspects). I’d suggest that we keep them fairly short (no more than ~20-25 mins including some discussion), just trying to provide a basic introduction that brings everyone onto the same page, addressing basic questions such as:&lt;br /&gt;
** What is the analytic formulation (basic intro to the equations — if applicable)?&lt;br /&gt;
** What is the current status, or what have people been working on so far?&lt;br /&gt;
** What are the short &amp;amp; long-term goals? What do we need to implement/what needs to be done to move forward?&lt;br /&gt;
** What do you think could be the specific goals for the week?&lt;br /&gt;
&lt;br /&gt;
* Please check and update the title of the talk you volunteered for (see above)&lt;br /&gt;
&lt;br /&gt;
* Feel free to add some notes/thoughts/ideas regarding your talk on the wiki&lt;br /&gt;
&lt;br /&gt;
== Knights Landing on Stampede ==&lt;br /&gt;
&lt;br /&gt;
* Stampede manual: https://portal.tacc.utexas.edu/user-guides/stampede&lt;br /&gt;
* Login node: login-knl1.stampede.tacc.utexas.edu&lt;br /&gt;
* option list (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.cfg&lt;br /&gt;
* environment (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.env&lt;br /&gt;
* submission script (beta): https://dl.dropboxusercontent.com/u/17077801/stampede-knl.sub&lt;br /&gt;
&lt;br /&gt;
David Radice&amp;#039;s simfactory replacement: https://bitbucket.org/dradice/batchtools&lt;br /&gt;
&lt;br /&gt;
== flux-CT implementation progress ==&lt;br /&gt;
&lt;br /&gt;
* Identified which equations to solve and that there is indeed a Riemann problem to solve for the electric field&lt;br /&gt;
* This is not a Riemann problem for the state vector (which is the magnetic field), but the electric field is used to update the magnetic field&lt;br /&gt;
* The Riemann problem for the electric field requires reconstruction of the velocity and magnetic field to the cell edge, requiring a double reconstruction for the velocity&lt;br /&gt;
* Started implementation for that and bookkeeping is going to be tedious but otherwise implementation should be straightforward&lt;br /&gt;
* Need to come up with good test cases to make sure this is implemented correctly&lt;br /&gt;
&lt;br /&gt;
==FunHPC parallelization progress==&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4368</id>
		<title>2017 MHD Workshop</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4368"/>
		<updated>2017-01-23T12:45:53Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;workshop at Columbia.&lt;br /&gt;
&lt;br /&gt;
Meeting minutes (delete once done):&lt;br /&gt;
* Daniel and Philipp have discussed a bit what they want to cover and&lt;br /&gt;
prepared a draft; they propose using a wiki (ET wiki suggested; all agree)&lt;br /&gt;
** a page is here:&lt;br /&gt;
https://docs.einsteintoolkit.org/et-docs/2017_MHD_Workshop&lt;br /&gt;
&lt;br /&gt;
* Daniel will provide some local information&lt;br /&gt;
* interest in radiation transport and its current limitations&lt;br /&gt;
* interest in getting an overview of the HydroFunHPC code (design idea, intention) and FunHPC in general&lt;br /&gt;
** Currently HydroFunHPC has all the pieces but fails (likely in con2prim) with NaNs on the surface of a TOV star.&lt;br /&gt;
** ultimately this requires integration with Carpet to overlap more communication and computation&lt;br /&gt;
&lt;br /&gt;
* plan:&lt;br /&gt;
** kickoff talks on Monday&lt;br /&gt;
** Philipp to talk about CT&lt;br /&gt;
** Daniel to talk about vector potential method and MHD con2prim&lt;br /&gt;
** Christian and David to talk about M1&lt;br /&gt;
** Erik and Roland to talk about (Hydro)FunHPC&lt;br /&gt;
&lt;br /&gt;
==Kick-Off Talks==&lt;br /&gt;
&lt;br /&gt;
* Erik: FunHPC -- current state, towards AMR, towards KNL&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4367</id>
		<title>2017 MHD Workshop</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4367"/>
		<updated>2017-01-23T12:45:37Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;workshop at Columbia.&lt;br /&gt;
&lt;br /&gt;
Meeting minutes (delete once done):&lt;br /&gt;
* Daniel and Philipp have discussed a bit what they want to cover and&lt;br /&gt;
prepared a draft; they propose using a wiki (ET wiki suggested; all agree)&lt;br /&gt;
** a page is here:&lt;br /&gt;
https://docs.einsteintoolkit.org/et-docs/2017_MHD_Workshop&lt;br /&gt;
&lt;br /&gt;
* Daniel will provide some local information&lt;br /&gt;
* interest in radiation transport and its current limitations&lt;br /&gt;
* interest in getting an overview of the HydroFunHPC code (design idea, intention) and FunHPC in general&lt;br /&gt;
** Currently HydroFunHPC has all the pieces but fails (likely in con2prim) with NaNs on the surface of a TOV star.&lt;br /&gt;
** ultimately this requires integration with Carpet to overlap more communication and computation&lt;br /&gt;
&lt;br /&gt;
* plan:&lt;br /&gt;
** kickoff talks on Monday&lt;br /&gt;
** Philipp to talk about CT&lt;br /&gt;
** Daniel to talk about vector potential method and MHD con2prim&lt;br /&gt;
** Christian and David to talk about M1&lt;br /&gt;
** Erik and Roland to talk about (Hydro)FunHPC&lt;br /&gt;
&lt;br /&gt;
==Kick-Off Talks==&lt;br /&gt;
&lt;br /&gt;
* Erik: FunHPC -- current state, towards AMR, towards KNL&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4366</id>
		<title>2017 MHD Workshop</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=2017_MHD_Workshop&amp;diff=4366"/>
		<updated>2017-01-23T12:44:38Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;workshop at Columbia.&lt;br /&gt;
&lt;br /&gt;
Meeting minutes (delete once done):&lt;br /&gt;
* Daniel and Philipp have discussed what they want to cover, prepared a draft, and propose using a wiki (the ET wiki was suggested; all agree)&lt;br /&gt;
** a page is here:&lt;br /&gt;
https://docs.einsteintoolkit.org/et-docs/2017_MHD_Workshop&lt;br /&gt;
&lt;br /&gt;
* Daniel will provide some local information&lt;br /&gt;
* interest in radiation transport and its current limitations&lt;br /&gt;
* interest in getting an overview of the HydroFunHPC code (design idea, intention) and FunHPC in general&lt;br /&gt;
** Currently HydroFunHPC has all the pieces but fails (likely in con2prim) with NaNs on the surface of a TOV star.&lt;br /&gt;
** ultimately this requires integration with Carpet to overlap more communication and computation&lt;br /&gt;
&lt;br /&gt;
* plan:&lt;br /&gt;
** kickoff talks on Monday&lt;br /&gt;
** Philipp to talk about CT&lt;br /&gt;
** Daniel to talk about vector potential method and MHD con2prim&lt;br /&gt;
** Christian and David to talk about M1&lt;br /&gt;
** Erik and Roland to talk about (Hydro)FunHPC&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4342</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4342"/>
		<updated>2016-12-14T12:23:16Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
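&lt;br /&gt;
This choice can also be automated. The following is a hedged sketch (not part of FunHPC; a Python illustration that assumes a Linux system where &amp;quot;lscpu&amp;quot; is available, falling back to os.cpu_count() otherwise) of deriving the QTHREAD_* settings programmatically:&lt;br /&gt;
&lt;br /&gt;

```python
# Hedged sketch: derive QTHREAD_* settings from the machine layout.
# Assumes a Linux system with "lscpu"; falls back to os.cpu_count().
import os
import subprocess

def qthread_env(stack_size=8388608):
    info = {}
    try:
        out = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
    except OSError:
        out = ""
    for line in out.splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    # nshep = number of sockets (aka NUMA nodes), nwork = cores per socket
    nshep = info.get("Socket(s)", "1")
    nwork = info.get("Core(s) per socket", str(os.cpu_count()))
    return {
        "QTHREAD_NUM_SHEPHERDS": nshep,
        "QTHREAD_NUM_WORKERS_PER_SHEPHERD": nwork,
        "QTHREAD_STACK_SIZE": str(stack_size),
    }
```

On systems without lscpu, use hwloc-info as shown above instead.&lt;br /&gt;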
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - Threads run independently; we block only when a result is needed.&lt;br /&gt;
  //   The interior of the domain can thus be calculated overlapping&lt;br /&gt;
  //   with the synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
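&lt;br /&gt;
For readers without a FunHPC build at hand, the same future-per-block tiling pattern can be sketched with Python&amp;#039;s standard concurrent.futures (an illustration of the idea only; FunHPC itself is C++, and its fine-grained threads are far cheaper than Python threads):&lt;br /&gt;
&lt;br /&gt;

```python
# Hedged illustration of the vdiff_funhpc pattern using Python's
# standard-library futures instead of FunHPC/Qthreads.
from concurrent.futures import ThreadPoolExecutor

def vdiff_futures(x, blocksize=8):
    n = len(x)
    y = [0.0] * n
    with ThreadPoolExecutor() as pool:
        def work(i0):
            # the work of a single "thread": one block of the loop
            imax = min(i0 + blocksize, n - 1)
            for i in range(i0, imax):
                y[i] = (x[i + 1] - x[i - 1]) / 2
        # one future per block, as in vdiff_funhpc
        futures = [pool.submit(work, i0) for i0 in range(1, n - 1, blocksize)]
        # synchronize (periodic ghost zones) as soon as the boundary
        # blocks are done, overlapping with the interior blocks
        futures[0].result()
        futures[-1].result()
        y[0] = y[n - 2]
        y[n - 1] = y[1]
        # wait for all remaining blocks to finish
        for f in futures:
            f.result()
    return y
```

As in the C++ version, the point is that the ghost-zone exchange waits only on the first and last block, not on a barrier over the whole loop.&lt;br /&gt;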
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #2: Wed, Dec 14, 2016, 12:00 EST==&lt;br /&gt;
&lt;br /&gt;
(By popular request, later in the day.)&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC in Cactus&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC vs. OpenMP (recap from last week)&lt;br /&gt;
* Overview of current proof-of-concept code&lt;br /&gt;
* Look at current benchmark results&lt;br /&gt;
* How to download, build, run, benchmark&lt;br /&gt;
* Next steps&lt;br /&gt;
&lt;br /&gt;
===Source code===&lt;br /&gt;
* FunHPC and its dependencies, as described last week (see above)&lt;br /&gt;
* FunHPC arrangement, basic thorns and examples (get it at https://bitbucket.org/eschnett/funhpc.cactus.git)&lt;br /&gt;
* Cactus flesh, branch eschnett/funhpc (mostly for startup)&lt;br /&gt;
* Carpet, branch eschnett/funhpc (mostly to disable OpenMP)&lt;br /&gt;
* Kranc, branch eschnett/funhpc (to generate FunHPC-parallelized code)&lt;br /&gt;
* McLachlan, branch eschnett/funhpc (contains new FunHPC-parallelized thorns with &amp;quot;_FH&amp;quot; suffix)&lt;br /&gt;
&lt;br /&gt;
I recommend using Wheeler to build and run, as I&amp;#039;ve tested this.&lt;br /&gt;
&lt;br /&gt;
===To Do===&lt;br /&gt;
* Put example parameter file online&lt;br /&gt;
* Check whether a vanilla checkout builds&lt;br /&gt;
* Update benchmark results&lt;br /&gt;
* Next meeting: Next Wed, 12:00 EST (tentatively); continue with FunHPC in Cactus&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4341</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4341"/>
		<updated>2016-12-14T11:46:02Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Source code */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - Threads run independently; we block only when a result is needed.&lt;br /&gt;
  //   The interior of the domain can thus be calculated overlapping&lt;br /&gt;
  //   with the synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #2: Wed, Dec 14, 2016, 12:00 EST==&lt;br /&gt;
&lt;br /&gt;
(By popular request, later in the day.)&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC in Cactus&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC vs. OpenMP (recap from last week)&lt;br /&gt;
* Overview of current proof-of-concept code&lt;br /&gt;
* Look at current benchmark results&lt;br /&gt;
* How to download, build, run, benchmark&lt;br /&gt;
* Next steps&lt;br /&gt;
&lt;br /&gt;
===Source code===&lt;br /&gt;
* FunHPC and its dependencies, as described last week (see above)&lt;br /&gt;
* FunHPC arrangement, basic thorns and examples (get it at https://bitbucket.org/eschnett/funhpc.cactus.git)&lt;br /&gt;
* Cactus flesh, branch eschnett/funhpc (mostly for startup)&lt;br /&gt;
* Carpet, branch eschnett/funhpc (mostly to disable OpenMP)&lt;br /&gt;
* Kranc, branch eschnett/funhpc (to generate FunHPC-parallelized code)&lt;br /&gt;
* McLachlan, branch eschnett/funhpc (contains new FunHPC-parallelized thorns with &amp;quot;_FH&amp;quot; suffix)&lt;br /&gt;
&lt;br /&gt;
I recommend using Wheeler to build and run, as I&amp;#039;ve tested this.&lt;br /&gt;
&lt;br /&gt;
===To Do===&lt;br /&gt;
* Put example parameter file online&lt;br /&gt;
* Check whether a vanilla checkout builds&lt;br /&gt;
* Update benchmark results&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4340</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4340"/>
		<updated>2016-12-14T09:30:24Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as the meeting time. We&amp;#039;ll meet on Google Hangouts (probably); details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but is downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
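To illustrate why this is safe (a standalone sketch, not Cereal&amp;#039;s actual mechanism; the names &amp;quot;registry&amp;quot;, &amp;quot;serialize&amp;quot; and &amp;quot;deserialize&amp;quot; are made up for the example): a function pointer can be transported as a stable name, because the function it designates is a constant, so no allocation or ownership questions arise:&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;map&amp;gt;&lt;br /&gt;
  #include &amp;lt;string&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  double square(double x) { return x * x; }&lt;br /&gt;
  double twice(double x) { return 2 * x; }&lt;br /&gt;
  &lt;br /&gt;
  using fun_t = double (*)(double);&lt;br /&gt;
  &lt;br /&gt;
  // Sender and receiver share this table, since they run the same binary&lt;br /&gt;
  const std::map&amp;lt;std::string, fun_t&amp;gt; registry = {{&amp;quot;square&amp;quot;, square},&lt;br /&gt;
                                                 {&amp;quot;twice&amp;quot;, twice}};&lt;br /&gt;
  &lt;br /&gt;
  std::string serialize(fun_t f) {&lt;br /&gt;
    for (const auto &amp;amp;kv : registry)&lt;br /&gt;
      if (kv.second == f)&lt;br /&gt;
        return kv.first;&lt;br /&gt;
    return &amp;quot;&amp;quot;; // unknown function&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  fun_t deserialize(const std::string &amp;amp;name) { return registry.at(name); }&lt;br /&gt;
&lt;br /&gt;
The patch above enables a more direct route in Cereal itself; the reasoning about why function pointers are safe to serialize is the same.&lt;br /&gt;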
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some of these settings are necessary to prevent problems -- usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space than that, it silently overwrites unrelated memory. You can enable guard pages, which is good for debugging: this catches many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - Each thread runs until its result is needed. The interior of the&lt;br /&gt;
  //   domain can be calculated overlapping with the synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
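&lt;br /&gt;
As a quick sanity check on the tiling above (a standalone sketch; the helper name &amp;quot;check_blocks&amp;quot; is made up for the example), the [imin, imax) ranges cover each interior point exactly once, including a possibly shorter last block:&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Same block decomposition as in vdiff_funhpc, without the threads&lt;br /&gt;
  void check_blocks(int n, int blocksize) {&lt;br /&gt;
    int covered = 0;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      const int imin = i0;&lt;br /&gt;
      const int imax = std::min(i0 + blocksize, n - 1); // last block may be short&lt;br /&gt;
      covered += imax - imin;&lt;br /&gt;
    }&lt;br /&gt;
    assert(covered == n - 2); // every interior point, exactly once&lt;br /&gt;
  }&lt;br /&gt;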
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #2: Wed, Dec 14, 2016, 12:00 EST==&lt;br /&gt;
&lt;br /&gt;
(By popular request, later in the day.)&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC in Cactus&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC vs. OpenMP (recap from last week)&lt;br /&gt;
* Overview of current proof-of-concept code&lt;br /&gt;
* Look at current benchmark results&lt;br /&gt;
* How to download, build, run, benchmark&lt;br /&gt;
* Next steps&lt;br /&gt;
&lt;br /&gt;
===Source code===&lt;br /&gt;
* FunHPC and its dependencies, as described last week (see above)&lt;br /&gt;
* FunHPC arrangement, basic thorns and examples (get it at https://bitbucket.org/eschnett/funhpc.cactus.git)&lt;br /&gt;
* Cactus flesh, branch eschnett/funhpc (mostly for startup)&lt;br /&gt;
* Carpet, branch eschnett/funhpc (mostly to disable OpenMP)&lt;br /&gt;
* Kranc, branch eschnett/funhpc (to generate FunHPC-parallelized code)&lt;br /&gt;
* McLachlan, branch eschnett/funhpc (contains new FunHPC-parallelized thorns with &amp;quot;_FH&amp;quot; suffix)&lt;br /&gt;
&lt;br /&gt;
I recommend using Wheeler to build and run, as I&amp;#039;ve tested this.&lt;br /&gt;
&lt;br /&gt;
===To Do===&lt;br /&gt;
* Put example parameter file online&lt;br /&gt;
* Check whether a vanilla checkout builds&lt;br /&gt;
* Update benchmark results&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4339</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4339"/>
		<updated>2016-12-14T09:19:05Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #2: Wed, Dec 14, 2016, 12:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as the meeting time. We&amp;#039;ll meet on Google Hangouts (probably); details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but is downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some of these settings are necessary to prevent problems -- usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space than that, it silently overwrites unrelated memory. You can enable guard pages, which is good for debugging: this catches many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - Each thread runs until its result is needed. The interior of the&lt;br /&gt;
  //   domain can be calculated overlapping with the synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #2: Wed, Dec 14, 2016, 12:00 EST==&lt;br /&gt;
&lt;br /&gt;
(By popular request, later in the day.)&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC in Cactus&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC vs. OpenMP (recap from last week)&lt;br /&gt;
* Overview of current proof-of-concept code&lt;br /&gt;
* Look at current benchmark results&lt;br /&gt;
* How to download, build, run, benchmark&lt;br /&gt;
* Next steps&lt;br /&gt;
&lt;br /&gt;
===Source code===&lt;br /&gt;
* FunHPC and its dependencies, as described last week (see above)&lt;br /&gt;
* FunHPC arrangements, basic thorns and examples (get it at https://bitbucket.org/eschnett/funhpc.cactus.git)&lt;br /&gt;
* Cactus flesh, branch eschnett/funhpc (mostly for startup)&lt;br /&gt;
* Carpet, branch eschnett/funhpc (mostly to disable OpenMP)&lt;br /&gt;
* Kranc, branch eschnett/funhpc (to generate FunHPC-parallelized code)&lt;br /&gt;
* McLachlan, branch eschnett/funhpc (contains new FunHPC-parallelized thorns with &amp;quot;_FH&amp;quot; suffix)&lt;br /&gt;
&lt;br /&gt;
I recommend using Wheeler to build and run, as I&amp;#039;ve tested this.&lt;br /&gt;
&lt;br /&gt;
===To Do===&lt;br /&gt;
* Put example parameter file online&lt;br /&gt;
* Check whether a vanilla checkout builds&lt;br /&gt;
* Update benchmark results&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4336</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4336"/>
		<updated>2016-12-09T15:42:09Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as the meeting time. We&amp;#039;ll meet on Google Hangouts (probably); details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely:&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
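As a debugging sketch (the values are illustrative, and the QTHREAD_* variables are exactly the ones shown above, not new ones), one might combine a generous stack with guard pages while hunting a stack overflow:

```shell
# Debugging sketch: enable guard pages and a larger stack so a stack
# overflow faults immediately instead of silently corrupting memory.
# (Illustrative values; these QTHREAD_* variables are read by Qthreads
# at startup, as described above.)
export QTHREAD_STACK_SIZE=8388608   # 8 MByte per thread
export QTHREAD_GUARD_PAGES=1        # turn guard pages on while debugging
export QTHREAD_INFO=1               # print the configuration at startup
```

Once the overflow is found and fixed, guard pages can be turned off again (QTHREAD_GUARD_PAGES=0) to avoid their overhead.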
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - A thread&amp;#039;s result is waited on only when it is needed, so the&lt;br /&gt;
  //   interior of the domain can be calculated overlapping with the&lt;br /&gt;
  //   synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4335</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4335"/>
		<updated>2016-12-09T15:41:54Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Running FunHPC Applications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as the meeting time. We&amp;#039;ll meet on Google Hangouts (probably); details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely:&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
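As a debugging sketch (the values are illustrative, and the QTHREAD_* variables are exactly the ones shown above, not new ones), one might combine a generous stack with guard pages while hunting a stack overflow:

```shell
# Debugging sketch: enable guard pages and a larger stack so a stack
# overflow faults immediately instead of silently corrupting memory.
# (Illustrative values; these QTHREAD_* variables are read by Qthreads
# at startup, as described above.)
export QTHREAD_STACK_SIZE=8388608   # 8 MByte per thread
export QTHREAD_GUARD_PAGES=1        # turn guard pages on while debugging
export QTHREAD_INFO=1               # print the configuration at startup
```

Once the overflow is found and fixed, guard pages can be turned off again (QTHREAD_GUARD_PAGES=0) to avoid their overhead.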
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===Loop Example===&lt;br /&gt;
&lt;br /&gt;
Let us look at a simple loop. We are going to parallelize it once with OpenMP, and once with FunHPC.&lt;br /&gt;
&lt;br /&gt;
  #include &amp;lt;funhpc/async.hpp&amp;gt;&lt;br /&gt;
  #include &amp;lt;funhpc/main.hpp&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  #include &amp;lt;algorithm&amp;gt;&lt;br /&gt;
  #include &amp;lt;cassert&amp;gt;&lt;br /&gt;
  #include &amp;lt;vector&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
  // Synchronize the ghost zones (the outermost points in each direction)&lt;br /&gt;
  void sync(double *y, int n) {&lt;br /&gt;
    // we just assume periodic boundaries&lt;br /&gt;
    assert(n &amp;gt;= 2);&lt;br /&gt;
    y[0] = y[n - 2];&lt;br /&gt;
    y[n - 1] = y[1];&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // A basic loop, calculating a first derivative&lt;br /&gt;
  void vdiff(double *y, const double *x, int n) {&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, parallelized via OpenMP: The number of iterations is&lt;br /&gt;
  // split over the available number of cores (e.g. 12 on a NUMA node of&lt;br /&gt;
  // Wheeler). The disadvantages of this are:&lt;br /&gt;
  // - There is an implicit barrier at the end of the loop, so that the&lt;br /&gt;
  //   sync afterwards cannot overlap with the loop&lt;br /&gt;
  // - A single thread might handle too much work, overflowing the cache&lt;br /&gt;
  // - A single thread might not have enough work, so that the thread&lt;br /&gt;
  //   management overhead is too large&lt;br /&gt;
  void vdiff_openmp(double *y, const double *x, int n) {&lt;br /&gt;
  #pragma omp parallel for&lt;br /&gt;
    for (int i = 1; i &amp;lt; n - 1; ++i) {&lt;br /&gt;
      y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
    }&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  // The same loop, this time parallelized via FunHPC. Each thread&lt;br /&gt;
  // handles a well-defined amount of work, to be chosen based on the&lt;br /&gt;
  // complexity of the loop kernel.&lt;br /&gt;
  // - A thread&amp;#039;s result is waited on only when it is needed, so the&lt;br /&gt;
  //   interior of the domain can be calculated overlapping with the&lt;br /&gt;
  //   synchronization.&lt;br /&gt;
  // - If the number of iterations is small, only a single thread is&lt;br /&gt;
  //   used. Other cores are free to do other work, e.g. analysis or&lt;br /&gt;
  //   I/O.&lt;br /&gt;
  // - If the number of iterations is large, then many threads will be&lt;br /&gt;
  //   created. The threads will be executed in some arbitrary order.&lt;br /&gt;
  //   The cost of creating a thread is small (but not negligible) --&lt;br /&gt;
  //   there is no problem if thousands of threads are created.&lt;br /&gt;
  void vdiff_funhpc(double *y, const double *x, int n) {&lt;br /&gt;
    // number of points per thread (depending on architecture and cache size)&lt;br /&gt;
    // (the number here is much too small; this is just for testing)&lt;br /&gt;
    const int blocksize = 8;&lt;br /&gt;
  &lt;br /&gt;
    // loop over blocks, starting one thread for each&lt;br /&gt;
    std::vector&amp;lt;qthread::future&amp;lt;void&amp;gt;&amp;gt; fs;&lt;br /&gt;
    for (int i0 = 1; i0 &amp;lt; n - 1; i0 += blocksize) {&lt;br /&gt;
      fs.push_back(qthread::async(qthread::launch::async, [=]() {&lt;br /&gt;
  &lt;br /&gt;
        // loop over the work of a single thread&lt;br /&gt;
        const int imin = i0;&lt;br /&gt;
        const int imax = std::min(i0 + blocksize, n - 1);&lt;br /&gt;
        for (int i = imin; i &amp;lt; imax; ++i) {&lt;br /&gt;
          y[i] = (x[i + 1] - x[i - 1]) / 2;&lt;br /&gt;
        }&lt;br /&gt;
      }));&lt;br /&gt;
    }&lt;br /&gt;
  &lt;br /&gt;
    // synchronize as soon as the boundary results are available&lt;br /&gt;
    assert(!fs.empty());&lt;br /&gt;
    fs[0].wait();&lt;br /&gt;
    fs[fs.size() - 1].wait();&lt;br /&gt;
    sync(y, n);&lt;br /&gt;
  &lt;br /&gt;
    // wait for all threads to finish&lt;br /&gt;
    for (const auto &amp;amp;f : fs)&lt;br /&gt;
      f.wait();&lt;br /&gt;
  }&lt;br /&gt;
  &lt;br /&gt;
  int funhpc_main(int argc, char **argv) {&lt;br /&gt;
    const int n = 1000000;&lt;br /&gt;
    std::vector&amp;lt;double&amp;gt; x(n), y(n);&lt;br /&gt;
    vdiff(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_openmp(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    vdiff_funhpc(&amp;amp;y[0], &amp;amp;x[0], n);&lt;br /&gt;
    return 0;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4334</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4334"/>
		<updated>2016-12-09T15:03:07Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain make variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and the other libraries use environment variables to change certain settings. Some settings are necessary to prevent problems -- usually resource exhaustion (e.g. running out of stack space), which Unix helpfully translates into a generic &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
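&lt;br /&gt;
For debugging a suspected stack overflow, the settings above can be adjusted; a hedged example (the values are illustrative):&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # 8 MByte instead of the 8 kByte default&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=1      # trap stack overflows at a guard page instead of silently corrupting memory&lt;br /&gt;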
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here.&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4333</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4333"/>
		<updated>2016-12-09T15:02:37Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* Cereal: Serializing C++ objects http://uscilab.github.io/cereal&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout http://www.open-mpi.org/projects/hwloc&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement) http://www.canonware.com/jemalloc&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library http://www.open-mpi.org&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface) http://www.cs.sandia.gov/qthreads&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
The Cereal package requires a patch. This patch makes it distinguish between regular pointers and function pointers. Regular pointers cannot be serialized since it is unclear whether they are valid, and if so, how the target should be allocated or freed. Function pointers, however, can be serialized -- we assume they point to functions, which are constants, so that no memory management issues arise. You need to apply the following patch:&lt;br /&gt;
&lt;br /&gt;
  --- old/include/cereal/types/common.hpp&lt;br /&gt;
  +++ new/include/cereal/types/common.hpp&lt;br /&gt;
  @@ -106,14 +106,16 @@&lt;br /&gt;
       t = reinterpret_cast&amp;lt;typename common_detail::is_enum&amp;lt;T&amp;gt;::type const &amp;amp;&amp;gt;( value );&lt;br /&gt;
     }&lt;br /&gt;
  &lt;br /&gt;
  +#ifndef CEREAL_ENABLE_RAW_POINTER_SERIALIZATION&lt;br /&gt;
     //! Serialization for raw pointers&lt;br /&gt;
     /*! This exists only to throw a static_assert to let users know we don&amp;#039;t support raw pointers. */&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
     void CEREAL_SERIALIZE_FUNCTION_NAME( Archive &amp;amp;, T * &amp;amp; )&lt;br /&gt;
     {&lt;br /&gt;
       static_assert(cereal::traits::detail::delay_static_assert&amp;lt;T&amp;gt;::value,&lt;br /&gt;
         &amp;quot;Cereal does not support serializing raw pointers - please use a smart pointer&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
  +#endif&lt;br /&gt;
  &lt;br /&gt;
     //! Serialization for C style arrays&lt;br /&gt;
     template &amp;lt;class Archive, class T&amp;gt; inline&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain make variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and the other libraries use environment variables to change certain settings. Some settings are necessary to prevent problems -- usually resource exhaustion (e.g. running out of stack space), which Unix helpfully translates into a generic &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here.&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;br /&gt;
* Add pointers to package web sites to build instructions&lt;br /&gt;
&lt;br /&gt;
====Done:====&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Describe Cereal patch&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4329</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4329"/>
		<updated>2016-12-07T09:57:09Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain make variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and the other libraries use environment variables to change certain settings. Some settings are necessary to prevent problems -- usually resource exhaustion (e.g. running out of stack space), which Unix helpfully translates into a generic &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here.&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Provide make wrapper for Wheeler&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4328</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4328"/>
		<updated>2016-12-07T09:56:52Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* To-Do */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you run &amp;quot;make&amp;quot;, you need to pass certain make variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
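The choice above can be written down as a small shell sketch. The socket and core counts are the Wheeler values from the hwloc-info output above; they are hard-coded here for illustration and would need to be adapted on other machines:&lt;br /&gt;

```shell
# Derive the Qthreads topology settings from the hardware layout.
# Wheeler (see hwloc-info above): 2 sockets, 24 cores in total.
nsockets=2
ncores=24

# One shepherd per socket, one worker per core of that socket.
nshep=$nsockets
nwork=$((ncores / nsockets))

export QTHREAD_NUM_SHEPHERDS=$nshep
export QTHREAD_NUM_WORKERS_PER_SHEPHERD=$nwork

echo $nshep $nwork
```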
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* Make sure all FunHPC examples run on Wheeler&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4327</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4327"/>
		<updated>2016-12-07T09:56:24Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Running FunHPC Applications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;br /&gt;
&lt;br /&gt;
===To-Do===&lt;br /&gt;
&lt;br /&gt;
This is a wiki -- everybody should add missing items here&lt;br /&gt;
&lt;br /&gt;
* Put loop parallelization example onto wiki (and make it compile)&lt;br /&gt;
* Correct broken FunHPC grid self-test&lt;br /&gt;
* Maybe: Make FunHPC compile with Clang on Darwin&lt;br /&gt;
* Announce next meeting (Wed Dec. 14, 12:00 EST)&lt;br /&gt;
* Maybe: Set up FunHPC on Bethe or Fermi (if Frank can&amp;#039;t get access to Wheeler)&lt;br /&gt;
* Add pointers to http://cppreference.com to wiki (for async, future)&lt;br /&gt;
* Describe future, shared_future; async&amp;#039;s launch:: options&lt;br /&gt;
* If possible: look at weird performance numbers (350 ms vs. 3500 ms on Wheeler&amp;#039;s head node); run on compute node instead?&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4326</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4326"/>
		<updated>2016-12-07T09:19:45Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Running FunHPC Applications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR (must be set in the Makefile)&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;br /&gt;
&lt;br /&gt;
On Wheeler:&lt;br /&gt;
  ~eschnett/src/spack-view/bin/mpirun -np 1 -x QTHREAD_NUM_SHEPHERDS=2 -x QTHREAD_NUM_WORKERS_PER_SHEPHERD=12 -x QTHREAD_STACK_SIZE=1000000 ~eschnett/src/spack-view/bin/fibonacci&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4324</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4324"/>
		<updated>2016-12-07T09:02:27Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Building and Installing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpicxx MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4323</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4323"/>
		<updated>2016-12-07T09:01:33Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Building and Installing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on BitBucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
  make CEREAL_DIR=... HWLOC_DIR=... JEMALLOC_DIR=... QTHREADS_DIR=... CXX=c++ MPICXX=mpic++ MPIRUN=mpirun&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads etc. use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), all of which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # Byte &lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
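This rule of thumb can be written down as a short shell sketch; the NUMANode and Core counts below are hard-coded from the hwloc-info output above, so substitute the values your own machine reports:&lt;br /&gt;

```shell
# Qthreads layout for Wheeler, derived from the hwloc-info output above:
# 2 NUMA nodes (shepherds) and 24 cores in total.
nshep=2                    # NUMANode count reported by hwloc-info
ncore=24                   # total Core count reported by hwloc-info
nwork=$((ncore / nshep))   # cores per socket -> workers per shepherd

export QTHREAD_NUM_SHEPHERDS="$nshep"
export QTHREAD_NUM_WORKERS_PER_SHEPHERD="$nwork"
echo "nshep=$nshep nwork=$nwork"   # on Wheeler: nshep=2 nwork=12
```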
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten; the QTHREAD_STACK_SIZE setting above therefore raises the stack size to 8 MByte. You can enable guard pages (QTHREAD_GUARD_PAGES=1), which is good for debugging: this will catch many cases where the stack overflows. Finally, QTHREAD_INFO=1 makes Qthreads produce info output at startup that might be helpful.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4322</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4322"/>
		<updated>2016-12-07T08:05:16Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Building and Installing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on Bitbucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
When you &amp;quot;make&amp;quot;, you need to pass certain environment variables:&lt;br /&gt;
* CEREAL_DIR&lt;br /&gt;
* HWLOC_DIR&lt;br /&gt;
* JEMALLOC_DIR&lt;br /&gt;
* QTHREADS_DIR&lt;br /&gt;
* CXX&lt;br /&gt;
* MPICXX&lt;br /&gt;
* MPIRUN&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and related libraries use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes (= 8 MiB)&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten; the QTHREAD_STACK_SIZE setting above therefore raises the stack size to 8 MByte. You can enable guard pages (QTHREAD_GUARD_PAGES=1), which is good for debugging: this will catch many cases where the stack overflows. Finally, QTHREAD_INFO=1 makes Qthreads produce info output at startup that might be helpful.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4321</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4321"/>
		<updated>2016-12-07T07:57:11Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Venue: Google Hangouts https://hangouts.google.com/call/jjkffrrvmnbhrooiyjxhfeb2ume&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on Bitbucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and related libraries use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes (= 8 MiB)&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten; the QTHREAD_STACK_SIZE setting above therefore raises the stack size to 8 MByte. You can enable guard pages (QTHREAD_GUARD_PAGES=1), which is good for debugging: this will catch many cases where the stack overflows. Finally, QTHREAD_INFO=1 makes Qthreads produce info output at startup that might be helpful.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4320</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4320"/>
		<updated>2016-12-06T12:04:12Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on Bitbucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and related libraries use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes (= 8 MiB)&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;br /&gt;
&lt;br /&gt;
By default, Qthreads chooses a rather small stack size of 8 kByte per thread. If a thread uses more stack space, random memory will be overwritten. You can enable guard pages, which is good for debugging. This will catch many cases where the stack overflows. Finally, Qthreads can produce info output at startup that might be helpful.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4319</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4319"/>
		<updated>2016-12-06T12:02:08Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on Bitbucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and related libraries use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;. I usually set these environment variables:&lt;br /&gt;
&lt;br /&gt;
  export QTHREAD_NUM_SHEPHERDS=&amp;quot;${nshep}&amp;quot;&lt;br /&gt;
  export QTHREAD_NUM_WORKERS_PER_SHEPHERD=&amp;quot;${nwork}&amp;quot;&lt;br /&gt;
  export QTHREAD_STACK_SIZE=8388608 # bytes (= 8 MiB)&lt;br /&gt;
  export QTHREAD_GUARD_PAGES=0      # 0, 1&lt;br /&gt;
  export QTHREAD_INFO=1&lt;br /&gt;
  export FUNHPC_MAIN_EVERYWHERE=1&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;nshep&amp;quot; is the number of sockets (aka NUMA nodes), and &amp;quot;nwork&amp;quot; the number of cores per socket. You can find these e.g. via &amp;quot;hwloc-info&amp;quot;. On Wheeler:&lt;br /&gt;
&lt;br /&gt;
  $ ~/src/spack-view/bin/hwloc-info&lt;br /&gt;
  depth 0:        1 Machine (type #1)&lt;br /&gt;
   depth 1:       2 NUMANode (type #2)&lt;br /&gt;
    depth 2:      2 Package (type #3)&lt;br /&gt;
     depth 3:     2 L3Cache (type #4)&lt;br /&gt;
      depth 4:    24 L2Cache (type #4)&lt;br /&gt;
       depth 5:   24 L1dCache (type #4)&lt;br /&gt;
        depth 6:  24 L1iCache (type #4)&lt;br /&gt;
         depth 7: 24 Core (type #5)&lt;br /&gt;
          depth 8:        24 PU (type #6)&lt;br /&gt;
&lt;br /&gt;
Thus I choose &amp;quot;nshep=2&amp;quot; and &amp;quot;nwork=12&amp;quot; on Wheeler.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4318</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4318"/>
		<updated>2016-12-06T11:22:29Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;br /&gt;
&lt;br /&gt;
===Building and Installing===&lt;br /&gt;
&lt;br /&gt;
FunHPC is available on Bitbucket https://bitbucket.org/eschnett/funhpc.cxx . It requires several other packages to be installed as well, namely&lt;br /&gt;
* cereal: Serializing C++ objects&lt;br /&gt;
* hwloc: Determining the hardware (core, cache) layout&lt;br /&gt;
* jemalloc: Fast multi-threaded memory manager (malloc replacement)&lt;br /&gt;
* OpenMPI: FunHPC prefers this MPI library&lt;br /&gt;
* Qthreads: Fine-grained multi-threading (providing a C interface)&lt;br /&gt;
To install FunHPC from scratch, you need to install these other libraries first, and then edit FunHPC&amp;#039;s Makefile. Google Test is also required, but will be downloaded automatically. Apologies for this unprofessional setup. In the future, FunHPC should be converted to use CMake, and Google Test should be packaged as part of it.&lt;br /&gt;
&lt;br /&gt;
I have installed FunHPC and all its dependencies on Wheeler (Caltech) into the directory /home/eschnett/src/spack-view . This includes a recent version of GCC that was used to build these libraries. If you want to use this, then I highly recommend using this version of GCC as well as all the other software installed in this directory (e.g. HDF5, PAPI, and many more) instead of combining these with system libraries.&lt;br /&gt;
&lt;br /&gt;
As a side note, Roland Haas says that the Simfactory configuration for Wheeler is using this directory. This is not really relevant yet since we won&amp;#039;t be using Cactus in the beginning.&lt;br /&gt;
&lt;br /&gt;
===Running FunHPC Applications===&lt;br /&gt;
&lt;br /&gt;
FunHPC is an MPI application, but we are not interested in using MPI today. We might still need to use mpirun, but only in a trivial way.&lt;br /&gt;
&lt;br /&gt;
Qthreads and related libraries use environment variables to change certain settings. Some settings are necessary to prevent problems. These &amp;quot;problems&amp;quot; are usually resource exhaustion (e.g. not enough stack space), which Unix helpfully translates into &amp;quot;Segmentation fault&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4317</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4317"/>
		<updated>2016-12-06T11:06:44Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;br /&gt;
&lt;br /&gt;
Agenda:&lt;br /&gt;
* FunHPC design overview&lt;br /&gt;
* Comparison to OpenMP&lt;br /&gt;
* CPU vs. memory performance&lt;br /&gt;
* Cache and multi-threading, loop tiling&lt;br /&gt;
* How to parallelize an application via FunHPC&lt;br /&gt;
* Building and installing&lt;br /&gt;
* Examples&lt;br /&gt;
* Benchmarks&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4316</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4316"/>
		<updated>2016-12-05T09:51:01Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Mini-Workshop #1: Wed, Dec 7, 2016 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016, 9:00 EST==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4315</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4315"/>
		<updated>2016-12-05T09:50:37Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik, Christian, Ian]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik, Christian, Ian]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc [Ian]&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075 [Federico]&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation [Ian, Federico]&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016==&lt;br /&gt;
&lt;br /&gt;
Topic: FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4311</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4311"/>
		<updated>2016-11-29T16:38:47Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time. We&amp;#039;ll meet on Google Hangout (probably), details TBA here.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016==&lt;br /&gt;
&lt;br /&gt;
Topic TBD. Vote in the list above!&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4310</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4310"/>
		<updated>2016-11-29T16:38:12Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
We picked Wednesday 9:00 EST as meeting time.&lt;br /&gt;
&lt;br /&gt;
# Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
# SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
# FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik]&lt;br /&gt;
# FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik]&lt;br /&gt;
# StencilOps: more efficient finite differencing stencils in Kranc&lt;br /&gt;
# DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075&lt;br /&gt;
# The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
# Towards a Kranc implementation of a hydro formulation&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;br /&gt;
&lt;br /&gt;
==Mini-Workshop #1: Wed, Dec 7, 2016==&lt;br /&gt;
&lt;br /&gt;
Topic TBD. Vote in the list above!&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4309</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4309"/>
		<updated>2016-11-29T16:34:42Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
* Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
* SimulationIO: a new file format that&amp;#039;s easy to read https://github.com/eschnett/SimulationIO&lt;br /&gt;
* FunHPC (multi-threading with futures): overview https://bitbucket.org/eschnett/funhpc.cxx [Erik]&lt;br /&gt;
* FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik]&lt;br /&gt;
* StencilOps: more efficient finite differencing stencils in Kranc&lt;br /&gt;
* DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods https://arxiv.org/abs/1604.00075&lt;br /&gt;
* The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
* Towards a Kranc implementation of a hydro formulation&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4308</id>
		<title>Remote Mini-Workshop Series</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Remote_Mini-Workshop_Series&amp;diff=4308"/>
		<updated>2016-11-29T16:33:12Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Quite a few interesting mini-projects are being undertaken at the moment. It is worthwhile to advertise these to the larger community to invite participation. In our weekly calls we decided that we should set aside a few hours or half a day for one of these. I now suggest that we turn this into a mini-series, where we pick from the list below until we run out of interest. Maybe this will keep us busy until Christmas.&lt;br /&gt;
&lt;br /&gt;
* Spack: installing external packages https://github.com/LLNL/spack [Erik]&lt;br /&gt;
* SimulationIO: a new file format that&amp;#039;s easy to read&lt;br /&gt;
* FunHPC (multi-threading with futures): overview [Erik]&lt;br /&gt;
* FunHPC (multi-threading with futures): shoehorning this into Cactus [Erik]&lt;br /&gt;
* StencilOps: more efficient finite differencing stencils in Kranc&lt;br /&gt;
* DG: Jonah&amp;#039;s and my new DG formulation that can replace FD methods&lt;br /&gt;
* The &amp;quot;distribute&amp;quot; script: testing the Einstein Toolkit on HPC systems&lt;br /&gt;
* Towards a Kranc implementation of a hydro formulation&lt;br /&gt;
&lt;br /&gt;
If you are interested in one of these topics, then add your name in square brackets after the topic.&lt;br /&gt;
&lt;br /&gt;
If you are interested in presenting a topic yourself, then add a new item to the list.&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4122</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4122"/>
		<updated>2016-05-30T11:44:46Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* edison&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* orca: running out of memory building McLachlan (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact Frank&lt;br /&gt;
* smic: no access (2016-05-30)&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4121</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4121"/>
		<updated>2016-05-30T11:43:40Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* edison&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* orca: running out of memory building McLachlan (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact Frank&lt;br /&gt;
* smic: no access (2016-05-30)&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4120</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4120"/>
		<updated>2016-05-30T10:32:00Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* edison&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* orca&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact Frank&lt;br /&gt;
* smic: no access (2016-05-30)&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4119</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4119"/>
		<updated>2016-05-30T10:28:22Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* orca&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact Frank&lt;br /&gt;
* smic: no access (2016-05-30)&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4118</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4118"/>
		<updated>2016-05-30T10:07:14Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact Frank&lt;br /&gt;
* smic: no access (2016-05-30)&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4117</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4117"/>
		<updated>2016-05-30T10:06:35Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate about this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket and move discussion there, and add the release milestone to the ticket.  This page is just for coordination of &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines which you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* qb&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4116</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4116"/>
		<updated>2016-05-30T10:06:13Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* Testing on various machines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket, move the discussion there, and add the release milestone to the ticket. This page is just for coordinating &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines that you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* qb&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* philip: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4115</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4115"/>
		<updated>2016-05-30T09:45:09Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* BAD */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket, move the discussion there, and add the release milestone to the ticket. This page is just for coordinating &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines that you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* philip&lt;br /&gt;
* qb&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: multi-process runs are failing (2016-05-30)&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4114</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4114"/>
		<updated>2016-05-30T09:44:06Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* BAD: (something went critically wrong) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket, move the discussion there, and add the release milestone to the ticket. This page is just for coordinating &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines that you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* philip&lt;br /&gt;
* qb&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD==&lt;br /&gt;
&lt;br /&gt;
Something went critically wrong. I might have some comments with a preliminary analysis.&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: allocation problems (2016-05-27); xsede portal has allocation; wait and retry&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4113</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4113"/>
		<updated>2016-05-30T09:43:40Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* NEEDS-RUNNING: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket, move the discussion there, and add the release milestone to the ticket. This page is just for coordinating &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines that you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
Machines I plan to test, but I haven&amp;#039;t done anything about them yet.&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* philip&lt;br /&gt;
* qb&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD: (something went critically wrong)==&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: allocation problems (2016-05-27); xsede portal has allocation; wait and retry&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
	<entry>
		<id>https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4112</id>
		<title>Release coordination</title>
		<link rel="alternate" type="text/html" href="https://docs.einsteintoolkit.org/et-docs/index.php?title=Release_coordination&amp;diff=4112"/>
		<updated>2016-05-30T09:43:15Z</updated>

		<summary type="html">&lt;p&gt;Eschnett: /* NEEDS-ANALYSIS (results are there, but did not look at them yet) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====Release Coordination====&lt;br /&gt;
&lt;br /&gt;
There are several places where we coordinate this release:&lt;br /&gt;
&lt;br /&gt;
* Release_Process&lt;br /&gt;
* Release_Details&lt;br /&gt;
* Release_coordination (this page)&lt;br /&gt;
* TRAC ticket&lt;br /&gt;
&lt;br /&gt;
Make sure you check all places for information.&lt;br /&gt;
&lt;br /&gt;
Once a specific issue has been identified, please create a ticket, move the discussion there, and add the release milestone to the ticket. This page is just for coordinating &amp;quot;the test failures&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=Put Stuff Here=&lt;br /&gt;
&lt;br /&gt;
Please list here, with your name, the machines that you care about.&lt;br /&gt;
&lt;br /&gt;
===Activity===&lt;br /&gt;
&lt;br /&gt;
Please describe what you are working on to avoid duplication of effort.&lt;br /&gt;
&lt;br /&gt;
=Testing on various machines=&lt;br /&gt;
&lt;br /&gt;
Erik Schnetter is running the tests on various machines. The current status is:&lt;br /&gt;
&lt;br /&gt;
==NEEDS-RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* stampede-mic&lt;br /&gt;
* titan&lt;br /&gt;
&lt;br /&gt;
==RUNNING:==&lt;br /&gt;
&lt;br /&gt;
* mp2&lt;br /&gt;
* sunnyvale&lt;br /&gt;
&lt;br /&gt;
==NEEDS-ANALYSIS==&lt;br /&gt;
&lt;br /&gt;
This means that the runs completed, but I did not look at the results yet.&lt;br /&gt;
&lt;br /&gt;
* bluewaters&lt;br /&gt;
* orca&lt;br /&gt;
* philip&lt;br /&gt;
* qb&lt;br /&gt;
* smic&lt;br /&gt;
&lt;br /&gt;
==GOOD==&lt;br /&gt;
&lt;br /&gt;
This means that test results are available on https://build.barrywardell.net/view/EinsteinToolkitMulti/job/EinsteinToolkitReport-sandbox .&lt;br /&gt;
&lt;br /&gt;
* bethe&lt;br /&gt;
* comet&lt;br /&gt;
* cori&lt;br /&gt;
* gordon&lt;br /&gt;
* gpc&lt;br /&gt;
* nvidia&lt;br /&gt;
* redshift&lt;br /&gt;
* zwicky&lt;br /&gt;
&lt;br /&gt;
==BAD: (something went critically wrong)==&lt;br /&gt;
&lt;br /&gt;
* datura: not accessible (2016-05-30)&lt;br /&gt;
* edison: password expired (2016-05-30)&lt;br /&gt;
* guillimin: build failure (2016-05-30)&lt;br /&gt;
* mike: no access (2016-05-30)&lt;br /&gt;
* shelob: cannot log in (2016-05-27); probably no LSU allocation; need to contact frank&lt;br /&gt;
* stampede: allocation problems (2016-05-27); xsede portal has allocation; wait and retry&lt;/div&gt;</summary>
		<author><name>Eschnett</name></author>
		
	</entry>
</feed>