Finally, it wasn't difficult to adapt the C OpenMP code
https://github.com/jodavies/nbody/blob/openmp/src/nbody.c#L234 and I get a
Pythran version with very good scaling!
Julia is a bit faster in sequential, but the scaling is better for Pythran,
which makes it slightly faster with 12 threads (figures attached). Anyway, such
differences are not really meaningful.
But it's really good to have an efficient parallel implementation in Python.
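For readers who have not seen how Pythran handles OpenMP: a minimal sketch of the pattern such an adaptation follows. The kernel below is illustrative, not the actual nbabel code; Pythran reads `# omp` comments and turns them into OpenMP pragmas, while plain CPython simply ignores them, so the same file also runs uncompiled.

```python
import numpy as np


# pythran export compute_accelerations(float64[:, :], float64[:])
def compute_accelerations(positions, masses):
    """Gravitational accelerations for an N-body system (illustrative kernel).

    The `# omp parallel for` comment below is read by Pythran and emitted
    as an OpenMP pragma; it is a no-op for the pure-Python interpreter.
    """
    n = positions.shape[0]
    accelerations = np.zeros_like(positions)
    # omp parallel for
    for i in range(n):
        for j in range(n):
            if i != j:
                delta = positions[j] - positions[i]
                distance_cubed = np.sum(delta**2) ** 1.5
                accelerations[i] += masses[j] * delta / distance_cubed
    return accelerations
```

Compiled with `pythran -fopenmp`, the outer loop is distributed over threads; each iteration writes only to `accelerations[i]`, so no reduction clause is needed here.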
----- Original Message -----
From: "Jochen S" <cycomanic@xxxxxxxxx>
To: "pythran" <pythran@xxxxxxxxxxxxx>
Sent: Tuesday, 26 January 2021 22:26:26
Subject: [pythran] Re: Preliminary results benchmark NBabel CO2 production +
OpenMP implementation?
Was about to say the same thing. Even leaving aside power consumption at
rest, embodied energy is quite significant, so you always want to try to
utilise your machine to the highest degree possible.
On Tue, Jan 26, 2021 at 8:51 PM PIERRE AUGIER <
pierre.augier@xxxxxxxxxxxxxxxxxxxxxx> wrote:
Even with quite bad scaling of the elapsed time, parallel computing tends
to decrease CO2 production as measured by these types of studies, because
the node is considered fully occupied by the process: the power of the
whole node is counted for this process even if only one core is used and
the computer could theoretically run other tasks. Moreover, the power
at rest is not at all negligible compared to the power when one core is
running. For example, with the node I used for this experiment, the power
at rest is ~100 W, the power for sequential computing is ~130 W, and with 12
threads only ~220 W.
Moreover, there is a CO2/hour cost associated with building the computer
itself. Decreasing the elapsed time is therefore very important in terms
of CO2 production.
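To make the arithmetic concrete, a small sketch using the node powers quoted above. The elapsed times and the speedup of 6 are assumptions for illustration, not measured values from the benchmark.

```python
# Illustrative energy comparison with the quoted node powers:
# rest ~100 W, sequential ~130 W, 12 threads ~220 W.
# The elapsed times and the 6x speedup below are assumed, not measured.


def energy_joules(power_watts, elapsed_seconds):
    """Energy = power x time, billing the whole node's power to the job
    (the node is considered fully occupied by the process)."""
    return power_watts * elapsed_seconds


t_seq = 1200.0      # assumed sequential elapsed time (s)
t_par = t_seq / 6   # assumed speedup of 6 with 12 threads

e_seq = energy_joules(130.0, t_seq)  # 130 W * 1200 s = 156000 J
e_par = energy_joules(220.0, t_par)  # 220 W *  200 s =  44000 J

# Even with imperfect scaling (speedup 6 on 12 cores), the parallel run
# consumes much less energy, because the node's rest power is paid for
# the whole duration either way.
print(e_seq, e_par, e_seq / e_par)
```

Under these assumptions the parallel run uses roughly 3.5x less energy, which is the point made above: with a large power at rest, finishing sooner dominates.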
----- Original Message -----
From: "Neal Becker" <ndbecker2@xxxxxxxxx>
To: "pythran" <pythran@xxxxxxxxxxxxx>
Sent: Tuesday, 26 January 2021 18:02:13
Subject: [pythran] Re: Preliminary results benchmark NBabel CO2 production +
OpenMP implementation?
I don't see how it's possible that parallel processing could reduce
total energy consumption. Now, if you specify a deadline to produce
the results, it might be possible that a single computational element's
power consumption scales nonlinearly with clock speed, and in
that case running multiple cores at a lower clock might produce the
result in the allotted time while consuming less energy, but I don't
think we had a time limit.
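The nonlinear power/clock argument can be sketched with the textbook approximation that dynamic CPU power grows roughly like f**3 (P ~ C * V**2 * f, with voltage scaling with frequency). The constant k, the frequencies, and the perfect-scaling assumption are illustrative only.

```python
# Textbook approximation: dynamic CPU power ~ k * f**3.
# k, the frequencies and the perfect-scaling assumption are illustrative.


def dynamic_power(freq_ghz, k=10.0):
    """Assumed power model: P = k * f**3 (watts, f in GHz)."""
    return k * freq_ghz**3


# One core at 3 GHz vs two cores at 1.5 GHz: same total throughput
# (cores x frequency), so both configurations meet the same deadline.
t = 100.0  # assumed wall time in seconds

e_one_core = dynamic_power(3.0) * t       # 270 W * 100 s = 27000 J
e_two_cores = 2 * dynamic_power(1.5) * t  # 2 * 33.75 W * 100 s = 6750 J

# Halving the clock and doubling the cores divides the dynamic energy
# by 4 under this model -- the "deadline" case described above.
print(e_one_core, e_two_cores)
```

This is exactly the caveat in the message above: the win only exists if a deadline lets you trade clock speed for core count; without a time limit, and ignoring power at rest, more cores alone do not reduce energy.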
On Tue, Jan 26, 2021 at 11:53 AM Serge Guelton
<serge.guelton@xxxxxxxxxxxxxxxxxxx> wrote:
On Tue, Jan 26, 2021 at 02:07:43PM +0100, PIERRE AUGIER wrote:
Hi,
Here are some preliminary results of an experiment on energy
measurement at Grid'5000
(https://www.grid5000.fr/w/Energy_consumption_monitoring_tutorial).
The goal is to have enough to be able to submit a serious comment to a recent
article published in Nature Astronomy (Zwart, 2020) which recommends to stop
using and teaching Python because of the ecological impact of computing.
2 figures are attached (the code is here
https://github.com/paugier/nbabel):
1. fig_bench_nbabel.png: CO2 production versus elapsed time for different
implementations. This figure can be compared with
https://raw.githubusercontent.com/paugier/nbabel/master/py/fig/fig_ecolo_impact_transonic.png
taken from Zwart (2020).
The implementations labeled "nbabel.org" have been found on
https://www.nbabel.org/ and have recently been used in the article
by Zwart (2020). Note that these C++, Fortran and Julia implementations are
not well optimized. However, I think they are representative of many C++,
Fortran and Julia codes written by scientists.
There is one simple Pythran implementation which is really fast
(slightly slower than the fastest implementation in Julia, but it is not a big
deal). Note that of course these results do not show that Python is faster
than C++!! We just show here that it's easy to write in Python **very**
efficient implementations of numerically intensive problems.
2. fig_bench_nbabel_parallel_julia.png shows similar results as a
function of the number of threads for a parallel Julia implementation.
The scaling is not very good but parallelization can of course
decrease elapsed time and CO2 production.
If the scaling is not very good, then is parallelization good in terms
of CO2 production? I mean, if using 1 core you spend 10s and using 2 cores you
spend 15s, then overall, what's the energy consumed in the first and second
case? If the energy spent is linear in the number of cores, then the
single-threaded option is better, energy-wise?
--
Those who don't understand recursion are doomed to repeat it
Attachment:
fig_bench_nbabel_parallel_julia.png
Description: PNG image
Attachment:
fig_bench_nbabel_parallel_pythran.png
Description: PNG image