[pythran] Re: Compiler troubles on diffusion demo

  • From: Ian Ozsvald <ian@xxxxxxxxxxxxxx>
  • To: pythran <pythran@xxxxxxxxxxxxx>
  • Date: Wed, 9 Apr 2014 18:05:54 +0100

You make this look too easy :-) The bug-fix works, and my code with
both methods (your original demo and a chunked version) produces the
same output. Thanks. Now for the next problem...

What is the behaviour of garbage collection with pythran? I have a
pythran function that creates a new array (np.empty) on each call. If
I run it without pythran (pure python+numpy) I don't see a build-up
of memory: Python's garbage collector frees the array once it goes
out of scope. When I run it with pythran I rapidly fill 8GB, as the
temp array seems to be retained. What should I be aware of? Is a
reference to the temp array kept alive somehow?

With python+numpy only I use:
def evolve(grid, dt):
    D = 1
    lap = laplacian(grid)
    new_grid = np.empty(grid.shape)
    new_grid[:] = grid + dt * D * lap
    return new_grid
but due to the memory build-up I have to use:
def evolve(grid, dt, new_grid):  # new_grid passed in from the calling python routine
    D = 1
    lap = laplacian(grid)
    new_grid[:] = grid + dt * D * lap
    return new_grid
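For reference, the calling pattern looks roughly like this in pure
python+numpy (the laplacian here is a stand-in, a standard roll-based
five-point stencil, not necessarily the demo's exact version):

```python
import numpy as np

def laplacian(grid):
    # stand-in stencil: sum of the four rolled neighbours minus 4*grid
    return (np.roll(grid, +1, 0) + np.roll(grid, -1, 0) +
            np.roll(grid, +1, 1) + np.roll(grid, -1, 1) - 4 * grid)

def evolve(grid, dt, new_grid):
    D = 1
    lap = laplacian(grid)
    new_grid[:] = grid + dt * D * lap
    return new_grid

grid = np.random.rand(64, 64)
new_grid = np.empty(grid.shape)  # allocated once, reused on every call
for _ in range(10):
    evolve(grid, 0.1, new_grid)
    grid, new_grid = new_grid, grid  # swap buffers instead of reallocating
```

With the buffer passed in, no temporary survives the call, so memory
stays flat under both interpreters.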

Now I'm trying to use openmp. It runs (I can see 8 threads at ~11%
CPU each in htop) and memory usage is stable, but the original
(non-parallel) and parallel versions take the same execution time.
Both versions take the new_grid argument (rather than making a
temporary with np.empty), so memory-wise they behave the same way.

In the parallel version, if I put a print statement in the body I can
see my chunks of work being processed once each, with no repeated
work, and they print out of order (a different order each time),
definitely not sequentially, so I'm pretty sure it is running with
omp. If I compile with -E I can see the "pragma omp" lines in the
generated .cpp file. I compile with:
$ pythran -fopenmp -march=corei7-avx diffusion_numpy.py

I'm copying the parallel code below; maybe I'm missing something
silly? It might simply be that with a (2048, 2048) grid
(approximately 230MB) the laplacian() function dominates, but I do
see 8 CPUs at ~10% (versus 7 CPUs at 0% with the serial version), so
something is happening. Any thoughts? Did I mess up the omp
annotation?

#pythran export evolve(float64[][], float, float64[][])
def evolve(grid, dt, new_grid):
    D = 1
    lap = laplacian(grid)

    chunk_size = grid.shape[0] // 8
    # openmp the following loop
    "omp parallel for shared(grid, new_grid, dt, D, lap)"
    for i in xrange(0, grid.shape[0], chunk_size):
        j = min(i+chunk_size, grid.shape[0])
        new_grid[i:j, :] = grid[i:j, :] + dt * D * lap[i:j, :]
    return new_grid
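For what it's worth, a plain-numpy sanity check confirms the chunked
loop reproduces the unchunked expression exactly (lap here is just a
random stand-in for laplacian(grid)):

```python
import numpy as np

grid = np.random.rand(512, 512)
lap = np.random.rand(512, 512)  # stand-in for laplacian(grid)
dt, D = 0.1, 1

expected = grid + dt * D * lap  # unchunked reference result

new_grid = np.empty(grid.shape)
chunk_size = grid.shape[0] // 8
for i in range(0, grid.shape[0], chunk_size):
    j = min(i + chunk_size, grid.shape[0])
    new_grid[i:j, :] = grid[i:j, :] + dt * D * lap[i:j, :]

assert np.array_equal(new_grid, expected)
```

So the chunking itself is correct; the question is only about the
parallel speed-up.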

Cheers, i.

On 9 April 2014 14:13, serge Guelton <serge.guelton@xxxxxxxxxxxxxxxx> wrote:
>> My attempts to compile it in pythran are failing. I think roll doesn't
>> like partial grids, I know roll was added recently so maybe there's a
>> bug here?
>> This works:
>>         partial_update = grid  # using original full grid
>>         roll1 = np.roll(partial_update, +1, 0)
>>         print roll1.dtype
>> This fails at compile time:
>>         partial_update = grid[i:j, :]  # using partial grid
>>         roll1 = np.roll(partial_update, +1, 0)
>>         print roll1.dtype
> Can you try the roll-bug branch? It should be OK.

Ian Ozsvald (A.I. researcher)
