Hi again,

On Wed, Apr 09, 2014 at 06:05:54PM +0100, Ian Ozsvald wrote:

> You make this look too easy :-) The bug-fix works and my code with
> both methods (your original demo and a chunked version) produce the
> same output. Thanks. Now for the next problem...

That's the problem with users, they always run into bugs :-)

> What is the behaviour of garbage collection with Pythran? I have a
> Pythran function that creates a new array (np.empty) on each call. If
> I run this without Pythran (pure Python+numpy) then I don't see a
> build-up of memory: Python's garbage collector sees it go out of
> scope. When I run it with Pythran I rapidly fill 8GB as the temp array
> seems to be retained. What should I be aware of? Is a reference to the
> temp item kept alive somehow?

That's strange. I tried this one:

    #pythran export mem(int)
    import numpy as np
    def mem(n):
        return np.empty(n)

then ran a loop:

    for i in range(1000000):
        mem.mem(1000000)

and I see no memory leak. The only situation where we leak is when we
get a numpy array from Python, as in:

    #pythran export mem(float64[])
    def mem(arr):
        return arr  # we do a very conservative reference increment here

What's your full test case?

> With python+numpy only I use:
>
>     def evolve(grid, dt):
>         D = 1
>         lap = laplacian(grid)
>         new_grid = np.empty(grid.shape)
>
> but due to the memory build-up I have to use:
>
>     def evolve(grid, dt, new_grid):  # new_grid passed in from Python routine
>         D = 1
>         lap = laplacian(grid)
>
> Now I'm trying to use OpenMP - it runs (I can see 8 threads running at
> 11% CPU each in htop) and memory usage is stable, but the original
> (non-parallel) and parallel versions take the same execution time.
> Both versions take the new_grid argument (rather than make a temporary

I need more time to investigate this one. My guess is that Pythran's
reference counting is issuing memory barriers that prevent
parallelization, but that's just a guess.
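For what it's worth, the workaround Ian describes (the caller allocates
new_grid once and the function writes into it) can be sketched as below.
The laplacian here is a hypothetical 5-point periodic stencil standing
in for the real one from the thread, and the update rule is an assumed
simple diffusion step; only the preallocated-output pattern itself is
from the thread:

```python
import numpy as np

def laplacian(grid):
    # hypothetical 5-point stencil with periodic wrap-around,
    # standing in for the real laplacian() from the thread
    return (np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
            np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1) -
            4.0 * grid)

def evolve(grid, dt, new_grid):
    # new_grid is allocated once by the caller and reused on every
    # call, so evolve() itself creates no output temporary
    D = 1
    lap = laplacian(grid)
    np.multiply(lap, D * dt, out=new_grid)  # write in place
    new_grid += grid
    return new_grid
```

The caller does `new_grid = np.empty(grid.shape)` once outside the
time-stepping loop and passes it in on every iteration.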
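For readers following along: OpenMP in Pythran is normally requested
through omp comment annotations on a loop, roughly as sketched below
(an assumed minimal example, not the thread's evolve; run as plain
Python the annotation is an ordinary comment, and it only takes effect
when compiled with OpenMP enabled):

```python
#pythran export scaled_sum(float64[], float64[])
import numpy as np

def scaled_sum(a, b):
    out = np.empty_like(a)
    #omp parallel for
    for i in range(a.shape[0]):
        # each iteration is independent, so the loop can be
        # distributed across threads when compiled with OpenMP
        out[i] = 2.0 * a[i] + b[i]
    return out
```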
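A quick way to reproduce the "loop over mem() and watch for a build-up"
check from pure Python, before Pythran is involved at all, is to
snapshot allocations with tracemalloc (a sketch; the iteration count
and array size are arbitrary):

```python
import tracemalloc
import numpy as np

def mem(n):
    # same shape of test case as in the thread: allocate and return
    return np.empty(n)

def leftover_bytes(iterations, n):
    """Traced memory still held after a loop of dropped mem() calls."""
    tracemalloc.start()
    for _ in range(iterations):
        mem(n)  # result dropped immediately, so CPython frees it
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current

growth = leftover_bytes(1000, 100_000)
# if every array were retained this would be on the order of
# 1000 * 100_000 * 8 bytes (~800 MB); a leak-free run stays tiny
```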