[pythran] Re: Numpy Benchmarks

  • From: Ian Ozsvald <ian@xxxxxxxxxxxxxx>
  • To: pythran <pythran@xxxxxxxxxxxxx>
  • Date: Thu, 26 Jun 2014 14:33:09 +0100

Oops, sorry to be slow, I had to submit the manuscript for my book
(now off for editing - yay!).

I can't share the material for that course; it's a private course I'm
now running. In this case I used the Julia set example (which I build
up through all the various tools, from Cython through to Numba, with
both lists & numpy); we built it, ran the demo, and a few of the
students wanted to try pythran on their own code.

When it comes to interfacing with numpy functions - do you plan to
have a fall-back mode that calls the default numpy function *outside*
of pythran, so that all the numpy functions are 'always available'
even if pythran doesn't provide an optimized version? I suspect you'll
say that you've got your own GC and so this isn't possible; since I'm
not very familiar with pythran's internals, I might be asking a silly
question.
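To make the question concrete, the kind of user-level fall-back I have
in mind looks roughly like this (module and function names are
hypothetical, and this is dispatch in user code, not anything pythran
provides today):

```python
import numpy as np

def compute_plain(x):
    # Reference pure-numpy version; a pythran build of the same
    # source would be dropped in transparently when available.
    return np.sqrt(np.abs(x)) + 1.0

try:
    # Hypothetical pythran-built extension module of the same source.
    from compute_compiled import compute_plain as compute_fast
except ImportError:
    compute_fast = compute_plain  # fall back to plain numpy
```

The ideal would be for pythran itself to do this per-function, so an
unsupported numpy call degrades gracefully instead of failing to compile.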

On 2 June 2014 12:37, serge Guelton <sguelton@xxxxxxxxxxxxx> wrote:
> On Mon, Jun 02, 2014 at 12:40:48PM +0200, Ian Ozsvald wrote:
>> Hi Serge. I'm just back in the UK from teaching High Performance
>> Python to PhDs in Denmark last week. We covered Pythran (mainly
>> Cython, also PyPy and Numba).
>
> That's great news! Any feedback concerning Pythran? Can you share your
> materials?
>
>> For the above benchmarks what is the main reason for the speed
>> improvements over numpy? Is it because you're using locally defined
>> variants of the numpy functions? Is it because you have
>> parallelisation (there's no note about openmp/multi-core)?
>
> There are several reasons. (No parallelization involved)
>
> 1. Avoiding temporaries: several array expressions are computed lazily
> and merged into a single one
>
> 2. Faster implementations of numpy functions: it's awkward to say, but
> I have the feeling that some numpy functions are not as efficient as
> they could be. We have raw C++ implementations for many of them and we
> sometimes get a 2x speedup just by calling them.
>
> Just wait for the vectorized/parallelized versions ;-)
>
>> Cheers, Ian.
> It's a pleasure to get some news! I still have some work to be done on
> the numpy-benchmarks, but I'll keep you informed once everything is
> ready for official disclosure!
>
> s.
>
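To make Serge's point 1 above concrete, here is roughly what I
understand the temporary-avoidance to mean (my own sketch of the
semantics, not pythran's actual internals):

```python
import numpy as np

def eager(a, b, c):
    # Plain numpy: allocates a temporary array for a*b, and
    # another for (a*b) + c.
    return a * b + c

def fused(a, b, c):
    # One output allocation, written in place - the shape of code an
    # expression-merging evaluator would effectively generate.
    out = np.empty_like(a)
    np.multiply(a, b, out=out)
    np.add(out, c, out=out)
    return out
```

On large arrays the saved allocations and extra memory passes are
where the speedup over naive numpy evaluation comes from.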



-- 
Ian Ozsvald (A.I. researcher)
ian@xxxxxxxxxxxxxx

http://IanOzsvald.com
http://ModelInsight.io
http://MorConsulting.com
http://Annotate.IO
http://SocialTiesApp.com
http://TheScreencastingHandbook.com
http://FivePoundApp.com
http://twitter.com/IanOzsvald
http://ShowMeDo.com
