On Thu, Mar 26, 2015 at 04:13:17PM +0000, Valerio De Carolis wrote:
> On 25/03/15 18:31, Pierrick Brunet wrote:
> >Hi Valerio,
> >
> >It is great to see Pythran used that way, and thanks for this kind of
> >feedback.
> >
> >About numpy.dot, only the vec * vec implementation exists for now, but it
> >is planned to add this with the numpy.linalg package.
> >
> >To get around this problem, we would be really happy if you want to add
> >this implementation to Pythran :-)
> >Otherwise, I may have a look at this tomorrow for, at least, a "slow"
> >implementation for compatibility.
> >If you want to test it now, I suggest you implement this function in
> >your Python module (using Python) and Pythran will convert it for you.
> >
> >Cheers,
> >Pierrick
>
> Hi Pierrick,
>
> yes, I gave it a quick try with a double-for approach:
>
> >#pythran export mat_vec_mult(float[][], float[])
> >def mat_vec_mult(A, b):
> >    c = np.zeros_like(b)
> >
> >    for i in xrange(A.shape[0]):
> >        for j in xrange(A.shape[1]):
> >            c[i] += A[i,j] * b[j]
> >
> >    return c
>
> This is working well in Pythran, even for a small matrix, if the code is
> using the non-optimized version. I haven't benchmarked it much, though.
>
> I'm not sure what the best approach is for implementing such a
> feature in Pythran. NumPy internally defaults to BLAS's GEMV- and
> GEMM-style functions, so I think the use of BLAS primitives is the
> natural translation from numpy-ish code to the lower-level C++.

Yes, that's the ongoing direction. Pierrick explored this today.
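
[Editor's note: the double-for version from the thread can be sanity-checked against numpy.dot in plain Python before feeding it to Pythran. A minimal sketch (the original uses Python 2's xrange; range is used here so it also runs on Python 3):]

```python
import numpy as np

def mat_vec_mult(A, b):
    # Naive matrix-vector product, as in the thread: c[i] = sum_j A[i,j]*b[j].
    c = np.zeros_like(b)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            c[i] += A[i, j] * b[j]
    return c

# Check the hand-written loop against NumPy's BLAS-backed dot.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 6.0])
print(mat_vec_mult(A, b))                        # [17. 39.]
print(np.allclose(mat_vec_mult(A, b), np.dot(A, b)))  # True
```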