Dirk, we need to meet one day. I have indeed blob'd into Postgres, but haven't used the 'R inside Postgres' project. My reasons are severalfold.

First, the datasets I use have historically been somewhat large: on the order of a billion timeseries, each with thousands of points in time and sometimes a great many more. Postgres simply couldn't handle it. That is not a naysay against Postgres... a brilliant DBMS pioneering new concepts like inheritance of schemas... I went up against Oracle at Credit Suisse, and the result was the same.

Secondly, when dealing with timeseries of disparate frequencies, SQL has no concept of time. Yes, it has a timestamp, or date, but no ability to natively understand vectors of observations at a given frequency, nor the ability to rescale to different frequencies. For me, this has been a plain and simple must.

Thirdly, dealing with large datasets means terse storage, which no indexed RDBMS supports. Disks are cheaper, but I need to get a 20-year daily timeseries instantiated from disk in sub-millisecond timings to support large model runs that would otherwise span days... for a single day's run. (NOTE: all of these 'super models' are of course worthless and wasting good computing time that this group could probably put to better use! But they are the clients that pay my bills.)

Now... off my soap box... and thanks! I will definitely check this out.

-tom

ps - on Gentoo, it's one emerge away! (I too am a penguin guy... but let's not start a religious debate on distros :)

Dirk Eddelbuettel wrote:

>I promise that I will get off this soap box one day, but ...
>
>... in this context, had you heard of Joe Conway's clever 'R inside of
>PostgreSQL' project of R as a procedural language for the Postgres SQL DBMS?
>On Debian, it's just one 'apt-get install' away. See
>
> http://www.joeconway.com/plr/
>
>Dirk
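pps - for anyone curious what I mean by "rescale to different frequencies": here is a minimal sketch in base R (made-up data, a 360-day year for divisibility; not my production tooling), showing the kind of frequency-aware operation a `ts` vector gets for free and SQL does not:

```r
set.seed(42)

# Three years of "daily" observations at 360 obs/year
# (360 chosen so it divides evenly into 12 months).
daily <- ts(rnorm(3 * 360), frequency = 360)

# Rescale the daily series to a monthly one (12 periods per year)
# by averaging within each coarser period.
monthly <- aggregate(daily, nfrequency = 12, FUN = mean)

frequency(monthly)   # 12 periods per year
length(monthly)      # 36 monthly observations from 1080 daily ones
```

The point is that the series itself carries its frequency, so rescaling is one call; in SQL you would be hand-writing GROUP BY logic over date arithmetic every time.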