On Thu, Apr 24, 2008 at 10:14:01AM +0200, Rein Couperus wrote:

> We could use an email processor to extend the 'download engine' in such
> a way that you could send an email request for a document which is not
> available on the server, which would then be cached for a certain time,
> and deleted when it is not being downloaded... We need a specification
> for this. I will also send the present download script to Jeff, so he
> can think about how to enhance this... I think we should limit the size
> of the documents to 5k compressed when using PSK250, in order to limit
> download time on the server.

I have received the files from Rein and Per (thanks) and am looking
forward to this weekend, when I will have time to dig into them.

Having a repository of information on central servers makes sense, though
to be honest I had never really considered the ability to use tools to
return a text version of whole Web pages.

I was initially thinking about information that is easy to obtain -- the
ARRL Letter, W1AW bulletins, solar propagation info, DX newsletters,
contest calendars, etc. That kind of information already arrives by email
from the publishers, so a machine could easily process and archive it for
redistribution.

The ability to grab an entire Web page via wget or curl adds even more
possibilities, though the end user would have to know a precise URL. That
is not hard for your own hometown newspaper, but it might be harder to
work out the address of a specific article in the London Times, etc.

Specific Web harvesting (say, the last five eHam reviews of an XXX-XXXXXX)
would be trivial to set up, but I like the idea of an editorial board,
since that particular idea may be a bad use case -- I was really just
thinking out loud. The same goes for converting an RSS subscription to
email for processing.

Thanks again for the files and ideas. They will give me something to play
with from the lonely hotel where I'll be spending my weekend. Thank
goodness for shell access. ;-)

--
73, Jeff
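
P.S. While I'm thinking out loud anyway: for the wget/curl idea, something
like the little Python sketch below is what I have in mind -- completely
untested. The URL, the crude tag stripping, and the choice of gzip are just
placeholders of mine; only the 5k-compressed ceiling comes from Rein's note.

    #!/usr/bin/env python
    # Rough sketch: fetch a page, reduce it to bare text, compress it,
    # and refuse anything over the 5k-compressed limit Rein suggested
    # (taking k = 1024 here, which is my assumption).

    import gzip
    import re
    import urllib.request

    MAX_COMPRESSED = 5 * 1024          # Rein's 5k-compressed ceiling

    def page_to_text(url):
        """Fetch a Web page and strip it down to plain text (crudely)."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Drop scripts/styles first, then any remaining tags.
        text = re.sub(r"(?s)<(script|style).*?</\1>", "", html)
        text = re.sub(r"<[^>]+>", " ", text)
        return re.sub(r"\s+", " ", text).strip()

    def package(url):
        """Return the gzipped text if it fits the limit, else None."""
        data = gzip.compress(page_to_text(url).encode("utf-8"))
        if len(data) > MAX_COMPRESSED:
            return None                # too big to send over PSK250
        return data

    if __name__ == "__main__":
        blob = package("http://example.com/")      # placeholder URL
        print("no-go" if blob is None else "%d bytes compressed" % len(blob))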
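
For the cache Rein described -- documents kept for a while and dropped when
nobody downloads them -- I would probably start with a sweep along these
lines. The directory name and the one-week window are pure guesses on my
part, and relying on atime will not work on a filesystem mounted noatime.

    #!/usr/bin/env python
    # Sketch of the cache clean-up: anything in the cache directory that
    # has not been read for a while gets removed.

    import os
    import time

    CACHE_DIR = "/var/cache/psk-download"      # placeholder path
    MAX_IDLE = 7 * 24 * 3600                   # a week with no downloads

    def sweep(cache_dir=CACHE_DIR, max_idle=MAX_IDLE):
        now = time.time()
        for name in os.listdir(cache_dir):
            path = os.path.join(cache_dir, name)
            if not os.path.isfile(path):
                continue
            # st_atime is the last time somebody actually read the file,
            # i.e. the last time it was served from the cache.
            if now - os.stat(path).st_atime > max_idle:
                os.remove(path)

    if __name__ == "__main__":
        sweep()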
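
And for turning an RSS subscription into plain text the email processor
could chew on, a first cut might be as simple as this. The feed URL is made
up, and real feeds will need namespace handling and better error checking.

    #!/usr/bin/env python
    # Sketch: turn an RSS feed into a short plain-text digest that could
    # be mailed to the processor for archiving and redistribution.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.com/news.rss"    # placeholder feed

    def feed_to_digest(url=FEED_URL, max_items=5):
        with urllib.request.urlopen(url, timeout=30) as resp:
            root = ET.fromstring(resp.read())
        lines = []
        for item in root.iter("item"):
            title = item.findtext("title", default="(no title)").strip()
            link = item.findtext("link", default="").strip()
            lines.append("%s\n  %s" % (title, link))
            if len(lines) >= max_items:
                break
        return "\n\n".join(lines)

    if __name__ == "__main__":
        print(feed_to_digest())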