Jamal,

Are you familiar with Hadoop? That is the solution that immediately comes to mind here, as it would let you scale efficiently for a gargantuan task like this. Basically, Hadoop is an implementation of Google's MapReduce, so far as I understand it.

Stefik

On Thu, Jul 8, 2010 at 8:37 PM, Jamal Mazrui <empower@xxxxxxxxx> wrote:
> For years, I have had an interest in developing a free, open-source web
> crawler and download manager that makes it convenient to launch a search
> for files to download from the web, with various parameters accounting for
> factors such as depth of links, file types to retrieve, and domains and
> directories to explore. I think that a highly usable app for screen reader
> users would offer significant benefit to our community because,
> historically, ready access to the printed word has been an ongoing
> obstacle for us, yet today, efficient access to the electronic word can
> give us an equal footing or a competitive advantage.
>
> I have already thought much about the programming issues in this area, but
> have had trouble deciding on a design and a stack of development
> technologies. It occurs to me to invite any others who might be interested
> in engaging in this endeavor. I think we would use some kind of open
> source license, though that could be debated. Hopefully, alternative
> approaches, contributed by multiple programmers, would help us all develop
> our knowledge.
>
> I can say more about my current thoughts on the matter if enough people
> express interest. Anyone, however, should feel free to comment on the
> topic.
>
> Jamal
>
> __________
> View the list's information and change your settings at
> //www.freelists.org/list/programmingblind
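The crawler Jamal describes (configurable link depth, file types to retrieve, and domains to explore) could be sketched roughly as follows. This is a minimal illustration using only Python's standard library; the parameter names (`max_depth`, `file_types`, `allowed_domains`) and the in-memory `pages` dict standing in for HTTP fetches are my own assumptions, not anything settled in the thread.

```python
# Hypothetical sketch of the crawler idea from this thread: a breadth-first
# walk that respects a depth limit, a file-type filter, and a domain
# whitelist. For simplicity it reads pages from a {url: html} dict instead
# of fetching over HTTP; a real crawler would swap in urllib or similar.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl_plan(start_url, pages, max_depth=2,
               file_types=(".pdf", ".txt"), allowed_domains=None):
    """Return discovered URLs whose paths end in one of file_types,
    visiting only allowed_domains and stopping at max_depth."""
    seen, found = {start_url}, []
    frontier = [(start_url, 0)]          # (url, depth) queue, BFS order
    while frontier:
        url, depth = frontier.pop(0)
        # A matching file type counts as a download target, not a page.
        if urlparse(url).path.lower().endswith(file_types):
            found.append(url)
            continue
        if depth >= max_depth:
            continue
        parser = LinkParser()
        parser.feed(pages.get(url, ""))
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            host = urlparse(absolute).netloc
            if allowed_domains and host not in allowed_domains:
                continue                   # stay inside permitted domains
            if absolute not in seen:
                seen.add(absolute)
                frontier.append((absolute, depth + 1))
    return found
```

The breadth-first queue makes the depth limit easy to enforce, and the `seen` set prevents revisiting pages that link to each other, which is essential on real sites.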