Christopher Smith wrote:
I don't like these patches; they are adding complexity in what I think is the wrong place. The search routine shouldn't have to deal with caching of directories to support some file systems. Rather than replace the directory reading loop with complex inline code, can we keep the same basic structure (start, iterate to end, finish) but change it to use replacement functions for opendir, readdir and closedir? The logic to cache directories can then be kept separate from the search code; the search code just opens a list from somewhere and iterates over it.

- Chris
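The separation Chris describes could be sketched roughly as follows. This is not DokuWiki's actual API, just an illustrative Python sketch with hypothetical names (`PlainDirReader`, `CachingDirReader`, `search`): the search loop keeps its open/iterate/close shape, and the caching behaviour lives entirely in a replacement reader.

```python
import os

class PlainDirReader:
    """Reads directory entries straight from the filesystem."""
    def open_dir(self, path):
        # Return a handle (here, an iterator over entry names).
        return iter(sorted(os.listdir(path)))

    def read_dir(self, handle):
        # Return the next entry, or None at the end, like readdir().
        return next(handle, None)

    def close_dir(self, handle):
        pass  # nothing to release for a plain listing

class CachingDirReader(PlainDirReader):
    """Caches listings so a slow filesystem (e.g. NFS) is hit only once."""
    def __init__(self):
        self._cache = {}

    def open_dir(self, path):
        if path not in self._cache:
            self._cache[path] = sorted(os.listdir(path))
        return iter(self._cache[path])

def search(reader, path):
    """Search loop: start, iterate to the end, finish -- unaware of caching."""
    handle = reader.open_dir(path)
    entries = []
    entry = reader.read_dir(handle)
    while entry is not None:
        entries.append(entry)
        entry = reader.read_dir(handle)
    reader.close_dir(handle)
    return entries
```

Swapping `PlainDirReader` for `CachingDirReader` changes nothing in `search` itself, which is the point of keeping the two concerns apart.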
Hi, yes, I should have split it into two functions like the first one, but the name "search" is too ambiguous for me to decide whether the purpose of the function is searching the files or the callback. So I haven't separated them and leave that part to you.
As for "support some file systems", that is not the purpose of the patches; I only want to improve DokuWiki's speed. But you are right: our wiki is on an NFS filesystem with directories containing more than 3000 pages. Last year we had 45000 modifications and 1600 page creations. So yes, every microsecond and every byte of memory is important to me.
YoBoY -- DokuWiki mailing list - more info at http://www.dokuwiki.org/mailinglist