On Mon, Mar 22, 2010 at 02:29:06PM +0100, Stefan Waidele wrote:
> Looks to me that most of these tasks are pretty fast operations on the
> filesystem level - except the "backlink fixing", which in turn might be
> quite expensive depending on the size of the wiki.
>
> Having that in mind, how about changing the order of the tasks:
>
> * If the redirect is created before the backlinks are fixed, then the
>   wiki itself should be in a workable state.
> * If the redirect is not desired, it can be removed after the
>   backlink-fixing is complete.
>
> The actual fixing could be done in a background process which
>
> * reads _all_ the pending replacements from the queue
> * visits each dokuwiki-page and replaces the links to _all_ pages
>   moved since the last time the process started

The problem with fixing all pages in one process is that it would have to
run via an AJAX script of some sort to guarantee that it doesn't hit a
memory limit or PHP script timeout when many pages are involved. That in
turn would require keeping the browser window open for the duration of
the task. While I think most users would like something like that, I
wouldn't make it a top priority.

Also, people wouldn't notice any inconsistency if pages that are part of
the move queue are fixed at the time they're visited - because they get
repaired before the user sees the content. So there's no need to do
everything in one go. I hope this makes sense ;).

What one could do is provide an additional CLI script which could be
triggered by a cron job on the server side to clean up after page moves
over night or at certain intervals.

Regards,
Michael
--
Michael Klier
www: http://www.chimeric.de
jabber: chi@xxxxxxxxxxxxxxxxxx
key: http://downloads.chimeric.de/chi.asc
key-id: 0x8308F551
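
P.S.: The two strategies above - lazy repair on page view, plus a batched
cron/CLI cleanup - could be sketched roughly like this. This is only a
minimal illustration in Python (the actual plugin would of course be PHP),
and all names (`fix_links`, `drain_queue`, the queue format, the batch
size) are hypothetical, not part of any existing DokuWiki API:

```python
from collections import deque

def fix_links(text, moves):
    """Rewrite [[link]] targets for every pending move in one pass.
    `moves` maps old page ids to new page ids (hypothetical format)."""
    for old, new in moves.items():
        text = text.replace("[[%s]]" % old, "[[%s]]" % new)
    return text

def render_page(text, moves):
    """Lazy repair: fix a page's links right before it is shown, so a
    reader never sees a stale link even while the queue is non-empty."""
    return fix_links(text, moves)

def drain_queue(pages, moves, batch=100):
    """Batched repair for a nightly cron/CLI run: process at most `batch`
    pages per invocation to stay under memory and runtime limits; the
    remaining pages are simply picked up by the next run (or repaired
    lazily when visited in the meantime)."""
    todo = deque(pages.items())
    for _ in range(min(batch, len(todo))):
        name, text = todo.popleft()
        pages[name] = fix_links(text, moves)
    return pages
```

The point of the split is that neither path has to finish everything in
one go: the lazy path keeps the wiki consistent from the reader's
perspective, and the batched path gradually empties the queue without a
browser window staying open.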