[phpa] 1.0p4, Accelerator internals and Artur's problem

  • From: "Lindridge, Nick (Exchange)" <nlindridge@xxxxxxxx>
  • To: "'phpa@xxxxxxxxxxxxx'" <phpa@xxxxxxxxxxxxx>
  • Date: Mon, 17 Sep 2001 05:50:05 -0400

Artur,

Thanks for your info. I forgot to mention and put in the release notes that
you *must* delete the .pxxx files first as the file structure has changed.
Very sorry about that!

Both the shared memory and the cached files carry structure version numbers,
but whilst the shared memory recreates itself if the structure has changed,
the cached files aren't yet ignored, as they should be, when their version
numbers are old. That isn't a bug as such; I just haven't coded that check yet.
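The missing check amounts to stamping each cache file with the current
structure version and refusing to load files with a stale stamp. Here is a
minimal sketch in Python for illustration only (the accelerator itself is C,
and the function names and version constant here are hypothetical):

```python
import os
import struct

CACHE_FORMAT_VERSION = 4  # hypothetical current structure version


def write_cache_file(path, payload):
    # Prefix the payload with a version stamp so readers can detect
    # files written by an older, incompatible release.
    with open(path, "wb") as f:
        f.write(struct.pack("<I", CACHE_FORMAT_VERSION))
        f.write(payload)


def read_cache_file(path):
    # Return the payload, or None if the file's structure version is
    # stale -- the caller then falls back to recompiling the script.
    with open(path, "rb") as f:
        header = f.read(4)
        if len(header) < 4:
            return None
        (version,) = struct.unpack("<I", header)
        if version != CACHE_FORMAT_VERSION:
            return None  # stale format: ignore rather than misread
        return f.read()
```

With a check like this in place, old .pxxx files would simply be recompiled
rather than having to be deleted by hand after an upgrade.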

The errors you reported should be due to that, and things should run smoothly
now that the .pxxx files have been removed. Is this correct?

The issue that I need to fix for 1.0p5, and that does occur with some shm
features, is that default array arguments are not handled. So

function foo($bar = array()) { }
 
will generate an error message in the log, and may fail later. Constant
arrays are handled in the other places that they occur (statics and class
variable defaults), but not yet for function arguments. I half fixed this last
night ready for 1.0p5, and have default array arguments working when the shm
cache is disabled. I'll carry on with that tonight. I appreciate your patience
while these things are addressed.

I didn't realise that you had a problem with slow pages. This could occur if
you have scripts that take a long time to execute and people who are
accessing scripts that are not yet cached. Could this be your situation?
Do you have scripts that are themselves generated on-the-fly, and therefore
keep getting re-cached? Also, are you using multi-processor boxes? Can you
explain more about your environment and the nature of people's page accesses?

[info about the shm cache]

The shm cache has reader and writer locks. Reader locks are held for the
duration of script *execution* because, for performance, code is executed
directly out of the cache without being copied, and must therefore not change
whilst being executed! There can of course be many concurrent readers, but
there can be only one writer. If there is a pending writer, i.e. a page needs
to be cached, then no new readers will be able to get a reader lock, but the
writer cannot get its lock until the last active reader has completed. For
most situations I decided that this was probably ok, and it seems to be. I
will add a selective caching feature, so that any slow scripts, where the
benefit of the cache is less anyway, can then be marked as non-cacheable.
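The locking discipline above is a classic writer-preference read/write lock.
A minimal sketch in Python for illustration (the real cache uses C and shared
memory; the class and method names here are made up):

```python
import threading


class WriterPreferenceRWLock:
    # Readers share the lock; one writer excludes everyone.  A *pending*
    # writer blocks new readers (so the writer is not starved forever),
    # but must wait for the active readers to drain before proceeding.
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer_active = False
        self._writers_waiting = 0

    def acquire_read(self):
        with self._cond:
            # A pending writer stops new readers from entering.
            while self._writer_active or self._writers_waiting:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # last reader wakes the writer

    def acquire_write(self):
        with self._cond:
            self._writers_waiting += 1
            while self._writer_active or self._readers:
                self._cond.wait()
            self._writers_waiting -= 1
            self._writer_active = True

    def release_write(self):
        with self._cond:
            self._writer_active = False
            self._cond.notify_all()
```

This shows why one slow, uncached script can stall the whole site: while its
reader lock is held, a pending writer queues behind it, and every new request
queues behind the writer.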

An alternative that I've been considering is to allow concurrent cache reads
*and* writes, permitting a new entry to be written while the old one is
still being read. This would decrease cache latency, although that is
already negligible in the general case, but would require more spare cache
and would further complicate the caching mechanism. It would also only help
the case where pages are cached frequently, and such pages might anyway be
better left uncached. For sites where pages change infrequently, the extra
complexity would offer no benefit.
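The concurrent read/write idea is essentially a double-buffered swap: the
writer builds the new entry off to the side and publishes it with a single
atomic reference assignment, while readers keep executing out of the old
copy. A sketch under those assumptions (Python for illustration; all names
hypothetical):

```python
import threading


class DoubleBufferedCache:
    # Readers never block: they take a snapshot of the current entry
    # reference and keep using it even if a newer entry is swapped in
    # mid-execution.  The cost is the "spare cache" mentioned above:
    # two copies of an entry are live until the last old reader finishes.
    def __init__(self):
        self._entries = {}                  # script path -> current entry
        self._write_lock = threading.Lock()  # still only one writer

    def read(self, path):
        # Snapshot of the reference; the returned object stays valid
        # for this reader even after a concurrent publish().
        return self._entries.get(path)

    def publish(self, path, new_entry):
        with self._write_lock:
            self._entries[path] = new_entry  # atomic reference swap
```

Memory reclamation is the hard part this sketch hides: the old entry can only
be recycled once the last reader executing out of it has completed, which is
where the extra cache space and complexity come from.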

Nick




------------------------------------------------------------------------
  www.php-accelerator.co.uk           Home of the free PHP Accelerator

To post, send email to phpa@xxxxxxxxxxxxx
To unsubscribe, email phpa-request@xxxxxxxxxxxxx with subject unsubscribe

