Re: Android performance drop moving from LuaJIT-2.0.1 -> LuaJIT-2.0.2 ?

  • From: "Kriss@xxxxxxxx" <Kriss@xxxxxxxx>
  • To: luajit@xxxxxxxxxxxxx
  • Date: Thu, 5 Sep 2013 04:19:01 +0100

After some more prodding, it seems that it really is that memory
allocation loop failing to allocate anything.

When the range is bigger it succeeds; with that -1 it actually falls
through after trying 32 times.

With the bigger range it loops a few times at the start, but from then
on it is fine, as it hits a good free chunk on its first guess.
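
For reference, the loop in question behaves roughly like this (a
simplified sketch of mcode_alloc() in lj_mcode.c, not the exact
source; mcode_alloc_at(), mcode_free() and lj_prng_bits() are
stand-in names for its helpers):

  /* Simplified sketch of the mcode_alloc() probe loop in lj_mcode.c.
  ** Trace machine code must land within the +-32 MB ARM branch range
  ** of lj_vm_exit_handler, so the allocator probes pseudo-random
  ** addresses around it and gives up after 32 failed attempts.
  */
  static void *mcode_alloc(jit_State *J, size_t sz)
  {
    uintptr_t target = (uintptr_t)(void *)lj_vm_exit_handler & ~(uintptr_t)0xffff;
    const uintptr_t range = (1u << (LJ_TARGET_JUMPRANGE-1)) - (1u << 21);
    uintptr_t hint = 0;
    int i;
    for (i = 0; i < 32; i++) {  /* With the -1 range this is what fails. */
      if (hint != 0) {
        void *p = mcode_alloc_at(J, hint, sz);  /* mmap() with a hint. */
        /* Keep it only if the kernel placed it inside the branch range. */
        if (p != NULL && (uintptr_t)p - (target - range/2) < range)
          return p;
        if (p != NULL)
          mcode_free(J, p, sz);  /* Badly placed area, hand it back. */
      }
      /* Next guess: a 64K-aligned pseudo-random address near the target. */
      do {
        hint = (uintptr_t)lj_prng_bits(J, 15) << 16;
      } while (hint + sz >= range);
      hint += target - range/2;
    }
    lj_trace_err(J, LJ_TRERR_MCODEAL);  /* All 32 probes failed: punt. */
    return NULL;
  }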

Sorry, I failed to get a map; I think I need to root this device before
I can see /proc/?/maps. I'll get on that tomorrow. Sorry, I haven't
dealt with the Android internals much; I mostly rely on testing locally
and hoping it will just work :).
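
In the meantime, dumping the map from inside the process should work
without root, since /proc/self/maps is readable by the process itself.
A minimal sketch, assuming the NDK <android/log.h> API, to be called
once after lua_newstate():

  /* Log the address of lj_vm_exit_handler plus this process's own
  ** memory map to logcat, so no root is needed to inspect the
  ** neighbourhood of the handler.
  */
  #include <stdio.h>
  #include <android/log.h>

  extern void lj_vm_exit_handler(void);

  static void dump_mcode_neighbourhood(void)
  {
    char line[512];
    FILE *f;
    __android_log_print(ANDROID_LOG_INFO, "luajit",
                        "lj_vm_exit_handler = %p",
                        (void *)lj_vm_exit_handler);
    f = fopen("/proc/self/maps", "r");
    if (f == NULL) return;
    while (fgets(line, sizeof(line), f) != NULL)
      __android_log_print(ANDROID_LOG_INFO, "luajit", "%s", line);
    fclose(f);
  }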

I suspect that part of the problem here may be that I have 5 MB of
native code+data in a single .so, and the function you are trying to
allocate memory near is in the middle. So it really is having trouble
finding free memory within the range, on top of whatever restrictions
Android is applying. Maybe I can convince the linker to do something
clever to fix this.
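
One way to test that theory without rooting: dladdr() can report where
the handler sits relative to the module base (a hedged sketch;
report_handler_offset is just a name I made up):

  /* Use dladdr() to see how far lj_vm_exit_handler sits from the base
  ** of the .so it lives in.  An offset of roughly 2.5 MB into a ~5 MB
  ** image would support the "function in the middle" theory.
  */
  #include <dlfcn.h>
  #include <stdio.h>

  extern void lj_vm_exit_handler(void);

  static void report_handler_offset(void)
  {
    Dl_info info;
    if (dladdr((void *)lj_vm_exit_handler, &info) && info.dli_fbase != NULL) {
      printf("%s: handler %p, base %p, offset 0x%lx\n",
             info.dli_fname != NULL ? info.dli_fname : "?",
             (void *)lj_vm_exit_handler, info.dli_fbase,
             (unsigned long)((char *)lj_vm_exit_handler -
                             (char *)info.dli_fbase));
    }
  }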


Cheers,


On Thu, Sep 5, 2013 at 1:28 AM, Mike Pall <mike-1309@xxxxxxxxxx> wrote:

> Kriss@xxxxxxxx wrote:
> > By a process of just reverting files between the two versions I have
> > pinpointed exactly what is going wrong. Not really sure why it is bad for
> > me and not anyone else?
> >
> > The change is in lj_mcode.c,
> >
> >   const uintptr_t range = (1u << (LJ_TARGET_JUMPRANGE-1)) - (1u << 21);
>
> [Well, I'm relieved it's not an issue with cache syncing. But ...]
>
> Please print or log the address of lj_vm_exit_handler (e.g. in
> lua_newstate), then take a look at the memory map /proc/self/maps
> or /proc/$pid/maps after running for a while. There should be
> enough free memory around this address in the +-32 MB ARM branch
> range.
>
> The JIT-compiler tries to grab 32 KB blocks from that area and
> will do random probes until it finds one. That could be costly if
> this fails too often. You can check whether this is the problem by
> adding some logging to the end of mcode_alloc() where it punts.
>
> [There was another thread about a Tegra-based device in April.
> When I debugged that I noticed the whole memory map was full of
> tiny 4K mappings against /dev/nvmap. I'm not sure whether that's
> normal or not. From a quick search, I'm under the impression that
> device should be opened once per process and not hundreds of
> times. Maybe something is leaking graphics driver handles or such?]
>
> --Mike
>
>


-- 
Kriss

http://www.WetGenes.com/
