Re: High number of created consistent read blocks when inserting into a LOB column from dblink

  • From: Jure Bratina <jure.bratina@xxxxxxxxx>
  • To: Stefan Koehler <contact@xxxxxxxx>
  • Date: Thu, 12 Jan 2017 21:02:42 +0100

Hi Stefan,

> How did you capture the stack traces - with perf?
Yes:
perf record -F 99 -p <pid of target process> -g -- sleep 60
perf script | ./stackcollapse-perf.pl > out.perf-folded
./flamegraph.pl out.perf-folded > perf-kernel.svg
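(stackcollapse-perf.pl and flamegraph.pl are Brendan Gregg's FlameGraph
scripts, https://github.com/brendangregg/FlameGraph)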

I'm not sure whether I can send attachments to the list, so, if it's of any
interest, here are two flame graphs captured on the problematic database:
https://www.dropbox.com/sh/ba4seqzw7wo7y8q/AADoCPF8zAQT89rDayXZqh6ba?dl=0&lst=
It's true that they were not captured while the process was 90%+ on CPU (if
I remember correctly, CPU usage was around 40-50% when the samples were
taken), but since the issue originally manifested as the process being
mostly on CPU, I decided to sample just that. Based on what you wrote,
though, they might not be as useful as I thought :-)

> However you also can sample off-CPU processes/events with perf:
> http://www.brendangregg.com/blog/2015-02-26/linux-perf-off-cpu-flame-graph.html
Thanks for the link, I'll try to run the test again on the problematic
database and see what emerges.
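
For reference, based on a quick read of that post, the capture would look
roughly like this (just a sketch, not yet tested on my side; the post also
notes the sched_stat tracepoints may require schedstats support in the
kernel):

perf record -e sched:sched_stat_sleep -e sched:sched_switch \
            -e sched:sched_process_exit -a -g -o perf.data.raw sleep 60
perf inject -v -s -i perf.data.raw -o perf.data
perf script | ./stackcollapse-perf-sched.awk | \
    ./flamegraph.pl --countname=ms --title="Off-CPU Time Flame Graph" \
    --colors=io > offcpu.svg

The perf inject -s step merges the sched_stat and sched_switch events so
the sleep times get attached to the right stacks; I'm assuming
stackcollapse-perf-sched.awk from the FlameGraph repository for the folding
step, and -a could presumably be swapped for -p <pid of target process> to
limit the trace to the one session.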

Thank you and regards,
Jure Bratina
