Re: [PATCH] Implement timekeeping for rumprun/hw (x86)

  • From: Antti Kantee <pooka@xxxxxx>
  • To: rumpkernel-users@xxxxxxxxxxxxx
  • Date: Mon, 06 Jul 2015 11:23:16 +0000

On 06/07/15 10:50, Martin Lucina wrote:

> On Monday, 06.07.2015 at 10:35, Antti Kantee wrote:
>> While that may be a fix, how can that even in theory be a fix for
>> the guest clock running at 2/5 speed compared to the real world??
>
> The problem I was trying to fix was sleeps being way too long on
> x86-64 (as reported by the test program).

The problem I reported: "The clock seems to run at about 2/5 speed under no-kvm QEMU (i.e. sleep 2 takes ~5 seconds). Is that expected?" I assumed you were fixing the reported problem.

Testing confirms that the problem is still there. I also tested
32bit qemu, and I see the problem there too.
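
To be clear about methodology, the sort of test I mean is nothing
fancier than the sketch below (not the actual clock_test source, just
an illustration). You compare the guest-side elapsed time against a
stopwatch on the host; with a 2/5-speed guest clock, the guest happily
reports ~2s while ~5s of real time pass.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	struct timespec t0, t1;
	double elapsed;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	sleep(2);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	elapsed = (t1.tv_sec - t0.tv_sec) +
	    (t1.tv_nsec - t0.tv_nsec) / 1e9;

	/* compare this against a wall-clock stopwatch on the host */
	printf("guest-side elapsed: %.3f s\n", elapsed);
	return 0;
}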

> Regarding the wall clock, I see no problems on my system (2011 vintage
> Sandy Bridge with invariant TSC). In fact, I've had a "php -S" unikernel
> running for some time now and the wall clock is exactly the same as the
> host clock.
>
> You did mention that you're testing on a Core 2 laptop, right? What does
> cat /proc/cpuinfo say on your system?

Not sure why that matters (unless you suspect it's a qemu bug), but attached anyway.
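
For reference, since invariant TSC came up: it's advertised via CPUID
leaf 0x80000007, EDX bit 8, and Linux reports it as constant_tsc plus
nonstop_tsc in the flags. If I read the cpuinfo below correctly, this
Core 2 has constant_tsc but no nonstop_tsc. A minimal check, sketched
with gcc's cpuid.h:

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID leaf 0x80000007: advanced power management info */
	if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
		printf("extended leaf 0x80000007 not supported\n");
		return 1;
	}

	/*
	 * EDX bit 8: invariant TSC, i.e. the TSC runs at a constant
	 * rate in all ACPI P-, C- and T-states.
	 */
	printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
	return 0;
}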

> Also, as an experiment, can you try and boot either variant of
> clock_test.bin directly from GRUB, while running the system on AC?

What does "boot *directly* from GRUB" mean? Build an image with grub and the test program and use "qemu img" instead of "qemu -kernel bin", or what exactly? (not sure how that would be booting *directly*)
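
In case that is what you mean, my best guess is the following; the
file names and layout here are hypothetical, and it assumes the binary
is multiboot-capable (which "qemu -kernel" already implies):

# iso/boot/grub/grub.cfg -- boot the test binary via multiboot
menuentry "clock_test" {
	multiboot /clock_test.bin
}

# build a bootable image and hand qemu the image instead of -kernel
$ cp clock_test.bin iso/
$ grub-mkrescue -o clock_test.iso iso/
$ qemu-system-x86_64 -cdrom clock_test.iso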

>> Also, qemu now
>> consumes ~7% host CPU for a program which does "sleep(1000);". That
>> is way way too high for everything to be peachy.
>
> Not really. Don't forget that rumpclk is trying to run at 100 Hz, so at the
> moment you'll never run into really long sleeps anyway.

I tested the cost before your clockwork, and it's ~2.5x lower (probably a coincidence). The host CPU cost before (~3%) is still higher than I expected. Then I planned to test with usleep() under Linux, but discovered that a no-kvm Linux guest consumes 50% host CPU when the guest is idle, so I gave up on that plan.
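
For what it's worth, my suspicion is that the constant cost comes from
the 100 Hz periodic tick: without kvm, every tick is a full qemu wakeup
plus emulated interrupt delivery, whether or not anything is due. A
tickless scheme would instead arm the i8254 in one-shot mode for the
actual next timeout. A rough sketch of the idea (mode 0, interrupt on
terminal count; the outb helper is written out only to keep the sketch
self-contained, rumprun/hw has its own):

#include <stdint.h>

#define PIT_FREQ 1193182u	/* i8254 input clock, Hz */
#define PIT_CMD  0x43
#define PIT_CH0  0x40

static inline void outb(uint16_t port, uint8_t v)
{
	__asm__ __volatile__("outb %0, %1" :: "a"(v), "Nd"(port));
}

/*
 * Arm PIT channel 0 for a single interrupt 'ns' nanoseconds from now
 * (mode 0, interrupt on terminal count) instead of a 100 Hz periodic
 * tick.  The 16-bit counter caps one shot at ~55 ms; longer sleeps
 * re-arm from the interrupt handler, still far fewer wakeups than
 * 100/s.
 */
static void pit_oneshot(uint64_t ns)
{
	uint64_t ticks;

	if (ns >= 54900000ULL)		/* clamp; also avoids overflow */
		ticks = 0xffff;
	else
		ticks = ns * PIT_FREQ / 1000000000ULL;
	if (ticks == 0)
		ticks = 1;

	outb(PIT_CMD, 0x30);		/* ch 0, lo/hi byte, mode 0 */
	outb(PIT_CH0, ticks & 0xff);
	outb(PIT_CH0, (ticks >> 8) & 0xff);
}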
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Duo CPU T9400 @ 2.53GHz
stepping : 10
microcode : 0xa07
cpu MHz : 800.000
cache size : 6144 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm
constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl
vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm ida dtherm tpr_shadow
vnmi flexpriority
bogomips : 5054.24
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Duo CPU T9400 @ 2.53GHz
stepping : 10
microcode : 0xa07
cpu MHz : 800.000
cache size : 6144 KB
physical id : 0
siblings : 2
core id : 1
cpu cores : 2
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm
constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl
vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm ida dtherm tpr_shadow
vnmi flexpriority
bogomips : 5054.24
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
