[haiku-development] Re: Package buildbots
- From: Art H. <art@xxxxxxxxxxxxxx>
- To: "haiku-development@xxxxxxxxxxxxx" <haiku-development@xxxxxxxxxxxxx>
- Date: Wed, 19 Oct 2016 08:44:36 +0000
Ahoy friends,
I don't think the SSHDs are involved. How is the disk shown to the VM? Is it
virtio or some SATA emulation? If it's the latter, is it ATA or AHCI?
We are running a Generation 1 VM on Hyper-V:
- Hard disk in Hyper-V is using the IDE controller. FWIW, my Haiku buildslave
(a Linux VM) is also on an IDE controller, without this issue.
- I initially tried putting the disk on the SCSI controller, but it was not
recognized by Haiku.
- 'listdev' output from pulkomatic VM:
----
~> listdev
device Network controller (Ethernet controller) [2|0|0]
vendor 1011: Digital Equipment Corporation
device 0009: DECchip 21140 [FasterNet]
device Display controller (VGA compatible controller, VGA controller) [3|0|0]
vendor 1414: Microsoft Corporation
device 5353: Hyper-V virtual VGA
device Bridge [6|80|0]
vendor 8086: Intel Corporation
device 7113: 82371AB/EB/MB PIIX4 ACPI
device Mass storage controller (IDE interface) [1|1|80]
vendor 8086: Intel Corporation
device 7111: 82371AB/EB/MB PIIX4 IDE
device Bridge (ISA bridge) [6|1|0]
vendor 8086: Intel Corporation
device 7110: 82371AB/EB/MB PIIX4 ISA
device Bridge (Host bridge) [6|0|0]
vendor 8086: Intel Corporation
device 7192: 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled)
----
It would be nice to debug this, as the builds are quite slow otherwise.
At your service, my liege. Just let me know what needs to be done, once you
have any ideas.
Moving the disk may have resolved this, although we may want to dig deeper
and debug it properly, if you wish (I suspect the same problem may arise if
usage of the physical drive increases).
For the record (we only discussed this over IRC), I made some quick tests on
the VM (using dd and rm):
- A call to kern_open took 27 seconds
- A call to kern_unlink took 84 seconds
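For reference, the quick tests above can be reproduced with something along
these lines (the target path and file size here are illustrative, not the
exact invocation I used):

```shell
# Sketch of the quick I/O test: time the create/open path (kern_open)
# and the delete path (kern_unlink). Path and size are examples only.
TARGET="${TMPDIR:-/tmp}/io-latency-test.bin"

time dd if=/dev/zero of="$TARGET" bs=1M count=1   # exercises open + write
time rm "$TARGET"                                  # exercises unlink
```

On a healthy system both commands should come back in well under a second;
the 27/84 second figures above were measured inside the VM.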
Pulkomandy: could you please re-test kern_open and kern_unlink when you have a
chance?
My basic dd tests this morning did seem to indicate this is resolved (at least
for the time being).
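This morning's dd re-test was along these lines (block size and count are
assumptions on my part; conv=fsync forces the data to be flushed to the
device before dd reports, so the timing reflects real disk I/O rather than
just the block cache):

```shell
# Rough sequential-write throughput check; sizes are examples only.
# conv=fsync flushes to the device before dd exits, so the reported
# rate is not inflated by the write-back cache.
OUT="${TMPDIR:-/tmp}/dd-throughput-test.bin"
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fsync
rm "$OUT"
```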
I'm not sure how to proceed from there to investigate the lower layers (vfs,
filesystem, caches, block device).
I have been monitoring the VM host machine's resources this morning, while
testing. I can see which VMs are consuming what kind of I/O, the length of
queues, which VM drives are sweating, etc. Unfortunately there is no easy way
for me to make this data available for you to monitor remotely.
If the above would be useful, then I can monitor in tandem with your
investigations; we can liaise on IRC for that.
If there's any other info needed, let me know.
Regards,
- Art (arfonzo)