[haiku-development] Re: harddisk access

  • From: "Axel Dörfler" <axeld@xxxxxxxxxxxxxxxx>
  • To: haiku-development@xxxxxxxxxxxxx
  • Date: Wed, 25 Jul 2007 13:12:34 +0200 CEST

Marcus Overhagen <marcusoverhagen@xxxxxxxx> wrote:
> I'm trying to understand how a harddisk access in haiku works,
> and which layers are responsible for what task.

Indeed, it's pretty complex. I'll try to fill in some details, but I 
haven't looked at it in quite some time, and don't have the time right 
now to dig in.

> user-space write(int fd) / read(int fd)
>            VVVVVV
> syscall (?)

Those go to fd.c's _user_write|read().

>            VVVVVV
> some magic here

No magic:
- the file descriptor is resolved, and then one of its ops is called 
(much like a C++ class with virtual methods) - in this case, the 
vfs.cpp sFileOps are used, i.e. the calls resolve to file_write|read(). 
Since the FD refers to a device, the devfs functions are called from 
there.

>            VVVVVV
> devfs_write() / devfs_read()
>            VVVVVV
>           IORequest (wrapper class for parameter?)

Exactly. It is meant to be put into a queue later on.

>            VVVVVV
> IOScheduler::Process() (does pass through?)

This one currently just calls the device hooks directly - exactly what 
devfs would do.

From there it goes to the new device manager stuff - you can print the 
tree of device nodes in KDL. The request then travels down this stack:
- first, the device is invoked - in this case, it's system/devfs/
- next comes block_io
- then scsi_dsk
- then the SCSI driver interface
- and from there to the SCSI bus manager
- since there is an underlying IDE device, the requests will get 
forwarded to the IDE "sim" and are translated to ATA commands there
- after that, the IDE drivers are invoked, probably generic_ide_pci
- and that module then uses the ide_adapter to actually talk to the 
hardware.
The partitions are only "links" to the raw device that come with 
certain restrictions; i.e. they will translate a request into one for 
the raw device and will make sure it's within the partition's bounds.

> missing connections:
> block cache
> file cache

Those only come into play when the user read/write() functions work on 
a file system other than devfs. And the file system is free to choose 
to use them or not.
If that happens, the vfs.cpp file_write() function will call the file 
cache directly, which will then call into devfs to actually read the 
pages into memory (just like above). Also, when a file is mapped into 
memory, the actual reading/writing will take place during a page fault, 
and then goes through the vnode_store that is attached to the mapped 
file.

