Axel Dörfler wrote:
> Bruno Albuquerque <bga@xxxxxxxxxxxxx> wrote:
> > Axel Dörfler wrote:
> > > Not more than needed - its design requires all meta write operations to be sequential; all other locking is done per inode. I would rather expect the problem to be in the I/O path as a whole - it is not optimized at all yet. For example, the file/block cache only reads exactly what you tell it (no precaching), there is no I/O scheduler, our VM only maps in 4K on a page fault, etc.
> > Got it. Are we smart enough to reorder accesses? Say the head is at position 2 and we ask for position 5, and only a few ms later another thread requests an intermediate position 3. Are we smart enough to read 3 before reading 5? 2 -> 3 -> 5 is faster than 2 -> 5 -> 3.
> Well, that would be the task of the aforementioned I/O scheduler - we currently don't do this.
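For illustration, the reordering described above is basically an "elevator" pass over the pending requests: sweep in the direction of head travel first instead of serving requests in arrival order. A minimal sketch (not Haiku code - the function name and block numbers are made up):

```python
# Hypothetical elevator-style reordering: given the current head position
# and a list of pending block requests, service everything ahead of the
# head in ascending order first, then sweep back for the rest.
def elevator_order(head, pending):
    """Return pending block requests in one-directional sweep order."""
    ahead = sorted(p for p in pending if p >= head)               # on the way up
    behind = sorted((p for p in pending if p < head), reverse=True)  # on the way back
    return ahead + behind

# Head at block 2; the request for 5 arrived before the one for 3,
# but 2 -> 3 -> 5 is the shorter path.
print(elevator_order(2, [5, 3]))  # [3, 5]
```

Real schedulers also have to bound starvation (a request far behind the head must not wait forever), which this sketch ignores.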
From what I know, today's HDDs do this internally (e.g. via Native Command Queuing). And since we don't have any clue about a drive's internal geometry, we can't make a perfect scheduler. But it could still help a little, I guess.
> > > BTW the interface speed does not really determine the speed of a hard drive. If you have a 4 ms access, 10000 rpm drive, then you have a fast drive. If you have an 8 ms access, 7200 rpm drive, you have a standard drive.
> > I am aware of that, but the interface was the only information I had offhand. Anyway, the HD is a Samsung SpinPoint F1 HD103UJ, 1 TB. It is only 7200 RPM, but it is one of the fastest HDs around. It gets burst rates of up to 204.5 MB/s. Random access time is high (13.7 ms), but that is offset by the high average read speed (96.8 MB/s) and write speed (84.4 MB/s).
> For compiling, almost only the access time matters, not the maximum throughput. Especially if you are using a dumb OS that has no I/O scheduler :-))
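A quick back-of-the-envelope check of why access time dominates a compile: reading many small source files costs roughly one seek each, while the actual data transfer is tiny. The drive figures are the SpinPoint F1 numbers quoted above; the file count and size are made-up illustrative values:

```python
# Assumed build workload (illustrative numbers, not measured):
files = 2000        # small source/header files touched by a build
avg_size_kb = 8     # average file size

# SpinPoint F1 figures quoted above:
seek_ms = 13.7      # random access time
read_mb_s = 96.8    # average sequential read speed

seek_total_s = files * seek_ms / 1000.0
transfer_total_s = files * avg_size_kb / 1024.0 / read_mb_s

print(f"seeks: {seek_total_s:.1f} s, transfer: {transfer_total_s:.2f} s")
# Seeks cost tens of seconds; the transfers themselves well under a second.
```

So even a drive with half the sequential throughput but better access time would likely win for this workload.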
I wonder how it would go on my 15k RPM Cheetah... too bad we don't have any SCSI adapter drivers :-)