Hello,

I did some speed tests to at least have a clue whether to use the virtual UAE filesystem or a hardfile with SFS. These are the results.

These are the raw hdparm values:

-------------------------------------------------------------------------
deepdance:/home/martin# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   860 MB in  2.01 seconds = 428.57 MB/sec
 Timing buffered disk reads:   92 MB in  3.01 seconds =  30.55 MB/sec
deepdance:/home/martin#
-------------------------------------------------------------------------

And these are some bonnie++ values (better viewed with a fixed-width font ;-)):

-------------------------------------------------------------------------
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
deepdance        1G           25678  15 10460   6           21421   6 109.2   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  1362  19 +++++ +++  1496  19  1280  20 +++++ +++   404   5
-------------------------------------------------------------------------

If I interpret this correctly, it created 1362 files/second and deleted 1496 files/second in the sequential test, and created 1280 files/second and deleted 404 files/second in the random test. I don't know why it didn't report figures for the read tests; as far as I know the "+++++" entries just mean the operation finished too fast for bonnie++ to measure reliably, probably because in the standard setting it tests with 0-byte files.

Now let's see how much of this performance comes through to the emulated Amiga.
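By the way, for the hardfile tests below the empty .hdf image has to exist on the Linux side first. A minimal sketch of one way to create a blank 16 MB hardfile with dd (the file name is just an example; the image still has to be added as a hardfile in the UAE config and formatted with SFS from inside the emulation):

```shell
# Create a blank 16 MB image to use as a UAE hardfile.
# "sfs16.hdf" is an example name, not the one I actually used.
dd if=/dev/zero of=sfs16.hdf bs=1M count=16

# Sanity check: the image should be exactly 16 MiB (16777216 bytes).
ls -l sfs16.hdf
```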
First I tested the virtual UAE filesystem on my Linux home partition, with SGI's XFS as the host filesystem:

-------------------------------------------------------------------------
5.Daten:Smartfilesystem> diskspeed DAT: ALL
Diskspeed 1.3 (20 Apr 2003) - Programmed by John Hendrikx, updated by Jörg Strohmayer

Volume Daten: 19524736 blocks (19067 MB), 1024 bytes per block.
The volume has 0 buffers and the DosType is 0x444f5301.

Creates 1000 files of 128 bytes                      :    15 files/second
Scan all files in a directory                        :    87 files/second
Lock and unlock a random file                        :    46 files/second
Open and close a random file                         :    78 files/second
Load a random file                                   :    30 files/second
Deletes 1000 files                                   :    89 files/second
Random seek/reads of 2048 bytes in 2097152 byte file :    46 times/second
Read data using 65536 byte buffer                    :  5907 kB/second
Write data using 65536 byte buffer                   :  5894 kB/second
5.Daten:Smartfilesystem>
-------------------------------------------------------------------------

This isn't impressive when it comes to directory manipulation speed; reading and writing data is where this scenario does best.

Now the same with a 16 MB SFS hardfile. I admit this might have done better with a larger hardfile:

-------------------------------------------------------------------------
5.Daten:Smartfilesystem> addbuffers msg_0:
msg_0: has 1000 buffers
5.Daten:Smartfilesystem> diskspeed MSG_0: ALL
Diskspeed 1.3 (20 Apr 2003) - Programmed by John Hendrikx, updated by Jörg Strohmayer

Volume Daten: 32765 blocks (15 MB), 512 bytes per block.
The volume has 0 buffers and the DosType is 0x444f5300.
Creates 1000 files of 128 bytes                      :    19 files/second
Scan all files in a directory                        : 22988 files/second
Lock and unlock a random file                        :   273 files/second
Open and close a random file                         :   202 files/second
Load a random file                                   :    57 files/second
Deletes 1000 files                                   :    28 files/second
Random seek/reads of 2048 bytes in 2097152 byte file :    57 times/second
Read data using 65536 byte buffer                    :  5349 kB/second
Write data using 65536 byte buffer                   :  1331 kB/second
-------------------------------------------------------------------------

And now the same with a faked-RDB hardfile with SFS in a real partition:

-------------------------------------------------------------------------
5.System:> addbuffers marc2: 950
marc2: has 1000 buffers
5.System:> diskspeed msg2: all
Diskspeed 1.3 (20 Apr 2003) - Programmed by John Hendrikx, updated by Jörg Strohmayer

Volume System: 1927734 blocks (941 MB), 512 bytes per block.
The volume has 0 buffers and the DosType is 0x444f5300.

Creates 1000 files of 128 bytes                      :    19 files/second
Scan all files in a directory                        : 26679 files/second
Lock and unlock a random file                        :   267 files/second
Open and close a random file                         :   196 files/second
Load a random file                                   :   109 files/second
Deletes 1000 files                                   :    28 files/second
Random seek/reads of 2048 bytes in 2097152 byte file :    55 times/second
Read data using 65536 byte buffer                    :  5546 kB/second
Write data using 65536 byte buffer                   :  5354 kB/second
-------------------------------------------------------------------------

The same thing without a .recycled directory:

-------------------------------------------------------------------------
5.System:> diskspeed msg2: all
Diskspeed 1.3 (20 Apr 2003) - Programmed by John Hendrikx, updated by Jörg Strohmayer

Volume System: 1927734 blocks (941 MB), 512 bytes per block.
The volume has 0 buffers and the DosType is 0x444f5300.
Creates 1000 files of 128 bytes                      :    19 files/second
Scan all files in a directory                        : 25941 files/second
Lock and unlock a random file                        :   277 files/second
Open and close a random file                         :   207 files/second
Load a random file                                   :    80 files/second
Deletes 1000 files                                   :    77 files/second
Random seek/reads of 2048 bytes in 2097152 byte file :    58 times/second
Read data using 65536 byte buffer                    :  5385 kB/second
Write data using 65536 byte buffer                   :  3240 kB/second
-------------------------------------------------------------------------

Hmmm, deleting files got better, but writing data got worse. I repeated the run three times, but it might still be that the 5354 kB/s above was just good luck.

It seems that an SFS hardfile in a real partition is the best choice for a YAM messagebase. Even better might be migrating the whole lot of e-mails to a Linux mail client that doesn't use one file per e-mail and being done with it ;-). I don't think there is any filesystem that scales well to 14000+ files in a directory.

Regards,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de