>>> I don't know about OS issues, but we also stick to a maximum 2GB datafile size, for a different reason. Almost all our servers (Linux and HP-UX) are on one of our SANs, and we've found that the OSs tend to give each mount point roughly the same fraction of I/O bandwidth to the SAN. This was startlingly obvious when, at the recommendation of one of our SysAdmins, we put a production database entirely under one mount point and its performance suffered greatly (no detailed analysis; it was just obvious). When we spread that same database across multiple mount points on the same server and SAN, performance improved dramatically.

That is not so weird, but you didn't say which OS the affected database was on. There certainly have been, and still are, OS/HBA combinations that limit queue depth on a per-LUN basis; whether that is what you hit depends on the OS and the HBAs.

Personally, I think it would make more sense in that SPECIFIC case to build a RAID 1+0 volume out of those many 2GB chunks, put an Oracle-optimized filesystem on it, and use whatever Oracle file sizes you choose, as opposed to limiting Oracle's file sizes to suit the peculiarities of your SPECIFIC SAN setup. SPECIFIC is in caps on purpose: there is way too much folklore out there.

--
//www.freelists.org/webpage/oracle-l
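The per-LUN queue-depth point can be made concrete with a bit of illustrative arithmetic. The depth value and LUN counts below are hypothetical examples, not measurements from the post; the sketch only shows why spreading datafiles across more LUNs raises the ceiling on concurrent I/O:

```python
# Illustrative sketch: if the OS/HBA caps queue depth per LUN, the total
# number of I/Os the host can keep in flight scales with the LUN count.
PER_LUN_QUEUE_DEPTH = 32  # hypothetical per-LUN limit imposed by the OS/HBA


def max_outstanding_ios(num_luns: int, per_lun_depth: int = PER_LUN_QUEUE_DEPTH) -> int:
    """Upper bound on I/Os the host can have outstanding at once."""
    return num_luns * per_lun_depth


# One mount point on one LUN vs. the same database spread over eight LUNs:
print(max_outstanding_ios(1))  # 32 outstanding I/Os at most
print(max_outstanding_ios(8))  # 256 outstanding I/Os at most
```

With a single mount point the whole database queues behind one 32-deep limit; eight LUNs multiply that ceiling eightfold, which is consistent with the dramatic improvement described above.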