>>> I don't know about OS issues, but we also stick to a max 2GB
>>> datafile size for a different reason. Almost all our servers
>>> (Linux and HPUX) are on one of our SANs and we've found that the
>>> OSs tend to give each mount point the same (on average) fraction
>>> of I/O bandwidth to the SANs.

Note, I am aware of no OS that arbitrarily limits I/O in flight on a per-mount basis. It is most generally a per-LUN limit, in which case you could slice your LUN extremely thin, create a bunch of filesystems on it, mount them, and still have no more throughput than a single LUN. This is evident in a lot of HBA drivers out there ... but the number is shrinking (or at least the queue depths are getting so deep you won't notice).

--
//www.freelists.org/webpage/oracle-l
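
The point about slicing a LUN into many filesystems can be illustrated with a toy model (the queue depth and demand numbers below are hypothetical, not taken from any real HBA driver):

```python
# Toy model: outstanding I/Os are capped per LUN, not per mount point.
# Carving one LUN into several filesystems adds no queue slots.

LUN_QUEUE_DEPTH = 32  # hypothetical per-LUN queue depth set by the HBA driver

def in_flight(per_fs_demand):
    """Total I/Os the LUN accepts, given each filesystem's outstanding demand."""
    return min(sum(per_fs_demand), LUN_QUEUE_DEPTH)

# One filesystem issuing 64 requests vs. four filesystems issuing 16 each:
print(in_flight([64]))              # capped at 32
print(in_flight([16, 16, 16, 16]))  # still capped at 32 -- no gain from slicing
```

Either way the LUN's queue is the bottleneck, which is why more mounts on the same LUN buy no extra throughput.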