About the only reason I have, in the environment I work in, is that we copy a lot of database files between servers for test databases and such. Some of these copies occur over a WAN, and if a 20 GB file gets corrupted in the 19th GB (thereby necessitating another full 20 GB copy), it can start to chew up some clock time. With ten 2 GB files, only another 2 GB needs to be re-copied. I suppose the same would hold true when recovering a datafile/tablespace. That said, I also have many larger datafiles (10-30 GB) in some DBs and have not had any problems. Environment is Windows (sigh), 8i, 9i, 10g.

-joe
http://www.peaceaday.com

> I don't see any need to limit the datafile size to 2GB anymore. Anyone
> else?
>
> TIA,
> Rich
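To make the re-copy arithmetic concrete, below is a minimal sketch (not from the post itself) of a checksum-then-retry copy loop: with per-file checksums, only the files that arrive corrupted need to be re-sent, so splitting one 20 GB datafile into ten 2 GB files caps the retry cost at 2 GB. The directory paths, the `*.dbf` glob, and the `sync_datafiles` helper are all hypothetical.

import hashlib
import shutil
from pathlib import Path

def md5sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the MD5 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def sync_datafiles(src_dir: Path, dst_dir: Path) -> None:
    """Copy each datafile, skipping any that already arrived intact,
    so a corrupted WAN transfer only costs one file's re-copy."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(src_dir.glob("*.dbf")):
        dst = dst_dir / src.name
        # Already present and checksum matches: nothing to re-send.
        if dst.exists() and md5sum(dst) == md5sum(src):
            continue
        shutil.copy2(src, dst)
        if md5sum(dst) != md5sum(src):
            # Only this one file needs another pass, not the whole set.
            raise IOError(f"{src.name} corrupted in transit; retry this file")

With ten 2 GB files instead of one 20 GB file, a corruption near the end of the transfer triggers at most a 2 GB re-copy rather than starting the full 20 GB over.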