Re: 2GB or not 2GB (datafile limit)? That is the question.

  • From: Nuno Souto <dbvision@xxxxxxxxxxxx>
  • Date: Sun, 10 Sep 2006 03:04:46 +1000

Kevin Closson wrote, on my timestamp of 10/09/2006 1:16 AM:

The "largest number of Oracle blocks per file" limit still applies. That's around 30GB with 8K block sizes.

^^^^^^^^^^^^ might as well be precise. It is (2^21) - 1 Oracle blocks per datafile, which with an 8KB block is 8K short of 16GB.

Actually, as I said, to be precise it is around
30GB.  Either that, or it's pure imagination that my 9i
dbs are running on multiple 28GB files!

See here:

The limit is 2^22 - 1, not 2^21 - 1.
The reason I say "around" is that I'm not game to
argue the last GB with Oracle calculations or be caught
in a boundary bug of some undocumented sort.

Just like the many we had with the 2GB limit a few years
ago, where it was pot luck which bug you'd hit first:
a doco one or a software one...
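For anyone who wants to check the arithmetic rather than take either of us on faith, a quick sketch (Python here purely for illustration, using the 8KB block size from the example above):

```python
# Smallfile datafile limit: (2^22 - 1) Oracle blocks per datafile.
BLOCK_SIZE = 8 * 1024        # 8KB Oracle block size
MAX_BLOCKS = 2**22 - 1       # 4,194,303 blocks

max_bytes = MAX_BLOCKS * BLOCK_SIZE
print(max_bytes)                     # 34359730176 bytes
print(32 * 2**30 - max_bytes)        # 8192 -> exactly one 8KB block short of 32GB
```

So with 8KB blocks the ceiling is 8KB short of 32GB, which is why real-world 28GB datafiles are perfectly legal, and why "around 30GB" is a fair shorthand.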

A 10g BIGFILE tablespace supports 2^32 blocks which is 32TB
for a single file tablespace...they are limited to 1 file per
tablespace.

That, I think, is its only problem. There may well be constraints in backing up files of that size. I'd have much preferred that Oracle removed the "one file per tablespace" restriction.
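The BIGFILE figure checks out the same way (again just a sketch, assuming the same 8KB block size):

```python
# BIGFILE tablespace limit: 2^32 blocks in its single datafile.
BLOCK_SIZE = 8 * 1024        # 8KB Oracle block size
BIGFILE_BLOCKS = 2**32

size_bytes = BIGFILE_BLOCKS * BLOCK_SIZE   # 2^45 bytes
print(size_bytes // 2**40)                 # 32 -> 32TB in one file
```

Note the limit scales with block size: a 32KB block would take the same 2^32 blocks to 128TB.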

The whole topic is noise really, modern OSes handle extremely large
numbers of file descriptors without issue... so choose what makes
sense...  our QA tests here include OLTP stress tests with databases
that consist of 16383 datafiles "evenly" distributed throughout
512 filesystems...just for bounds testing...

Not really.  The number of files can still very much be an issue.
Either one goes for one file per tablespace in 10g, and one had better
have the right hardware/software combo to back up multi-TB single
files to tape, or one had better be prepared to nurture a bucket
load of files and their naming. Again: this applies only to very large dbs.
There is no middle ground, which is what I'd much prefer.

Nuno Souto
in rainy Sydney, Australia
