Re: 20 TB Bigfile

  • From: "Mladen Gogala" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "mgogala@xxxxxxxxx" for DMARC)
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 10 Mar 2015 10:58:41 -0400

On 03/10/2015 08:23 AM, Keith Moore wrote:
We have a client who is moving a SAP database from Solaris (non-ASM) to Linux
(ASM). This database is around 21 TB but 20 TB of that is a single tablespace
with 1023 data files (yes, the maximum limit).

On the new architecture we are considering using a single 20 TB Bigfile
tablespace. Does anyone know of any negatives to doing that? Bugs?
Performance? Other?

Moving the data into multiple tablespaces will not be an option.

Thanks
Keith Moore




Well, there is a completely unimportant matter of backup. A single bigfile tablespace means that if anything goes wrong, you have to restore 20TB. If a file goes bonkers in a tablespace with multiple 32GB files (assuming an 8KB db block size), you only have to restore 32GB, and there is a slight difference in speed between restoring 20TB and restoring 32GB. You can still achieve the speed by allocating a single LUN for all of the DB files and then using storage snapshots to back up that gigantic LUN.

In my opinion, it would be best to create a 16KB block size tablespace, which would allow 64GB per file and thus a 64TB maximum size, and then move the tables into that big tablespace. I would avoid bigfile like the plague.

I do backup for a living, and the fastest speed I have seen so far is 4TB/hr, which we could mark with "c". Assuming your backup speed matches "c", it would still take you 5 hours to back up 20TB and 5 hours to restore it. It all depends on the cost of 1 hour of downtime. And matching "c" is no small feat: the "c" was matched by a company that has a full-rack Exadata X5 where every DB node has a 10Gb connection to the world.
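
For what it is worth, a minimal sketch of what the 16KB block size route could look like, assuming the database is already on ASM. The names here (the DATA16K tablespace, SAPSR3.BIG_TABLE and its index, the cache size, the file number) are made up for illustration; a non-default block size needs its own buffer cache configured first:

    -- Buffer cache for the non-default 16KB block size (the size is only an example)
    ALTER SYSTEM SET db_16k_cache_size = 1G SCOPE=BOTH;

    -- Smallfile tablespace with 16KB blocks: each datafile can grow to
    -- roughly 4M blocks * 16KB = 64GB, and 1023 such files give about 64TB.
    CREATE TABLESPACE data16k
      DATAFILE '+DATA' SIZE 32G AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED
      BLOCKSIZE 16K
      EXTENT MANAGEMENT LOCAL AUTOALLOCATE
      SEGMENT SPACE MANAGEMENT AUTO;

    -- Move a table (and rebuild its indexes) into the new tablespace.
    ALTER TABLE sapsr3.big_table MOVE TABLESPACE data16k;
    ALTER INDEX sapsr3.big_table_pk REBUILD TABLESPACE data16k;

The restore granularity argument is simply that with smallfile datafiles RMAN can put back just the damaged file, along the lines of:

    RMAN> SQL 'ALTER DATABASE DATAFILE 42 OFFLINE';
    RMAN> RESTORE DATAFILE 42;
    RMAN> RECOVER DATAFILE 42;
    RMAN> SQL 'ALTER DATABASE DATAFILE 42 ONLINE';

whereas, as above, a damaged 20TB bigfile means restoring the whole 20TB.
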
Regards,

--
Mladen Gogala
Oracle DBA
http://mgogala.freehostia.com

--
//www.freelists.org/webpage/oracle-l

