Re: 20 TB Bigfile

  • From: "Mladen Gogala" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "mgogala@xxxxxxxxx" for DMARC)
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 10 Mar 2015 15:47:04 -0400

Hi Keith,
Stefan has mentioned multi-section backups, which will speed up your backup, though not beyond the aggregate throughput of the allocated channels. A better question is: why would you want to go with bigfile tablespaces at all? What would be the advantage? By increasing the block size to 16K, you can have 1023 data files of 64 GB each in a single smallfile tablespace, roughly 64 TB. Do you really need to go above that?
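For reference, a multi-section backup along the lines Stefan describes below might look like this (a sketch only; the channel count, section size, and tablespace name SAPDATA are placeholders, not anything from Keith's environment):

```
RUN {
  -- One channel per backup stream; RMAN hands sections to
  -- whichever channel is free.
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c4 DEVICE TYPE DISK;
  -- SECTION SIZE (11g and later) splits the single 20 TB bigfile
  -- data file into independent pieces backed up in parallel.
  BACKUP SECTION SIZE 256G TABLESPACE sapdata;
}
```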



On 03/10/2015 03:26 PM, Keith Moore wrote:
Hi,

Sorry, I meant to include the version. It is version 11.2.0.3.6.

The strategy is to never restore from backups except as a last resort. We will
failover to the DR database if possible. If not, then restore from a storage
level snapshot of production and archive logs.

If the failure scenario is such that neither of those is feasible, we would
restore from the RMAN backup. In that case, a full restore will be slow
whether it's a single big file or many smaller files. The only case where it
would make a difference is if we had a small file tablespace and only needed
to restore a single data file.

Keith
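For the smallfile case Keith mentions, the single-file restore is the straightforward one; a sketch in RMAN, with a placeholder file number:

```
RUN {
  -- Take only the damaged file offline; the rest of the
  -- database stays available while it is repaired.
  SQL 'ALTER DATABASE DATAFILE 42 OFFLINE';
  RESTORE DATAFILE 42;
  RECOVER DATAFILE 42;
  SQL 'ALTER DATABASE DATAFILE 42 ONLINE';
}
```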


        Hi Keith,

        Unfortunately you have not mentioned the database version. However, as this
        database is related to SAP: bigfile tablespaces are supported with 11.2.0.2
        (or newer) and BR*Tools 7.20 Patch Level 20 (or newer). Please check
        SAPnote #1644762 for more details. The max size limit with 1023 data files
        would be 32 TB with the Oracle/SAP 8 KB block size requirement (1023 files
        x 32 GB per 8 KB-block data file) - so you still have plenty of space to go
        in your scenario.

        You have to export/import with R3load (and/or Distribution Monitor) in the
        SAP standard scenario as you migrate from Solaris (assuming SPARC, Big
        Endian) to Linux (Little Endian). In this scenario it is very easy to split
        the objects into several tablespaces through modifications of the R3load
        files.

        However, the backup & restore scenario is critical if you still want to go
        with bigfile tablespaces. You can use RMAN (or have to, in the case of ASM)
        with multi-section backups to parallelize the backups.

        Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK
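If the bigfile route is taken on the new ASM setup, creation and verification might look like this (a sketch only; the tablespace name SAPDATA, the disk group +DATA, and the sizes are placeholders):

```
-- A bigfile tablespace has exactly one data file; on ASM with
-- autoextend it can grow well past the smallfile per-file limits.
CREATE BIGFILE TABLESPACE sapdata
  DATAFILE '+DATA' SIZE 1T
  AUTOEXTEND ON NEXT 32G MAXSIZE 20T;

-- Confirm the tablespace was created as bigfile.
SELECT tablespace_name, bigfile
  FROM dba_tablespaces
 WHERE tablespace_name = 'SAPDATA';
```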
> Keith Moore <kmoore@xxxxxxxxxxxx> wrote on 10 March 2015 at 13:23:
>
> We have a client who is moving a SAP database from Solaris (non-ASM) to Linux
> (ASM). This database is around 21 TB, but 20 TB of that is a single tablespace
> with 1023 data files (yes, the maximum limit).
>
> On the new architecture we are considering using a single 20 TB bigfile
> tablespace. Does anyone know of any negatives to doing that? Bugs?
> Performance? Other?
>
> Moving the data into multiple tablespaces will not be an option.
>
> Thanks
> Keith Moore

--
//www.freelists.org/webpage/oracle-l





--
Mladen Gogala
Oracle DBA
http://mgogala.freehostia.com


