RE: Choosing data file size for a multi TB database?

  • From: "Hameed, Amir" <Amir.Hameed@xxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 2 Sep 2005 19:47:51 -0400

On very large data files sitting on a buffered filesystem, wouldn't the
single-writer lock cause foreground processes (that are trying to read
data) to wait while DBWR is checkpointing?
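As a rough back-of-the-envelope model (my own simplification, nothing
measured: reads are assumed uniform across datafiles, and a reader is
assumed to block only while the write lock on its own file is held),
this is how spreading the same data over more files would dilute that
collision:

def blocked_read_fraction(n_datafiles, checkpoint_write_seconds,
                          checkpoint_interval_seconds):
    # Fraction of the checkpoint interval during which *some* file is
    # being flushed by DBWR.
    busy = min(checkpoint_write_seconds / checkpoint_interval_seconds, 1.0)
    # A uniformly targeted read only collides with the one file whose
    # single-writer lock is held at that moment.
    return busy / n_datafiles

for n in (2, 10, 200, 2000):
    f = blocked_read_fraction(n, checkpoint_write_seconds=60,
                              checkpoint_interval_seconds=300)
    print(f"{n:>5} datafiles -> ~{f:.4%} of reads hit a write-locked file")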
 
________________________________

From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Branimir Petrovic
Sent: Friday, September 02, 2005 6:54 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: RE: Choosing data file size for a multi TB database?



        What about checkpoints against tens of thousands of data files;
        does the more-the-merrier rule still hold there? For that reason
        (or due to a fear factor) I was under the, maybe false, impression
        that a smaller number (in the hundreds) of relatively larger data
        files (20 GB or so) might be the better choice.
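        For a feel of the scale behind that fear (purely illustrative
        arithmetic; the 5 ms per header write is an assumption of mine,
        not a measurement), a full checkpoint has to touch the header of
        every datafile:

def header_update_seconds(n_datafiles, ms_per_header=5.0):
    # Rough serial time to rewrite every datafile header once at a
    # full checkpoint.
    return n_datafiles * ms_per_header / 1000.0

for n in (200, 2000, 20000):
    print(f"{n:>6} datafiles -> ~{header_update_seconds(n):.0f} s of "
          "header writes per full checkpoint (if done serially)")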
         
        Another very real problem with a 10 TB database that I can easily
        foresee, but for which I do not know a proper solution, is how one
        would go about the business of regularly verifying taped backup
        sets. Have another humongous set of hardware just for that
        purpose? Fully trust the rust? (i.e. examine backup logs and never
        try restoring, or...) What do people do to ensure multi-TB monster
        databases are surely and truly safe and restorable/rebuildable?
         
         
        Branimir

                -----Original Message-----
                From: Tim Gorman [mailto:tim@xxxxxxxxx]
                Sent: Friday, September 02, 2005 5:59 PM
                To: oracle-l@xxxxxxxxxxxxx
                Subject: Re: Choosing data file size for a multi TB
database?
                
                
                Datafile sizing has the greatest regular impact on backups
                and restores.  Given a large multi-processor server with
                16 tape drives available, which would do a full backup or
                full restore fastest?
                
                

                *       a 10-Tbyte database comprised of two 5-Tbyte datafiles?
                *       a 10-Tbyte database comprised of ten 1-Tbyte datafiles?
                *       a 10-Tbyte database comprised of two-hundred 50-Gbyte datafiles?
                *       a 10-Tbyte database comprised of two-thousand 5-Gbyte datafiles?
                        

                
                Be sure to consider what type of backup media you are
                using, how much concurrency you will be using, and the
                throughput of each device.
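                To put rough numbers on it (my own illustrative
                assumptions: roughly 100 MB/s per tape drive, one datafile
                per drive at a time, no multiplexing), a quick sketch of
                the elapsed time for each layout:

def backup_hours(db_tb, n_datafiles, tape_drives, mb_per_sec_per_drive=100.0):
    # Elapsed hours if each equal-sized datafile is streamed by a single
    # drive and a drive picks up the next file as soon as it finishes.
    file_gb = db_tb * 1024 / n_datafiles
    file_hours = (file_gb * 1024 / mb_per_sec_per_drive) / 3600
    waves = -(-n_datafiles // tape_drives)   # ceiling division
    return waves * file_hours

for n in (2, 10, 200, 2000):
    print(f"{n:>5} datafiles on 16 drives -> ~{backup_hours(10, n, 16):.1f} h elapsed")

                With only two huge files, fourteen of the sixteen drives
                sit idle and the job runs at two drives' worth of
                throughput; once there are a few hundred files, aggregate
                drive throughput rather than file count becomes the limit.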
                
                There is nothing "unmanageable" about hundreds or
                thousands of datafiles; I don't know why that's cited as a
                concern.  Oracle 8.0 and above has a limit of 65,533
                datafiles per database, but otherwise large numbers of
                files are not something to be concerned about.  Heck, the
                average distribution of a Java-based application is
                comprised of 42 million directories and files and nobody
                ever worries about it...
                
                
