RE: Choosing data file size for a multi TB database?

  • From: Zoran Martic <zoran_martic@xxxxxxxxx>
  • To: BranimirP@xxxxxxxx, oracle-l@xxxxxxxxxxxxx
  • Date: Sat, 3 Sep 2005 07:46:32 -0700 (PDT)

Checkpointing depends heavily on having multiple DBWR
processes backed by a good I/O subsystem, one that can
sustain the load, which is usually measured in random
I/O operations rather than raw throughput. It usually
does not matter whether there are 50 or 1000 files; I
believe it depends more on the physical I/O layout and
on how fast the backend disks can service I/O requests
than on whether there are 100 or 1000 files in the
database.
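
To put rough numbers on that (the buffer count and the
IOPS figure below are made-up assumptions, just to show
the shape of the arithmetic):

    # Back-of-the-envelope checkpoint cost; every number here is an assumption.
    dirty_buffers = 500_000   # buffers DBWR must flush for the checkpoint
    random_iops   = 5_000     # sustained random-write IOPS of the I/O subsystem

    for files in (50, 100, 1000):
        data_writes   = dirty_buffers      # one random write per dirty buffer
        header_writes = files              # plus one file-header update per datafile
        seconds = (data_writes + header_writes) / random_iops
        print(f"{files:5d} files -> ~{seconds:6.1f} s "
              f"(headers are {header_writes / data_writes:.2%} of the writes)")

Whatever the real numbers are, the per-file overhead is
noise next to flushing the dirty buffers.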

Huge databases use various ways to protect themselves.
Those with money usually have some serious disk-based
backup strategy, which is in any case the fastest way
to back up and recover. Very important databases are
usually also covered by a good disaster-recovery
standby or some kind of replication.

Regards,
Zoran




--- Branimir Petrovic <BranimirP@xxxxxxxx> wrote:

> What about checkpointing against tens of thousands of data files?
> Does the more-the-merrier rule still hold there? For that reason (or
> maybe due to a fear factor) I was under the, possibly false,
> impression that a smaller number (in the hundreds) of relatively
> larger data files (20 GB or so) might be the better choice.
>  
> The other very real problem with a 10 TB database that I can easily
> foresee, but for which I do not know a proper solution, is how one
> would go about the business of regularly verifying taped backup
> sets. Have another humongous set of hardware just for that purpose?
> Fully trust the rust? (i.e. examine backup logs and never try
> restoring, or...) What do people do to ensure multi-TB monster
> databases are surely and truly safe and restorable/rebuildable?
>  
>  
> Branimir
> 
> -----Original Message-----
> From: Tim Gorman [mailto:tim@xxxxxxxxx]
> Sent: Friday, September 02, 2005 5:59 PM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Re: Choosing data file size for a multi TB
> database?
> 
> 
> Datafile sizing has the greatest regular impact on
> backups and restores.
> Given a large multi-processor server with 16 tape
> drives available, which
> would do a full backup or full restore fastest?
> 
> 
> 
> *     a 10-Tbyte database comprised of two 5-Tbyte datafiles
> 
> *     a 10-Tbyte database comprised of ten 1-Tbyte datafiles
> 
> *     a 10-Tbyte database comprised of two-hundred 50-Gbyte datafiles
> 
> *     a 10-Tbyte database comprised of two-thousand 5-Gbyte datafiles?
> 
> 
> 
> Be sure to consider what type of backup media you are using, how
> much concurrency you will be using, and the throughput of each
> device.
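
[To see roughly where those numbers lead: a quick sketch, assuming
each datafile is restored by a single stream and each tape drive
sustains about 50 MB/s. Both figures are illustrative guesses, not
measurements, and the batch model below is deliberately crude.]

    import math

    # Rough full-restore estimate for a 10 TB database; every figure is an assumption.
    TAPE_DRIVES = 16
    MB_PER_SEC_PER_DRIVE = 50          # assumed sustained throughput of one drive
    scenarios = {                      # number of datafiles -> size of each file in MB
        2:    5 * 1024 * 1024,         # two 5-Tbyte datafiles
        10:   1 * 1024 * 1024,         # ten 1-Tbyte datafiles
        200:  50 * 1024,               # two-hundred 50-Gbyte datafiles
        2000: 5 * 1024,                # two-thousand 5-Gbyte datafiles
    }

    for files, file_mb in scenarios.items():
        streams = min(files, TAPE_DRIVES)     # one restore stream per datafile
        waves = math.ceil(files / streams)    # crude model: restore files in batches
        hours = waves * file_mb / MB_PER_SEC_PER_DRIVE / 3600
        print(f"{files:5d} files of {file_mb / 1024:6.0f} GB each -> "
              f"{streams:2d} drives busy, ~{hours:5.1f} h")

With those assumptions the two 5-Tbyte files can keep only two of the
sixteen drives busy, while the 50-Gbyte and 5-Gbyte layouts keep all
sixteen working and finish several times faster.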
> 
> There is nothing "unmanageable" about hundreds or thousands of
> datafiles; I don't know why that's cited as a concern. Oracle 8.0
> and above has a limit of 65,535 datafiles per database, but
> otherwise large numbers of files are not something to be concerned
> about. Heck, the average distribution of a Java-based application
> is comprised of 42 million directories and files and nobody ever
> worries about it...
> 
> 
> 
> 


