Re: Dimensioning datafiles (smaller pieces, medium big ones ?)

  • From: "Peter Hitchman" <pjhoraclel@xxxxxxxxx>
  • To: oracle-l <oracle-l@xxxxxxxxxxxxx>
  • Date: Sat, 16 Feb 2008 16:39:03 +0000

Hi,
This is a question I am thinking about too.
My set-up is Oracle 11.1.0.6.0, OEL 4 64bit, RAC, Storage on EMC SAN mostly
hardware RAID 5.
One of the things that bothers me about going with large files (and given the
size you mention, this means a bigfile tablespace) is whether the backup
software will be able to handle it. In my case this is Tivoli Storage
Manager. I will be backing up with RMAN most of the time, so in that
respect it does not matter, but what if for some reason I need to do a
non-RMAN backup and then find that TSM does not handle files over 2Gb properly,
even on a 64-bit system!
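For reference, a bigfile tablespace has to be declared explicitly. A minimal sketch (the tablespace name, disk group, and sizes here are illustrative, not from the thread):

```sql
-- Hypothetical bigfile tablespace: exactly one datafile, which with
-- an 8 KB block size can grow to 32 TB -- so any per-file size limit
-- in the backup tool applies to the whole tablespace.
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '+DATA' SIZE 30G
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```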
On the other hand, I have experience of an 8.1.7.4 database that had nearly
6000 datafiles, and that had a noticeable impact on the efficiency of certain
v$ views.

Somehow I think I will be taking a middle path, given that I will be
partitioning most of the tables in the schema using hash and range/interval
partitioning. I am also using ASM, and I have wondered whether having a
small number of large files might prevent it from spreading the data in the
most effective way, but at the moment I know very little about ASM.
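A middle-path sketch, with made-up names and sizes: an ordinary (smallfile) tablespace built from a handful of moderate datafiles in an ASM disk group, rather than one bigfile or thousands of tiny files:

```sql
-- Hypothetical "middle path": a smallfile tablespace with a few
-- moderately sized, autoextending datafiles in the +DATA disk group.
CREATE TABLESPACE part_data
  DATAFILE '+DATA' SIZE 8G AUTOEXTEND ON NEXT 512M MAXSIZE 16G,
           '+DATA' SIZE 8G AUTOEXTEND ON NEXT 512M MAXSIZE 16G,
           '+DATA' SIZE 8G AUTOEXTEND ON NEXT 512M MAXSIZE 16G;
```

On the spreading concern: ASM stripes each file across all disks in the disk group at the allocation-unit level, so the number of datafiles should not, by itself, change how evenly the data is spread.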

Regards

Pete

On Feb 16, 2008 4:03 PM, Finn Jorgensen <finn.oracledba@xxxxxxxxx> wrote:

> Ricardo,
>
> Personally I like to split tables and indexes into different tablespaces
> and also split large objects from small objects into separate tablespaces.
> For the large objects I might go to a uniform extent allocation policy and
> for the small I would typically use autoallocate.
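> As a sketch of that split (tablespace names and file paths are made up):
>
> ```sql
> -- Uniform extents for large, steadily growing objects:
> CREATE TABLESPACE big_objects
>   DATAFILE '/u01/oradata/big01.dbf' SIZE 10G
>   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;
>
> -- System-chosen extent sizes for small objects:
> CREATE TABLESPACE small_objects
>   DATAFILE '/u01/oradata/small01.dbf' SIZE 2G
>   EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
> ```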
>
> Finn
>
> On Feb 15, 2008 10:00 AM, Ricardo Santos <saints.richard@xxxxxxxxx> wrote:
>
> >
> > Hello to you all,
> >
> > I would like to ask for advice on how I should dimension datafiles when I
> > already know how much space is going to be occupied by the objects in the
> > tablespace to which the datafile(s) belong.
> >
> >
> >
> > I'm going to create a new tablespace for table objects that I already
> > know are going to occupy 24.6 Gb, with a tendency to grow
> > relatively fast (0.5 Gb per month). These objects are going to be imported
> > into the new system.
> >
> > My question is: should I create one big datafile (let's say 30Gb) to
> > contain all the tables, or should I organize things into smaller datafiles?
> > What would be an optimal size, if there is an answer to that question?
> >
> >
> >
> > My preference and feeling goes toward having fewer pieces to manage and
> > handle, but I don't want to run into performance problems due to the
> > operating system handling large files.
> >
> >
> >
> > Here's some technical information about the environment:
> >
> > Database version: 10.2.0.3 – 64-bit
> >
> > OS: RedHat 4 Update 6 – 64-bit
> >
> > Disks: internal disks with a total size of 400Gb, formatted as RAID 10
> > and organized into an LVM volume group with several logical volumes.
> >
> >
> >
> > Thanks for all your attention.
> >
> >
> >
> > Best regards,
> > Ricardo Santos.
> >
> >
>
>


