RE: BINARIES - SAN or Local Storage

  • From: Paul Drake <discgolfdba@xxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Fri, 27 Aug 2004 11:03:30 -0700 (PDT)

--- "Gogala, Mladen" <Mladen.Gogala@xxxxxxxx> wrote:

> It's the same with disk drives: not much
> new technology there. Density is 
> increased, disks are rotating faster, but the seek
> time is still the same.
> 
> --
> Mladen Gogala

http://www.storagereview.com
http://storagereview.com/guide/guide_index.html
http://storagereview.com/articles/200406/20040625TCQ_1.html

 4200 rpm
 5400 rpm
 7200 rpm
10000 rpm
15000 rpm

I'd say that average access time has decreased, wouldn't
you? It includes, on average, half a revolution of
rotational latency on top of the track-to-track
positioning time, and faster spindles cut that latency
directly.

Even track-to-track seek times have decreased somewhat
due to better mechanics (actuators, damping, etc.).
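
A quick back-of-the-envelope on the rotational component
alone (my own sketch, not something from the StorageReview
article):

# Average rotational latency is half a revolution; seek time
# is extra and is not modeled here.
for rpm in (4200, 5400, 7200, 10000, 15000):
    latency_ms = 0.5 * 60000.0 / rpm   # half a rotation, in milliseconds
    print(f"{rpm:6d} rpm -> avg rotational latency {latency_ms:4.2f} ms")

Going from 7200 to 15000 rpm alone shaves roughly 2 ms off
every random I/O, before any seek improvements.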

If a single hard drive can produce as many as 237 IOPS
in a benchmark (cited from the StorageReview link
above), its average access time is well under 5 msec.
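
The arithmetic behind that claim (assuming the 237 IOPS
were sustained, back-to-back random I/Os on one spindle):

# Each I/O takes on average 1000 / IOPS milliseconds of service time.
iops = 237
avg_access_ms = 1000.0 / iops
print(f"{iops} IOPS -> ~{avg_access_ms:.1f} ms average access time")  # ~4.2 ms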

Here is a little snippet from a statspack report.
I don't think raw v$filestat output would format well
in this medium.

db_name changed to protect ...


              Snap Id     Snap Time      Sessions Curs/Sess Comment
              ------- ------------------ -------- --------- -------------------
Begin Snap:      1453 30-Sep-03 11:11:36       56       3.7
  End Snap:      1454 30-Sep-03 11:27:43       86       4.0
   Elapsed:                16.12 (mins)
Tablespace IO Stats for DB: mydb  Instance: mydb  Snaps: 1453 -1454
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
                 Av      Av     Av                    Av        Buffer Av Buf
         Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
USER_DATA_LARGE
        37,122      38    1.6     1.0            0        0          0    0.0
INDEX_DATA_LARGE
         7,467       8    1.2     1.0            0        0          0    0.0
USER_DATA
         3,527       4    3.0     1.9            0        0          0    0.0
INDEX_DATA
         2,480       3    4.0     1.1            0        0          0    0.0

Mladen, I'd be happy to provide you with the complete
layout and v$filestat history of this database and
server offline.
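
For anyone wanting to reproduce the Av Rd(ms) column from
v$filestat history themselves, here is a rough sketch of
the delta math (the helper and the sample numbers are mine,
not from the system above; READTIM is in centiseconds):

# Hypothetical helper: given two v$filestat samples (cumulative
# counters keyed by file#), compute average read time in ms.
def avg_read_ms(snap_begin, snap_end):
    """Each snap maps file# -> (phyrds, readtim_centiseconds)."""
    result = {}
    for file_no, (rds_end, tim_end) in snap_end.items():
        rds_begin, tim_begin = snap_begin.get(file_no, (0, 0))
        delta_reads = rds_end - rds_begin
        delta_tim_cs = tim_end - tim_begin
        if delta_reads > 0:
            result[file_no] = 10.0 * delta_tim_cs / delta_reads  # cs -> ms
    return result

# Example with made-up counter values:
begin = {1: (100000, 20000)}
end   = {1: (137122, 25939)}
print(avg_read_ms(begin, end))   # {1: ~1.6}, i.e. about 1.6 ms per read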

The datafiles were spread over 4 mount points, each a
4-drive RAID 10 volume. Each tablespace had 4 datafiles,
one per mount point. The stripe size was 256 KB, which
matched a full multiblock read: db_file_multiblock_read_count
of 32 with an 8 KB block size.
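
To spell out the "matched" part (the read count of 32 is my
inference from the stated numbers, not a quote from the
init.ora):

# A full multiblock read should line up with one stripe unit,
# so a sequential read lands on a single stripe rather than
# straddling two.
block_size_kb = 8
db_file_multiblock_read_count = 32      # inferred: 256 KB / 8 KB
stripe_kb = 256
assert block_size_kb * db_file_multiblock_read_count == stripe_kb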

Yes, I do see lower average access times on a server
that has a Dell | EMC CLARiiON CX200 connected via a
dual-ported 2 Gbps FC HBA than on the one above, which
had 2 Dell PV220S SCSI units attached through a
quad-channel U2W SCSI RAID controller.

Seeing that the PowerEdge RAID controller only had 128
MB of cache while the CX200 had 1 GB, it wasn't really
a fair comparison.

Would I rather use a CX300 unit than 2 PV220S units
(FibreChannel vs SCSI RAID)? Of course. But they are
at completely different price points.

One can get a 14 drive, split backplane PV220S with
add-in RAID Controller for around $10K USD, and you
don't need an EMC tech to configure the LUNs.

But be prepared for a long burn-in time on the RAID
containers. This reminds me that I have to configure a
PV220S unit today for a 10g / RHEL 3.0 ES database
(hanging off of a Dell PE2650). One channel will be
used for ASM, and the other will hold cooked filesystems
on controller-managed RAID "containers". Too bad they
aren't the same drives, or I'd actually be able to
perform some meaningful comparisons.
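
If the drives were comparable, a crude way to compare the
two channels would be a random-read timing loop against a
test file on each (just a sketch of mine; the paths, sizes,
and the lack of O_DIRECT handling are all assumptions):

import os, random, time

def random_read_bench(path, io_size=8192, num_ios=1000):
    """Time num_ios random reads of io_size bytes; return (avg_ms, iops).
    Without O_DIRECT this measures the OS cache as much as the disk."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    start = time.time()
    for _ in range(num_ios):
        offset = random.randrange(0, max(size - io_size, 1))
        os.lseek(fd, offset, os.SEEK_SET)
        os.read(fd, io_size)
    elapsed = time.time() - start
    os.close(fd)
    return (1000.0 * elapsed / num_ios, num_ios / elapsed)

# e.g. (hypothetical mount points):
# random_read_bench("/u01/asm_channel_testfile")
# random_read_bench("/u02/cooked_fs_testfile")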

Pd
