RE: Orion, NFS, & File Size

  • From: "CRISLER, JON A" <JC1706@xxxxxxx>
  • To: Austin Hackett <hacketta_57@xxxxxx>
  • Date: Tue, 19 Jun 2012 20:27:28 +0000

Understood.  I would still recommend 5 to 10 Orion fake data files, so you get
a better understanding of the performance capabilities of your storage.  Try
creating 5 data files, then run the test a few times, using 1 data file to
start, then 5 (or any number you decide on).  You will see your MBPS and IOPS
go up until you hit the saturation point.  What limits your capacity could be
the OS, the NAS network interface(s), or even the NetApp storage.  With only a
few hosts (or 1 RAC cluster), your bottleneck is probably going to be
something other than the storage.  Also check your latency; ideally you should
see consistent 2-3 ms access times until you start hitting some queuing limit.
Are you using 10 Gb Ethernet?
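
For example, a rough sketch of that ramp-up (paths, file count, and sizes
are illustrative - substitute your own NFS mount; in 11.2 the orion binary
ships in $ORACLE_HOME/bin):

  # create 5 fake data files of 30 GB each
  for i in 1 2 3 4 5; do
      dd if=/dev/zero of=/mnt/oradata/orion_$i.dbf bs=1M count=30720
  done

  # pass 1: a single file listed in mytest.lun
  echo /mnt/oradata/orion_1.dbf > mytest.lun
  orion -run simple -testname mytest -num_disks 1

  # pass 2: all 5 files, with the load hint scaled to match
  ls /mnt/oradata/orion_*.dbf > mytest.lun
  orion -run simple -testname mytest -num_disks 5

  # compare the mytest_*_summary.txt, *_iops.csv, *_mbps.csv and
  # *_lat.csv output files between passes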

-----Original Message-----
From: Austin Hackett [mailto:hacketta_57@xxxxxx] 
Sent: Tuesday, June 19, 2012 3:49 PM
To: CRISLER, JON A
Cc: Oracle-L@xxxxxxxxxxxxx
Subject: Re: Orion, NFS, & File Size

Hi Jon

Thanks. Maybe I didn't ask my question about the number and size of data files
very clearly. I was referring to the "fake" data files I'll create using dd
and then list in the Orion mytest.lun file. I was just curious about the
reasoning behind your recommendation of 5 - 10 of these files, and wanted to
check that my approach of 1 file per spindle, sized according to the planned
final data file size and reflected in the num_disks parameter, was a sound
methodology.

The database size and file size I mentioned initially were just for
simplicity's sake, to illustrate what I thought the 2 possible interpretations
of the paragraph in the manual were. The size of the actual DB in question and
its data files are different. The new RAC cluster and filer will host an
existing 11.2.0.3.0 RAC DB which is currently on a filer shared with other
DBs. The number of RAC nodes on the old and new clusters is identical. The new
filer has more spindles, faster disks, a bigger cache, etc., and will be
dedicated to just this DB, so I'm happy it hasn't been undersized. The current
config is consistent with the best practices described in the NetApp 11g best
practices TR, and that config will be transferred to the new filer/nodes.
Really, this is just a storage and server hardware refresh exercise, and I
wanted to use Orion to check that the storage performs as expected before
going any further. Maybe in trying to keep my question simple, I
unintentionally made things more complicated! My apologies.

Thanks

Austin

On 19 Jun 2012, at 18:50, CRISLER, JON A wrote:

> The main advantage that I see (and I might be overlooking something)
> with higher numbers of data files comes down to RMAN backup and
> possible parallel query.  More data files and more RMAN streams
> generally give you better performance, compared to a few very large
> data files.  Using the SECTION SIZE feature in RMAN (multisection
> backup) can help in 11g if you have very large data files.
>
> I am concerned that you might not have enough disks overall, but it
> depends on your NetApp model, disk type, cache size, etc.  If you're
> using some NetApp Snapshot product, make sure you follow the best
> practices on where to locate control files, temp files, redo, etc.,
> so your snapshots are most efficient.
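>
> As a rough illustration of multisection backup (tablespace name,
> section size, and parallelism are just examples):
>
>   rman target / <<'EOF'
>   CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
>   # each 30 GB data file is split into ~8 GB sections that are
>   # spread across the four channels
>   BACKUP SECTION SIZE 8G TABLESPACE users;
>   EOF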
>
> -----Original Message-----
> From: Austin Hackett [mailto:hacketta_57@xxxxxxx]
> Sent: Tuesday, June 19, 2012 12:48 PM
> To: CRISLER, JON A; Oracle-L@xxxxxxxxxxxxx
> Subject: Re: Orion, NFS, & File Size
>
> Hi Jon
>
> Many thanks for your response. We're an existing NetApp shop here 
> (although I'm pretty new to the organisation), and are currently doing 
> what you suggest. The failover of the controller wasn't something in 
> my HA testing plan, so thanks for the heads up.
>
> In terms of Orion testing, the current plan after some further
> research today is to create 28 x 30 GB files, and then run the test
> with num_disks = 28. The dedicated data file volume will be on an
> aggregate that consists of 2 RAID-DP groups, each with 16 disks, i.e.
> (2 x 16) - 4 parity = 28. Does that sound like a plan, or do you tend
> to see little value in file counts greater than 10?
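>
> In concrete terms, the plan looks something like this (the mount
> point is illustrative):
>
>   for i in $(seq 1 28); do
>       dd if=/dev/zero of=/mnt/oradata/orion_$i.dbf bs=1M count=30720
>   done
>   ls /mnt/oradata/orion_*.dbf > mytest.lun
>   orion -run simple -testname mytest -num_disks 28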
>
> Thanks
>
> Austin
>
> On 19 Jun 2012, at 16:20, "CRISLER, JON A" <JC1706@xxxxxxx> wrote:
>
>> You should create a number of datafiles at 30 GB size - perhaps 5 or
>> 10.  Orion will kick off multiple threads (you can control the
>> number).  Once you have a DB set up you can also use the CALIBRATE_IO
>> procedure in DBMS_RESOURCE_MANAGER for some easy benchmarks.
>>
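>> A minimal sketch of that call once the DB exists (it needs
>> TIMED_STATISTICS and asynchronous I/O enabled; the num_physical_disks
>> and max_latency values below are placeholders):
>>
>>   sqlplus / as sysdba <<'EOF'
>>   SET SERVEROUTPUT ON
>>   DECLARE
>>     l_iops PLS_INTEGER;
>>     l_mbps PLS_INTEGER;
>>     l_lat  PLS_INTEGER;
>>   BEGIN
>>     DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
>>       num_physical_disks => 28,  -- spindles behind the data volume
>>       max_latency        => 20,  -- ms
>>       max_iops           => l_iops,
>>       max_mbps           => l_mbps,
>>       actual_latency     => l_lat);
>>     DBMS_OUTPUT.PUT_LINE('IOPS=' || l_iops || ' MBPS=' || l_mbps ||
>>                          ' Latency=' || l_lat);
>>   END;
>>   /
>>   EOF
>>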
>> The NetApp aggregates need to be planned out: you will find that REDO
>> needs to be segregated from other datafiles, so the "aggregates" on
>> the NetApp storage backend need to be well thought out.  If you have
>> multiple systems (dev, test, etc.), do not allow them to share
>> aggregates with your prod system, as they will step on each other.
>> In other words, keep your prod aggregates segregated from non-prod
>> aggregates.
>>
>> I would not create a single NFS volume: I would use at least 4:
>> datafiles, redo, flash recovery area, OCR/voting files.  This gives
>> you a bit more parallelism at the OS level for I/O.  When you get
>> into testing, you should thoroughly test the NetApp controller
>> failover and failback (takeover/giveback) - we have found that this
>> is frequently misconfigured or problematic, so you need to test it
>> to make sure the config is correct.  If correctly configured it
>> works fine, and you need this feature for NetApp maintenance like
>> Data ONTAP patches, etc.
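>>
>> For illustration only (volume names and mount points are made up;
>> take the exact options from the NetApp/Oracle best practice docs for
>> your versions), the four /etc/fstab entries might look like:
>>
>>   # OPTS = rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0
>>   filer:/vol/oradata  /u02/oradata  nfs  OPTS  0 0
>>   filer:/vol/oraredo  /u02/oraredo  nfs  OPTS  0 0
>>   filer:/vol/orafra   /u02/orafra   nfs  OPTS  0 0
>>   filer:/vol/oracrs   /u02/oracrs   nfs  OPTS  0 0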
>>
>> A lot of this is NetApp terminology - share it with your NetApp
>> storage administrator and they will understand.
>>
>> -----Original Message-----
>> From: oracle-l-bounce@xxxxxxxxxxxxx 
>> [mailto:oracle-l-bounce@xxxxxxxxxxxxx
>> ] On Behalf Of Austin Hackett
>> Sent: Monday, June 18, 2012 3:49 PM
>> To: Oracle-L@xxxxxxxxxxxxx
>> Subject: Orion, NFS, & File Size
>>
>> Hello List
>>
>> I'm preparing to build a new 11.2.0.3.2 RAC cluster on OEL 5.4 (the
>> latter isn't something I can change at the moment). The shared
>> storage is a NetApp filer accessed via NFS. Prior to Oracle
>> installation, I plan to use Orion to check that the storage is
>> performing as expected (I'll also use SLOB post-install).
>>
>> According to section "8.4.6 Orion Troubleshooting"
>> (http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#BABBEJIH)
>> in the 11.2 Performance Tuning manual:
>>
>> "If you run on NAS storage ... the mytest.lun file should contain one 
>> or more paths of existing files ... The file has to be large enough 
>> for a meaningful test. The size of this file should represent the 
>> eventual expected size of your datafiles (say, after a few years of 
>> use)"
>>
>> Assume the following about what the DB will look like in a few years:
>>
>> - All my datafiles will be on a single NFS volume
>> - The datafiles will total 1TB in size
>> - No individual datafile will be larger than, say, 30 GB
>>
>> Does the statement in the manual mean that:
>>
>> I should use dd to create 1 x 30 GB file on the volume I'll be using
>> for the datafiles
>>
>> or
>>
>> I should use dd to create a number of files on the volume I'll be
>> using for the datafiles, each 30 GB in size, totaling 1 TB?
>>
>> I'm interpreting it as meaning the former, but had hoped to sanity
>> check my thinking.
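>>
>> i.e. for the former interpretation, a single
>>
>>   dd if=/dev/zero of=/mnt/oradata/orion_test.dbf bs=1M count=30720
>>
>> (path made up) on the datafile volume.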
>>
>> If anyone could offer any help, it would be much appreciated...
>>
>> Thanks
>>
>> Austin
>>

--
//www.freelists.org/webpage/oracle-l

