RE: NFS and direct IO

  • From: D'Hooge Freek <Freek.DHooge@xxxxxxxxx>
  • To: "kylelf@xxxxxxxxx" <kylelf@xxxxxxxxx>, ORACLE-L <Oracle-L@xxxxxxxxxxxxx>
  • Date: Sat, 16 Apr 2011 19:45:43 +0200

Direct I/O is only required on Linux when working with RAC.
See the note "Mount Options for Oracle files when used with NAS devices" [ID 359515.1].
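
If it helps, here is a minimal Python sketch (Linux only, nothing
Oracle-specific) to list which options your NFS mounts are actually running
with, so they can be compared against the note:

    # A minimal sketch, assuming Linux: print each NFS mount and the
    # options in effect, for comparison with note 359515.1.
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            if fstype.startswith("nfs"):
                print(mountpoint, "->", options)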

Regards,


Freek D'Hooge
Uptime
Oracle Database Administrator
email: freek.dhooge@xxxxxxxxx
tel +32(0)3 451 23 82
http://www.uptime.be
disclaimer: www.uptime.be/disclaimer
________________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of kyle Hailey
Sent: Saturday, 16 April 2011 18:31
To: ORACLE-L
Subject: NFS and direct IO

Is there any requirement to use direct I/O with NFS? (besides performance)

From the MetaLink note "Enterprise Linux: Linux, Filesystem & I/O Type 
Supportability" [ID 279069.1]:

    DIRECTIO is required for database files on NAS (network attached storage).

Is it really required? I can't find any other reference that supports this.

I have a customer comparing a copy of a database running on non-ASM, non-raw, 
non-NFS storage, where much of the data is buffered in the UNIX file system 
cache, against a copy of the same database running over NFS-mounted datafiles.
The NFS database is using direct I/O, so there is no longer any buffering in 
the UNIX file system cache.
The customer doesn't want to change production, which is working fine, and they 
don't want to modify the configuration of the NFS copy either. They want the 
copy to run just like the original. We could try changing the SGA on the NFS 
copy, but the customer isn't keen on that. The easiest option, it seems to me, 
is to turn off direct I/O so that the copy then uses NFS caching on the 
client.
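
For what it's worth, a minimal sketch of checking and changing the I/O mode
with the python-oracledb driver (the connection details are hypothetical;
setting filesystemio_options to NONE disables direct I/O and only takes
effect after an instance restart):

    import oracledb

    # Hypothetical connection details -- adjust for your environment.
    conn = oracledb.connect(user="system", password="manager", dsn="dbhost/orcl")
    cur = conn.cursor()

    # Current I/O mode: SETALL or DIRECTIO enables direct I/O, NONE disables it.
    cur.execute("select value from v$parameter where name = 'filesystemio_options'")
    print("filesystemio_options =", cur.fetchone()[0])

    # Fall back to buffered I/O (and hence NFS client caching);
    # requires an instance restart to take effect.
    cur.execute("alter system set filesystemio_options = none scope=spfile")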

Even if I were going to convince the customer to increase the buffer cache on 
the NFS copy of the database, the question arises: how big should we make the 
buffer cache? The buffer cache could be made as big as free memory allows, 
which might be somewhat tractable on a machine running a single database, but 
in this case it's a VM on a piece of metal running numerous VMs, so that 
calculation becomes less obvious.
Another approach would be to look at the db cache advisory, but the advisory 
typically only has stats covering up to twice the current buffer cache size, 
and in these situations we are talking about increasing the buffer cache to 
several times its current size.
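
For reference, a sketch of pulling the advisory (hypothetical connection
details again); the SIZE_FACTOR column makes the limitation visible, since it
typically tops out around 2:

    import oracledb

    cur = oracledb.connect(user="system", password="manager",
                           dsn="dbhost/orcl").cursor()  # hypothetical details

    # Estimated physical reads at each candidate cache size; SIZE_FACTOR
    # usually only extends to ~2x the current cache, hence the limitation.
    cur.execute("""select size_for_estimate, size_factor, estd_physical_reads
                     from v$db_cache_advice
                    where name = 'DEFAULT'
                    order by size_for_estimate""")
    for size_mb, factor, est_reads in cur.fetchall():
        print(size_mb, factor, est_reads)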
Looking at physical I/O stats or I/O wait events is a bit misleading, as many 
of these stats will represent re-reads of the same data.
The final option is to look at physical reads by top SQL, summing the reads 
per execution across all the top SQL. That seems reasonable to me, but in some 
cases this number has come out much lower than what I'd expect.
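
A sketch of that last calculation (hypothetical connection details): sum disk
reads per execution over the top statements to get a rough working-set figure:

    import oracledb

    cur = oracledb.connect(user="system", password="manager",
                           dsn="dbhost/orcl").cursor()  # hypothetical details

    # Top 20 statements by disk reads, plus reads per execution.
    cur.execute("""select * from (
                       select sql_id, disk_reads, executions,
                              disk_reads / nullif(executions, 0) reads_per_exec
                         from v$sqlstats
                        order by disk_reads desc)
                    where rownum <= 20""")
    total = 0
    for sql_id, reads, execs, per_exec in cur.fetchall():
        print(sql_id, reads, execs, per_exec)
        total += per_exec or 0
    print("sum of reads per execute across top SQL:", total)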


Then even if I do increase the buffer cache size, things like direct path reads 
won't be cached at all, not to mention the fact that an important full table 
scan could easily go uncached if the table is over 25% of the size of the 
buffer cache. Sure, we could try to figure out which table(s) should be cached 
and alter them to CACHE, but that's more work the customer doesn't want to 
do.
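
If one did go down that road anyway, a sketch of spotting the candidates
(hypothetical connection details; the 25% threshold is just the rule of thumb
mentioned above):

    import oracledb

    cur = oracledb.connect(user="system", password="manager",
                           dsn="dbhost/orcl").cursor()  # hypothetical details

    # Tables bigger than 25% of the current buffer cache: full scans of
    # these are unlikely to stay cached unless the table is marked CACHE.
    cur.execute("""select s.owner, s.segment_name, s.bytes
                     from dba_segments s,
                          (select bytes from v$sgainfo
                            where name = 'Buffer Cache Size') c
                    where s.segment_type = 'TABLE'
                      and s.bytes > 0.25 * c.bytes""")
    for owner, table_name, tbytes in cur.fetchall():
        print(owner, table_name, tbytes)
        # and then, per table, only if you decide to cache it:
        # cur.execute(f"alter table {owner}.{table_name} cache")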

- Kyle
http://dboptimizer.com
--
//www.freelists.org/webpage/oracle-l

