RE: ASM Layout: Need some IO distribution input

  • From: Herring Dave - dherri <Dave.Herring@xxxxxxxxxx>
  • To: "Christopher.Taylor2@xxxxxxxxxxxx" <Christopher.Taylor2@xxxxxxxxxxxx>, "andrew.kerber@xxxxxxxxx" <andrew.kerber@xxxxxxxxx>
  • Date: Sat, 2 Feb 2013 14:47:24 +0000

Chris,

Be VERY careful with diskgroups having LUNs of multiple sizes.  Obviously all 
the literature says not to mix sizes, but I don't believe it really explains 
all the implications.  One we ran into was a space shortage.  Because ASM 
stripes every file across every LUN in the group, the diskgroup's usable space 
is effectively limited by the smallest LUN.  Once that smallest LUN fills up, 
Oracle sees the diskgroup as full, even though you could have hundreds of GB 
free on the larger LUNs.

This happened to us when a 118 GB LUN intended for DBDATA was accidentally 
added to DBFLASH, where all the LUNs are 278 GB.  A couple of weeks later we 
started getting failures writing to DBFLASH due to lack of space, yet every 
check at the diskgroup level showed hundreds of GB available.
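
A quick way to spot this condition is to compare per-disk free space against 
the diskgroup totals.  Something along these lines (just the standard 
V$ASM_DISK / V$ASM_DISKGROUP views joined on GROUP_NUMBER; adjust to taste) 
makes an undersized or nearly-full LUN stand out right away:

  -- Per-disk size and free space next to the diskgroup totals.
  -- A disk whose TOTAL_MB differs from its siblings, or whose FREE_MB is
  -- near zero while the group still shows plenty free, is the culprit.
  SELECT dg.name     AS diskgroup,
         d.path,
         d.total_mb  AS disk_total_mb,
         d.free_mb   AS disk_free_mb,
         dg.total_mb AS dg_total_mb,
         dg.free_mb  AS dg_free_mb
    FROM v$asm_disk d
    JOIN v$asm_diskgroup dg
      ON dg.group_number = d.group_number
   ORDER BY dg.name, d.path;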

I know this doesn't answer your question, but I thought I'd throw this out as 
a caution.

DAVID HERRING
DBA
Acxiom Corporation
EML   dave.herring@xxxxxxxxxx
TEL    630.944.4762
MBL   630.430.5988 
1501 Opus Pl, Downers Grove, IL 60515, USA
WWW.ACXIOM.COM  

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Christopher.Taylor2@xxxxxxxxxxxx
Sent: Thursday, January 31, 2013 8:00 AM
To: andrew.kerber@xxxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Subject: RE: ASM Layout: Need some IO distribution input

Yep, I've read up on some of that too - we're in the process of cutting over our 
67 GB LUNs to 134 GB (hence the current mix & match).
I have about ten 134 GB candidates waiting in the wings now.
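
For what it's worth, the add/drop can be done as a single ALTER DISKGROUP per 
group so ASM only runs one rebalance for the swap.  Roughly like this (the new 
device path below is just a placeholder, and the rebalance power is arbitrary; 
the disk name is the ASM disk name from the listing further down):

  -- Add the new 134 GB LUN and drop an old 67 GB disk in one statement,
  -- so ASM performs a single rebalance covering both changes.
  ALTER DISKGROUP dg_ccmnasp1_data_01
    ADD  DISK '/dev/raw/rawNNN'            -- placeholder path for the new LUN
    DROP DISK dg_ccmnasp1_data_01_0052     -- old disk, by its ASM disk name
    REBALANCE POWER 4;

  -- The old LUN isn't released until the rebalance completes.
  SELECT group_number, operation, state, power, est_minutes
    FROM v$asm_operation;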

Chris


From: Andrew Kerber [mailto:andrew.kerber@xxxxxxxxx]
Sent: Thursday, January 31, 2013 7:57 AM
To: Taylor Christopher - Nashville
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: ASM Layout: Need some IO distribution input

Well, I do understand that using multiple LUN sizes in the same diskgroup will 
hurt your performance.  I keep meaning to experiment with this to quantify how 
much; I suspect it's not a large hit, but Oracle does say it is a bad idea.
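
Short of a proper benchmark, the cumulative per-disk counters in V$ASM_DISK 
give a crude picture of whether IO is skewed toward the larger disks in a 
mixed group.  I believe READ_TIME/WRITE_TIME need timed_statistics, and the 
counters accumulate from mount, so treat this as a rough look only:

  -- Crude per-disk IO distribution within each diskgroup.
  -- Compare two snapshots over an interval for anything more precise.
  SELECT dg.name    AS diskgroup,
         d.path,
         d.total_mb,
         d.reads,
         d.writes,
         d.read_time,
         d.write_time
    FROM v$asm_disk d
    JOIN v$asm_diskgroup dg
      ON dg.group_number = d.group_number
   ORDER BY dg.name, d.path;
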
On Thu, Jan 31, 2013 at 6:49 AM, <Christopher.Taylor2@xxxxxxxxxxxx> wrote:
Currently, part of our ASM disk layout looks like the below.
Since I'm new to ASM and RAC, I'm wondering if I could improve this layout 
(which I inherited).  I'm not sure how many physical devices make up each 67 / 
134 GB RAW device in the EMC array.  I believe they are striped across many 
disks within the array, but I'm not positive about that.

I was wondering whether it would make more sense to combine any of these 
groups, or to definitely *not* combine any of them.

Currently we do see write latencies on the redo log groups, and I was thinking 
about one of two options:
a.)     adding a 67 GB LUN to each redo log disk group, OR
b.)     combining the redo log groups with the larger DATA disk group LUNs

It seems [to me] that combining the redo disk groups with the larger DATA disk 
group would reduce the IO hitting each RAW device inside the group and perhaps 
improve write times.
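
To make the two options concrete, here is roughly what I mean by each (the new 
device path, log group numbers, and log size below are just placeholders, not 
our actual values):

  -- Option (a): add another LUN to each redo diskgroup.
  ALTER DISKGROUP dg_ccmnasp1_redo_01
    ADD DISK '/dev/raw/rawNNN' REBALANCE POWER 4;
  ALTER DISKGROUP dg_ccmnasp1_redo_02
    ADD DISK '/dev/raw/rawNNN' REBALANCE POWER 4;

  -- Option (b): move the online logs into the DATA diskgroup by creating
  -- new groups there (repeat for each RAC thread), then dropping the old
  -- ones once they go INACTIVE.
  ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+DG_CCMNASP1_DATA_01') SIZE 1G;
  ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 ('+DG_CCMNASP1_DATA_01') SIZE 1G;
  ALTER SYSTEM SWITCH LOGFILE;
  ALTER DATABASE DROP LOGFILE GROUP 1;   -- only after v$log shows it INACTIVE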

A lot of my theory is based on older SAN technology so I'm not sure it applies 
here.

Any issues/suggestions come to mind when seeing a layout like this?

Chris


(Formatted version)
http://codepaste.net/gby4zb

Disk Group Name          Path               File Name                      File Size (MB) Used Size (MB) Pct. Used
------------------------ ------------------ ------------------------------ -------------- -------------- ---------
DG_CCMNASP1_ARCHIVE_01   /dev/raw/raw70     DG_CCMNASP1_ARCHIVE_01_0050           138,097        115,726     83.80
                         /dev/raw/raw71     DG_CCMNASP1_ARCHIVE_01_0051           138,097        115,739     83.81
                         /dev/raw/raw72     DG_CCMNASP1_ARCHIVE_01_0052           138,097        115,735     83.81
                         /dev/raw/raw73     DG_CCMNASP1_ARCHIVE_01_0053           138,097        115,784     83.84
                         /dev/raw/raw79     DG_CCMNASP1_ARCHIVE_01_0054           138,097        116,143     84.10
************************                                                   -------------- --------------
                                                                                  690,485        579,127

DG_CCMNASP1_CONTROL1_01  /dev/raw/raw98     DG_CCMNASP1_CONTROL1_01_0050            2,155            209      9.70
************************                                                   -------------- --------------
                                                                                    2,155            209

DG_CCMNASP1_CONTROL2_01  /dev/raw/raw99     DG_CCMNASP1_CONTROL2_01_0050            2,155            209      9.70
************************                                                   -------------- --------------
                                                                                    2,155            209

DG_CCMNASP1_DATA_01      /dev/raw/raw83     DG_CCMNASP1_DATA_01_0052               69,044         49,001     70.97
                         /dev/raw/raw84     DG_CCMNASP1_DATA_01_0053               69,044         49,002     70.97
                         /dev/raw/raw85     DG_CCMNASP1_DATA_01_0054               69,044         49,001     70.97
                         /dev/raw/raw86     DG_CCMNASP1_DATA_01_0055               69,044         49,001     70.97
                         /dev/raw/raw87     DG_CCMNASP1_DATA_01_0056               69,044         49,001     70.97
                         /dev/raw/raw76     DG_CCMNASP1_DATA_01_0057              138,097         98,008     70.97
                         /dev/raw/raw96     DG_CCMNASP1_DATA_01_0058               69,044         49,002     70.97
                         /dev/raw/raw97     DG_CCMNASP1_DATA_01_0059               69,044         49,001     70.97
                         /dev/raw/raw74     DG_CCMNASP1_DATA_01_0060              138,097         98,007     70.97
                         /dev/raw/raw75     DG_CCMNASP1_DATA_01_0061              138,097         98,009     70.97
                         /dev/raw/raw77     DG_CCMNASP1_DATA_01_0062              138,097         98,008     70.97
                         /dev/raw/raw78     DG_CCMNASP1_DATA_01_0063              138,097         98,008     70.97
                         /dev/raw/raw102    DG_CCMNASP1_DATA_01_0064              138,097         98,006     70.97
                         /dev/raw/raw103    DG_CCMNASP1_DATA_01_0065              138,097         98,008     70.97
************************                                                   -------------- --------------
                                                                                1,449,987      1,029,063

DG_CCMNASP1_REDO_01      /dev/raw/raw81     DG_CCMNASP1_REDO_01_0051               69,044         46,137     66.82
************************                                                   -------------- --------------
                                                                                   69,044         46,137

DG_CCMNASP1_REDO_02      /dev/raw/raw82     DG_CCMNASP1_REDO_02_0051               69,044         45,833     66.38
************************                                                   -------------- --------------
                                                                                   69,044         45,833

DG_CCMNASP1_UNDO_TEMP_01 /dev/raw/raw88     DG_CCMNASP1_UNDO_TEMP_01_0050          69,044         51,222     74.19
                         /dev/raw/raw89     DG_CCMNASP1_UNDO_TEMP_01_0051          69,044         51,222     74.19
                         /dev/raw/raw90     DG_CCMNASP1_UNDO_TEMP_01_0052          69,044         51,222     74.19
                         /dev/raw/raw91     DG_CCMNASP1_UNDO_TEMP_01_0053          69,044         51,223     74.19
                         /dev/raw/raw92     DG_CCMNASP1_UNDO_TEMP_01_0054          69,044         51,223     74.19
                         /dev/raw/raw93     DG_CCMNASP1_UNDO_TEMP_01_0055          69,044         51,223     74.19
                         /dev/raw/raw94     DG_CCMNASP1_UNDO_TEMP_01_0056          69,044         51,223     74.19
                         /dev/raw/raw95     DG_CCMNASP1_UNDO_TEMP_01_0057          69,044         51,224     74.19
************************                                                   -------------- --------------
                                                                                  552,352        409,782


Chris Taylor
Oracle DBA
Parallon IT&S







--
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'



