Cary,

Thanks for your response. I have enclosed all the details here. I generated an extended trace at level 12.

SELECT DISTINCT P.PROFILEDUSERID PROFILEDUSERID,
       SEARCH_LAST_NAME, SEARCH_FIRST_NAME
FROM   PROFILEDUSER P, EXTENDEDATTRIBUTES E
WHERE  P.PROFILEDUSERID = E.PROFILEDUSERID
AND    P.SEARCH_COMPANY_NAME LIKE 'ACME%' ESCAPE '/'
AND    E.CUSTOMERID = 'ABCDEFGH'
AND    HSBC_USER_CATEGORY IN ('VAL1','VAL2')
AND    ROWNUM < 150
ORDER BY SEARCH_LAST_NAME, SEARCH_FIRST_NAME

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2     11.39      41.98       6126      18805          0          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4     11.40      41.99       6126      18805          0          1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 180

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT UNIQUE
      1   COUNT STOPKEY
      1    NESTED LOOPS
   4766     TABLE ACCESS BY INDEX ROWID PROFILEDUSER (it has 450,000 rows)
   4767      INDEX RANGE SCAN (PROFILEDUSER_IX03)
      1     TABLE ACCESS BY INDEX ROWID EXTENDEDATTRIBUTES (it has 15,000 rows)
   9530      INDEX UNIQUE SCAN (ATTRIBUTES_PK)

SQL> select column_name from dba_ind_columns where index_name='PROFILEDUSER_IX03';

COLUMN_NAME
--------------------------------------------------------------------------------
SEARCH_COMPANY_NAME

SQL> select column_name from dba_ind_columns where index_name='ATTRIBUTES_PK';

COLUMN_NAME
--------------------------------------------------------------------------------
PROFILEDUSERID

----- Original Message -----
From: Cary Millsap <cary.millsap@xxxxxxxxxx>
Date: Friday, August 6, 2004 10:41 am
Subject: RE: Time to read 6000 (block size 2k) blocks

> Yes, this is actually 0.003568s/read, which is pretty good.
>
> A better question, though, is, "Does the application really need to make
> 18,805 visits to the database buffer cache to return just one row?"
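[Editor's note: the plan above fetches 4,766 PROFILEDUSER rows by ROWID, then probes ATTRIBUTES_PK for each one, yet only 1 row survives the EXTENDEDATTRIBUTES filters. One thing worth testing (a sketch only; the index names here are made up, and column order should be verified against actual selectivity and existing indexes) is a composite index on EXTENDEDATTRIBUTES so the CUSTOMERID predicate can be evaluated in the index instead of via a table access per candidate row:]

```
-- Hypothetical indexes for testing only; names are invented and
-- neither should be created without checking existing indexes.

-- Option 1: widen the join-probe index so the CUSTOMERID filter is
-- applied during the index lookup, avoiding one TABLE ACCESS BY
-- INDEX ROWID on EXTENDEDATTRIBUTES per candidate PROFILEDUSER row:
CREATE INDEX EXTATTR_USERID_CUSTID_IX
    ON EXTENDEDATTRIBUTES (PROFILEDUSERID, CUSTOMERID);

-- Option 2: let the optimizer drive from the smaller table instead.
-- Leading on CUSTOMERID finds the few qualifying EXTENDEDATTRIBUTES
-- rows first, then probes PROFILEDUSER by PROFILEDUSERID:
CREATE INDEX EXTATTR_CUSTID_USERID_IX
    ON EXTENDEDATTRIBUTES (CUSTOMERID, PROFILEDUSERID);
```

[Either variant would need to be confirmed with a fresh level-12 trace; which one wins depends on how selective 'ABCDEFGH' is versus 'ACME%'.]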
>
> Unless your query uses some kind of aggregation function to return the
> single row (count, sum, etc.), then you should be able to make this SQL
> do its job with 10-20 LIOs instead of 18,805. If you can do that, you
> should be able to reduce response time from 41.99s to about 0.04s.
>
> I can't see your SQL here, but because there were 2 fetch calls, I'll bet
> that you're not aggregating the result, and that you should be able to get
> to the 0.04s response time target. It might be as simple as a missing
> index, or SQL that debilitates the use of an index.
>
>
> Cary Millsap
> Hotsos Enterprises, Ltd.
> http://www.hotsos.com
> * Nullius in verba *
>
> Upcoming events:
> - Performance Diagnosis 101: 8/10 Boston, 9/14 San Francisco, 10/5 Charlotte
> - SQL Optimization 101: 7/26 Washington DC, 8/16 Minneapolis, 9/20 Hartford
> - Hotsos Symposium 2005: March 6-10 Dallas
> - Visit www.hotsos.com for schedule details...
>
>
> -----Original Message-----
> From: oracle-l-bounce@xxxxxxxxxxxxx [oracle-l-bounce@xxxxxxxxxxxxx]
> On Behalf Of Khedr, Waleed
> Sent: Friday, August 06, 2004 9:19 AM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: RE: Time to read 6000 (block size 2k) blocks
>
> Five milliseconds is not bad for a single-block sequential read.
>
> Waleed
>
> -----Original Message-----
> From: jaysingh1@xxxxxxxxxxxxx [jaysingh1@xxxxxxxxxxxxx]
> Sent: Friday, August 06, 2004 9:49 AM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Time to read 6000 (block size 2k) blocks
>
>
> Hi All,
>
> The question may be vague.
> We have a 14-CPU Sun box, 8i, 2-node OPS. Not under heavy load.
>
> In our case it is taking 21.86 sec for 6126 blocks (from disk):
>
> db file sequential read      6126      0.29      21.86
>
> Approximately how long should it take to read 6000 blocks?
>
>
> call     count       cpu    elapsed       disk      query    current       rows
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> Parse        1      0.01       0.01          0          0          0          0
> Execute      1      0.00       0.00          0          0          0          0
> Fetch        2     11.39      41.98       6126      18805          0          1
> ------- ------  -------- ---------- ---------- ---------- ---------- ----------
> total        4     11.40      41.99       6126      18805          0          1
>
>
> db file sequential read      6126      0.29      21.86
>
> Thanks
> Sami
>
----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to: oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------