All,

Thanks for the explanation. But if you look at my original post's tkprof output, I don't know the formula used to calculate the BUFFER SORT row count. Even though line (d) returns 1.6M rows, why does the BUFFER SORT on line (c) report 127M? Does it actually read 127M rows? It seems like it does, because the query takes ~3 hours to complete.

The original query joins one FACT table to 3 DIMs; this plan shows one of the dimensions. If I rewrite the query to use only 2 of the dimensions and explicitly supply the values on the FACT for the 3rd dimension (this one), the query comes back in < 1 min. So this step is definitely the bottleneck, and I am trying to understand how the BUFFER SORT value is calculated.

I also tried setting _optimizer_cost_model to 'choose', 'io', and 'cpu', but I still get the same plan containing the BUFFER SORT step.

       Rows  Operation
  =========  =======================================
a)      105  BITMAP MERGE
b)   351549    BITMAP KEY ITERATION
c) 127009880     BUFFER SORT
d)  1607720        TABLE ACCESS FULL SYS_TEMP_4254956840
e)   351549      BITMAP INDEX RANGE SCAN OBJ#(3441108)

--- Karen Morton <Karen.Morton@xxxxxxxxxx> wrote:
> Here's a test I did to try and understand this
> BUFFER SORT behavior.

--
//www.freelists.org/webpage/oracle-l
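P.S. One thing I notice when playing with the numbers: since tkprof row counts are cumulative across all executions of a row source, the 127M could simply be the temp-table scan replayed once per key iteration. This is only my back-of-the-envelope guess, not anything from the Oracle docs, but the arithmetic works out exactly:

```python
# Sanity check (my assumption, not documented behavior): is the BUFFER SORT
# row count an exact multiple of the temp-table scan on line (d)?
temp_rows = 1_607_720          # line (d): TABLE ACCESS FULL SYS_TEMP_4254956840
buffer_sort_rows = 127_009_880  # line (c): BUFFER SORT

scans, remainder = divmod(buffer_sort_rows, temp_rows)
print(scans, remainder)  # 79 0 -- exactly 79 passes over the buffered rows
```

If that reading is right, the buffered 1.6M rows are being rescanned 79 times, which would go a long way toward explaining the 3-hour runtime.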