You might be interested in analysing hash join statistics on some test data for your purposes. Please refer to the post on ixora.com: http://www.ixora.com.au/q+a/sqlopt.htm

On 11/14/06, ryan_gaffuri@xxxxxxxxxxx <ryan_gaffuri@xxxxxxxxxxx> wrote:
I am really just looking for a way to predictively forecast how much hash space I need for a given query, assuming the plan stays the same. That way, as I add data, I can know in advance what I will need instead of finding out by trial and error. The system is in development; I don't have production data, so we are testing with smaller data sets.

-------------- Original message --------------
From: Peter Sylvester <peters@xxxxxxxxx>

> I don't have all the answers, but I did run into some issues with hash
> joins degenerating into nested-loop joins when the record count
> increased. I ended up cranking up PGA_AGGREGATE_TARGET, since I am
> doing the auto-PGA thing (10.2.0.2/Solaris 10), and it seemed to solve my
> problem. I also added USE_HASH hints, but I *think* it will still do
> something else if it does not have enough memory for the hash table.
> Unfortunately I do not have a representative test system for this
> warehouse-style implementation.
>
> Note that if you use the USE_HASH hint, the smaller table should be the
> first parameter.
>
> I also recall reading that there are some upper limits on the hash_area
> and sort_area memory sizes (200MB?) when using auto-PGA, and some
> undocumented parameters may be required for getting larger sizes. In my
> case I was able to get away with adjusting PGA_AGGREGATE_TARGET up, so I
> did not get into that.
>
> There is some interesting reading on hash joins (as well as sorting
> costs, and lots of other things...) in Jonathan Lewis' book "Cost-Based
> Oracle Fundamentals", which you may want to check out.
>
> --Peter
>
> --
> //www.freelists.org/webpage/oracle-l
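As a starting point for that kind of forecasting, one approach (a sketch, not a complete method) is to compare the optimizer's estimated work-area size against the memory a hash join actually used on the test data, via Oracle's V$SQL_WORKAREA view, and then scale the estimate with the expected growth of the build input. The `&your_sql_id` substitution variable below is a placeholder for the SQL_ID of the statement you are testing:

```sql
-- Estimated vs. actual hash-join memory for one cached statement.
-- ESTIMATED_OPTIMAL_SIZE is the optimizer's estimate (in bytes) of the
-- memory needed for a fully in-memory ("optimal") execution;
-- LAST_MEMORY_USED is what the most recent execution actually received.
SELECT operation_type,
       ROUND(estimated_optimal_size / 1024 / 1024, 1) AS est_optimal_mb,
       ROUND(last_memory_used       / 1024 / 1024, 1) AS last_used_mb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
FROM   v$sql_workarea
WHERE  sql_id = '&your_sql_id'          -- placeholder: SQL_ID of your query
AND    operation_type = 'HASH-JOIN';
```

If ONEPASS_EXECUTIONS or MULTIPASSES_EXECUTIONS is non-zero, the work area has been spilling to temp. As a rough rule, hash memory requirements grow with the size of the build (smaller) input, so multiplying EST_OPTIMAL_MB by the expected growth factor of that input gives a first-cut forecast of the hash space a larger data set will need.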