where clause filter from an array or use temp table?

  • From: Jeff C <backseatdba@xxxxxxxxx>
  • To: "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 18 Dec 2014 09:59:27 -0800

So I have a procedure set up with a string parameter that is intended to be a
comma-separated list of numbers, and it returns a cursor. Sometimes they pass
in 1 value and sometimes 300 values. I then take this string and put it into
a table-of-numbers array.
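Roughly, the split step looks like this (a sketch; num_tab here is assumed to
be a SQL-level nested table type of number, and the names are made up):

```sql
-- Assumed type: create or replace type num_tab as table of number;
declare
  p_id_list varchar2(4000) := '10,20,30';  -- the comma-separated parameter
  t_numbers num_tab;
begin
  -- split the string on commas and load the pieces into the array
  select to_number(regexp_substr(p_id_list, '[^,]+', 1, level))
    bulk collect into t_numbers
    from dual
  connect by regexp_substr(p_id_list, '[^,]+', 1, level) is not null;
end;
/
```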



The query then uses the array in the where clause like so:

where emp_id in (select column_value from table(cast(t_numbers as num_tab)))



I think I am running into bind peeking issues: the first time it gets called
they may pass in only 1 value and the optimizer will decide to use an index;
the next time they pass in 300, where a full table scan might be better, but
because it is a bind variable it reuses the first plan.



Now I was thinking of changing this process to load the values into a
global temporary table (on commit preserve rows) and then joining to the
temp table instead.
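Something along these lines (a sketch only; the table and column names are
made up):

```sql
-- One-time DDL for the temp table; rows are private to each session
create global temporary table gtt_emp_ids (
  emp_id number
) on commit preserve rows;

-- In the procedure: clear out any rows left from a previous call,
-- load the new list from the array, then join instead of using table():
delete from gtt_emp_ids;

insert into gtt_emp_ids (emp_id)
  select column_value from table(cast(t_numbers as num_tab));
```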



I have tested this method and it proves to be a lot faster, but I am
wondering whether it is a good idea. Now I will have a bunch of redo
generated for all the deleting from and inserting into the global temp
table. I have a lot of procedures that use the array method; not all of them
have problems, but I would probably eventually change them all over.



What do you think about this approach?

We also have some procedures where passing a NULL for that parameter means
don't filter on that column at all. So the where clause looks something
like this:

where (p_values is null or emp_id in (select column_value from
table(cast(t_numbers as num_tab))))
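In context, that optional-filter predicate ends up looking something like
this (a sketch; the employees table is just a stand-in):

```sql
select e.*
  from employees e
 where (p_values is null
        or e.emp_id in (select column_value
                          from table(cast(t_numbers as num_tab))));
```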





Thanks for any input.
