Re: What is the best way to do this?

  • From: Michael Moore <michaeljmoore@xxxxxxxxx>
  • To: "Mark W. Farnham" <mwf@xxxxxxxx>
  • Date: Mon, 4 Apr 2011 13:51:41 -0700

To all respondents,
Based on the questions I'm being asked, it is now clear to me that I've really
screwed the pooch on clearly describing my problem. So, I'm going to start a
new thread when time allows and this time give complete details.

Thanks for all of your input.

Regards,
Mike

On Mon, Apr 4, 2011 at 12:35 PM, Mark W. Farnham <mwf@xxxxxxxx> wrote:

> It doesn’t matter much if you fetch a few (I think from what you’ve
> written you have 15 total logical columns, or maybe it was 5) extra columns.
>
> The point is to not dynamically parse all the time. In fact, if you put a
> trigger on data_map to rebuild the mappings and recompile your function only
> when data_map actually changes, you’ll be on the right track, provided
> data_map changes infrequently.
>
>
>
> Just use the virtual column name to pick which of your columns to stuff
> into the return function value.
>
>
>
> Unless they prohibit you from reading data_map, you can still map out all
> the attributes for each map_group and vastly reduce the amount of dynamic
> sql and parsing at runtime. If you want the function to be as fast as it can
> be at run time, that is how you do it.
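Mark's suggestion of mapping out all the attributes per map_group ahead of time amounts to building a static lookup once and reusing it on every call. A rough sketch in Python rather than PL/SQL (the table contents and names here are hypothetical, for illustration only):

```python
# Hypothetical data_map rows: (map_group, virtual_column_name, physical_column).
# In the real system these would come from a one-time SELECT on data_map,
# redone only when data_map actually changes (e.g. via a trigger).
DATA_MAP_ROWS = [
    (1, "student_age", "ATTR134"),
    (1, "zip_code",    "ATTR121"),
    (2, "student_age", "ATTR007"),
]

# One-time build of the lookup structure; nothing is parsed per call.
LOOKUP = {(group, vcol): phys for group, vcol, phys in DATA_MAP_ROWS}

def physical_column(map_group, virtual_column_name):
    """Resolve a virtual column name to its ATTRnnn column with a
    dictionary hit instead of dynamic SQL at runtime."""
    return LOOKUP[(map_group, virtual_column_name)]
```

Here `physical_column(1, "student_age")` resolves to `"ATTR134"` without touching data_map again; only a change to data_map forces a rebuild.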
>
>
>
> Of course, if you’re wrong about the rate of change of data_map, then you’ll
> be compiling new versions of the function frequently. In that case I’d fall
> back to Yechiel’s idea: you still have to parse dynamically, and the
> trigger on data_map would just update the row in Yechiel’s hypothetical
> table for the data_map that had changed.
>
>
>
> So how many distinct data_maps do you have?
>
>
>
> *From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:
> oracle-l-bounce@xxxxxxxxxxxxx] *On Behalf Of *Michael Moore
> *Sent:* Monday, April 04, 2011 1:20 PM
> *To:* adar666@xxxxxxxxxxxx
> *Cc:* oracle-l@xxxxxxxxxxxxx
> *Subject:* Re: What is the best way to do this?
>
>
>
> Hi Yechiel,
>
> I can't really do that because I am limited to modifications only within my
> PLSQL FUNCTION. I cannot change the program that calls my function.
>
>
>
> My task is to optimize/rewrite an *existing function* which looks like:
>
> FUNCTION get_attribute_value (customer_number     IN NUMBER,
>                               virtual_column_name IN VARCHAR2)
>
> RETURN VARCHAR2;
>
>  -- where the *return value* is the value of *one* of ATTR001 thru
> ATTR201.
>
> My function does not "know" *all* of the ATTRibute names; it only knows
> the ONE that it is currently processing.
>
> There might be, for example,  31, 37 or 121 attributes that have a mapping
> defined, but maybe only 1 of those attributes is required for pricing. For
> example:
>
> abc_key = 778899
> customer # 12345
> attribute07 = 3.5  -- student GPA
> attribute121 = 95050  -- zip code
> attribute129 = Agriculture -- program of interest
> attribute134 = 25 -- age
>
> Now let's assume that for this customer we have the business rule that
> says:
>    If the student's age > 23 charge the customer $21
>    If the student's age <= 23 charge the customer $29
>
> My function will be passed (778899,'student_age')
> My function will SELECT customer# from ABC where abc_key = 778899;
> My function will figure out that for customer # 12345, 'student_age' is
> stored in attribute134.
> My function will select attribute134 from ABC where abc_key = 778899;
> My function will return '25'
>
> attribute07, 121 and 129 are completely irrelevant to my process as they
> have no effect on the price.
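The four lookup steps above can be sketched as follows, with in-memory Python stand-ins for the ABC and data_map tables (all values are taken from the example; the structures themselves are hypothetical, not the real schema):

```python
# Stand-in for the ABC table, keyed by abc_key.
ABC = {
    778899: {"customer": 12345, "ATTR007": "3.5", "ATTR121": "95050",
             "ATTR129": "Agriculture", "ATTR134": "25"},
}

# Stand-in for the mapping: (customer, virtual name) -> physical column.
DATA_MAP = {(12345, "student_age"): "ATTR134"}

def get_attribute_value(abc_key, virtual_column_name):
    row = ABC[abc_key]                   # SELECT customer# FROM ABC WHERE abc_key = :k
    customer = row["customer"]
    phys = DATA_MAP[(customer, virtual_column_name)]  # which ATTRnnn holds it
    return row[phys]                     # SELECT <phys> FROM ABC WHERE abc_key = :k
```

With the example data, `get_attribute_value(778899, "student_age")` returns `'25'`, matching the walk-through.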
>
> Thanks for your help,
>
> Regards,
> Mike
>
>
>
>
>
>
>
> On Sun, Apr 3, 2011 at 11:02 PM, Yechiel Adar <adar666@xxxxxxxxxxxx>
> wrote:
>
> Based on your answers I think I will do this:
>
> 1) Create a new table that contains the customer number and the 5 attributes
> that you need for each customer.
>      I mean, do a pivot on the data in data map and create one row for each
> customer containing all the 5 fields you need.
>      You will need to recreate, or update, this table whenever the data map
> table changes or when you add or delete a customer.
>      If you have, in data map, old customers that have no data in ABC, then
> you can filter out their mapping in this step.
>
> 2) Read the new table, ordered by customer id, and construct dynamic sql to
> get only the fields you need from ABC.
>
> This way, you do not read the big table, and you access the fact table
> only once for each customer, saving, big time, on network and database
> work to put all the fields in your buffer.
> This method also allows you to use bulk reads for data map, saving more time.
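The pivot in step 1 can be sketched like this (Python for illustration; the row shape and field names are hypothetical stand-ins for the real data_map columns):

```python
from collections import defaultdict

# Hypothetical data_map rows: (customer_id, field_description, physical_column).
DATA_MAP_ROWS = [
    (12345, "credit rating", "ATTR007"),
    (12345, "last order",    "ATTR019"),
    (67890, "credit rating", "ATTR003"),
]

# Pivot: one row per customer, with each needed field as a column.
pivoted = defaultdict(dict)
for customer_id, field, phys_col in DATA_MAP_ROWS:
    pivoted[customer_id][field] = phys_col

# pivoted[12345] now holds every mapping for customer 12345 in one row,
# ready to drive the dynamic SQL that fetches only those columns from ABC.
```

Reading the pivoted structure ordered by customer id gives the dynamic SQL everything it needs in a single pass.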
>
> You did not mention what partitioning you have, but if you partition data
> map according to the field meaning,
> I mean a partition for all "credit ratings", a partition for all "last
> orders", etc.,
> you can replace the table in step one with a select that will efficiently
> bring you only the rows you need from the big table.
> If you have old data map entries for customers that have no rows in ABC, you can do:
> select * from data_map
> where field_description in ('credit rating','last order')
> and customer_id in (select customer_pk from ABC);
>
> Yechiel Adar
>
> Israel
>
>
> On 02/04/2011 02:03, Michael Moore wrote:
>
> Six questions:
>
> 1)      How many customers?
>
>       26,000
>
> 2)      Once established, can a customer’s data_map change?
>
>       it's very rare
>
> 3)      If the answer to #2 is no, or only very occasionally and with a
> defined maintenance event to reorganize that customer’s data, how many
> different data_map rows do you have?
>
> 1,700,000
>
> 5)      Do you have partitioning?
>
> yes
>
>
>
>
>
