Design question: too many SQLs differ by updated columns

  • From: Yong Huang <yong321@xxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Sat, 31 May 2008 16:30:13 -0700 (PDT)

One schema in our 10.2.0.4 database has 40,000 SQLs in v$sql taking 1.5GB of
sharable memory. A small number of them are due to not using bind variables.
But many are insert and update statements like this:

UPDATE PATIENT SET MRN=:MRN,Last_Name=:Last_Name,First_Name=:First_Name,...
Occupation=:Occupation,Original_Reg_Date=:Original_Reg_Date,AD_Living_Will=:AD_Living_Will...

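(A count like the one above can be measured with something along these lines
against v$sql; the LIKE pattern is only an illustration:)

select count(*) cursors, round(sum(sharable_mem)/1024/1024) mem_mb
from   v$sql
where  sql_text like 'UPDATE PATIENT SET%';
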
Thousands of these SQLs differ only in which columns appear in the SET clause.
The columns seem to be listed in their order in the patient table. Since one SQL
updates one set of columns and another updates a different set, the number of
permutations is huge.
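
For example, these two statements (hypothetical column subsets, with an assumed
:Patient_ID key) are textually different, so each gets its own cursor even
though both use bind variables:

UPDATE PATIENT SET MRN=:MRN, Last_Name=:Last_Name WHERE Patient_ID=:Patient_ID
UPDATE PATIENT SET MRN=:MRN, Occupation=:Occupation WHERE Patient_ID=:Patient_ID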

I told the developer to consider changing the inserts to list all columns,
binding null for any column whose value is missing. But for these update
statements we can't think of a good way to correct them. We either have to
redesign the data model, normalizing some columns into their own tables
(e.g. occupation shouldn't belong in the patient table), or query the row first
and list all columns in the UPDATE SET clause, filling in the selected value for
those columns that don't need to be updated (sketched below). The first method
is not doable financially or administratively. The second method raises buffer
gets and probably causes worse overall performance. I also thought of MERGE,
but the app already knows whether to insert or update rather than letting the
database decide. Besides, it still has the column-list problem in the UPDATE
part of the MERGE.
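
A rough sketch of the second method's fixed statement text, reusing the column
names from the example above (the Patient_ID key is an assumption). The app
would first select the row and bind the current value for every column it does
not actually change, so only this one text is ever parsed:

UPDATE PATIENT
   SET MRN               = :MRN,
       Last_Name         = :Last_Name,
       First_Name        = :First_Name,
       Occupation        = :Occupation,
       Original_Reg_Date = :Original_Reg_Date,
       AD_Living_Will    = :AD_Living_Will
       /* ... and every remaining column of PATIENT ... */
 WHERE Patient_ID = :Patient_ID;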

Any comments are welcome.

Yong Huang


      