RE: FW: Single block reads instead of multiblock reads

  • From: "Kenneth Naim" <kennethnaim@xxxxxxxxx>
  • To: "'Martin Berger'" <martin.a.berger@xxxxxxxxx>
  • Date: Thu, 19 Aug 2010 08:25:56 -0400

No, I hadn't. I haven't used them in that way before. I wonder what
would happen if a user tried to write to one of them, which is a
possibility with this application. Good idea though; I'll have to test
it out.

Ken   

-----Original Message-----
From: Martin Berger [mailto:martin.a.berger@xxxxxxxxx] 
Sent: Thursday, August 19, 2010 3:11 AM
To: kennethnaim
Cc: oracle-l
Subject: Re: FW: Single block reads instead of multiblock reads

Ken,
as you are on 11g, have you considered using virtual columns for all
the columns that are not used?
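
Something along these lines (an untested sketch, with made-up table
and column names):

  -- after verifying the stored column really holds no data:
  alter table big_tab drop (legacy_col);
  -- re-add it as a virtual column, so it takes no row storage but
  -- can still be referenced by queries:
  alter table big_tab add (legacy_col as (cast(null as varchar2(30))));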

On Thu, Aug 19, 2010 at 00:56, Kenneth Naim <kennethnaim@xxxxxxxxx> wrote:
> Thanks. I had intended to rebuild the table due to the 600 columns,
> reordering the columns from most populated to least populated, when I
> noticed this issue while attempting to query the table to figure out
> which columns were used. I cannot drop the columns that are not used,
> since the application dynamically references them on occasion. I have
> done this column reorg before and have seen up to an 80% reduction in
> storage, due to the unused columns being at the end of the row and
> Oracle not writing nulls when they are at the end of the row. I used
> the latest stats run and some client input on which columns were used
> and kicked off a rebuild.
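>
> (The rebuild itself is essentially a CTAS with the column list
> reordered; a sketch with made-up column names:
>
>   create table big_tab_new as
>   select id, status, created_dt,   -- heavily populated columns first
>          rare_attr1, rare_attr2    -- mostly-null columns last
>   from   big_tab;
>
> followed by swapping the table names and rebuilding the indexes,
> grants and synonyms.)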
>
> I didn't know that the database did single block reads for chained
> rows. I started an analyze to count the chained rows, but it would
> have run for days, so after half an hour I killed it and checked the
> size of the chained_rows table, which was about 50 MB even though the
> table scan was less than 0.2% complete, indicating significant
> chaining, which I believe was the root cause.
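>
> (For reference, that check uses the chained_rows table created by
> $ORACLE_HOME/rdbms/admin/utlchain.sql, with big_tab standing in for
> the real table name:
>
>   analyze table big_tab list chained rows into chained_rows;
>   select count(*) from chained_rows;
> )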
>
> After rebuilding the table, its size was reduced by 67.8%, and the
> index builds took between 1 and 11 minutes each, performing full
> scans using scattered reads and direct path reads. Previously, full
> scans were taking over 4 hours.
>
> Thanks to everyone for their input.
>
> Ken
>
> On Wed, Aug 18, 2010 at 10:14 AM, Mark W. Farnham <mwf@xxxxxxxx> wrote:
>>
>> <snipped to fit>
>>
>> To repeat myself, this is probably due to having 600 columns. Tanel
>> suggested checking the increase in continued-row reads, which will
>> provide you the evidence.
>>
>>
>>
>> As a practical matter, you can also check:
>>
>> select c1, c2, ..., cn from tab
>>
>> where c# is the column name for column position, left to right, with
>> n large enough that you have an unindexed column set (so that you
>> still get a full table scan).
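>>
>> (A concrete, hypothetical version of that probe, assuming c1..c3 are
>> unindexed:
>>
>> select /*+ full(t) */ c1, c2, c3 from tab t;
>>
>> then watch whether the session waits on 'db file scattered read'
>> rather than 'db file sequential read'.)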
>>
>>
>>
>> Then, if you don't really need all the columns from the table, and
>> none of the columns you need for a particular query are beyond column
>> 254, or reach column values not stored within the first block piece
>> of an individual row, you could restore multiblock reading for a
>> particular query. You may provide a view that retrieves the columns
>> that most often fit in the first row piece. Of course, updates may
>> move things around, and Oracle posts no guarantee that column values
>> are stored left to right in the block when row chaining is required.
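>>
>> (Such a view might look like this, with hypothetical names:
>>
>> create or replace view tab_lead as
>> select c1, c2, c3   -- only columns that live in the first row piece
>> from tab;
>>
>> so scans against tab_lead never have to chase the later row pieces.)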
>>
>>
>>
>> (I wrote 254 rather than 255 because, if I remember correctly, when a
>> row is continued because of being over 255 columns, you only get 254
>> in the first block, and the row continuation location is in the array
>> slot for the 255th column. I could be wrong about that. Since you
>> already have 600 columns, it is moot. You've got three row pieces per
>> row unless you have a pile of trailing null column values, and even
>> then I'm not sure and I'd have to test it.)
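>>
>> (An untested sketch of such a test: build a wide table, populate only
>> the first column, and see what ANALYZE reports:
>>
>> declare
>>   l_sql varchar2(32767) := 'create table t300 (c1 number';
>> begin
>>   -- build a 300-column table definition dynamically
>>   for i in 2 .. 300 loop
>>     l_sql := l_sql || ', c' || i || ' number';
>>   end loop;
>>   execute immediate l_sql || ')';
>>   -- populate only c1, leaving 299 trailing nulls per row
>>   execute immediate 'insert into t300 (c1) ' ||
>>     'select rownum from dual connect by level <= 1000';
>>   commit;
>> end;
>> /
>> analyze table t300 list chained rows into chained_rows;
>> select count(*) from chained_rows where table_name = 'T300';
>>
>> If the trailing nulls are not stored, the rows should come back
>> unchained.)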
>>
>>
>>
>> After you do the entity-relationship requirements for a logical
>> schema design, you also need to do physical schema design if
>> performance requirements are among your considerations. One of the
>> bits of physical design for entities that require more columns than
>> the RDBMS in question stores conveniently in the first block (for
>> block-oriented storage models) is putting the most commonly required
>> column values leftmost in the table definition, and putting the
>> columns most likely to be unreferenced and null rightmost, if a
>> single row is unlikely to fit in the first storage block. While this
>> is no guarantee, since updates may force a column value into a later
>> row-piece block, it is useful statistically. For short rows (in
>> columns or actual sum of vsize of columns) this only costs you the
>> extra storage from having null values non-last in a row. When there
>> is mnemonic value to having certain columns in a certain order and
>> grouped together in the table definition, you may face a trade-off,
>> but that is rarely a practical consideration and I would tend to
>> group columns favoring any such mnemonic value.
>>
>>
>>
>> For a particular case, you can determine whether a rebuild
>> rearranging the column order might be worth the effort. But you need
>> to know the details of the functional use of the object. Sometimes it
>> may even be worthwhile to divide a logically cohesive entity's row
>> definition into multiple physical table pieces linked by a common
>> unique id column set. Existing select * queries can be serviced by a
>> view while the rest of the queries are repaired, and you can make
>> sure you follow all the rules so that it is an updatable view for
>> existing DML statements. I'm always a bit skeptical about the logical
>> schema design when a table has this many columns, but of course that
>> ship may have sailed before you arrived, and it is not my intention
>> to address the question of whether an entity really needs 600
>> columns, but rather to help you with remediation possibilities.
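>>
>> (A sketch of that vertical split, with made-up names:
>>
>> create table cust_hot (
>>   id     number primary key,
>>   name   varchar2(100),
>>   status varchar2(10));
>>
>> create table cust_cold (
>>   id      number primary key references cust_hot,
>>   notes   varchar2(4000),
>>   legacy1 varchar2(100));
>>
>> create or replace view customer as
>> select h.id, h.name, h.status, c.notes, c.legacy1
>> from cust_hot h join cust_cold c on c.id = h.id;
>>
>> Since the join is key-preserved, simple DML against the view can
>> still work, subject to the usual updatable-join-view rules.)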
>>
>>
>>
>> If
>>
>> select c1, c2, ..., cn
>>
>> (etc.) restores multiblock reads, please be sure to compare the total
>> read time required versus the single block read time for the same
>> queries using complete rows before embarking on remediation efforts.
>> Depending on the hierarchy of your cache structures down to the
>> "spinning rust", your mileage may vary. Of course, if you mostly only
>> need column values that are stored most often in the first row piece,
>> the big value of not reading the other row-piece blocks is avoiding
>> the LIO work. Right, Kevin? So if you reimplement on Exadata and
>> convert your queries to use only the columns you need, that would be
>> another way to go. Quite a bit more expensive an experiment, though.
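>>
>> (One hypothetical way to compare the two, run from the session doing
>> the scans before and after each variant:
>>
>> select event, total_waits, time_waited_micro
>> from v$session_event
>> where sid = sys_context('userenv','sid')
>> and event like 'db file%read';
>> )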
>>
>>
>>
>> Good luck.
>>
>> mwf
>>
>> <snip>
>>
>>
>



-- 
Martin Berger           martin.a.berger@xxxxxxxxx
Lederergasse 27/2/14           +43 660 660 83306
1080 Wien                                       http://berx.at/

--
//www.freelists.org/webpage/oracle-l

