Re: Chained Rows - Inter block chaining

  • From: Kenny Payton <k3nnyp@xxxxxxxxx>
  • To: dmarc-noreply@xxxxxxxxxxxxx
  • Date: Thu, 8 Jan 2015 21:16:06 -0500

I can’t believe I’ve put this off since April, although it’s no coincidence 
that my company was acquired in May.  

I was finally able to test the hidden parameter and it works (mostly).  I 
used imp, to keep it simple, to load 100k rows that all had populated 
columns beyond position 255.  With the parameter set to 0, simulating 
pre-parameter behavior, the data loaded with 3,420 chained rows.  After 
setting the parameter to 8000 in an 8k block size database, I reloaded the 
data; I expected 0 chained rows but was left with 29.  I have yet to explain 
those 29, since they are not too large to fit in a block (average row size 
of 852 bytes) and are smaller than the 8000-byte threshold I set.  After a 
CTAS of the previously loaded table, the resulting table with the same 100k 
rows reported 0 chained rows.  I ran the load process a number of times with 
consistent results.
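
In case anyone wants to repeat the test, this is roughly how the chained 
rows can be counted (test_tab is just a placeholder for my test table):

  -- one-time setup: creates the CHAINED_ROWS table
  @?/rdbms/admin/utlchain.sql

  -- record the ROWID of every chained or migrated row in the test table
  ANALYZE TABLE test_tab LIST CHAINED ROWS INTO chained_rows;

  SELECT COUNT(*) FROM chained_rows WHERE table_name = 'TEST_TAB';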

Nonetheless, although not perfect, it makes a significant dent in the number 
of inter-block chained rows, and after some additional testing we will be 
excited to get this into production.  If I can find some time to dig deeper, 
I’ll dump one of the affected blocks and see what is going on.
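
If I get there, the dump itself should just be something like this: use 
DBMS_ROWID against one of the 29 leftover rows to find its file and block, 
then dump that block to a trace file (test_tab is again a placeholder, and 
the literal file/block numbers would come from the first query):

  SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file#,
         DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block#
    FROM test_tab
   WHERE rowid = :one_of_the_29;

  ALTER SYSTEM DUMP DATAFILE 7 BLOCK 12345;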

In case you were wondering, here is the parameter.  
_rowlen_for_chaining_threshold
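
If anyone wants to try it, something like this should work.  Hidden 
parameters need double quotes and should only be touched on a test system 
or with Oracle Support's blessing; I bounced the instance afterwards rather 
than checking whether it can be changed dynamically:

  ALTER SYSTEM SET "_rowlen_for_chaining_threshold" = 8000 SCOPE = SPFILE;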

Thanks again, Gaja, for pointing out the bug (way back in April). 

Thanks,
Kenny





> On Apr 29, 2014, at 8:39 AM, Kenny Payton <K3nnyp@xxxxxxxxx> wrote:
> 
> 
> Bug #9373758 seems like a silver bullet for what I’ve been looking for.  
> Thanks.  We will be testing this soon and hopefully roll it into production 
> in the upcoming weeks.
> 
> Kenny
> 
> 
> 
> On Apr 27, 2014, at 6:29 PM, Kenny Payton <k3nnyp@xxxxxxxxx> wrote:
> 
>> Thanks for the note.  I will dig into it tomorrow.  In my mind, splitting 
>> the table in two doesn't help when you need the second half of the row: you 
>> still end up with multiple block reads unless the two tables are clustered, 
>> and you also have to maintain another index. 
>> 
>> 
>> On Apr 27, 2014 2:01 PM, "Gaja Krishna Vaidyanatha" 
>> <dmarc-noreply@xxxxxxxxxxxxx> wrote:
>> Hi Kenny,
>> 
>> In addition to everything the others have said, if you are on 11.2.0.3, you 
>> may be running into Bug#9373758 on tables that have more than 255 columns. 
>> The flavor of this bug we encountered was that any DML to the table or its 
>> partitions caused the second row piece, which was originally in the same 
>> block, to be migrated to a different block. So we went from two row pieces 
>> in the same block to two row pieces in two different blocks. Issuing a 
>> "move partition" fixed the issue and was the workaround until 11.2.0.4 was 
>> available. But we unfortunately also hit another flavor of the bug in 
>> 11.2.0.4.
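>> 
>> For reference, the workaround was along these lines (table, partition and 
>> index names here are made up for illustration):
>> 
>>   -- rewrite the partition so both row pieces land in the same block again
>>   ALTER TABLE wide_tab MOVE PARTITION p_2014_q1;
>>   -- the move marks local index partitions UNUSABLE, so rebuild them
>>   ALTER INDEX wide_tab_pk REBUILD PARTITION p_2014_q1;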
>> 
>> So, bottom line: we engaged in a table redesign to get the number of 
>> columns under 255. This was the only surefire way to avoid the issue. Hope 
>> this helps!
>> 
>> Cheers,
>> 
>> Gaja
>> 
>> Gaja Krishna Vaidyanatha,
>> CEO & Founder, DBPerfMan LLC
>> http://www.dbperfman.com
>> http://www.dbcloudman.com
>> Phone - +1 (650) 743-6060
>> LinkedIn - http://www.linkedin.com/in/gajakrishnavaidyanatha
>> Co-author: Oracle Insights: Tales of the Oak Table - 
>> http://www.apress.com/9781590593875
>> Primary Author: Oracle Performance Tuning 101 - 
>> http://www.amzn.com/0072131454
>> Enabling Exadata, Big Data and Cloud Deployment & Management for Oracle
>> 
>> From: Tim Gorman <tim@xxxxxxxxx>
>> To: oracle-l@xxxxxxxxxxxxx 
>> Sent: Thursday, April 24, 2014 4:52 AM
>> Subject: Re: Chained Rows
>> 
>> Ken,
>> 
>> Coming into this thread late, so I've not seen your original post.
>> 
>> Since the table has so many columns, and if there is the possibility that 
>> not all of those columns are used, have you tried re-organizing the table so 
>> that the most likely NULL columns trail, and the most likely NOT NULL 
>> columns lead?  I have a PL/SQL stored procedure called CARL (file "carl.sql" 
>> at "http://evdbt.com/scripts/") which is 
>> designed to answer the "what if" question about using "trailing NULL 
>> columns" compression, much the same as the DBMS_COMPRESSION package is 
>> designed to answer the "what if" question about whether or not to use 
>> different types of compression (i.e. basic, oltp, hcc, etc).  The big 
>> difference is that CARL does not temporarily create a compressed table as 
>> DBMS_COMPRESSION does, but calculates everything using gathered statistics 
>> residing in the DBA_TAB_COLUMNS view, so you'd want good statistics.
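>> 
>> Separately from CARL, a quick look at which columns are mostly NULL (and 
>> are therefore candidates to trail) can come straight from those same 
>> statistics; the owner and table name below are placeholders:
>> 
>>   SELECT column_name, num_nulls, num_distinct
>>     FROM dba_tab_columns
>>    WHERE owner = 'APP'
>>      AND table_name = 'WIDE_TAB'
>>    ORDER BY num_nulls DESC;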
>> 
>> It may not help your problem, but it'll tell you pretty quickly whether it 
>> is worth trying.
>> 
>> Hope this helps.
>> 
>> Thanks!
>> 
>> -Tim
>> 
>> On 4/24/2014 3:34 AM, Kenny Payton wrote:
>>> Yes.  We are at 8k, and I suspect a 16k block size would cut the rate 
>>> roughly in half.  We have talked about doing this for some time.  
>>> Unfortunately, the most widely affected area of the database is 20T of 
>>> data out of 160T of databases.  Also, efficiently managing multiple buffer 
>>> pools becomes difficult.
>>> What I find challenging is that I don't see a way to avoid it altogether. 
>>> If you have a table with more than 255 columns and you populate any of the 
>>> columns beyond 255, this is going to happen at some difficult-to-predict 
>>> rate.
>>> Ideally, Oracle would put all row pieces in a single block if they fit, 
>>> and only consider the row chained and break it up when they don't.  
>>> Instead, it seems to treat each row piece independently from the start. 
>>> On Apr 24, 2014 1:42 AM, "Hans Forbrich" <fuzzy.graybeard@xxxxxxxxx> wrote:
>>> Is this possibly a valid use case for a larger tablespace block size?
>>> 
>>> On 23/04/2014 4:29 PM, Kenny Payton wrote:
>>>> Thanks.
>>>> Unfortunately, the data and access patterns change pretty frequently, 
>>>> which is another reason we find ourselves in this situation.
>>>> On Apr 23, 2014 6:02 PM, "Sayan Malakshinov" <xt.and.r@xxxxxxxxx> wrote:
>>>> 
>>>> On Thu, Apr 24, 2014 at 12:56 AM, Kenny Payton <k3nnyp@xxxxxxxxx> wrote:
>>>> I know a lot of the tricks for avoiding chained rows and migrated rows, 
>>>> but the only trick I know of to eliminate these intra- or inter-block 
>>>> chained rows is to break the table into multiple tables or the row into 
>>>> multiple rows. 
>>>> 
>>>> Sometimes it can be helpful to redefine the table, moving less-used 
>>>> columns to the end of the row. That reduces extra work if most queries 
>>>> use only the leading columns.
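>>>> 
>>>> A minimal offline sketch of that reordering (table and column names are 
>>>> placeholders; DBMS_REDEFINITION can achieve the same online with a 
>>>> column mapping):
>>>> 
>>>>   -- rebuild with the hot columns first and the rarely used one last
>>>>   CREATE TABLE t_reordered AS
>>>>   SELECT busy_col1, busy_col2, rarely_used_col
>>>>     FROM t;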
>>>> 
>> 
>> 
>> 
> 
