Re: Calculations in maximizing rows per block

  • From: Tim Gorman <tim.evdbt@xxxxxxxxx>
  • To: mwf@xxxxxxxx, oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 15 Aug 2017 07:05:33 -0600

Rich,

Not sure whether your table of 268 columns might be sparsely populated, but if the possibility exists, you might be interested in verifying with the "carl.sql" procedure from this page HERE <http://evdbt.com/scripts/>, and referencing this presentation HERE <http://evdbt.com/download/presentation-three-types-of-table-compression/> and this white paper HERE <http://evdbt.com/download/paper-three-types-of-table-compression/>. CARL (calculate average row length) will attempt to estimate how much you can reduce AVG_ROW_LEN by reorganizing the table so that all non-null columns come first in each row, leaving any null columns to trail. Along the way, it will also attempt to calculate AVG_ROW_LEN for the present organization. For CARL to work properly, it is best to have analyzed the table for column-level statistics. The presentation and white paper also make passing mention of intra-block chaining, as well as the other types of chaining and row migration.
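The saving CARL is estimating comes from how Oracle stores NULLs: an interior NULL column costs one length byte, while trailing NULLs are simply not stored. A minimal sketch of that arithmetic, with entirely hypothetical column statistics (these are not CARL's actual internals, just the idea):

```python
# Rough sketch of the "trailing NULLCOLS" saving that CARL estimates.
# Assumption: an interior NULL column is stored as a single length
# byte, while NULL columns at the end of a row are not stored at all.
# Row-piece header overhead is ignored; column stats are hypothetical.

# (name, avg_col_len_when_not_null, fraction_of_rows_where_null)
cols = [
    ("ORDER_ID",   6, 0.00),
    ("NOTES",     40, 0.90),
    ("STATUS",     2, 0.00),
    ("AUDIT_WHO", 12, 0.95),
    ("AMOUNT",     5, 0.10),
]

def avg_row_len(columns):
    """Expected stored bytes per row for a given column order."""
    total = 0.0
    for i, (_, avg_len, null_frac) in enumerate(columns):
        total += (1 - null_frac) * avg_len
        # A NULL is "trailing" only if every later column is also null;
        # otherwise it costs 1 length byte.
        p_all_later_null = 1.0
        for _, _, nf in columns[i + 1:]:
            p_all_later_null *= nf
        total += null_frac * (1 - p_all_later_null) * 1
    return total

current = avg_row_len(cols)
# Reorder so the most-often-null columns trail:
reordered = sorted(cols, key=lambda c: c[2])
optimized = avg_row_len(reordered)
print(f"current order avg row len ~ {current:.1f} bytes")
print(f"reordered     avg row len ~ {optimized:.1f} bytes")
```

With null-heavy columns moved last, the expected stored row shrinks, since those NULLs become trailing and cost nothing.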

With more than 255 columns, you can't take advantage of BASIC or OLTP compression, but any version of Oracle and any wide table can take advantage of "trailing NULLCOLS" compression.
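For what it's worth, the rows-per-block arithmetic can be sketched in a few lines. The overhead figures below are approximations I've assumed for illustration, not official Oracle numbers; the point is that with a 2667-byte average row plus two row pieces' worth of per-row overhead (since 268 > 255 forces intra-block chaining), three rows sit right at the edge of an 8K block, which would explain only some blocks accepting a third row:

```python
# Back-of-the-envelope check of why three 2667-byte rows may not fit
# in an 8K block. All overhead figures are rough assumptions, not
# official Oracle numbers.

BLOCK_SIZE = 8192
BLOCK_OVERHEAD = 110        # ~fixed block header + transaction layer (assumed)
ITL_BYTES = 24 * 2          # ~24 bytes per ITL slot, default INITRANS 2
ROW_PIECE_HEADER = 4        # ~row header + column count per row piece (assumed)
ROW_DIR_ENTRY = 2           # row directory entry per row piece

def rows_that_fit(avg_row_len, n_cols, pctfree):
    # A row with more than 255 columns is split into (at least) two
    # row pieces ("intra-block chaining"), so it pays the per-piece
    # overhead twice.
    pieces = 2 if n_cols > 255 else 1
    per_row = avg_row_len + pieces * (ROW_PIECE_HEADER + ROW_DIR_ENTRY)
    usable = BLOCK_SIZE - BLOCK_OVERHEAD - ITL_BYTES
    usable -= BLOCK_SIZE * pctfree // 100
    return usable // per_row

print(rows_that_fit(avg_row_len=2667, n_cols=268, pctfree=1))  # 2
print(rows_that_fit(avg_row_len=2667, n_cols=268, pctfree=0))  # 2
```

Under these assumptions, three rows need about 3 x 2679 = 8037 bytes against roughly 8034 usable, so rows even slightly below the 2667-byte average squeak in three per block while average-and-above rows do not.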

Hope this helps...

-Tim


On 8/14/17 21:09, Mark W. Farnham wrote:


There is a decent chance you would benefit by physically storing your table as two tables in a cluster, referenced by a view (make sure you can follow the rules for an insertable view if you don’t want to have to change your applications). I only mention this because you indicated your largest table might be worth some effort.

Since you have existing data, you can determine the “most null” columns, and you may have knowledge of which columns are used least. Putting the “most null” columns last can save a lot of space if you have them, and it saves some row-reading overhead because Oracle takes a shortcut when all the remaining columns are null.

Putting the least frequently used columns late physically **might** pay off if your queries routinely only reference the columns used rather than “*”.

Even without clustering, that might mean you don’t need the second row piece for a query.

I mention this because my experience in the field shows that tables with over 255 columns **mostly** have a lot of unpopulated columns, a lot of rarely or never used columns, and sometimes columns that are functionally audit-trail columns that could be split off as a second logical object. IF you can get rid of enough of them to get under 255, that is plan A.

RE: pctfree 0, yes, that controls whether a new row will be inserted into the block and does not change the mechanics of whether empty space in the block can be used for row updates that lengthen the row.

The ASSM space records are in their own blocks of the segment (not in a separate segment as I had BEGGED*) and additional space record blocks are allocated as needed.

(The order in which blocks are made eligible for new non-direct inserts may be different for ASSM and freelists. I’m not aware of any official documentation on the order of candidates in the freelist structure, and no utility exists to order or manage blocks that have become eligible for insertion. This can generate the “empty front” condition if all rows are coincidentally removed from blocks at the physical beginning of the storage allocation. Deletes mixed with only direct inserts tend to systematically generate this condition, since the direct inserts are placed high.)

*BEGGED: A reliability and simplicity claim was made when ASSM was previewed to me and I suggested this. I was not effective in persuading them that the possibility of defining the space management blocks as either a cache object or an object on faster storage than the table rows was worth having two segments to maintain, nor did the argument of not having to read and discard (or interrupt scanning reads to pass over) the space management blocks prevail. Sigh. Discussion time was up, so I didn’t even mention the possibility of content statistics on blocks of the space management records. Besides, that IS complicated.

All the stuff mentioned earlier in the thread is probably gold.

mwf

*From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] *On Behalf Of* Mladen Gogala
*Sent:* Monday, August 14, 2017 5:07 PM
*To:* oracle-l@xxxxxxxxxxxxx
*Subject:* Re: Calculations in maximizing rows per block

Of course, it is always good to read the classics:

https://jonathanlewis.wordpress.com/2010/06/08/continued-rows/

https://jonathanlewis.wordpress.com/2017/05/19/255-columns-2/

https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1830023856761

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1606678460320

BTW, are they going to rename AskTom to AskConnor?

On 08/14/2017 04:13 PM, Sayan Malakshinov wrote:

    Hi Rich,

    I've described intra-block row chaining in detail here:
    http://orasql.org/2017/02/12/intra-block-row-chaining/

    It's better to migrate to 12.2:
    http://orasql.org/2017/04/21/intra-block-row-chaining-optimization-in-12-2/

    On Mon, Aug 14, 2017 at 10:23 PM, Rich J
    <rjoralist3@xxxxxxxxxxxxxxxxxxxxx
    <mailto:rjoralist3@xxxxxxxxxxxxxxxxxxxxx>> wrote:

    Hey all,

    I've got a table of 268 columns with a mix of NCHAR and NUMBER
    columns in 11.2 (moving to 12.1 soonish?). I'm having issues
    determining why I can only fit 2 rows in an 8K block with a
    PCTFREE of 1, with only 1/4 of rows packed 3 per block with a
    PCTFREE of 0.  Stats collection says the average row size is 2667
    bytes.

    My own sizing calculations based on the dated information from
    http://docs.oracle.com/cd/A58617_01/server.804/a58397/apa.htm have
    some problems.  Among them, it seems that there may be ASSM
    overhead in the block that could not have been accounted for in
    that pre-ASSM doc. This leads to a few questions:

    1) Are the block and record layouts documented in newer versions
    of Oracle?  Where?  The 10g docs mention "intra-block chaining"
    for rows with more than 255 columns, but no detail.  Nothing even
    in MOS that I can see.  Seems this is getting /less/ documented
    with each new version...

    2) With a PCTFREE of 0, does Oracle still use the remaining space
    in the block for row mods?  It seems that a 4K row in an 8K block
    leaves a lot of room for row mods, but again I can't find where or
    if this is documented.  Yes, I may be able to see this
    empirically, but I prefer to use that to back up (or refute!) the
    docs.

    All things considered, packing 3 rows into each block instead of
    2 in our largest physical table should have a decent positive
    impact on the business.

    Thanks!
    Rich



--
    Best regards,
    Sayan Malakshinov

    Oracle performance tuning engineer

    Oracle ACE Associate
    http://orasql.org

