RE: Q about compressed table

  • From: Tanel Poder <tanel.poder.003@xxxxxxx>
  • To: rjamya@xxxxxxxxx, 'Oracle Discussion List' <oracle-l@xxxxxxxxxxxxx>
  • Date: Sat, 20 Jan 2007 02:29:17 +0800

This query should give you a low-confidence estimate. It is based on
dba_tables.avg_row_len and num_rows, so your statistics must be up to date.
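If the stats are stale, you could refresh them first with something like this
(the owner and table names below are just placeholders):

begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',      -- placeholder schema
    tabname          => 'BIG_TABLE',  -- placeholder table name
    estimate_percent => dbms_stats.auto_sample_size
  );
end;
/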
 
I wrote it just now and it seems I have missed something, as the estimate comes
out 3-5% lower than the real space usage measured after uncompressing the table
with alter table move...
 
I tested it on a freelist-managed tablespace; ASSM adds additional overhead.
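If you need to check which kind of tablespace you have, dba_tablespaces shows it:

select tablespace_name, segment_space_management  -- MANUAL = freelists, AUTO = ASSM
from dba_tablespaces;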
 
select 
 t.num_rows   -- number of rows in table
 * ( t.avg_row_len  -- average row length
  + 2   -- there's a 2-byte entry in block row directory for each row
  + 3   -- dbms_stats seems to not account for row header bytes
 ) 
 * ( 100   -- scale up for per-block overhead (pct_free, block headers, wasted space)
  / ( 100
   - t.pct_free 
   - ( 
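    -- per-block header overhead in bytes: KCBH (cache header), KTBBH (transaction
    -- header), KDBH (data header), plus one KTBIT ITL entry per ini_trans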
    (  select sum( type_size * decode("TYPE", 'KTBIT', t.ini_trans, 1) ) 
     from v$type_size 
     where "TYPE" in (
       'KCBH', 
       'KTBBH', 
       'KDBH', 
       'KTBIT')
    )
    + t.avg_row_len * 0.5
   )
   / ts.block_size * 100 -- express the overhead as a percentage of block size, same units as pct_free
  ) 
 ) estimated_size
from
 dba_tables t,
 dba_tablespaces ts
where
 t.tablespace_name = ts.tablespace_name
and t.owner = '&1'
and t.table_name = '&2'
/
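
For a rough sanity check, the currently allocated (compressed) size of the same
table can be summed up from dba_segments with the same substitution variables
(this counts only the table/partition segments, not LOBs or indexes):

select sum(bytes)/1024/1024/1024 compressed_gb
from dba_segments
where owner = '&1'
and segment_name = '&2'
/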

Tanel.
 



  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of rjamya
Sent: Friday, January 19, 2007 22:53
To: Oracle Discussion List
Subject: Q about compressed table


I have a 400GB compressed partitioned table in a 10gR2 db. Is there a way to
estimate uncompressed size without exporting/dpexporting or copying the
table into another schema?  The estimate feature of expdp doesn't work all
that well with compressed segments. 

TIA
rjamya 
----------------------------------------------
Got RAC?

