If the problem is that the row pieces are bigger than the usable space in a single block, then Hans' suggestion is worth exploring. If the row pieces would fit in a block, but dynamic size changes to columns cause one of the row pieces to migrate, that will continue to be a problem.

One way to stabilize this is to adjust PCTFREE so that only as many rows fit in a block at their initial insert size as will still fit, without migrating, at their maximum length. This is more useful IF your maximum final row lengths are of similar size, or at least have a known maximum. This can be a chronic problem when a skeleton of a row is inserted initially and then, after a lot more rows have been inserted, some other process grinds through the new data filling in formerly NULL or shorter columns.

*Sometimes* it is worthwhile to add one or more columns that get a fixed-length default value on insert and that can be shortened or nullified to compensate for the length subsequently added as the additional column values are filled in. Make sure you do this in the correct order, or in a single update, to prevent the row piece migration.

If you are managing your empty space in blocks tightly, don't forget that numbers (especially totals that get updated constantly upward) can grow (roughly a byte for every two additional significant digits).

The other thing you can do, IF the aggregate of the row pieces will fit in a single block (per row), is to move to multiple tables. You've previously rejected this because you did not consider using a cluster for those tables. IF necessary, you can more or less transparently restate your current table as a view of the multiple new tables in the cluster, with each table generating no artificial row pieces (unless you have columns that force out-of-line storage) by keeping each one under the boundary for a single piece. This can be combined with Hans' suggestion to use a larger block size if needed.
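The PCTFREE tactic above might look like this sketch; the table name and percentages are hypothetical, and the right value depends on your own insert-size versus final-size ratio:

```sql
-- Hypothetical example: rows insert at ~200 bytes and grow to ~500 bytes,
-- so reserve roughly 60% of each block for in-place row growth.
ALTER TABLE orders PCTFREE 60;

-- The new PCTFREE only governs blocks formatted from now on; to apply it
-- to existing data, rebuild the table (which invalidates its indexes):
ALTER TABLE orders MOVE PCTFREE 60;
ALTER INDEX orders_pk REBUILD;
```

The MOVE/REBUILD step is only needed if you want already-populated blocks reorganized, not just future inserts.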
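The filler-column trick can be sketched as follows; the names are hypothetical, and the CHAR(200) reservation would be tuned to the growth you actually expect:

```sql
-- Reserve space at insert time with a fixed-length default...
ALTER TABLE orders ADD (growth_filler CHAR(200) DEFAULT 'X');

-- ...and release it in the SAME update that lengthens the real columns,
-- so the row's net size change is near zero and the piece does not migrate.
UPDATE orders
   SET notes         = :filled_in_text,  -- formerly NULL column being populated
       growth_filler = NULL              -- give the reserved space back
 WHERE order_id = :id;
```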
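You can see that variable-length NUMBER growth directly with VSIZE, since Oracle stores roughly one byte per two significant digits plus a length/exponent byte:

```sql
-- An updated-upward total occupies more bytes as it gains digits:
SELECT VSIZE(9)      AS b1,   -- 2 bytes
       VSIZE(99)     AS b2,   -- 2 bytes
       VSIZE(9999)   AS b3,   -- 3 bytes
       VSIZE(999999) AS b4    -- 4 bytes
  FROM dual;
```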
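A minimal sketch of the cluster-plus-view restatement, with hypothetical table and column names; each piece stays well under the 255-column boundary that forces an extra row piece:

```sql
-- Cluster the split tables on the shared key so each logical row's
-- pieces co-locate in the same cluster block(s).
CREATE CLUSTER row_split_cluster (id NUMBER) SIZE 1024;
CREATE INDEX row_split_cluster_ix ON CLUSTER row_split_cluster;

CREATE TABLE wide_part1 (
  id    NUMBER PRIMARY KEY,
  col_a VARCHAR2(400),
  col_b VARCHAR2(400)
) CLUSTER row_split_cluster (id);

CREATE TABLE wide_part2 (
  id    NUMBER PRIMARY KEY,
  col_c VARCHAR2(400),
  col_d VARCHAR2(400)
) CLUSTER row_split_cluster (id);

-- The original table image, restated more or less transparently:
CREATE OR REPLACE VIEW wide_table AS
SELECT p1.id, p1.col_a, p1.col_b, p2.col_c, p2.col_d
  FROM wide_part1 p1
  JOIN wide_part2 p2 ON p2.id = p1.id;
```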
Carefully consider the life cycle and partitioning limitations currently associated with clusters. When they are the correct solution, clusters seem like a magic trick.

I tend toward skepticism that useful relations have over 200 attributes. Please do not equate that with a claim that there are no useful relations with over 200 attributes. But do seriously consider whether you can lose many of the columns without losing anything useful. For example, if some column in your table is NULL for every row you have, maybe you can drop it without losing any information. If you have SELECT * queries in applications you cannot fix or change, possibly you can make the columns that are actually used a physical table and use a view as a projection of your current table image.

All these suggestions are of the something-to-consider variety rather than the silver-bullet variety, and your mileage may vary. That's all I can think of off the top of my head.

mwf

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Hans Forbrich
Sent: Thursday, April 24, 2014 1:41 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Re: Chained Rows

Is this possibly a valid use case for a larger tablespace block size?

On 23/04/2014 4:29 PM, Kenny Payton wrote:
Thanks. Unfortunately the data and access patterns change pretty frequently. Another reason we find ourselves in this situation.

On Apr 23, 2014 6:02 PM, "Sayan Malakshinov" <xt.and.r@xxxxxxxxx> wrote:
On Thu, Apr 24, 2014 at 12:56 AM, Kenny Payton <k3nnyp@xxxxxxxxx> wrote:
I know a lot of the tricks for avoiding chained rows and migrated rows, but the only trick I know of to eliminate these intra- or inter-block chained rows is to break the table into multiple tables or the row into multiple rows.

Sometimes it might be helpful to redefine the table, moving less-used columns to the end of the row. That reduces the extra work if most queries use only the leading columns.
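Following up on the always-NULL column check above: assuming optimizer statistics are reasonably current, the data dictionary can identify candidates (table name hypothetical):

```sql
-- Columns whose NULL count equals the row count are NULL in every row.
SELECT c.column_name
  FROM user_tab_col_statistics c
  JOIN user_tables            t ON t.table_name = c.table_name
 WHERE c.table_name = 'ORDERS'
   AND c.num_nulls  = t.num_rows;
```

Verify with a direct COUNT before dropping anything, since statistics can be stale.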
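Sayan's column-reordering suggestion can be sketched with a simple rebuild (DBMS_REDEFINITION is the online alternative); the column names are hypothetical:

```sql
-- Rebuild with frequently accessed columns first, so most queries can be
-- satisfied from the first row piece without touching the chained tail.
CREATE TABLE orders_reordered AS
SELECT order_id, status, amount,   -- hot columns up front
       audit_note, legacy_xml      -- rarely read columns at the end
  FROM orders;
-- ...then swap names and rebuild constraints and indexes.
```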