Hello; I am watching a very long "select *" from a table with about 500 million rows. It has been running under Oracle 10.2.0.3 for hours now. The table is about 35-40G, and the app team needs to dump it into a flat file. My DB block size is 8K, and the extent size (this is an LMT tablespace) is 300M.

I have been watching v$session_longops, and the message I keep seeing looks something like this:

Rowid Range Scan: TABLE_NAME : 2899 out of 12647 Blocks done

The table has about 5 million blocks; 12647 blocks is about one third of one extent. I run the query with DOP=4, so it looks like the parallel slaves are each reading one third of an extent at a time. Each such range scan seems to take about 40 minutes. Extrapolating to 5 million blocks yields a huge number which will not be satisfactory to the application.

Why would it take 40 minutes to read roughly 100M worth of data? What could be the bottleneck?

thank you

Gene Gurevich

--
//www.freelists.org/webpage/oracle-l
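
For concreteness, here is the back-of-envelope extrapolation behind that "huge number". This is only a sketch: it assumes the observed 40 minutes applies to each ~12647-block rowid range and that the four slaves (DOP=4) work on ranges fully concurrently.

```python
# Rough extrapolation of total runtime from the observed range-scan rate.
# Assumptions (not measured): each parallel slave takes ~40 minutes per
# ~12647-block rowid range, and the 4 slaves (DOP=4) overlap perfectly.
total_blocks = 5_000_000      # ~5 million blocks in the table
blocks_per_range = 12647      # one rowid range (~one third of a 300M extent at 8K blocks)
minutes_per_range = 40        # observed in v$session_longops
dop = 4                       # degree of parallelism

ranges = total_blocks / blocks_per_range      # ~395 ranges to scan
serial_minutes = ranges * minutes_per_range   # if ranges were scanned one at a time
parallel_hours = serial_minutes / dop / 60    # with 4 concurrent slaves

print(f"~{parallel_hours:.0f} hours (~{parallel_hours / 24:.1f} days) at the observed rate")
```

Even with perfect parallel overlap, this comes out to multiple days, which matches the concern above.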