Hi oracle-l,

Again I turn to the combined list knowledge for help. I'm running some Data Pump imports; currently I'm working on a ~32GB datapump export file which contains the data only for a single table.

Whilst importing, I keep running out of temp space. For a while I'd just grow the temp tablespace and try again, but it's now at the point where the import is using more temp space than the size of the file it is importing! I've allocated over 60GB of temp space, and the import worker easily fills it all up over time and fails with the expected "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP".

It's definitely the import which is using the temp space; the output below shows how much temp space each session is using, and was taken after the impdp had been running for roughly 35 minutes:

  SID  SERIAL#  TEMP_MB_USED  MODULE||EVENT
-----  -------  ------------  -------------------------------------------------------------------------
  476       13             1  Data Pump Master wait for unread message on broadcast channel
  479     1003         24581  Data Pump Worker Datapump dump file I/O
  478       47             1  udi@mq-bb-prod-db (TNS V1-V3) wait for unread message on broadcast channel

I assume the formatting will be messed up, but that's basically showing 24,581 MB of temp space used so far by the Data Pump Worker session.

What I am seeing is the Data Pump Worker session reading from the dump file ("Datapump dump file I/O" event) and then writing out to temp space ("direct path write temp" event), but never actually writing to the table's datafiles!

Can anyone explain why impdp is purely writing to temp space and not loading into the table as I'd hoped? It's being executed simply as:

$ impdp 'scott/tiger' DIRECTORY=ORADBF \
    TABLE_EXISTS_ACTION=TRUNCATE \
    CONTENT=DATA_ONLY \
    TABLES=SCOTT.EMP \
    DUMPFILE=exp-table_name.dmp

Regards,
Tom
--
//www.freelists.org/webpage/oracle-l
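P.S. In case it helps anyone reproduce the per-session temp figures above, they came from a query along these lines (a sketch joining V$SESSION to V$TEMPSEG_USAGE; the exact column list and the hard-coded 8K block size are my assumptions, so adjust for your temp tablespace's block size):

```sql
-- Sketch: temp space used per session, in MB.
-- Assumes an 8K block size for the TEMP tablespace; check
-- DBA_TABLESPACES.BLOCK_SIZE if yours differs.
SELECT s.sid,
       s.serial#,
       ROUND(SUM(u.blocks) * 8192 / 1024 / 1024) AS temp_mb_used,
       s.module || ' ' || s.event                AS "MODULE||EVENT"
FROM   v$session       s
       JOIN v$tempseg_usage u
         ON u.session_addr = s.saddr
GROUP  BY s.sid, s.serial#, s.module, s.event
ORDER  BY temp_mb_used DESC;
```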