Re: Nothing works => Import using multiple compressed files ?!!

Hmmm....

Well, you *could* try something like this...

Change
$cat `echo $DUMPDIR/vald2000_NGE_1904_05162006_*.dmp.gz | sort` |
gunzip > /VMP1/archive/ngepipe.dmp &


to

for x in `ls $DUMPDIR/vald*.gz | sort`
do
  sleep 60
  gunzip < $x > /VMP1/archive/ngepipe.dmp
done &

And list the name of the input file (which I presume is actually a named
pipe) multiple times in the
FILE= portion of your parameter file.

That is:

FILE=/VMP1/archive/ngepipe.dmp,/VMP1/archive/ngepipe.dmp,/VMP1/archive/ngepipe.dmp,/VMP1/archive/ngepipe.dmp,/VMP1/archive/ngepipe.dmp

You'll want to list it AT LEAST as many times as you have compressed data
files...

Finally, start the import BEFORE you start the "for loop".
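The whole scheme can be rehearsed without Oracle at all.  In this sketch,
a subshell with two 'cat' invocations stands in for 'imp' re-opening the
same pipe once per entry in FILE=; all of the paths are scratch names
invented for the demo, not your real ones:

```shell
#!/bin/sh
# Self-contained rehearsal of the scheme, with 'cat' standing in for 'imp'.
# The subshell re-opens the pipe once per input file, just as imp does when
# the same pipe is listed repeatedly in FILE=.
DIR=$(mktemp -d)
PIPE=$DIR/ngepipe.dmp
mkfifo $PIPE

# Two small compressed "dump" files.
echo "dump one" | gzip > $DIR/vald01.dmp.gz
echo "dump two" | gzip > $DIR/vald02.dmp.gz

# Reader first (as with imp): open the pipe once per expected file.
( cat $PIPE; cat $PIPE ) > $DIR/combined &

# Then the writer loop, one file per pass through the pipe.
for x in `ls $DIR/vald*.dmp.gz | sort`
do
  sleep 1              # stand-in for the 60-second grace period
  gunzip < $x > $PIPE
done
wait
cat $DIR/combined      # both decompressed files, in order
```

Each time a gunzip closes the pipe, the active 'cat' sees EOF and exits,
and the next 'cat' re-opens the pipe -- exactly the hand-off 'imp' performs
between entries in its FILE= list.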

What will (should) happen is this:  'imp' will start up, open the named
pipe, and suspend itself, waiting for input.  It will wait forever, if
necessary.  The 'for loop' will start up, decompress the first file, and
dump it to the named pipe.  When gunzip stops writing, 'imp' will receive
an <EOF>, stop reading, do a little work, and then open the "next" file in
its list, which is, of course, the same named pipe.  It will block,
waiting for input.

The "for loop", running concurrently, will wait for 60 seconds (giving
"imp" ample time to actually re-open the named pipe and start reading from
it) and then start writing the next data file.

And so on...

The key is this:

1.  Opening a named pipe for WRITING blocks until some process has it open
for READING; and writing to a pipe whose reader has gone away raises
SIGPIPE -- those are the "Broken pipe" errors in your log.

2.  When the process WRITING to a named pipe exits or closes the pipe, the
process READING from the pipe receives an <EOF>.  In the case of "imp",
this will cause it to close the named pipe, and move on to the "next"
"file" in its list of input files.
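Both points can be seen with a throwaway pipe and 'cat' playing the part
of "imp" (scratch paths, nothing Oracle-specific):

```shell
#!/bin/sh
# The reader opens the pipe and blocks until a writer shows up; when the
# writer closes its end, the reader sees <EOF> and terminates normally.
P=$(mktemp -d)/demo.pipe
mkfifo $P

cat $P > $P.out &     # reader: blocks, like imp, waiting for input
echo "hello" > $P     # writer: open blocks until the reader exists;
                      # closing the pipe delivers <EOF> to the reader
wait
cat $P.out            # -> hello
```

Kill the reader while the writer is still going and you get exactly the
SIGPIPE / "Broken pipe" failure from the original post.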

Personally, I'd look for alternatives to compressing the data in the first
place.  You've already found a few of the pitfalls...




On 5/19/06, Alessandro Vercelli <alever@xxxxxxxxx> wrote:

> Hi Friends,
>
> I have to import using multiple compressed (*.gz) files.
> I have 19 compressed dmp files which are named imp01.gz,
> imp02.gz etc.  All these have the data of one single schema.
>
> Now I want to import without uncompressing the files because
> of a space issue.  I tried a script from asktom.  This is my script:
>
> $cat `echo $DUMPDIR/vald2000_NGE_1904_05162006_*.dmp.gz | sort` |
> gunzip > /VMP1/archive/ngepipe.dmp &
>
> $imp parfile=/VMP1/export/ngedumps/imp.par &
>
> And the par file looks like:
>
> userid='/ as sysdba'
> file=/VMP1/archive/ngepipe.dmp
> log=/VMP1/export/ngedumps/imp03.log
> buffer=20971520
> feedback=100000
> commit=y
> ignore=y
> fromuser=nge
> touser=nge
>
> The import goes thro' the first dmp (imp01.gz) file.  But it doesn't
> continue to the second (imp02.gz) file.  It fails saying (last few lines
> from the import log):
>
> . . importing table "BILLING_2002"
> ........................
> Import file: expdat.dmp
>
> gunzip: stdout: Broken pipe
> cat: write error: Broken pipe
> cat: write error: Broken pipe
> cat: write error: Broken pipe
> cat: write error: Broken pipe
>
> Import terminated successfully with warnings.
> cat: write error: Broken pipe
> cat: write error: Broken pipe
> cat: write error: Broken pipe
>
> I couldn't make out what this means.  Does this indicate the 2G
> limitation?  Can someone help on this please?  Thanks in advance.
>
> Regards,
> Prem J
> --

Hi Prem,
the 2G limit should have been fixed since the 8i DBMS, but in any case you
can use a named pipe to work around it:

for DMPFILE in imp01.gz imp02.gz .....imp09.gz
do
     mknod /tmp/orapipe p
     gunzip < ${DMPFILE} > /tmp/orapipe &
     imp user/pass file=/tmp/orapipe <options>
     rm -f /tmp/orapipe
done

Regards,

Alessandro



--
http://www.freelists.org/webpage/oracle-l





--
Cheers,
-- Mark Brinsmead
  Staff DBA,
  The Pythian Group
  http://www.pythian.com/blogs
