Thanks Martin. Since data loading time is measured, I'm going to apply DIY
parallel execution:

1. Create 200 external tables, named ext_1 through ext_200.
2. Start 200 concurrent sqlplus sessions.
3. In each session, alter the table's LOCATION to point to the next CSV
   file and load the data.

On Sat, Jun 28, 2014 at 2:17 AM, Martin Berger <martin.a.berger@xxxxxxxxx>
wrote:

> Sidney,
>
> I don't know any way to reference the file name as a data column in
> sqlldr.
> I can suggest 2 solutions:
> 1) use a preprocessor which reads all the files for you - but it can be
> tricky to make it run with "parallel"
> 2) create one external table per file and build a view over all those
> files
>
> I'd call the 2nd method easier.
>
> hth
> Martin
>
>
> On Fri, Jun 27, 2014 at 6:30 PM, Sidney Chen <huanshengchen@xxxxxxxxx>
> wrote:
>
>> Hi List,
>>
>> I'm running a benchmark and need to load 8 thousand files; there is a
>> requirement I need your advice on.
>>
>> I need to assign the file name to the FILE_NO column: 0001 for all the
>> records in 0001.csv, 0002 for all the rows in 0002.csv, and so on up to
>> 8000 for the rows in 8000.csv. I want to reference the current file
>> name for the FILE_NO column while loading the data with parallel
>> execution. Otherwise, I need to apply DIY parallel data loading.
>> Thanks.
>>
>>
>> CREATE TABLE DATA_EXT
>> (
>>   create_dt VARCHAR2(15),
>>   FILE_NO   NUMBER
>> )
>> organization external
>> ( type oracle_loader
>>   default directory EXT_DIR
>>   access parameters (
>>     records delimited by newline skip 1
>>     badfile 'ERROR_DIR':'FIRST_LOAD.bad'
>>     logfile 'ERROR_DIR':'FIRST_LOAD.log'
>>     fields terminated by ","
>>     LRTRIM missing field values are null
>>   )
>>   location (
>>     '0001.csv'
>>   , '0002.csv'
>>   , '0003.csv'
>>   , '0004.csv'
>>   ...
>>   , '7999.csv'
>>   , '8000.csv')
>> )
>> parallel
>> reject limit 1000;
>>
>>
>> --
>> Regards
>> Sidney Chen
>>

--
Regards
Sidney Chen
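[Editor's sketch] The DIY parallel plan in the top message could be driven by a small generator script along these lines. It is only a sketch: it writes one driver script per worker session, where worker i owns external table ext_i and loads files i, i+200, i+400, ...; the target table DATA_TAB and the single-location ALTER are assumptions, and each generated load_*.sql would then be run in its own concurrent sqlplus session.

```shell
#!/bin/sh
# Generate one SQL driver script per worker session.
# Worker i repoints ext_i at successive CSV files (i, i+WORKERS, ...),
# stamping FILE_NO with each file's number. DATA_TAB is a placeholder
# target table name.
WORKERS=200
FILES=8000

i=1
while [ "$i" -le "$WORKERS" ]; do
  out="load_$i.sql"
  : > "$out"
  f=$i
  while [ "$f" -le "$FILES" ]; do
    fname=$(printf '%04d.csv' "$f")
    {
      echo "alter table ext_$i location ('$fname');"
      echo "insert into DATA_TAB select create_dt, $f from ext_$i;"
      echo "commit;"
    } >> "$out"
    f=$((f + WORKERS))
  done
  i=$((i + 1))
done
```

Each worker would then be launched in the background, e.g. `sqlplus -s user/pass @load_1.sql &` ... `@load_200.sql &`, followed by `wait`, giving 200 concurrent sessions that never contend for the same external table.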
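[Editor's sketch] Martin's 2nd suggestion (one external table per file, plus a view over all of them) can also be scripted: the view adds FILE_NO as a constant in each UNION ALL branch, so every row carries its source file's number. The table name ext_N, view name data_all, and column list are illustrative placeholders mirroring the DATA_EXT example; the sketch uses 5 files where the real run would use 8000.

```shell
#!/bin/sh
# Generate per-file external table DDL plus a UNION ALL view that tags
# each branch with its file number as FILE_NO.
FILES=5
out=all_ext.sql
: > "$out"

f=1
while [ "$f" -le "$FILES" ]; do
  fname=$(printf '%04d.csv' "$f")
  cat >> "$out" <<EOF
create table ext_$f (create_dt varchar2(15))
organization external
( type oracle_loader
  default directory EXT_DIR
  access parameters (
    records delimited by newline skip 1
    fields terminated by ","
    lrtrim missing field values are null )
  location ('$fname') )
reject limit 1000;
EOF
  f=$((f + 1))
done

# The view: one SELECT per ext_N, tagging rows with N as file_no.
{
  echo "create or replace view data_all as"
  f=1
  while [ "$f" -le "$FILES" ]; do
    [ "$f" -gt 1 ] && echo "union all"
    echo "select create_dt, $f as file_no from ext_$f"
    f=$((f + 1))
  done
  echo ";"
} >> "$out"
```

Queries then read from data_all exactly as they would from the single DATA_EXT table, with FILE_NO supplied by the view rather than by the loader.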