10G UTL_FILE ORA-29283 DBMS_JOB

  • From: "Ethan Post" <post.ethan@xxxxxxxxx>
  • To: oracle-l <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 22 Dec 2008 13:06:24 -0600

Seeing something a bit odd. I have a series of jobs which use DBMS_JOB, just
migrated to 10G. The jobs kick off and attempt to run every 10 minutes until
they are able to complete; there are some dependencies they wait on. The
trouble is that they are logging ORA-29283. However, if I log in as the user
who owns the job and call DBMS_JOB.RUN on the same job, it finishes fine and
reports no error.

29283, 00000, "invalid file operation"
// *Cause:  An attempt was made to read from a file or directory that does
//          not exist, or file or directory access was denied by the
//          operating system.
// *Action: Verify file and directory access privileges on the file system,
//          and if reading, verify that the file exists.


This is the line that either does or does not generate the error.

t_filehandle := utl_file.fopen(p_filelocation,
                               p_filename || v_fileextension,
                               v_filemode);

The variables are all specified in either the package header or the
procedure. This particular package happens to exist in another schema, but
the job user has execute on it. All users also have execute on UTL_FILE.
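
For what it is worth, one way I could isolate it is a throwaway one-shot job
whose only work is the fopen, logging the error stack to a table. The path,
file name, and log table (job_err_log) below are made-up placeholders, not
the real ones:

    DECLARE
       l_job BINARY_INTEGER;
    BEGIN
       DBMS_JOB.SUBMIT(
          job  => l_job,
          what => q'[
             DECLARE
                fh UTL_FILE.FILE_TYPE;
             BEGIN
                -- Same call shape as the package: open read-only by OS path.
                fh := UTL_FILE.FOPEN('/u01/app/feeds', 'test.dat', 'R');
                UTL_FILE.FCLOSE(fh);
             EXCEPTION
                WHEN OTHERS THEN
                   -- Capture what the job queue process actually sees.
                   INSERT INTO job_err_log (logged, msg)
                   VALUES (SYSDATE, DBMS_UTILITY.FORMAT_ERROR_STACK);
                   COMMIT;
             END;]',
          next_date => SYSDATE,
          interval  => NULL);  -- NULL interval: run once, then drop.
       COMMIT;
    END;
    /

If that little job logs ORA-29283 too, it at least rules out everything else
in the package.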

Just to summarize: this is the exact same job id, and it fails when run by
the job queue process but works when run via DBMS_JOB.RUN from the command
line. I am stumped. The file is being opened with file mode 'R'. Here are a
few of my current guesses.

1. Need to switch to using a DIRECTORY object; this might be a bug in 10G.
(A sketch of that change follows this list.)

2. Too many job queue processes: they all fire at the exact same time, and
multiple procedures in the same package are all trying to access the same
file, so perhaps it is some sort of very hard to recreate locking issue.
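
If guess 1 pans out, the switch would look roughly like this; FEED_DIR, the
path, and job_user are illustrative names, not the real ones:

    -- As a DBA (names and path are made up):
    CREATE OR REPLACE DIRECTORY feed_dir AS '/u01/app/feeds';
    GRANT READ ON DIRECTORY feed_dir TO job_user;

    -- In the package, fopen then takes the directory object name
    -- (uppercase) in place of the OS path:
    t_filehandle := utl_file.fopen('FEED_DIR',
                                   p_filename || v_fileextension,
                                   v_filemode);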

If anyone has seen anything like this, please let me know.

Thanks
