Re: 10G UTL_FILE ORA-29283 DBMS_JOB

  • From: "Niall Litchfield" <niall.litchfield@xxxxxxxxx>
  • To: post.ethan@xxxxxxxxx
  • Date: Tue, 23 Dec 2008 10:05:01 +0000

Hi Ethan

How did you perform the migration? Was it via schema-mode imp by any chance?
If so, and the user importing the dump file is not the owner of the job
(SYSTEM, say), then the credentials for the job get subtly changed. I'd take a
look at the xxx_USER columns in dba_jobs and make sure they are correct, if
your situation matches the description above.
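
A quick way to check — just a sketch, run from a DBA account (the relevant
columns are LOG_USER, PRIV_USER and SCHEMA_USER):

    select job, log_user, priv_user, schema_user, what
    from   dba_jobs
    order  by job;

If PRIV_USER or SCHEMA_USER has ended up as the importing user rather than
the intended owner, the job runs with different privileges under the job
queue than it does when you execute it interactively, which could explain
the ORA-29283.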

Niall

On Mon, Dec 22, 2008 at 7:06 PM, Ethan Post <post.ethan@xxxxxxxxx> wrote:

> Seeing something a bit odd. I have a series of jobs which use DBMS_JOB,
> just migrated to 10G. The jobs kick off and attempt to run every 10 minutes
> until they are able to complete. There are some dependencies which they
> wait on. Trouble is that they are logging ORA-29283. However, if I log in as
> the user who owns the job and run DBMS_JOB.RUN to run the job they finish
> fine and report no error.
>
> 29283, 00000, "invalid file operation"
> // *Cause:  An attempt was made to read from a file or directory that does
> //          not exist, or file or directory access was denied by the
> //          operating system.
> // *Action: Verify file and directory access privileges on the file system,
> //          and if reading, verify that the file exists.
>
>
> This is the line that either does or does not generate the error.
>
> t_filehandle := utl_file.fopen(p_filelocation, p_filename ||
> v_fileextension, v_filemode);
>
> The variables are all specified in either the package header or the
> procedure. This particular package happens to exist in another schema but
> the job user has execute on it. All users also have execute on UTL_FILE.
>
> Just to summarize: this is the exact same job id, but it fails when run by
> the job queue process and works when I use DBMS_JOB.RUN to run it from the
> command line. I am stumped. The file is being opened with file mode 'R'.
> Here are a few of my current guesses.
>
> 1. Need to switch to using DIRECTORY object, this might be a bug in 10G.
>
> 2. Too many job queue processes, they are all firing at the exact same time
> and multiple procedures in the same package are all trying to access the
> same file, perhaps some sort of very hard to recreate locking issue.
>
> If anyone has seen anything like this please let me know.
>
> Thanks
>
>
>
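
PS. If the credentials in dba_jobs do check out, your guess 1 is worth
trying regardless: in 10g a DIRECTORY object is the preferred route for
UTL_FILE anyway. A minimal sketch — DATA_DIR, /data/files and job_owner are
placeholders for your actual names:

    create or replace directory data_dir as '/data/files';
    grant read on directory data_dir to job_owner;

    -- then in the package, pass the directory name instead of the path:
    t_filehandle := utl_file.fopen('DATA_DIR',
                                   p_filename || v_fileextension, 'R');

Note the directory name is passed as an uppercase string, and the grant must
go to whichever user the job actually runs as.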


-- 
Niall Litchfield
Oracle DBA
http://www.orawin.info
