Yep. Checked already. Thanks.

Sent from my iPhone

> On 1 Sep 2014, at 18:19, "Kim Berg Hansen" <kibeha@xxxxxxxxx> wrote:
>
> You probably already checked, but does it happen on both nodes of the
> cluster? Or only one?
> I'm thinking that since the max is platform / O/S dependent, is the max
> perhaps calculated from some O/S parameter that might be set differently on
> the two nodes?
> (I haven't anything to base this on - just a random idea ;-)
>
> Regards
>
> Kim Berg Hansen
> http://dspsd.blogspot.com
> kibeha@xxxxxxxxx
> @kibeha
>
>> On Mon, Sep 1, 2014 at 7:04 PM, Dominic Brooks <dombrooks@xxxxxxxxxxx> wrote:
>> That's a problem copying to phone.
>> 1835298655 vs
>> 2147483647
>>
>> Job interface is DBMS_JOB calling stored proc.
>>
>> It can't be a shrink operation because that (2147483647) is the max on this
>> platform.
>>
>> The error itself is unexpected - if you exceed the max for your platform
>> then you'd expect an ORA-02017: integer value required.
>>
>> The order of min & max is also strange.
>>
>> I'm not looking for a comprehensive review; changing the code is possible,
>> and I don't like the code anyway.
>>
>> But my question really just stems from observations that the same code on
>> the same database running in the same job is sometimes able to set the value
>> and sometimes not.
>>
>> This naturally triggers the question "why?" and whether it's a good/safe
>> idea for code to be setting this, particularly when it is an oft-recommended
>> approach for working with large sets of data.
>>
>> It also raises the question of what else was running on the db at this time,
>> how much memory was being used, and comes back to whether it is a sensible
>> thing to do.
>>
>> Sent from my iPhone
>>
>>> On 1 Sep 2014, at 15:45, "Mark W.
Farnham" <mwf@xxxxxxxx> wrote:
>>>
>>> I’m not sure, but it is suspicious because (obvious if you add the
>>> thousands delimiters)
>>>
>>> 1,835,298,655 is greater than 214,783,647
>>>
>>> So it is AT LEAST an erroneous error message bug.
>>>
>>> I’m not sure where the 1.8 billion limit comes from in your stack. Do you
>>> know? What error is tossed when you exceed that by 1 manually from sql*plus?
>>> (Do you even get an error?)
>>>
>>> What is the job interface (since you have implied it is *not* sql*plus)?
>>>
>>> The root cause *might* be some type conversion error.
>>>
>>> The root cause *might* be something about allocations already made during
>>> the running process or transaction, so a change at that point in your
>>> process cannot be made. Perhaps it cannot shrink if a given job has already
>>> used more.
>>>
>>> As I wrote, “I’m not sure.”
>>>
>>> An early work-around attempt would be to make this the very first thing
>>> the job does. Still, a full explanation is in order unless what is
>>> different between executions that generate the error versus executions that
>>> succeed can be determined. “Erratic” just doesn’t cut it.
>>>
>>> I hope this helps,
>>>
>>> mwf
>>>
>>> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
>>> On Behalf Of Dominic Brooks
>>> Sent: Monday, September 01, 2014 8:58 AM
>>> To: oracle-l@xxxxxxxxxxxxx
>>> Subject: Sort_area_size ORA-00068
>>>
>>> Why might a process erratically report an ORA-00068 for a consistent "alter
>>> session set sort_area_size"?
>>>
>>> There is some code setting workarea_size_policy to manual and attempting to
>>> set sort_area_size to 2147483647.
>>>
>>> The value is hardcoded so it doesn't change.
>>> But sometimes the code works, sometimes it doesn't.
>>> Code was called from a job on a two-node cluster.
>>>
>>> Error when raised is consistent:
>>> ORA-00068: invalid value 214783647 for parameter sort_area_size, must be
>>> between 1835298655 and 0.
>>>
>>> Running the commands manually in a sqlplus session works consistently.
>>>
>>> Thoughts?
>>>
>>> Cheers,
>>> Dominic
>
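For reference, the sequence described in the thread, together with the boundary probe Mark suggests, can be sketched as SQL*Plus commands. The values and parameter names are taken from the emails above; the exact error raised past the limit is platform- and version-dependent, which is part of what the thread is asking about.

```sql
-- What the job's code is described as doing (per the thread):
ALTER SESSION SET workarea_size_policy = manual;
ALTER SESSION SET sort_area_size = 2147483647;  -- 2^31 - 1, "the max on this platform"

-- Mark's suggested probe: exceed that max by 1 from an interactive
-- SQL*Plus session and note which error, if any, is raised
-- (ORA-02017: integer value required would be the conventional one
-- for an out-of-range integer parameter).
ALTER SESSION SET sort_area_size = 2147483648;
```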
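A quick arithmetic check on the figures quoted in the thread (nothing Oracle-specific; this only verifies the numbers, not the database behaviour):

```python
# Figures as they appear in the emails above.
platform_max = 2147483647    # value the job tries to set
reported_min = 1835298655    # "must be between 1835298655 and 0"
value_in_error = 214783647   # value echoed back by ORA-00068

# The value echoed in the error message is 2147483647 with one digit
# dropped -- consistent with Dominic's "problem copying to phone".
s = str(platform_max)
assert s[:4] + s[5:] == str(value_in_error)

# 2147483647 is 2^31 - 1, the signed 32-bit maximum, matching
# "the max on this platform".
assert platform_max == 2**31 - 1

# And the message's bounds really are inverted: the stated lower
# bound (1835298655) is greater than the stated upper bound (0),
# though still below the 32-bit maximum.
assert reported_min > 0
assert reported_min < platform_max
```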