Re: 2,2 GB Coredump

  • From: Ozgur Ozdemircili <ozgur.ozdemircili@xxxxxxxxx>
  • To: luka.grah@xxxxxxxxx
  • Date: Wed, 12 May 2010 08:54:43 +0200

Thanks all. The ulimit parameter is what I was looking for.
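
For the archives, a rough sketch of how the suggestion quoted below can be
applied (assuming a /bin/sh-family shell, the oracle OS account, and the
default listener name; adjust for your environment):

$ ulimit -a        # check the current "core file size" limit
$ ulimit -c 0      # disable core files for this shell and everything it starts
$ lsnrctl stop     # restart the listener from this same shell so that the
$ lsnrctl start    # server processes it forks inherit the limit

This only affects processes started from that shell, so the listener has to be
restarted from the same session in which ulimit was run.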


Özgür Özdemircili
http://www.acikkod.org
Code so clean you could eat off it


On Wed, May 12, 2010 at 12:02 AM, Luka Grah <luka.grah@xxxxxxxxx> wrote:

> You can use an old Unix trick: create a directory named core, with 000
> permissions, where the core file would be written (a sketch follows the
> quoted thread below).
>
> Regards,
>  Luka
>
>
> On Tue, May 11, 2010 at 7:10 PM, Stefan Moeding <dba@xxxxxxxxxxx> wrote:
>
>> Hi!
>>
>> Ozgur Ozdemircili writes:
>> > Weird, yet I have max_dump_file_size = 1024.
>> >
>> > Until now I have been getting by with a script that checks the file size
>> > and truncates it with echo "" > file every minute.
>> >
>> > Any more thoughts?
>>
>> Core files are written by the operating system, and the OS obviously does
>> not honor that parameter. Depending on the shell, the ulimit (/bin/sh and
>> family) or limit (/bin/csh and family) command can tell you the allowed
>> core size and also limit it.
>>
>> Log in as oracle and (assuming a /bin/sh) run
>>
>> $ ulimit -a
>>
>> This should tell you, among other things, the maximum core file size that
>> the oracle user can write. With
>>
>> $ ulimit -c 0
>>
>> the generation of core files should be disabled for all processes
>> started in this shell. Restarting the listener in this shell should do
>> the trick, as all server processes are forked from the listener.
>>
>> --
>> Stefan
>> --
>> //www.freelists.org/webpage/oracle-l
>>
>>
>>
>
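
Regarding the directory trick Luka mentions above, a minimal sketch, assuming
the dump is written as a plain file named core into a known directory (the
path below is only a placeholder for wherever the core files actually appear,
e.g. the core_dump_dest directory):

$ cd /path/to/core_dump_dest   # placeholder; cd to where the core files show up
$ mkdir core                   # a directory named "core" keeps the kernel from
$ chmod 000 core               # creating a core file of the same name there

This only blocks the file from being created; the process still crashes, so
the underlying error is still worth chasing separately.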
