> Ok guys, I have a question. I have a program that gives out log files
> as it's computing. But when the program encounters problems, it just
> goes insane and the logs become WAY too big to manage. Is there any
> way I can create a "file quota" of some kind and tell it that if a
> certain file grows too big, just to kill the job?

You could run a script from a cron job that tests the size of a
particular file and, if necessary, kills a certain process.

Would you perhaps want to keep the process running and just get rid of
the log info? If you really don't care about the info, you could zero
the log files or chop them in half.

Instead of deleting log files that are in use (e.g. "rm
/var/log/messages"), you might want to zero them out:

  > /var/log/messages

(Yes, type the greater-than.) That way your process doesn't get upset
that the file is suddenly MIA, you don't have to re-set permissions,
etc.

Chopping the log file in half (pass the log file as the first
argument):

  #!/bin/sh
  FILE=$1
  SIZE=`cat $FILE | wc -l`
  HALF=`echo $SIZE/2 | bc`
  echo "Total size = $SIZE  Half = $HALF"
  # keep only the second half of the file
  tail -n +${HALF} $FILE > $FILE.cut
  ls -l $FILE $FILE.cut

----
Husker Linux Users Group mailing list
To unsubscribe, send a message to huskerlug-request@xxxxxxxxxxxxx
with a subject of UNSUBSCRIBE
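To make the cron-job idea above concrete, here is a minimal watchdog
sketch. The file name, size limit, and process name (LOGFILE, MAXBYTES,
PROGNAME) are all assumptions -- substitute your own. Run it from cron,
e.g. every five minutes with a crontab line like
"*/5 * * * * /usr/local/bin/logwatch.sh".

```shell
#!/bin/sh
# Hypothetical watchdog: kill a runaway job if its log gets too big.
# LOGFILE, MAXBYTES, and PROGNAME are placeholders -- adjust to taste.
LOGFILE=/var/log/myprog.log
MAXBYTES=10485760    # 10 MB limit
PROGNAME=myprog

# wc -c < file prints just the byte count, no file name
SIZE=`wc -c < $LOGFILE`

if [ $SIZE -gt $MAXBYTES ]; then
    # the log blew past the limit: kill the job and zero the log
    killall $PROGNAME
    > $LOGFILE
fi
```

If you would rather keep the job running, drop the killall line and
just zero (or chop) the log as described above.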