RE: Database Connection Control

  • From: "Laimutis Nedzinskas" <lnd@xxxxxxx>
  • To: <Oracle-L@xxxxxxxxxxxxx>
  • Date: Tue, 15 Nov 2005 11:03:56 -0000

Isn't it better to limit trace file size with MAX_DUMP_FILE_SIZE?
Apart from a wrong JDBC driver there are many other possibilities for an ugly
application to flood Oracle. It's enough to generate a deadlock, which writes a
trace file to udump, and the file system will reach its limits sooner or later.
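The parameter is dynamic, so it can be capped without a bounce. A minimal
sketch, assuming the cx_Oracle driver and an account with the ALTER SYSTEM
privilege (the user, password, TNS alias and the 10M cap below are placeholders,
not from this thread):

    # Cap trace files at 10M instance-wide; the value is given in OS blocks
    # or with a K/M suffix. Connect details are placeholders.
    import cx_Oracle

    conn = cx_Oracle.connect("system", "manager", "PROD")
    cur = conn.cursor()
    cur.execute("ALTER SYSTEM SET max_dump_file_size = '10M'")
    conn.close()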
 
Alternatively: delete older trace files with an automated process when the 
limits are close, along the lines of the sketch below. It's ugly, but still 
better than the database going down.
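A minimal sketch of such a cleanup job, assuming a Unix host (the udump path,
the 90% usage threshold and the 7-day age cutoff are made-up values, not from
this thread):

    import os, time

    UDUMP = "/u01/app/oracle/admin/PROD/udump"  # hypothetical udump location
    MAX_AGE_DAYS = 7
    USAGE_LIMIT = 0.90  # start deleting once the file system is over 90% full

    def fs_usage(path):
        # fraction of the file system already in use
        st = os.statvfs(path)
        return 1.0 - st.f_bavail / float(st.f_blocks)

    if fs_usage(UDUMP) > USAGE_LIMIT:
        cutoff = time.time() - MAX_AGE_DAYS * 86400
        for name in os.listdir(UDUMP):
            full = os.path.join(UDUMP, name)
            # only remove old *.trc files, leave everything else alone
            if name.endswith(".trc") and os.path.getmtime(full) < cutoff:
                os.remove(full)

Run it from cron every few minutes so it kicks in before the file system
actually fills up.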
 

        -----Original Message-----
        From: oracle-l-bounce@xxxxxxxxxxxxx 
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Michael Chan
        Sent: 15 November 2005 08:30
        To: oracle-l@xxxxxxxxxxxxx
        Subject: Database Connection Control
        
        

        Hi All,

        I recently encountered the following bug with my Oracle 9.2.0.6 
production database. 

        Bug 1725012 -- ORA-600 [TTCGCSHND-1] CONNECTING FROM 817 CLIENT TO 9I 
USING THIN DRIVER (JDBC)

        One inactive client, still running the 8i JDBC thin driver, woke up at 
some point and tried to connect to the production database.  Unfortunately, the 
connection triggered the bug and Oracle kept writing trace files continuously 
(2 files per second, 10M each), and the database finally crashed when the file 
system was full. 

        I'm wondering if I can control database access from the server side, so 
that clients running unsupported software versions are prohibited from 
connecting to the database.

        Any inputs or thoughts will be appreciated.

        Thanks,

        Michael 
