RE: Measure database availability beyond 99.9%

  • From: "Rich Jesse" <rjoralist@xxxxxxxxxxxxxxxxxxxxx>
  • To: "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 29 Aug 2008 15:51:17 -0500 (CDT)

I've done this in the past, too.  My procedure was something like:

CREATE OR REPLACE PROCEDURE some_dba_repository.uptime_monitor AS
        v_value NUMBER;
BEGIN
        -- Cumulative transactions since instance startup; these
        -- counters reset to zero when the instance bounces.
        SELECT SUM(VALUE) INTO v_value
                FROM sys.v_$sysstat
                WHERE NAME IN ('user commits','user rollbacks');

        INSERT INTO transaction_log
                (tx_count, time_stamp)
                VALUES (v_value, SYSDATE);
        COMMIT;
END uptime_monitor;
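
The log table is nothing fancy -- something like this, with the procedure
scheduled to run every few minutes via DBMS_JOB or DBMS_SCHEDULER:

CREATE TABLE some_dba_repository.transaction_log
(
        tx_count        NUMBER,
        time_stamp      DATE
);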

It records transaction counts, which may serve other uses, too.  The idea is
that downtime shows up when the count goes down instead of up between
consecutive records, since the v$sysstat counters reset to zero when the
instance restarts.  The length of the outage is (to within the sampling
interval) the difference in the two rows' respective time_stamp values.  The
LEAD and/or LAG analytic functions are very helpful here, as in the sketch
below.
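
Something along these lines, off the top of my head (untested, so season to
taste):

SELECT prev_ts                                  AS outage_start,
       time_stamp                               AS outage_end,
       ROUND((time_stamp - prev_ts) * 24 * 60)  AS outage_minutes
  FROM (SELECT time_stamp,
               tx_count,
               LAG(time_stamp) OVER (ORDER BY time_stamp) AS prev_ts,
               LAG(tx_count)   OVER (ORDER BY time_stamp) AS prev_count
          FROM transaction_log)
 WHERE tx_count < prev_count;

(The outage_minutes figure is really an upper bound -- the instance could
have come back up any time before the sample that caught it.)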

In some ways, it's a better-than-nothing (BTN) solution.  The reason I
preferred it is that it was consistent, persistent, and worked on every
version of Oracle we had (the exception being the analytic reporting).  And
I could tweak the reporting without affecting the simple data collection.

Just a thought.  Enjoy!

Rich

> We have had the same experience with EM Grid Control.
>
> Unfortunately we have not yet had the time to investigate other tools, but
> how about this:
> Build a heartbeat table and schedule a database job (e.g. dbms_scheduler) to
> insert into that table every x seconds (the value of x depends on the
> precision needed).
>
> Then you can build your own reports/alerts based on the data in that table.
> We use this approach for checking Streams availability, as recommended in
> Note:418755.1 - 10.2.0.x.x Streams Recommendations
>
> Regards,
>
> Vlado Barun, M.Sc.
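
For what it's worth, a minimal version of that heartbeat job might look like
the following -- the table name and the 30-second interval are just my
guesses:

CREATE TABLE heartbeat_log (beat_time DATE);

BEGIN
        DBMS_SCHEDULER.CREATE_JOB(
                job_name        => 'HEARTBEAT_JOB',
                job_type        => 'PLSQL_BLOCK',
                job_action      => 'BEGIN INSERT INTO heartbeat_log VALUES (SYSDATE); COMMIT; END;',
                start_date      => SYSTIMESTAMP,
                repeat_interval => 'FREQ=SECONDLY;INTERVAL=30',
                enabled         => TRUE);
END;
/

Gaps in beat_time bigger than the interval then mark the outages, and the
same LAG trick above can report on them.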


--
//www.freelists.org/webpage/oracle-l

