Re: Options for auditing data changes

  • From: Sreejith S Nair <Sreejith.Sreekantan@xxxxxxxxxx>
  • To: Sriram Kumar <k.sriramkumar@xxxxxxxxx>
  • Date: Mon, 28 Mar 2011 12:18:12 +0530

A few updates.

Flashback Data Archive provides a sound option for auditing before and after 
images of data and is pretty easy to configure. It helps to find the versions 
of a row at the different times it was changed. But I could not see a field 
like CLIENT_ID / SESSION_INFO (which is available in V$LOGMNR_CONTENTS) 
that identifies the client connection that changed the data.
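For reference, the versions query I am using looks roughly like this (the EMP 
table and its columns are just placeholders); the VERSIONS_* pseudo-columns 
give the transaction and operation, but nothing about the client session:

```sql
-- Assumes Flashback Data Archive is already enabled on the table, e.g.:
--   ALTER TABLE emp FLASHBACK ARCHIVE fda1;
SELECT versions_starttime,
       versions_endtime,
       versions_xid,        -- transaction id
       versions_operation,  -- I / U / D
       empno, sal
FROM   emp
       VERSIONS BETWEEN TIMESTAMP
         SYSTIMESTAMP - INTERVAL '1' DAY AND SYSTIMESTAMP
ORDER  BY versions_starttime;
```

Note there is no column like CLIENT_ID or SESSION_INFO in this result set.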

The AS OF TIMESTAMP queries will give the versions of the data, but they 
don't seem to tell who changed it. To satisfy this, it looks like I need a 
field in my table which holds the user who changed the row.
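A minimal sketch of that idea, assuming a hypothetical EMP table with an 
added LAST_CHANGED_BY column populated by a trigger:

```sql
ALTER TABLE emp ADD (last_changed_by VARCHAR2(128));

CREATE OR REPLACE TRIGGER emp_who_trg
BEFORE INSERT OR UPDATE ON emp
FOR EACH ROW
BEGIN
  -- prefer CLIENT_IDENTIFIER if the application sets it,
  -- otherwise fall back to the database session user
  :NEW.last_changed_by := NVL(
      SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'),
      SYS_CONTEXT('USERENV', 'SESSION_USER'));
END;
/
```

With connection pooling, only CLIENT_IDENTIFIER (set by the application) 
distinguishes end users; SESSION_USER will just be the pooled account.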

Another option that came up was Audit Vault, but looking at the price, I may 
have to think twice!



Please put in your comments on this.

Kind Regards,
Sreejith Nair




From:   Sriram Kumar <k.sriramkumar@xxxxxxxxx>
To:     Sreejith.Sreekantan@xxxxxxxxxx
Cc:     oracle-l@xxxxxxxxxxxxx
Date:   03/25/2011 06:29 PM
Subject:        Re: Options for auditing data changes



Hi,
 
Have you looked at Flashback Data Archive? It could be a good fit:
 
http://www.oracle.com/technetwork/issue-archive/2008/08-jul/flashback-data-archive-whitepaper-129145.pdf
 
 
 
best regards


 
On Tue, Mar 22, 2011 at 4:30 PM, Sreejith S Nair <
Sreejith.Sreekantan@xxxxxxxxxx> wrote:
Hi Friends, 

I thought I would get some inputs on the following implementation. The 
requirement is to audit some data changes within the system (Oracle 10.2 
on RHEL 4.7). 
The audit is required in the sense that the before images of the data and 
information about who changed it are needed. I have looked at options like 
standard Oracle auditing, FGA and so on, but these cannot give me the before 
images of the data changes along with when and who changed them. 

The first thing that comes to mind is triggers. Another option is LogMiner. 
I have successfully tested both of these approaches. The environment is as 
follows: 

1) For some critical tables for which audit is required, triggers were 
written (ours is an OLTP application). 
2) For some non-critical tables, LogMiner is used, called from a stored 
procedure that runs periodically. 
3) Audit data is stored in a different schema, with the same table names as 
in the base schema. 
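For what it's worth, the trigger approach in point 1 can be sketched like 
this (EMP, its columns and the AUDIT_OWNER schema are placeholders; the 
audit table mirrors the base table plus a few who/when columns):

```sql
CREATE OR REPLACE TRIGGER emp_aud_trg
AFTER UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  -- store the before image plus who did it and when
  INSERT INTO audit_owner.emp
    (empno, ename, sal,
     aud_operation, aud_user, aud_time)
  VALUES
    (:OLD.empno, :OLD.ename, :OLD.sal,
     CASE WHEN UPDATING THEN 'U' ELSE 'D' END,
     SYS_CONTEXT('USERENV', 'SESSION_USER'),
     SYSTIMESTAMP);
END;
/
```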

I would like to know your thoughts on this. 
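And the LogMiner step in point 2 is roughly the following (the archived log 
file name and schema name are placeholders; the stored procedure wraps 
these calls):

```sql
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/arch/arch_1_123.arc',  -- placeholder path
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT username, session_info, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner  = 'APPOWNER'   -- placeholder schema
AND    table_name = 'EMP';

EXEC DBMS_LOGMNR.END_LOGMNR;
```

Here SESSION_INFO carries the client connection details that Flashback Data 
Archive does not record.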

Thank You, 

Kind Regards, 
Sreejith Nair 






DISCLAIMER: 

"The information in this e-mail and any attachment is intended only for 
the person to whom it is addressed and may contain confidential and/or 
privileged material. If you have received this e-mail in error, kindly 
contact the sender and destroy all copies of the original communication. 
IBS makes no warranty, express or implied, nor guarantees the accuracy, 
adequacy or completeness of the information contained in this email or any 
attachment and is not liable for any errors, defects, omissions, viruses 
or for resultant loss or damage, if any, direct or indirect."





