I cannot answer your question directly, but I can suggest that the problem space for something like the data masking pack is rather more complex than you might expect. In particular, a general solution should:

1) deal with referential integrity;
2) deal with the *lack* of database referential integrity;
3) deal (I think) with subsetting - that is, picking a consistent subset of records given condition 1 or condition 2;
4) deal with arbitrary degrees of encryption/masking.

In other words, if I'm *buying* something that does data masking, I want it to:

a) work with my application, whoever coded it;
b) be configurable as to what I mask and how;
c) be somewhat version independent.

If I'm commissioning someone to code something for a specific project, I can probably give rather more detailed and defined constraints on what they produce. I'm not suggesting that the pack is good, just pointing out the problems that whoever wrote it might have had - including all the stuff I haven't thought of but customers have.

On Mon, Oct 26, 2009 at 6:12 PM, Kenneth Naim <kennaim@xxxxxxxxx> wrote:
> My colleague implemented data masking of credit cards, SSNs and bank
> accounts and ran into some errors, so I was asked to review the code
> generated by the pack. It seems a lot more complicated than it needs to be.
> I see constraints/triggers/tables being dropped, renamed and recreated,
> procedures being created and later dropped for every table being masked,
> hundreds of lines of nulls being concatenated together, etc. Is this the way
> this pack should work? Does this logic make any sense to anyone? How is
> this better than a simple procedure that disables constraints and triggers,
> generates a random value for the column, and then performs a mass update?
>
> Thanks,
> Ken

--
Niall Litchfield
Oracle DBA
http://www.orawin.info
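To illustrate point 1 above: a "simple procedure" that assigns a fresh random value per row breaks joins, because the same real value masks to different fake values in different tables. One common way around this (a minimal sketch in Python, not how the masking pack itself works - the key name and SSN format here are illustrative assumptions) is to mask deterministically, e.g. with a keyed hash, so every occurrence of a value maps to the same masked value:

```python
import hmac
import hashlib

SECRET = b"demo-masking-key"  # hypothetical key for this sketch


def mask_ssn(ssn: str) -> str:
    """Deterministically mask an SSN: the same input always yields the
    same masked output, so cross-table joins still line up."""
    digest = hmac.new(SECRET, ssn.encode(), hashlib.sha256).hexdigest()
    # Map the digest onto a 9-digit number and reformat as NNN-NN-NNNN.
    n = int(digest, 16) % 10**9
    s = f"{n:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"


# Two "tables" sharing a key: because masking is deterministic, the
# relationship survives even without a declared foreign key constraint.
customers = {"123-45-6789": "Alice"}
orders = [("123-45-6789", "order-1")]

masked_customers = {mask_ssn(k): v for k, v in customers.items()}
masked_orders = [(mask_ssn(ssn), order) for ssn, order in orders]

assert masked_orders[0][0] in masked_customers  # integrity preserved
```

The trade-off is exactly the complexity Ken is seeing: preserving relationships, formats, and constraints across arbitrary schemas is what pushes a generic tool far beyond "disable triggers and mass-update with random values".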