Re: File Processing Question

  • From: "Matthew Zito" <mzito@xxxxxxxxxxx>
  • To: <niall.litchfield@xxxxxxxxx>
  • Date: Wed, 29 Sep 2010 12:31:26 -0400

You could add a VIP to the RAC cluster, attach it to one of the nodes, and have the FTP client connect to that VIP. Then, when a node fails, the VIP will roll over to the other node, and the next time the FTP client connects, it'll connect to the surviving node.
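For reference, registering an application VIP with Clusterware looks roughly like this. This is a sketch assuming the 11.2 appvipcfg utility; the network number, address, and resource name here are made up for illustration:

```shell
# Register a hypothetical application VIP with Oracle Clusterware
# (11.2 syntax; run as root). Address and name are illustrative only.
appvipcfg create -network=1 -ip=192.168.10.50 -vipname=ftpvip -user=root

# Start the VIP on the current node; on node failure Clusterware
# relocates the resource, so FTP clients reconnect to the survivor.
crsctl start resource ftpvip
```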


Or are you concerned about the log files on the down node that haven't yet been processed?

Matt



On Sep 29, 2010, at 12:24 PM, "Niall Litchfield" <niall.litchfield@xxxxxxxxx> wrote:

I'm after the wisdom of crowds here.

Consider a system that processes files uploaded by ftp to the DB server. Currently the upload directory is polled periodically for new files (since they don't all arrive on a predictable schedule with predictable names). Any new files are processed and then moved to an archive location so that they aren't reprocessed. The polling and processing are done by Java stored procedures. This is a RAC system with no shared filesystem storage. The jobs that poll run on a particular instance via the 10g job class trick. The question I have is: how would you implement resilience to node failure for this system? It seems to me that we could:

- add shared storage - at a cost, probably
- ftp the files directly to the db - implies code changes, probably
Does anyone else do anything similar and if so how?
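For anyone sketching the same pattern, the poll-process-archive loop described above boils down to something like this. A minimal sketch in Python rather than the original Java stored procedures; the directory names and process() stub are illustrative, not from the real system:

```python
import shutil
from pathlib import Path

# Illustrative locations -- the real system uses directories on the DB server.
UPLOAD_DIR = Path("upload")
ARCHIVE_DIR = Path("archive")

def process(path: Path) -> None:
    # Placeholder for the real per-file processing (a Java stored
    # procedure in the original system).
    print(f"processing {path.name}")

def poll_once() -> list[str]:
    """Process every file currently in UPLOAD_DIR, then move it to
    ARCHIVE_DIR so a later poll never picks it up again."""
    handled = []
    ARCHIVE_DIR.mkdir(exist_ok=True)
    for path in sorted(UPLOAD_DIR.iterdir()):
        if not path.is_file():
            continue
        process(path)
        # Moving the file is what makes the poll idempotent: a file is
        # either still pending in UPLOAD_DIR or already archived.
        shutil.move(str(path), str(ARCHIVE_DIR / path.name))
        handled.append(path.name)
    return handled
```

The move-to-archive step is the crux for node failure: whichever node's poller runs next sees exactly the set of files that were never fully handled.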

--
Niall Litchfield
Oracle DBA
http://www.orawin.info

Other related posts: