Re: File Processing Question

  • From: Guillermo Alan Bort <cicciuxdba@xxxxxxxxx>
  • To: bertrand.guillaumin@xxxxxxxxxx
  • Date: Wed, 29 Sep 2010 13:49:33 -0300

If you are choosing the node on which to run the procedure based on a
service, then it would be simple to add a VIP to that service that migrates
from node to node. You could still see some data loss, which could be
addressed with rsync or a shared filesystem.

Just as a side note: I have a failover cluster with four nodes, and some
files that need to be available on all of the nodes, though not at the same
time. Instead of syncing them, I just use a NAS, which removes the problem
from the servers and leaves it on the storage. It's a nice option if you
have shared storage (which I must assume you have, since you are using
RAC). If a NAS is not available, some clusterwares provide an NFS toolbox
which can be used for this. But the point is, you need to have the files on
the storage box and not on internal disk.
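Concretely, the NAS route is just every RAC node mounting the same export, so the upload directory lives on the storage box. The export and mount-point names below are assumptions for illustration, not from this thread:

```shell
# On each node, as root. 'hard' makes the client retry indefinitely rather
# than error out mid-write when the NAS hiccups.
mkdir -p /u01/ftp_incoming
mount -t nfs -o rw,hard,intr nas1:/export/ftp_incoming /u01/ftp_incoming

# Or persistently, with a line in /etc/fstab on every node:
#   nas1:/export/ftp_incoming  /u01/ftp_incoming  nfs  rw,hard,intr  0 0
```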

hth
Alan.-


On Wed, Sep 29, 2010 at 1:34 PM, Bertrand Guillaumin <
bertrand.guillaumin@xxxxxxxxxx> wrote:

> Well,
> If you upgrade to (or already use) the 11gR2 clusterware, you can use ACFS.
> It works quite well for a similar problem on our system.
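> A minimal sketch of that route, assuming 11gR2 grid infrastructure and a
> DATA disk group (the volume and mount-point names below are made up):
>
> ```shell
> # As the grid owner: carve an ADVM volume out of an ASM disk group
> asmcmd volcreate -G DATA -s 10G ftpvol
> asmcmd volinfo -G DATA ftpvol      # shows the /dev/asm/ftpvol-* device
>
> # As root: format the volume as ACFS, then mount it on every node
> mkfs -t acfs /dev/asm/ftpvol-123
> mkdir -p /u01/ftp_incoming
> mount -t acfs /dev/asm/ftpvol-123 /u01/ftp_incoming
> ```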
>
> Best regards,
> Bertrand Guillaumin
> ________________________________________
> From: oracle-l-bounce@xxxxxxxxxxxxx [oracle-l-bounce@xxxxxxxxxxxxx] on
> behalf of Matthew Zito [mzito@xxxxxxxxxxx]
> Sent: Wednesday, September 29, 2010 18:31
> To: niall.litchfield@xxxxxxxxx
> Cc: ORACLE-L
> Subject: Re: File Processing Question
>
> You could add a VIP to the RAC cluster, attach it to one of the nodes, and
> have the FTP client connect to that VIP.  Then, when a node fails, the VIP
> will roll over to the other node, and the next time the FTP client connects,
> it'll connect to the surviving node.
>
> Or are you concerned about the log files on the down node that haven't yet
> been processed?
>
> Matt
>
>
>
> On Sep 29, 2010, at 12:24 PM, "Niall Litchfield" <
> niall.litchfield@xxxxxxxxx> wrote:
>
> After the wisdom of crowds here.
>
> Consider a system that processes files uploaded by FTP to the DB server.
> Currently the upload directory is polled periodically for new files (since
> they don't all arrive on a predictable schedule with predictable names). Any
> new files are processed and then moved to an archive location so that they
> aren't reprocessed. The polling and processing are done by Java stored
> procedures. This system is a RAC system with no shared filesystem storage.
> The jobs that poll run on a particular instance via the 10g Job Class trick.
> The question that I have is: how would you implement resilience to node
> failure for this system? It seems to me that we could:
>
>
>  *   add shared storage - probably at a cost.
>  *   FTP the files directly into the DB - probably implies code changes.
>
> Does anyone else do anything similar and if so how?
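> For what it's worth, each polling pass amounts to something like this
> sketch (paths hypothetical; process_file stands in for the real work the
> Java stored procedures do):
>
> ```shell
> # One pass: process each new upload, then move it aside so it is
> # never picked up again on the next poll.
> for f in /u01/ftp_incoming/*; do
>     [ -f "$f" ] || continue
>     process_file "$f" && mv "$f" /u01/ftp_archive/
> done
> ```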
>
> --
> Niall Litchfield
> Oracle DBA
> http://www.orawin.info
> --
> //www.freelists.org/webpage/oracle-l
>
>
>
