We perform almost the same process, but FTP to a server that is a shared location. The files are also pre-processed with a naming convention that is the unique identifier for the partitioning key in the main tables they stage to, so a file that has already been loaded is, by default, flagged and can't be reloaded into the environment...

So you have two choices: sweet-talk the manager in charge of hardware purchases to get you disk, or sweet-talk the developers in charge of rewriting the code. Please ask yourself which one is more likely to be motivated by compliments... :)

Kellyn Pedersen
Sr. Database Administrator
I-Behavior Inc.
http://www.linkedin.com/in/kellynpedersen
www.dbakevlar.blogspot.com

"Go away before I replace you with a very small and efficient shell script..."

--- On Wed, 9/29/10, Niall Litchfield <niall.litchfield@xxxxxxxxx> wrote:

From: Niall Litchfield <niall.litchfield@xxxxxxxxx>
Subject: File Processing Question
To: "ORACLE-L" <oracle-l@xxxxxxxxxxxxx>
Date: Wednesday, September 29, 2010, 10:20 AM

After the wisdom of crowds here. Consider a system that processes files uploaded by FTP to the DB server. Currently the upload directory is polled periodically for new files (since they don't all arrive on a predictable schedule with predictable names). Any new files are processed and then moved to an archive location so that they aren't reprocessed. The polling and processing is done by Java stored procedures. This is a RAC system with no shared filesystem storage, and the jobs that poll run on a particular instance via the 10g job class trick.

The question that I have is: how would you implement resilience to node failure for this system? It seems to me that we could:

- add shared storage - at a cost, probably
- ftp the files directly to the db - which implies code changes, probably

Does anyone else do anything similar, and if so, how?

--
Niall Litchfield
Oracle DBA
http://www.orawin.info
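
For illustration, here is a minimal sketch of the poll-process-archive loop Niall describes, combined with the filename-based dedup check implied by Kellyn's naming-convention scheme, as plain Java of the kind that could back a Java stored procedure. The directory paths, the loaded_files bookkeeping table, and the extractFileId helper are all assumptions for the sketch, not either poster's actual code.

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UploadPoller {

    // Hypothetical locations; in the real system these are server-side
    // directories on the database host.
    private static final File UPLOAD_DIR  = new File("/u01/ftp/incoming");
    private static final File ARCHIVE_DIR = new File("/u01/ftp/archive");

    public static void pollOnce(Connection conn) throws Exception {
        File[] candidates = UPLOAD_DIR.listFiles();
        if (candidates == null) return;              // directory unreadable

        for (File f : candidates) {
            if (!f.isFile()) continue;

            // Kellyn's scheme: the filename carries the unique identifier
            // for the partitioning key of the staging table, so a file
            // whose identifier has already been loaded is skipped and
            // cannot be loaded twice.
            String fileId = extractFileId(f.getName());
            if (alreadyLoaded(conn, fileId)) continue;

            loadFile(conn, f, fileId);               // application-specific

            // Move the file aside so the next poll doesn't reprocess it.
            Files.move(f.toPath(),
                       new File(ARCHIVE_DIR, f.getName()).toPath(),
                       StandardCopyOption.ATOMIC_MOVE);
        }
    }

    private static boolean alreadyLoaded(Connection conn, String fileId)
            throws Exception {
        // Assumed bookkeeping table keyed on the filename identifier.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT 1 FROM loaded_files WHERE file_id = ?")) {
            ps.setString(1, fileId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    private static String extractFileId(String name) {
        // Assumption: the identifier is the portion before the first dot.
        int dot = name.indexOf('.');
        return dot < 0 ? name : name.substring(0, dot);
    }

    private static void loadFile(Connection conn, File f, String fileId) {
        // Placeholder for the real parse-and-stage logic.
    }
}
```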
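
And a sketch of the "10g job class trick" Niall mentions, in its common form: a scheduler job class bound to a service that runs on one instance, so the polling job executes only on the node where the files land. Shown here as JDBC calls from Java to match the rest of the thread; every object name (POLL_SVC, POLL_CLASS, POLL_UPLOADS, upload_poller.poll_once) is made up, since the real definitions aren't given.

```java
import java.sql.CallableStatement;
import java.sql.Connection;

public class JobClassSetup {

    // Binds the polling job to wherever the POLL_SVC service is running.
    public static void createPollingJob(Connection conn) throws Exception {
        String plsql =
            "BEGIN " +
            "  DBMS_SCHEDULER.CREATE_JOB_CLASS( " +
            "    job_class_name => 'POLL_CLASS', " +
            // The job runs only on an instance where this service is up.
            "    service        => 'POLL_SVC'); " +
            "  DBMS_SCHEDULER.CREATE_JOB( " +
            "    job_name        => 'POLL_UPLOADS', " +
            "    job_type        => 'STORED_PROCEDURE', " +
            "    job_action      => 'upload_poller.poll_once', " +
            "    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5', " +
            "    job_class       => 'POLL_CLASS', " +
            "    enabled         => TRUE); " +
            "END;";
        try (CallableStatement cs = conn.prepareCall(plsql)) {
            cs.execute();
        }
    }
}
```

Note that if POLL_SVC lists another instance as available, the job fails over with the service; but without shared storage the unprocessed files in the upload directory do not move with it, which is exactly the gap Niall is asking about.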