If memory serves me correctly (which ain't guaranteed at 3 in the morning!), that heartbeat is once every three seconds, isn't it? If so, you would need a database with a HUGE number of datafiles before it should cause a controller any problems, surely? Of course, I could be misunderstanding your point, in which case you're free to ignore me. :)

Pete

"Controlling developers is like herding cats." Kevin Loney, Oracle DBA Handbook
"Oh no, it's not. It's much harder than that!" Bruce Pihlamae, long-term Oracle DBA

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Thomas Day
Sent: Monday, 12 December 2005 7:02 AM
To: Marquez, Chris
Cc: mark.brinsmead@xxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: Re: Dell-Oracle-Linux: Anyone else run this...because its not working for us!

We have a non-clustered, RAID 0 development machine. We have 5 databases up on it, but only 6 or 7 users access any of the databases. One of the databases was under a heavy performance test, but it was not the database that had the problem.

The problem was not with the storage media. A 10-hour surface scan produced no errors. My opinion is that the controller is not up to the task of keeping all of the datafiles updated with Oracle's point-in-time requirement.

My understanding is that Oracle updates the controlfiles and the datafiles with a "heartbeat" counter so that Oracle will know that the datafiles are good. Under certain circumstances, this seems to be beyond the ability of the controller (to get all these counters written in a timely manner).

That's just my understanding of the problem. I'm certainly open to enlightenment. If there's a way to ensure that this problem can be prevented, I'd love to learn it. Dell didn't offer any encouragement.

--
//www.freelists.org/webpage/oracle-l
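P.S. If anyone wants to eyeball those per-file counters themselves, the standard dynamic performance views expose the checkpoint SCN and time that Oracle stamps into each datafile header, which you can compare against the controlfile's view of the current checkpoint. A rough sketch (exact results depend on your instance; this only shows the recorded checkpoints, it doesn't prove or disprove a controller problem):

```sql
-- Checkpoint SCN and time as recorded in each datafile's header
SELECT file#, checkpoint_change#, checkpoint_time
  FROM v$datafile_header;

-- Current checkpoint SCN according to the controlfile, for comparison
SELECT checkpoint_change#
  FROM v$database;
```

If the datafile headers lag far behind the controlfile for long stretches outside of a checkpoint in progress, that would at least be consistent with the write-throughput theory.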