Re: Cron management...
- From: "Mladen Gogala" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "mgogala@xxxxxxxxx" for DMARC)
- To: oracle-l@xxxxxxxxxxxxx
- Date: Sun, 12 Apr 2015 22:12:19 -0400
On 04/12/2015 08:01 PM, Seth Miller wrote:
> Chris,
> Now that Mladen is done belittling your Linux admin for simply being
> cautious, the rest of us can offer you constructive advice.
Seth, I understand that you don't like me very much and that is your
prerogative, but saying that an SA who refuses to install a commercial
scheduler product, which by its nature must use "root" access, is "simply
cautious" is ridiculous. The very first reason to question the SA's
judgment is the fact that crontab was used for scheduling NetBackup
jobs. As I have said before, I worked with NetBackup for approximately
six years and was configuring scripts for it. I never once had to
schedule them through crontab. Scheduling has always been done by the
SA, from a centralized backup scheduler. NetBackup has a centralized
scheduler of its own, so if NetBackup is the product in question, there
is no need for any additional scheduler.
In addition to that, the SA doesn't get to decide what runs on the
company's systems. If the company purchases a software product, it is
the SA's job to install it, or to facilitate its installation, on the
systems on which the product is meant to run. The SA has an advisory
role, not veto power.
Last, I actually googled "Tidal scheduler" and found out that the
vendor is Cisco Systems, a huge and very well respected software vendor.
As I have explained earlier, any scheduling product on Linux/Unix
systems must run as user root. Distrusting a product from such a huge
corporation solely because of a requirement shared by every scheduler
product is ridiculous. After all, the local Linux/Unix scheduler itself
runs as "root":
[mgogala@medo ~]$ ps -ef|grep crond|grep -v grep
root 850 1 0 Apr11 ? 00:00:00 /usr/sbin/crond -n
[mgogala@medo ~]$
Not only does it run as root, but its code is also widely available on
the Internet, so any bugs that would permit switching to "root" without
the proper authorization (/etc/sudoers) are most likely already known
for an open-source product like cron. Commercial products like Tidal
are much more likely to have been tested and audited through and
through.
> If you like your shell scripts and are comfortable with cron, you
> might be able to just enhance it enough to eliminate the single point
> of failure and dramatically reduce your risks by centralizing your
> backups.
>
> Modify your rman scripts to use an Oracle wallet to authenticate to
> the databases remotely through an rman client. That way, you can take
> a backup without having to be on the server and won't expose the
> password of a privileged account.
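For context, a minimal sketch of the wallet approach Seth describes. The wallet directory, the TNS alias "orcl_prod", and the account name are assumptions, not anything from the thread:

```
# One-time setup on the host that runs the rman client (paths are assumptions):
mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential orcl_prod backup_user

# sqlnet.ora on that host must point at the wallet:
#   WALLET_LOCATION = (SOURCE=(METHOD=FILE)
#     (METHOD_DATA=(DIRECTORY=/u01/app/oracle/wallet)))
#   SQLNET.WALLET_OVERRIDE = TRUE

# rman can then connect over the network with no password on the command line:
rman target /@orcl_prod
```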
What about performance? NetBackup usually pushes the data from the DB
server to the media server. Adding a third network hop can severely
impact performance, since all the communication would then go through
that dedicated "backup server". What happens if there are several
simultaneous backups, all going through the "backup server"? Do you
need to back up all the databases at separate times? RMAN maps
libobk.so into its address space at the time "allocate channel device
type sbt" is executed, and it is libobk.so which facilitates the
communication between rman and the media server. So, all communication
for database backups would go through this "backup server", which would
be not only a bottleneck, but also a single point of failure.
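To illustrate the channel allocation in question, a sketch of an RMAN SBT channel; the SBT_LIBRARY path shown is a typical NetBackup install location on Linux, which is an assumption here:

```
RUN {
  -- libobk.so is mapped into rman's address space at this point,
  -- and all backup traffic then flows through it to the media server
  ALLOCATE CHANNEL c1 DEVICE TYPE sbt
    PARMS 'SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64';
  BACKUP DATABASE;
  RELEASE CHANNEL c1;
}
```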
> I would also suggest creating a separate sysdba account just for the
> use of logging in to do the backups.
New SYSDBA account? Now that's secure!
--
Mladen Gogala
Oracle DBA
http://mgogala.freehostia.com
--
//www.freelists.org/webpage/oracle-l