> On the other hand it might not be the best platform for your database itself just yet: http://blogs.sun.com/roch/date/20060922
Yes, I was aware of that blog entry by Roch even before I posted my reply to Mladen.
In my mind, "best platform" is such a broad term, and the "best platform" for one organization may not be the best for another. Even at the filesystem level, I am not going to argue that ZFS is the "best" filesystem to run anyone's database on, because what is best for me might not be the best for others.
In my case, I would like to consider factors other than performance when choosing what is best for my specific situation. At the filesystem level, I will most *likely* use raw devices for critical files IF performance is all I'm after. But if UFS or ZFS is good enough (performance-wise) for a specific application, then I may choose one of them simply because they offer other things. I accept that, like with most things, I can't have everything I want.
But even if ZFS is not the "best" for a specific application, please note that in Solaris, ZFS can *co-exist* with other filesystems. One can put the database on raw devices, UFS, VxFS, or any other filesystem. Then, a ZFS filesystem can be created just for the purpose of what is being discussed in this thread -- to compress datapump output on the fly.
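For what it's worth, setting up such a compressed filesystem alongside the others is a one-time task. A minimal sketch, assuming a ZFS pool named "tank" -- both "tank" and "dpdump" are made-up names here:

```shell
# Sketch only -- "tank" and "dpdump" are hypothetical names.
zfs create tank/dpdump               # a filesystem just for datapump files
zfs set compression=on tank/dpdump   # everything written here gets compressed transparently
zfs get compressratio tank/dpdump    # check later how well the dumps compressed
```

Point your Oracle directory object at that mountpoint and the dump files are compressed as they are written, with no change to the export job itself.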
> It'd be fun to see those numbers with compression on :(

I don't think that in this context we should be comparing performance between compressed and uncompressed filesystems. The task being discussed in this thread is to compress the datapump file -- on the fly. Compressing the datapump file will eat up CPU cycles whether it is done by the filesystem or by some other program. Therefore, if one wants to measure performance, what should be measured is the time it takes to complete the entire task.
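To make that concrete with a toy example (all file names made up, and plain gzip standing in for whatever does the compressing): when compression is a separate step, its CPU cost shows up as its own chunk of wall-clock time; on a compressed filesystem, the same cost is simply folded into the original writes.

```shell
# Toy illustration -- dump.dat is a stand-in for a datapump file,
# and gzip stands in for any compressor.
dd if=/dev/urandom of=dump.dat bs=64k count=16 2>/dev/null  # fake 1MB "dump"
time gzip -f dump.dat        # compress-after-the-fact: a visible, separate cost
ls -l dump.dat.gz            # on a compressed fs, this step disappears --
                             # the cost moves into the writes themselves
```

Either way, the fair comparison is the end-to-end time of "produce a compressed dump file", not filesystem throughput in isolation.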
Again, the topic is compressing datapump output on the fly. Mark and Mladen mentioned compressed filesystems. That is the reason I mentioned ZFS. I think that ZFS is *a* solution that can address that specific problem.
I have not tried datapump with ZFS, but I see no reason why it wouldn't work. Is it "supported" by Oracle? I don't know.
Lastly, here's a 2.8GB Oracle extended SQL trace file sitting in a ZFS filesystem on my box. On disk, this 2.8GB file takes up only 776MB:
----
jforonda@u20$ ls -lh test_ora_17255_t.trc
-r--r--r--   1 jforonda staff       2.8G Sep 11  2004 test_ora_17255_t.trc
jforonda@u20$ du -sh test_ora_17255_t.trc
 776M   test_ora_17255_t.trc
jforonda@u20$
----

Applications don't have to know that the file is compressed; compression and decompression are done transparently by ZFS (`ls -lh` reports the logical size, while `du -sh` reports the blocks actually allocated on disk). In other words, the file can be accessed like any UNcompressed file. Like this:
----
jforonda@u20$ tail -5 test_ora_17255_t.trc
END OF STMT
PARSE #21:c=0,e=123,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=740525621053
BINDS #21:
EXEC #21:c=0,e=259,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=740525621393
EXEC #3:c=0,e=3416,p=0,cr=1,cu=3,mis=0,r=1,dep=0,og=4,tim=740525622047
jforonda@u20$
----

Thanks.

James
http://jforonda.blogspot.com
--
http://www.freelists.org/webpage/oracle-l