I've dug up some old notes I wrote about seven years ago while working with
encrypted tablespaces in 188.8.131.52.
The data is encrypted block by block on disc, but the blocks are not
encrypted in memory.
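As a quick sketch of the setup (the tablespace name and datafile path here are hypothetical, and the wallet/keystore must already be open), block-level encryption is declared when the tablespace is created, and the dictionary confirms it:

```sql
-- Hypothetical names and paths; requires an open wallet/keystore.
CREATE TABLESPACE enc_ts
    DATAFILE '/u01/oradata/ORCL/enc_ts01.dbf' SIZE 100M
    ENCRYPTION USING 'AES256'
    DEFAULT STORAGE (ENCRYPT);

-- The new tablespace should show ENCRYPTED = 'YES'
SELECT tablespace_name, encrypted
FROM   dba_tablespaces;
```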
Undo and redo are encrypted in some way. I didn't check any details here,
but since a single transaction can modify both encrypted and unencrypted
data, and since a single undo block can contain multiple (small)
transactions, there must be some special mechanism in place that allows for
selective encryption and decryption of individual undo and redo vectors,
and that is likely to add some overhead to DML.
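To picture why selective handling of change vectors is needed, consider a hypothetical transaction (table names invented) that touches one table in an encrypted tablespace and one in a clear tablespace:

```sql
-- t_enc lives in an encrypted tablespace, t_clear in a normal one
UPDATE t_enc   SET val = val + 1 WHERE id = 100;
UPDATE t_clear SET val = val + 1 WHERE id = 100;
COMMIT;
-- Both updates generate undo and redo into the same shared structures,
-- so the vectors for t_enc have to be encrypted while the vectors for
-- t_clear stay in the clear - hence the per-vector mechanism and its cost.
```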
Since data blocks are encrypted on disc and decrypted into memory, keeping
the working data sets cached becomes more important. My tests suggested
that for operations that were almost pure encryption/decryption and very
little else (e.g. direct path tablescans that filter and aggregate blocks;
insert into an encrypted tablespace selecting from non-encrypted data),
there was (as you have seen) a significant performance impact (20% - 30%).
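A minimal version of that sort of test (table names, sizes, and the clear_ts/enc_ts tablespace names are all made up) might compare an aggregating tablescan against exactly matching tables in the two tablespaces:

```sql
-- Identical data in a clear and an encrypted tablespace
CREATE TABLE t_clear TABLESPACE clear_ts AS
    SELECT rownum AS id, rpad('x', 100) AS padding
    FROM   dual CONNECT BY level <= 1e6;

CREATE TABLE t_enc TABLESPACE enc_ts AS
    SELECT * FROM t_clear;

SET TIMING ON
SELECT /*+ full(t) */ COUNT(*), MAX(id) FROM t_clear t;  -- baseline
SELECT /*+ full(t) */ COUNT(*), MAX(id) FROM t_enc   t;  -- adds decryption cost
```

Running each query a couple of times (so the second run measures a warm cache) would also show how much of the overhead disappears once the blocks are already decrypted in the buffer cache.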
A special thought for Exadata is "where does encryption/decryption take
place", followed by "how does this affect predicate push-down to storage",
"how does this affect storage indexes", and "what is the effect on the
Exadata cell flash cache 'megablock' mechanism for compressed data once
encryption is added". There may be clues in the way that
compression/decompression works - the database server compresses, and the
cell can decompress during smart scans.
Indexed accesses work as normal since the blocks in memory are decrypted.
For indexes that are very well cached, the extra overhead of execution will
be roughly the cost of decrypting the table blocks that have to be read
from disc, because those are the ones that will need to be decrypted on the
way in. It probably becomes more important to ensure that you don't have to
read and decrypt blocks only to discover that you didn't want a row - i.e.
you want very good indexing and minimal appearance of filter predicates in
the execution plan.
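One way to check for that (the query and table names here are invented) is the Predicate Information section of the plan - ideally the rows you want are located by "access" predicates, with little or nothing discarded by "filter" predicates against the table:

```sql
EXPLAIN PLAN FOR
SELECT val FROM t_enc WHERE id BETWEEN 100 AND 200;

SELECT * FROM table(dbms_xplan.display);
-- In the Predicate Information section, a line like
--   access("ID">=100 AND "ID"<=200)
-- on the index means only wanted rows were located; filter(...) lines
-- against the table access step mean blocks were read (and decrypted)
-- just to reject rows.
```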
On Wed, 30 Jun 2021 at 20:25, Lok P <loknath.73@xxxxxxxxx> wrote:
Thanks a lot Jonathan. Planning to test it again by creating two
tablespaces side by side and running the queries again on exactly matching
objects. Btw, out of curiosity: a few teammates highlighted that we will
see degradation in index range scan performance (not equality operators,
but mainly range/between operators) post tablespace encryption, because the
actual stored values would be encrypted. Is that true, and should we really
be concerned about index range scan performance?