Mark,
The statement UPDATE TABLE_A SET COLUMN_A = 1.234 must lock all the rows in
that table, regardless of each row's initial value for COLUMN_A. So, each
block must be updated to record the lock and the interested transaction, even
if no values are changing. Or so it seems to me.
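A minimal sketch of that point, using SQLite's rows-affected count as a stand-in (Oracle's block-level locking is not visible here, and the table name is invented): an unrestricted UPDATE touches every row even when the value is unchanged, while adding a WHERE clause limits the work to the rows that actually change.

```python
import sqlite3

# SQLite standing in for Oracle; table/column names invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_a (id INTEGER PRIMARY KEY, column_a REAL)")
conn.executemany("INSERT INTO table_a (column_a) VALUES (?)",
                 [(1.234,)] * 94 + [(9.9,)] * 6)  # 94 rows already hold 1.234

# Restricted update: only the 6 rows whose value differs are touched.
cur = conn.execute("UPDATE table_a SET column_a = 1.234 WHERE column_a <> 1.234")
print(cur.rowcount)  # 6

# Unrestricted update: all 100 rows are touched, value changed or not.
cur = conn.execute("UPDATE table_a SET column_a = 1.234")
print(cur.rowcount)  # 100
```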
Matt
On Thursday, June 17, 2021, 04:12:08 PM EDT, Mark W. Farnham <mwf@xxxxxxxx>
wrote:
While I agreed with what JL wrote, given how fast processors are now, an
update-all-rows query might compare the block images on the fly and skip
writing any block where every row was "updated" to the value it already held,
so the resulting block image never needs to be written.
Off the top of my head I *think* that would work and be efficient, but I am not
engaged enough to ponder possible edge cases. But I *think* you have all the
information you need in hand at commit time to flag blocks to *not* bother
writing. If the buffers had to flush along the way before the commit, the
changes would already be in the redo stream, but with the massive memory now
available a private redo thread might be able to handle it both correctly and
efficiently.
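A toy illustration of that commit-time check (not Oracle internals; the block layout and function names are invented): compare each buffered block's before-image with its after-image and flag for writing only the blocks whose contents actually changed.

```python
# Toy sketch: blocks are modelled as raw bytes keyed by block id.
def blocks_to_write(before_images, after_images):
    """Return ids of blocks whose after-image differs from the before-image."""
    dirty = []
    for block_id, after in after_images.items():
        before = before_images.get(block_id)
        if before is None or before != after:
            dirty.append(block_id)
    return dirty

before = {1: b"row:col=1.234", 2: b"row:col=9.900"}
after  = {1: b"row:col=1.234",   # "updated" to the value it already held
          2: b"row:col=1.234"}   # a real change
print(blocks_to_write(before, after))  # [2]
```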
I really like Oracle’s redo model though. Someone once put it to me thusly:
Everything Bill Bridge starts ends up in checkmate. (Meaning Bill wins.) Making
sure the bucket has no leaks was definitely priority number one.
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On
Behalf Of Jonathan Lewis
Sent: Thursday, June 17, 2021 10:16 AM
To: ORACLE-L (oracle-l@xxxxxxxxxxxxx)
Subject: Re: Original design approach to Oracle REDO Logs
Mike,
I don't think you can ask about hypothetical strategies for the redo log before
stating what you think the purpose of the database should be.
If you expect the database to be the reference of last resort for the
correctness and (immediate) consistency of the data then you might like a
different strategy from someone who thinks the database is some sort of record
of evolution of the data that allows the current version of the data to be
re-imaged by a client.
Regards
Jonathan Lewis
On Thu, 17 Jun 2021 at 13:55, Michael D O'Shea/Woodward Informatics Ltd
<woodwardinformatics@xxxxxxxxxxxxxxxx> wrote:
Chaps, request for opinions/discussion/feedback .....
Question: If Oracle were written today, would the same strategy behind the
"redo log" be adopted?
An example .... for a table of 10,000,000 rows, and a contrived piece of DML
update someTable
set someColumn1 = 1.234
that "updated" all the rows,
but where someColumn1 actually changed to 1.234 for only 6 rows, as the
remaining 10,000,000 - 6 rows were already 1.234
Should (assuming just DML and also just the basic data types, number, varchar2,
date, ... ) the redo log
* record the data "change" for all 10,000,000 "updates"
* record the real data change for just the 6 real updates
* record solely the SQL used to perform the update for some playback purpose
* send a message to some message broker such as Solace, Tibco, .... allowing
subscribers to process the data change that way (might be replication/backup,
some sort of refresh or push notification to other applications, email dispatch
and so on)
* some other approach
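The second option above can be sketched in a few lines (a hedged illustration only: the record layout shown is invented for the example and is not Oracle's actual redo format): emit a change record only for rows whose stored value differs from the new value.

```python
# Sketch of "record the real data change for just the real updates".
def redo_records(rows, new_value):
    """rows: mapping of rowid -> current value; returns records for real changes."""
    return [{"rowid": rid, "old": old, "new": new_value}
            for rid, old in rows.items() if old != new_value]

rows = {i: 1.234 for i in range(8)}   # rows that already hold the target value
rows.update({100: 9.9, 101: 0.0})     # the only rows that really change

print(redo_records(rows, 1.234))
# [{'rowid': 100, 'old': 9.9, 'new': 1.234}, {'rowid': 101, 'old': 0.0, 'new': 1.234}]
```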
There is a considerable movement to event streaming technology such as Kafka to
(indirectly) drive data change events to downstream and dependent systems (in
Oracle presumably by polling the redo log file, or maybe some LogMiner interface
.. I don't know the detail) in databases that include Oracle, MongoDB (referred
to as Change Streams), and many more.
My "ask" focuses more on "just the database" interoperability with the
remainder of what is often a large tech stack, and not the original design
decision around redo logs for data recovery.
Mike
http://www.strychnine.co.uk