Re: Dealing with locking in bulk update scenario

Ah the joys of memory caches!
If I were you I would reconsider the "load 50,000 records in memory"
part. For me it's a major design flaw in your system. What are the db
block buffers in the SGA, if not a cache? Database management systems
were introduced partly to solve locking problems. Either you rely on
Oracle, or you reinvent the wheel. For instance you can keep both the
new and the old values (hello memory consumption) and update only when
the current value is still what you know as the old one - but then you
have to decide what to do when it no longer matches, and so on.
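As an illustration only, the "old value" check could be a sketch like
this (table and column names are hypothetical):

    update accounts
       set current_balance = :new_balance
     where account_id = :id
       and current_balance = :old_balance;
    -- if sql%rowcount = 0, another session changed the row in between;
    -- you must re-read the row and decide how to proceed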
Alternatively, instead of computing the new value in your program and
assigning it, eg

    set col = computed_value

you could compute a delta, and execute

     set col = col + delta

If col has been increased in between, it would not mess everything up.
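In your batch-deduction case, that would mean dropping the
load-into-memory step altogether and letting a single set-based
statement do the work - a sketch, with hypothetical table and column
names:

    update accounts
       set current_balance = current_balance - :charge
     where account_id in (select account_id
                            from accounts_to_charge);

Oracle locks each row for the duration of the update, so a concurrent
recharge applies either before or after the deduction - and either
order ends with the correct balance.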
But once again the big mistake is the cache. Let me guess, Java
programmers who just "persist" data in Oracle?


Stephane Faroult
RoughSea Ltd <http://www.roughsea.com>
Konagora <http://www.konagora.com>
RoughSea Channel on Youtube <http://www.youtube.com/user/roughsealtd>

On 11/02/2010 10:43 AM, RP Khare wrote:
> Let me clarify the scenario, a real-life issue that we faced on a very
> large database. Our client is a well-known cell phone service provider.
> Our database has a table that manages records of the current balance
> left on the customer's cell phone account. Among the other columns of
> the table, one column stores the recharge amount and another stores
> the current active balance.
>
>
> We have two independent PL/SQL scripts. One script is automatically
> fired when the customer recharges his phone and updates his balance.
> The second script deducts certain charges from the customers'
> accounts. This is a batch job as it applies to all the customers, and
> it is scheduled to run at certain intervals during the day. When this
> script runs, it loads 50,000 records into memory, updates certain
> columns and performs a bulk update back to the table.
>
>
> The issue happened like this:
>
>
> A customer, whose ID is 101, contacted his local shop to get his phone
> recharged. He paid the amount. But before his recharge was applied,
> the second script fired at its scheduled time. The second script
> loaded the records of 50,000 customers into memory, and among these
> in-memory records was this customer's record too.
>
>
> Before the second script's batch update finished, the first script
> successfully recharged the customer's account.
> Now what happened is that in the actual table, the column
> "CurrentAccountBalance" was updated to 150, but the in-memory record
> the second script was working on still had the customer's old balance,
> i.e., 100.
>
>
> The second script then deducted 10 from the column
> "CurrentAccountBalance". When, according to the actual sequence of
> events, the customer's "CurrentAccountBalance" should have been 140,
> this issue made his balance 90.
> Now how do we deal with this issue?
>
>
> ..................
> Regards,
> Rohit P. Khare
>
