I was thinking a lot about this situation, and I've reached the following conclusions.

The problem: I want to ensure that an LDD table with a unique key (such as CPF) can accept the insertion of a value k without having to verify its uniqueness against the remote servers. The idea is to give the remote servers complete autonomy for insertions; otherwise, their insertion rate could be very low compared to independent local databases. That would give our product a bad image, since most tables that are candidates to become LDD will have unique or primary keys.

My conclusion: this is not possible.

The only way to ensure total independence for unique/primary keys is to use independently visible LDD tables (that is, combining every key with the LOCAL column). Consider that few people will find it acceptable to convert their keys by combining them with this column. Do you agree?

Even if the GI can ensure a 99.9% probability that the servers are synchronized, the remaining 0.1% is enough to force us either to check the remote servers for active transactions with the same values, or to break the consistency of the model.

The possible: since it is impossible to reduce to zero the probability of an independent insertion violating consistency, I think what we can do is try to minimize that probability.

Suggestion 1: One way I thought of to reduce the probability of having to verify remote servers by close to 50% is to establish a priority chain. That is, if we have 30 servers in a cluster, each one is assigned a distinct priority, for example an integer value 1..30 representing its priority degree.

Consider that the only situation in which the GI can be unsynchronized about a given value k is when there is an open transaction involving k.
So, as long as no two servers ever hold the same priority at the same moment in time (priorities may change over time), a server never has to check the value k against any server with a lower priority.

For the example of 30 servers, server 30 never has to check any remote server for an insertion, only its own GI. Server 29 only has to check server 30. Server 28 checks only 29 and 30; server 27 checks 28, 29 and 30; and so on.

The probability falls by about 50% for the following reason. Consider a cluster of n servers, numbered 1..n:

- Server n has the maximum priority;
- Server n - 1 has the second highest priority (after n);
- Server n - 2 has the third highest priority (after n and n - 1);
...
- Server 1 has the lowest priority (below all others).

Summing over all servers, the total number of lower-priority peers that can be skipped (which is what matters for avoiding remote checks) is shown in the attached picture. Graphically, it is like the area of a right triangle: server i skips i - 1 peers, so the cluster as a whole skips 1 + 2 + ... + (n - 1) = n(n - 1)/2 of the n(n - 1) possible remote checks, i.e. half of them.

One way to turn this into a real reduction of the average remote-access rate is to establish priority policies that push the servers most likely to insert values into the GI to the top of the priority ranking.

----- Original Message -----
From: "Peter B. Volk" <peter.benjamin.volk@xxxxxxxxxxxxxxxxx>
To: <mysql-dde@xxxxxxxxxxxxx>
Sent: Wednesday, December 14, 2005 9:38 PM
Subject: [mysql-dde] Re: Fw: P/F/U Keys in LDD

> Hey,
>
> I've been thinking:
>
> so what about this one:
>
> S1 receives a query. There is an I/U on a P/U/F key index. S1 evaluates the
> insert. If the key can be inserted (no key violation), then S1 starts to
> send the GI insert to the other servers synchronously. After the majority
> of the servers have agreed on the insert, the update is committed; else
> it is rolled back. The sending to the remote servers should be done on a
> rotation principle.
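To make the triangle argument in suggestion 1 concrete, here is a minimal sketch (Python; the function and variable names are my own, purely illustrative) of the check rule and the resulting total number of remote checks:

```python
def servers_to_check(my_priority, all_priorities):
    """A server only has to check peers with a HIGHER priority than
    its own; all lower-priority peers can be skipped."""
    return [p for p in all_priorities if p > my_priority]

# For a 30-server cluster with distinct priorities 1..30:
priorities = list(range(1, 31))

# Server with priority 30 checks nobody; priority 1 checks 29 peers.
checks = {p: len(servers_to_check(p, priorities)) for p in priorities}

# Total remote checks across the cluster: 0 + 1 + ... + 29 = n(n-1)/2,
# i.e. half of the n(n-1) checks needed without a priority chain.
total = sum(checks.values())
assert total == 30 * 29 // 2  # 435: the "right triangle" area
```

The asymmetry is the point: the halving only pays off on average, which is why the priority policy should favor the servers most likely to insert.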
> So, if we have S1 to S8 and S4 receives the query, then it
> would send the update to S5, S6, S7, S8, S1, etc. This way we would avoid massive
> collisions with other GI updates. This also removes the master server you are
> worried about.
>
> What do you think of that?
>
> Peter
>
>
> ----- Original Message -----
> From: "Fabricio Mota" <fabricio.mota@xxxxxxxxx>
> To: <mysql-dde@xxxxxxxxxxxxx>
> Sent: Tuesday, December 13, 2005 1:16 AM
> Subject: [mysql-dde] Re: Fw: P/F/U Keys in LDD
>
>
>> Sorry, I missed answering one:
>> 2005/12/12, Fabricio Mota <fabricio.mota@xxxxxxxxx>:
>> >
>> > Ich bin zurück,
>> >
>> > (hahahahaha)
>> >
>> >
>> > 2005/12/12, Peter B. Volk <PeterB.Volk@xxxxxxx>:
>> > >
>> > > ----- Original Message -----
>> > > From: Fabricio Mota
>> > > To: mysql-dde@xxxxxxxxxxxxx
>> > > Sent: Friday, December 09, 2005 2:57 AM
>> > > Subject: [mysql-dde] Re: Fw: P/F/U Keys in LDD
>> > >
>> > >
>> > > Buenas noches (that's not my language! hahaha)
>> > >
>> > > Oh yes, I didn't think about the lake. And the late-sync protocol
>> > > does not specify being water-resistant :). You're right. Maybe
>> > > it's better to make the unplugged server roll back, and the cluster
>> > > buffer the logs of the entire transaction for its future recovery.
>> > > Another thing about it I was thinking today is how to ensure agreement
>> > > on the buffer, that is, to replicate the buffer log to all servers in
>> > > late synchronization. That is to prevent another fault from damaging
>> > > the global consistency. What do you think?
>> > >
>> > > [Peter] Well, we'll need some sort of global log anyway. Not only
>> > > for this undo and redo stuff but also for replication (see next email).
>> > > Yes, of course. I think global logs must be another starred point to be
>> > > submitted to our analysis (wow, how many things!!!)
>> > >
>> > > By the way, in a general manner, do you agree with late
>> > > synchronization?
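For concreteness, the rotation principle Peter proposes above (S4 of S1..S8 sending to S5, S6, S7, S8, then wrapping to S1) could be sketched like this. The names and the majority test are my own illustration under assumed semantics, not part of any spec:

```python
def rotation_order(receiver, n):
    """Contact the other servers starting from the receiver's successor,
    wrapping around; servers are numbered 1..n.
    E.g. receiver S4 of S1..S8 -> [5, 6, 7, 8, 1, 2, 3]."""
    return [(receiver - 1 + k) % n + 1 for k in range(1, n)]

def majority_agrees(votes, n):
    """Commit the GI insert only if a strict majority of the n servers
    accepted it; otherwise it is rolled back."""
    return sum(votes) > n // 2

# The receiving server fans out the GI insert in rotation order:
order = rotation_order(4, 8)
assert order == [5, 6, 7, 8, 1, 2, 3]
```

Starting each fan-out at a different successor is what spreads the synchronous GI traffic and avoids the massive collisions (and the fixed master) the thread mentions.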
>> > >
>> > > [Peter] I would agree under the following conditions (feel free to
>> > > disagree):
>> > >
>> > > 1.) RDD tables can always be late-synced (since they are only
>> > > management tables and 90% of queries against these tables are selects)
>> > > Yes, I agree fully. Late sync must be applied in situations where it
>> > > helps, and not where it degrades.
>> > >
>> > >
>> > > 2.) There is an upper bound for late sync. This means that there is
>> > > some time (e.g. 10 sec.) by which the sync commands can be delayed, but
>> > > after this time the sync must be done
>> > > I can't understand it clearly.
>> > >
>> > > [Peter] I mean that a late sync should not be too late, so the
>> > > sync should be done within an upper bound in time. The queries are not
>> > > supposed to be in the late-sync queue for longer than 10 sec. or so. This is
>> > > simply a liveness property
>> >
>> >
>> Yes, this makes sense. But the problem is: once the late sync is authorized,
>> the command buffered, and the transaction committed, I think it is very
>> necessary that the remaining buffered commands be sent to the recipient
>> server before it is reintegrated into the group. Otherwise, the server
>> might become inconsistent.
>>
>> The exception is if the server fell into the lake :). But if that happens,
>> and there is no means to recover the server's state, then the DBA must pull
>> the server out of the cluster, using the *alter cluster drop <dde_server>*
>> command.
>>
>> So, if this command completes successfully, ALL pending late syncs in the
>> buffer must be purged.
>>
>> Hey, I will star that too; I think I did not explain it in the spec!!!!
>>
>>
>>
>> > 3.) LDD tables are only partially late-synced. The only syncing to do
>> > > is the GI update, right?
>> > > I think so.
>> > >
>> > > So a late sync could only be done if there was an agreement on the
>> > > u/f/p keys.
>> > > I agree.
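Peter's condition 2 above (no command sits in the late-sync queue longer than an upper bound, e.g. 10 seconds, as a liveness property) could be sketched as follows. This is an assumption-laden illustration with invented names (Python), not the actual spec:

```python
import time
from collections import deque

SYNC_DEADLINE = 10.0  # assumed upper bound (seconds) a command may wait

class LateSyncQueue:
    """Buffered sync commands with a liveness bound: no command may
    wait longer than SYNC_DEADLINE before it must be flushed."""

    def __init__(self):
        self._queue = deque()  # pairs of (enqueue_time, command)

    def enqueue(self, command, now=None):
        """Buffer a sync command, stamping it with its arrival time."""
        self._queue.append((now if now is not None else time.time(), command))

    def due(self, now=None):
        """Pop and return every command whose deadline has expired;
        these must be synced to the remote server immediately."""
        now = now if now is not None else time.time()
        expired = []
        while self._queue and now - self._queue[0][0] >= SYNC_DEADLINE:
            expired.append(self._queue.popleft()[1])
        return expired

# Explicit timestamps make the deadline behavior easy to see:
q = LateSyncQueue()
q.enqueue("gi-update-1", now=0.0)
q.enqueue("gi-update-2", now=5.0)
assert q.due(now=9.0) == []               # nothing has hit the bound yet
assert q.due(now=10.0) == ["gi-update-1"]  # first command must sync now
```

A periodic flusher calling `due()` would then guarantee the 10-second bound regardless of how long the queue grows.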
>> > > Maybe we will need means to ensure agreement within
>> > > dynamic operations. Maybe a flag per table, with an agreement protocol; I
>> > > don't know yet. But it must be a trusted means to ensure a safe late sync.
>> > >
>> > > [Peter] We'll need to star this point until we have decided how to do the
>> > > u/f/p key validation
>> >
>> >
>> > Ok. Point starred. Command executed in 0.005s. :)
>> >
>> > AND inserts and updates are immediately synced and deletes are late-
>> > > synced.
>> > > In truth, I think late sync for I/U/D should be allowed only if the
>> > > faulty server and the targeted server are different. That's important in
>> > > order to avoid prohibiting full access to the island server during a
>> > > network failure, for example, while at the same time ensuring consistency.
>> > >
>> > >
>> > >
>> > > This is because if there was a deletion, the remote server would
>> > > still query the server, but the server would only return an empty set.
>> > > Inserts must be synced in time because there is a kind of filter that the GI
>> > > can set to optimize the number of remote servers queried. The same applies
>> > > to updates.
>> > >
>> > > This I also did not understand clearly.
>> > >
>> > > [Peter] Imagine 2 servers. A query is set off on S1. S1 needs data from
>> > > S2. S1 needs to query S2 to retrieve the data it needs. In the meantime S2
>> > > has executed a query that deleted exactly those rows that S1 needs. No data
>> > > was affected on S1. Now S1 queries S2. Without late sync for deletes, the
>> > > transaction would need to be delayed because the GI on S1 is not up to
>> > > date, and S1 can only query S2 once the lock on the GI is released. With
>> > > late sync, S1 can query S2 without any problems, because the GI
>> > > modification only has optimization effects.
>> >
>> >
>> > Well, I think we're agreeing on a point: wherever late sync is
>> > used, it must always ensure consistency.
What you announced is a *time
>> > window* of inconsistency.
>> >
>> > What I would propose for that is: when any operation is performed while a
>> > server is down or incommunicable - late-sync targeted or not - no
>> > operation targeted at it must be allowed.
>> >
>> > The fundamental idea of late sync - in my concept, of course - is that it
>> > must be a passive and *rightless* element of the transaction, unable to
>> > decide whether the operation can be performed or not. If it is an active
>> > element (such as in: delete * from LDDTable where Server = 1 ----- note
>> > that here, for example, Server 1 is an active element), then late sync is
>> > not suitable. This violates *lemma 4* of late sync.
>> >
>> > Imagine: what if, in the delete query, there is a foreign key related to
>> > any of these records? If we allow the delete (with the server down), it
>> > could be a disaster!
>> >
>> >
>> > --
>> > >
>> > > Sem mais,
>> > >
>> > > Fabricio Mota
>> > > Oda Mae Brown - Aprecie sem moderação.
>> > > http://www.odamaebrown.com.br
>> > >
>> > > MySql-DDE discussion list
>> > > www.freelists.org/