[gnukhata-users] Re: challenge backing up data for one organization

  • From: Krishnakant <kk@xxxxxxxxxx>
  • To: gnukhata-users@xxxxxxxxxxxxx
  • Date: Tue, 15 Nov 2016 11:39:41 +0530

We have almost got the solution.

The internal alterations have got successfully automated.

Now we are finalising the data integrity constraints.

We hope to have this done by this evening; otherwise, as our domain expert Mr. Kelkar suggested, we can ship it as a separate update in the near future, or as an independent tool.

I am still awaiting users' responses on this.

Happy hacking.

Krishnakant.



On Monday 14 November 2016 08:33 PM, Krishnakant wrote:

Dear all.

I know we are waiting for the release of GNUKhata version 3.5.

However, there is a problem we are facing on which I wish to have users' feedback.

Let me first put the problem in all its gory detail.

The thing is that we must facilitate safe migration of data from one machine to another.

For the non-technical people, it means that the codes in every table have to be copied over.

A code column is a unique identifier for every record.

Many know it by the name of id.

So we have accountcode for accounts, groupcode for groups, productcode for products, vouchercode for vouchers, and so on.

Needless to say, these values don't mean anything to the end user, but they mean a lot to the machine for identifying a particular row of data in a given table.
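For those who want a concrete picture, the layout can be sketched like this. The table and column names follow the post; the real GNUKhata schema has many more columns, and this in-memory SQLite database is only an illustration:

```python
import sqlite3

# Illustrative schemas only: each table carries its own code column,
# which acts as the unique identifier (the "id") for its records.
ddl = """
CREATE TABLE groups   (groupcode   INTEGER PRIMARY KEY, groupname   TEXT);
CREATE TABLE accounts (accountcode INTEGER PRIMARY KEY, accountname TEXT);
CREATE TABLE products (productcode INTEGER PRIMARY KEY, productname TEXT);
CREATE TABLE vouchers (vouchercode INTEGER PRIMARY KEY, voucherdate TEXT);
"""
conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
```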

Now here's the issue.

Suppose that you, as a CA, took data from one client using GNUKhata, and his organisation had orgcode 1.

There is another client who gave you her data from her machine.

Her orgcode too was 1.

Since these are two different clients with two different machines, it is highly possible that their individual machines generated the same code.

Now, on your machine as a CA, the second client's data cannot be imported, because a duplicate-value error will be raised.
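A minimal sketch of the collision, using an in-memory SQLite table (the real GNUKhata database differs, but the error is of the same kind):

```python
import sqlite3

# Hypothetical single-table sketch: orgcode is the primary key,
# mirroring how each client's machine assigns its own codes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE organisation (orgcode INTEGER PRIMARY KEY, orgname TEXT)")

# First client's backup used orgcode 1.
conn.execute("INSERT INTO organisation VALUES (1, 'Client A')")

# The second client's machine also generated orgcode 1; restoring her
# backup on the CA's machine raises a duplicate-value error.
try:
    conn.execute("INSERT INTO organisation VALUES (1, 'Client B')")
except sqlite3.IntegrityError as e:
    print("restore failed:", e)
```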

So we have resorted to a technique where the id or code of a record will be a 20-digit timestamp, generated when the data is backed up.

Perhaps it will be 19 digits, but in any case a number big enough to avoid any realistic chance of duplication.
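One simple way to get such a code (purely an illustration, not necessarily the exact scheme we will ship) is the current time at nanosecond resolution, which comes out to a 19-digit integer for present-day dates:

```python
import time

# Illustrative sketch: derive a record code from the clock at nanosecond
# resolution. For any date after 2001 this is a 19-digit integer, so two
# machines generating codes independently are vanishingly unlikely to clash.
def generate_code() -> int:
    return time.time_ns()

code = generate_code()
print(code, "has", len(str(code)), "digits")
```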

Now, this means that users with existing data will see a change in the database structure, which may essentially render their existing data unusable.

We have tried our level best to find an absolutely safe solution.

One way is that we create a separate migration tool, which users run before they ever start using the new version.

This means you first install the new version, then run the tool and then start using your system.

This is certainly going to take somewhere around four days, which will postpone the release.

We could release the version sooner than the tool itself, but we have observed that people rush into using the system and might damage their data.

Another way is for people to enter the data again, if its volume is small or it is just trial data. But this is based on the assumption that everyone is using it on a trial basis, which I am sure is not the case.

A third suggestion is almost along the lines of a migration tool, but we will not be certain of exactly how it will work until tomorrow.

We are writing automated code to fire safe table alterations if and when the backup functionality is used for the first time.

This code will be done in a day, but we will need at least two days of rigorous testing to ensure total data safety.
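For the curious, the idea can be sketched roughly like this. The table, column, and function names here are hypothetical, not GNUKhata's actual code; it only shows the shape of a one-time, flag-guarded alteration:

```python
import sqlite3
import time

# Hypothetical sketch of the plan: the first time the backup feature runs,
# rewrite the small machine-local codes as unique timestamp-based codes,
# and record that the alteration has been done so it never runs twice.
def ensure_migrated(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS migration_flag (done INTEGER)")
    if conn.execute("SELECT count(*) FROM migration_flag").fetchone()[0]:
        return  # already migrated; backups are safe to run
    for (old,) in conn.execute("SELECT orgcode FROM organisation").fetchall():
        new = time.time_ns()
        # Guard against the unlikely case of two identical timestamps.
        while conn.execute("SELECT 1 FROM organisation WHERE orgcode = ?",
                           (new,)).fetchone():
            new = time.time_ns()
        conn.execute("UPDATE organisation SET orgcode = ? WHERE orgcode = ?",
                     (new, old))
    conn.execute("INSERT INTO migration_flag VALUES (1)")
    conn.commit()
```

The flag table is what makes the alteration safe to call on every backup: only the very first call does any rewriting.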

I am coming to you all for consensus. After all, this is the specialty of free software, where free means freedom.

This is what makes FOSS so professional, transparent, and appealing.

Kindly suggest.

Happy hacking.

Krishnakant.




