[openbeos] Re: general update tool idea

  • From: Martin Kunc <makyba@xxxxxxxxx>
  • To: Linus Almstrom <openbeos@xxxxxxxxxxxxx>
  • Date: Wed, 19 Jun 2002 19:39:49 +0200

Hi,
you are talking about keeping the sources up to date, but users want
binary updates. And I am sure that this way not only developers, but all
users could stay up to date with everything (similar to rpm on Linux,
Windows Update, etc.).

martin kunc


LA> Why would this be needed when we have and use cvs?

LA> On 2002-06-19 at 19:55:13 [+0200], openbeos@xxxxxxxxxxxxx wrote:
>> Hello,
>> I have an idea that I think would prove to be elegant. It would be an
>> alternate method of updating our BeOS apps (similar to reos or
>> beosupdate).
>> Basically it's for people who want to always stay on the bleeding edge.
>> We do that by providing an object-based update.
>> We send the compiled objects for each file that was changed or added.
>> This would mean twice the size of the app/lib on the user's hard disk,
>> but also less network activity and more elegant updating. Instead of
>> redownloading a 2.2MB app, we only download the 70KB that got updated
>> between "releases". The linking is done on the user's computer, by the
>> update application (it calls gcc/ld/libtool/whatever is needed). I
>> think this is superior to the methods that exist today.
>> 
>> Here's a sample session as I see it :)
>> - hello
>> - hi, ip  207.232.16.1 (217/2000)
>> - whatsnew app2024 29032002
>> - sending 2024 29032002-19062002 log
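>>
>> Just to make the exchange concrete, here is roughly what the client
>> side could look like in C++ (plain sockets; the host name, port and
>> message format are only placeholders, nothing here is decided):
>>
>>     // toy client: ask the update server what changed for app 2024
>>     // since 29032002 (host/port are stand-ins)
>>     #include <cstdio>
>>     #include <cstring>
>>     #include <unistd.h>
>>     #include <netdb.h>
>>     #include <netinet/in.h>
>>     #include <sys/socket.h>
>>
>>     int main()
>>     {
>>         struct hostent* host = gethostbyname("updates.example.org");
>>         if (host == NULL)
>>             return 1;
>>
>>         int sock = socket(AF_INET, SOCK_STREAM, 0);
>>         struct sockaddr_in addr;
>>         memset(&addr, 0, sizeof(addr));
>>         addr.sin_family = AF_INET;
>>         addr.sin_port = htons(2000);   // made-up port
>>         memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);
>>         if (connect(sock, (struct sockaddr*)&addr, sizeof(addr)) < 0)
>>             return 1;
>>
>>         const char* request = "whatsnew app2024 29032002\n";
>>         write(sock, request, strlen(request));
>>
>>         char reply[4096];                       // the change log
>>         ssize_t got = read(sock, reply, sizeof(reply) - 1);
>>         if (got > 0) {
>>             reply[got] = '\0';
>>             printf("%s", reply);
>>         }
>>         close(sock);
>>         return 0;
>>     }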
>> 
>> we then get a log which looks like
>> add blah.o
>> del tractor.o
>> edit stringview.o
>> del mom.o
>> 
>> We parse the file and delete from our object directory every object
>> that has a del entry.
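>>
>> A minimal sketch of that step (the "objects/" directory and the log
>> file name are just placeholders):
>>
>>     // read the change log and remove every object marked "del";
>>     // "add"/"edit" entries would go on the download list instead
>>     #include <cstdio>
>>     #include <fstream>
>>     #include <string>
>>
>>     int main()
>>     {
>>         std::ifstream log("app2024.log");
>>         std::string action, object;
>>         while (log >> action >> object) {
>>             if (action == "del") {
>>                 std::string path = "objects/" + object;
>>                 if (std::remove(path.c_str()) == 0)
>>                     std::printf("deleted %s\n", path.c_str());
>>             }
>>         }
>>         return 0;
>>     }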
>> Now we request the optimization from the server and the sizes of the
>> objects. (We have the option of size/plain/debug/generic x86 -O3/CPU
>> optimized (with gcc3 in mind).)
>> The server returns
>> app2024 ftp://athlonxp.beosupdate.org/2024/blah.o 35271
>> app2024 ftp://athlonxp.beosupdate.org/2024/stringview.o 78943
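>>
>> Parsing that reply on the client is trivial; a sketch (the reply string
>> here is just the example from above, hard-coded):
>>
>>     // split the server reply into (app, url, size) entries; the sizes
>>     // also give us the total for the progress display
>>     #include <iostream>
>>     #include <sstream>
>>     #include <string>
>>     #include <vector>
>>
>>     struct ObjectEntry {
>>         std::string app;
>>         std::string url;
>>         long        size;   // bytes
>>     };
>>
>>     int main()
>>     {
>>         std::string reply =
>>             "app2024 ftp://athlonxp.beosupdate.org/2024/blah.o 35271 "
>>             "app2024 ftp://athlonxp.beosupdate.org/2024/stringview.o 78943";
>>
>>         std::istringstream in(reply);
>>         std::vector<ObjectEntry> entries;
>>         ObjectEntry e;
>>         while (in >> e.app >> e.url >> e.size)
>>             entries.push_back(e);
>>
>>         long total = 0;
>>         for (size_t i = 0; i < entries.size(); i++)
>>             total += entries[i].size;
>>         std::cout << entries.size() << " objects, "
>>                   << total << " bytes to fetch\n";
>>         return 0;
>>     }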
>> 
>> We get the objects compressed over the network, similar to cvs -z9
>> compression.
>> 
>> When we finish downloading (we even have download progress (for a UI
>> app) since we know the sizes), we just link it all together (and
>> perhaps run some kind of checksum check on the generated binary to see
>> that it fits), and voila, we have the new app.
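>>
>> The link-and-check step could be as dumb as this (the link command,
>> paths and the checksum are only placeholders; a real tool would compare
>> a checksum the server sends along):
>>
>>     // relink the app from the object directory, then checksum the result
>>     #include <cstdio>
>>     #include <cstdlib>
>>     #include <fstream>
>>
>>     static unsigned long Checksum(const char* path)
>>     {
>>         // trivial additive checksum, only to illustrate the idea
>>         std::ifstream file(path, std::ios::binary);
>>         unsigned long sum = 0;
>>         char byte;
>>         while (file.get(byte))
>>             sum += (unsigned char)byte;
>>         return sum;
>>     }
>>
>>     int main()
>>     {
>>         // let the existing toolchain do the real work
>>         if (std::system("gcc -o app2024 objects/*.o -lbe") != 0) {
>>             std::printf("link failed\n");
>>             return 1;
>>         }
>>         std::printf("checksum of new binary: %lu\n", Checksum("app2024"));
>>         return 0;
>>     }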
>> 
>> This method is generally good for those who want to cvs update but
>> don't want to waste so many cycles compiling, when in fact compiling
>> once is enough: instead of compiling on every user's computer, we do it
>> centrally for them. The downside is that it would obviously only work
>> for open-source apps.
>> 
>> Anyway, for the server we can use a MUSCLE server. (On request, the
>> main (MUSCLE?) server basically only checks the app's ftp directory for
>> obj files created between the date string the client sends and now (it
>> can cache this later), or the ftp server can publish the updated apps
>> to the main server. These are details for later.)
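>>
>> The "check the ftp directory" part is basically just a directory scan
>> by modification time; a sketch (the path and the cutoff are
>> placeholders, the real cutoff would come from the client's date string):
>>
>>     // list every .o in the app's ftp directory newer than the cutoff;
>>     // these become the add/edit lines of the change log
>>     #include <cstdio>
>>     #include <cstring>
>>     #include <dirent.h>
>>     #include <sys/stat.h>
>>     #include <time.h>
>>
>>     int main()
>>     {
>>         time_t since = time(NULL) - 7 * 24 * 60 * 60;  // stand-in cutoff
>>         DIR* dir = opendir("/ftp/2024");               // placeholder path
>>         if (dir == NULL)
>>             return 1;
>>
>>         struct dirent* entry;
>>         while ((entry = readdir(dir)) != NULL) {
>>             const char* name = entry->d_name;
>>             size_t len = strlen(name);
>>             if (len < 2 || strcmp(name + len - 2, ".o") != 0)
>>                 continue;                              // only object files
>>
>>             char path[1024];
>>             snprintf(path, sizeof(path), "/ftp/2024/%s", name);
>>             struct stat st;
>>             if (stat(path, &st) == 0 && st.st_mtime > since)
>>                 printf("edit %s\n", name);
>>         }
>>         closedir(dir);
>>         return 0;
>>     }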
>> The ftp servers will need to have a constant link to cvs and compile
>> the new sources, plus some kind of scheduling utility.
>> We can make different directories for different gcc flags or, like in
>> the example, different servers.
>> 
>> This is basically the feature's top-level design. It shouldn't be very
>> complex to implement; most of the code already exists as available
>> tools.
>>
>> What do you think? Does it make sense?
>> Thanks in advance, Kobi.

