[haiku-commits] Re: haiku: hrev46860 - in src/kits: shared package/hpkg package/hpkg/v1

  • From: "Adrien Destugues" <pulkomandy@xxxxxxxxxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Mon, 17 Feb 2014 16:20:52 +0000

>>>> Sorry, I thought I had replied to this already.
>>>> I'm unsure how the wrapper can be done in a safe way, as you can't (by 
>>>> design) assume it's possible
>>>> to use a BDataIO for atomic writes. What would the wrapper do if it can 
>>>> only write part of the data
>>>> to the underlying BDataIO? should it block until the remaining part can be 
>>>> written? Should it
>>>> return an error, and risk getting the following Write calls out of sync?
>>> 
>>> I don't see why the writes have to be atomic. What I'm saying is that 
>>> the writes should be complete and fail otherwise. 

Ok, I think I understand the difference now.


>> Huh? ATM ZlibDecompressor does not provide the number of bytes written 
>> either *and* you are masquerading incomplete writes as a success!

ZlibDecompressor doesn't have to provide this. As you said, all the code was 
designed for atomic writes (either everything is written, or nothing is). I 
don't see how this can be implemented safely on top of the BDataIO interface: 
once you have called BDataIO::Write() and it returns fewer bytes than you 
expected, you can't get your bytes back. So yes, the wrapper could try to 
write the remaining bytes in a loop, but if that fails, all it can do is 
return an error code. The caller then has no idea whether nothing was written 
at all, or some of the data was.
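To make the problem concrete, here is a minimal sketch of such a write loop. The DataIO type below is a hypothetical stand-in for BDataIO (not the actual Haiku class), just enough to show that once the loop fails mid-way, the caller can't tell how much data actually went through:

```cpp
#include <cassert>
#include <cerrno>
#include <cstddef>
#include <cstring>

// Hypothetical stand-in for BDataIO: Write() may consume fewer bytes
// than requested (a "short write"), as BDataIO allows by design.
struct DataIO {
	virtual ~DataIO() {}
	// Returns the number of bytes written, or a negative error code.
	virtual long Write(const void* buffer, size_t size) = 0;
};

// Wrapper that loops until everything is written or an error occurs.
// On error it can only report the failure; the caller cannot tell how
// much of the data actually reached the underlying stream.
long write_fully(DataIO& io, const void* buffer, size_t size)
{
	const char* p = static_cast<const char*>(buffer);
	size_t remaining = size;
	while (remaining > 0) {
		long written = io.Write(p, remaining);
		if (written < 0)
			return written;		// error; partial data may already be out
		if (written == 0)
			return -EIO;		// no progress: treat as an error
		p += written;
		remaining -= static_cast<size_t>(written);
	}
	return static_cast<long>(size);	// complete write
}
```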

Is it always safe to ignore this? I don't know; I guess it depends on both 
the caller and the BDataIO being used. In some cases the situation is 
recoverable by trying again later (that's the case with the BDynamicBuffer 
the HttpRequest code currently uses). Without that, the whole download would 
have to be restarted.

So, the write loop could be factored into the wrapper. But ZlibDecompressor 
would still have to check whether the loop managed to write all the data, and 
try again later (maybe at the next decompression chunk) if it didn't. I don't 
think BDataOutput would be easy to use this way, with a "write as much as you 
can" policy. Yes, we could have a method return the number of bytes actually 
written, keep track of that in the caller, and get this working, but the code 
wouldn't be much simpler for the caller. Or we could try to do all of this 
inside the wrapper, but that would probably mean accepting all incoming 
writes, storing them in an internal buffer, and trying to flush them later 
on, which makes it difficult to handle non-recoverable errors in time.
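The buffering variant could look roughly like this. Everything here is illustrative (ChunkWriter, DataSink and their methods are made-up names, not existing Haiku API): unwritten bytes stay in a pending buffer and are retried before each new chunk, so a short write becomes recoverable, but a fatal sink error only surfaces at the next flush attempt:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sink whose Write() may accept only part of the data,
// mirroring the "write as much as you can" policy discussed above.
struct DataSink {
	virtual ~DataSink() {}
	// Returns the number of bytes accepted, or a negative error code.
	virtual long Write(const void* buffer, size_t size) = 0;
};

// Caller-side bookkeeping: unwritten bytes are kept in a pending buffer
// and retried on the next chunk, so a short write is a recoverable
// "warning" rather than a fatal error.
class ChunkWriter {
public:
	explicit ChunkWriter(DataSink& sink) : fSink(sink) {}

	// Queue a new chunk and try to flush everything pending.
	// Returns 0 on success (flushed or queued for later), or a
	// negative error code from the sink.
	long WriteChunk(const void* data, size_t size)
	{
		const char* p = static_cast<const char*>(data);
		fPending.insert(fPending.end(), p, p + size);
		return Flush();
	}

	long Flush()
	{
		while (!fPending.empty()) {
			long written = fSink.Write(fPending.data(), fPending.size());
			if (written < 0)
				return written;		// non-recoverable error
			if (written == 0)
				return 0;			// sink is full: retry later
			fPending.erase(fPending.begin(), fPending.begin() + written);
		}
		return 0;
	}

	size_t PendingBytes() const { return fPending.size(); }

private:
	DataSink&			fSink;
	std::vector<char>	fPending;
};
```

Note how Flush() cannot distinguish "the sink is temporarily full" from "the sink will never accept more data" without help from the error code, which is exactly the timing problem with non-recoverable errors.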

What we need is some kind of "warning" state for the stream: not a complete 
success, but not an irrecoverable error either. BDataIO provides this by 
allowing partial writes. Maybe the EAGAIN error code could serve that purpose 
with BDataOutput, but it would still need special handling from the caller. 
Even so, it might allow moving some of the code into the wrapper.
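A caller using that convention would need a three-way branch rather than the usual success/failure check. This is just a sketch of the idea (the names are illustrative, not real Haiku API): a negative EAGAIN means "nothing was consumed, keep the data and retry later", while any other negative status is fatal:

```cpp
#include <cerrno>

// Hypothetical outcome classification for an all-or-nothing write that
// may report EAGAIN as a recoverable "warning" state.
enum WriteOutcome { kWritten, kRetryLater, kFailed };

WriteOutcome ClassifyWriteStatus(long status)
{
	if (status >= 0)
		return kWritten;	// the whole buffer was accepted
	if (status == -EAGAIN)
		return kRetryLater;	// recoverable "warning": keep the data
	return kFailed;			// irrecoverable: abort the stream
}
```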

-- 
Adrien.