[haiku-commits] Re: haiku: hrev46866 - src/kits/network/libnetapi

  • From: Stephan Aßmus <superstippi@xxxxxx>
  • To: haiku-commits@xxxxxxxxxxxxx
  • Date: Thu, 13 Feb 2014 10:36:49 +0100

Hi,

On 13.02.2014 10:15, Adrien Destugues wrote:
From what I understand, dynamically sized stack-allocated arrays are a GCC
extension anyway and should be avoided. So why do you not use
BStackOrHeapArray in the second chunk of the patch? I haven't read the code
in context and don't know if it would suffer from the same potential
overflow problem, but the array is of dynamic size in any case, so you
should switch it there as well.

The gcc extension allows dynamic arrays of non-POD types. In this case the
type is char, which is a POD, so there is no such problem.
The problem we have is allocating a big chunk of memory on the stack and then
accessing it in a non-sequential way (for example, starting to write it from
the end). Our stack guard doesn't catch this if the write lands past the
protection page that sits above the stack.

I've googled a bit, and it seems the feature is called VLA (Variable Length
Array); it is part of the C99 standard, but not of the C++ standard. It
doesn't seem to have anything to do with PODs as such.
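
For reference, the kind of replacement I meant for the second chunk would
look roughly like this (just a sketch -- the function name and the 4 KiB
threshold are made up, and I'm quoting BStackOrHeapArray's interface and
header from memory):

#include <string.h>

#include <StackOrHeapArray.h>


void
ProcessChunk(size_t chunkSize)
{
	// A VLA ("char buffer[chunkSize];") would put the whole buffer on the
	// stack no matter how big chunkSize is. BStackOrHeapArray keeps at
	// most 4096 bytes on the stack and falls back to the heap for
	// anything larger.
	BStackOrHeapArray<char, 4096> buffer(chunkSize);
	if (!buffer.IsValid())
		return;	// heap allocation failed

	// The object converts implicitly to char*, so the existing code can
	// keep using it mostly unchanged.
	memset(buffer, 0, chunkSize);
}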

I think the second case is fine, because the data is written to the buffer in
order and the stack will grow. But it seems zlib decompression doesn't write
in order, which leads to the crash.

This does not make sense to me at all. You simply don't know what size the
decompressor will return at this point in the code, and you should not make
any assumptions. Therefore you may end up allocating an array larger than the
remaining stack space. (That being said, technically even the fixed stack
size of the BStackOrHeapArray may not fit anymore...)

I'm not completely satisfied with the current solution, as it involves too
many copies of the data (from the socket to a BNetBuffer, then to a temporary
buffer sent to zlib, then to a DynamicBuffer, then again to a temporary
buffer for sending to the listener). We should refactor the code to avoid
this, probably using a pipeline of BDataIO subclasses to process everything
in a stream.

Yeah, it does look a bit confusing. I guess it's OK to have the code grow a bit first, and then rewrite it once all requirements are known.
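
Once we do get to that rewrite, I imagine one of the pipeline stages could
look roughly like this (only a sketch; the class name and the zlib handling
are made up here, not taken from the actual code, and the BDataIO signatures
are quoted from memory):

#include <string.h>

#include <DataIO.h>
#include <SupportDefs.h>

#include <zlib.h>


class InflatingDataIO : public BDataIO {
public:
	InflatingDataIO(BDataIO& sink)
		:
		fSink(sink)
	{
		memset(&fStream, 0, sizeof(fStream));
		inflateInit(&fStream);
	}

	virtual ~InflatingDataIO()
	{
		inflateEnd(&fStream);
	}

	// Compressed data is written here; inflated data is passed on to fSink.
	virtual ssize_t Write(const void* buffer, size_t size)
	{
		fStream.next_in = (Bytef*)buffer;
		fStream.avail_in = size;

		uint8 out[4096];
		while (fStream.avail_in > 0) {
			fStream.next_out = out;
			fStream.avail_out = sizeof(out);

			int result = inflate(&fStream, Z_NO_FLUSH);
			if (result != Z_OK && result != Z_STREAM_END)
				return B_ERROR;

			ssize_t produced = sizeof(out) - fStream.avail_out;
			if (produced > 0)
				fSink.Write(out, produced);

			if (result == Z_STREAM_END)
				break;
		}
		return size;
	}

	virtual ssize_t Read(void* buffer, size_t size)
	{
		return B_NOT_SUPPORTED;
	}

private:
	BDataIO&	fSink;
	z_stream	fStream;
};

Chained between a stage that reads from the socket and one that writes to the
listener, something along these lines should get rid of most of the
intermediate copies.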

Best regards,
-Stephan


