> As an additional twist I've tried to implement a simple pool type
> allocator for fixed size blocks. Basically this creates an area and
> then assigns bits of it to the requests for data, or stores freed bits
> on a free block list. This should be much quicker than malloc/free,
> and speed will be of the essence in the mbuf code.
> Does that answer people's questions?

In addition, for outgoing packets it would make sense to leave space at
the beginning of a buffer to prevent memmove()s. When we have buffers of
2048 bytes anyway (and that's a good size IMHO) we could have a small
structure at the beginning which manages the free space in that block
(simply start and end offsets). I haven't looked into Gigabit Ethernet
etc. - does the packet size stay the same for it?

> Well, there is one big question remaining: where do we switch from
> flat storage to mbufs? The logical place is in the driver for the
> card, but we're not planning on touching them straight away, so more
> reasonably it will partly depend on how we interface with the cards.
> I'd envisage at present it'll be done in an "if" layer that sits
> between the encapsulation and the network card.

There should definitely be a layer between the card driver and the rest
of the stack - that way, we can change the underlying driver structure
to directly support mbufs.

I have also thought about a zero-copy network stack. This isn't possible
with the standard POSIX API (as the internal buffers have to be copied
into/back from the buffers provided by the read/write functions), but it
would be possible to let the C++ Net API pass the internal buffers
directly to the application (it could be a special mode). The drawback
is that the buffers may stay in user-land for a long time, and we'd have
to allocate new ones (enlarge our pool) - which could be a possible
stability problem.

Adios...
	Axel.
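
P.S.: the pool allocator quoted above could be sketched roughly like
this - names and details are my own guesses, not the actual
implementation. The idea is just one malloc'd area carved into
fixed-size blocks, with the free list threaded through the free blocks
themselves, so Allocate()/Free() are a couple of pointer operations:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical fixed-size block pool: one malloc'd area is carved into
// equally sized blocks, and a free list is threaded through the free
// blocks themselves (the first pointer-sized bytes of a free block hold
// the link to the next free block).
class BlockPool {
public:
	BlockPool(size_t blockSize, size_t blockCount)
		: fBlockSize(blockSize < sizeof(void*) ? sizeof(void*) : blockSize)
	{
		fArea = static_cast<char*>(std::malloc(fBlockSize * blockCount));
		if (fArea == nullptr)
			throw std::bad_alloc();

		// Put every block on the free list.
		fFreeList = nullptr;
		for (size_t i = 0; i < blockCount; i++) {
			void* block = fArea + i * fBlockSize;
			*static_cast<void**>(block) = fFreeList;
			fFreeList = block;
		}
	}

	~BlockPool() { std::free(fArea); }

	void* Allocate()
	{
		if (fFreeList == nullptr)
			return nullptr;	// pool exhausted - the caller could enlarge it
		void* block = fFreeList;
		fFreeList = *static_cast<void**>(block);
		return block;
	}

	void Free(void* block)
	{
		// Freed blocks just go back on the free list - no malloc/free.
		*static_cast<void**>(block) = fFreeList;
		fFreeList = block;
	}

private:
	size_t	fBlockSize;
	char*	fArea;
	void*	fFreeList;
};
```

Since both operations only push/pop a singly linked list, they should
indeed beat general-purpose malloc/free for the hot mbuf path.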
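
P.P.S.: the "small structure at the beginning which manages the free
space" could look like the following sketch (again, all names and the
exact layout are assumptions on my part). The point is that prepending a
protocol header only moves the start offset backwards instead of
memmove()ing the payload:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

const size_t kBufferSize = 2048;

// Assumed layout: a tiny header at the start of each 2048 byte buffer
// tracks the used region via start/end offsets into data[].
struct net_buffer {
	uint16_t	start;	// offset of the first used byte
	uint16_t	end;	// offset one past the last used byte
	uint8_t		data[kBufferSize - 2 * sizeof(uint16_t)];
};

// Reserve headroom for the headers that will be prepended later.
void init_buffer(net_buffer* buffer, size_t headroom)
{
	buffer->start = (uint16_t)headroom;
	buffer->end = (uint16_t)headroom;
}

// Append payload at the end of the used region.
bool append(net_buffer* buffer, const void* data, size_t size)
{
	if (buffer->end + size > sizeof(buffer->data))
		return false;
	memcpy(buffer->data + buffer->end, data, size);
	buffer->end += (uint16_t)size;
	return true;
}

// Prepend a header by moving the start offset backwards - no memmove().
bool prepend(net_buffer* buffer, const void* header, size_t size)
{
	if (buffer->start < size)
		return false;
	buffer->start -= (uint16_t)size;
	memcpy(buffer->data + buffer->start, header, size);
	return true;
}
```

So the transport layer appends the payload with some headroom, and each
lower layer prepends its header for free.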