[openbeosnetteam] Re: mbuf's

  • From: "David Reid" <dreid@xxxxxxxxxxxx>
  • To: <openbeosnetteam@xxxxxxxxxxxxx>
  • Date: Fri, 8 Feb 2002 16:32:53 -0000

I'll try to answer...

> > As an additional twist I've tried to implement a simple pool type
> > allocator for fixed size blocks.  Basically this creates an area and
> > then assigns bits of it to the requests for data, or stores freed bits
> > on a free block list.  This should be much quicker than malloc/free,
> > and speed will be of the essence in the mbuf code.
> > Does that answer people's questions?
>
> In addition, for outgoing packets it would make sense to leave space at
> the beginning of a buffer to prevent memmove()s.
> When we have buffers of 2048 bytes anyway (and that's a good size IMHO)
> we could have a small structure at the beginning which manages the free
> space in that block (simply start and end offsets).
> I haven't looked into Gigabit Ethernet etc. - does the packet size stay
> the same for them?

Well, traditionally there is logic for this, but of course we can adjust as
required.
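
To make that concrete, here's a rough sketch of the pool allocator plus
the per-block header idea (2048-byte blocks, start/end offsets, leading
space for prepended headers).  None of these names exist yet - it's just
what I have in mind, built on a BeOS area:

    #include <OS.h>        /* create_area(), B_PAGE_SIZE, ... */
    #include <stddef.h>

    #define POOL_BLOCK_SIZE 2048   /* the block size discussed above */
    #define POOL_LEAD_SPACE 128    /* room to prepend protocol headers */

    /* Small header at the start of each block, managing the free space
     * within it, so headers can be prepended without memmove()s.
     * Layout is hypothetical. */
    typedef struct pool_block {
        struct pool_block *next;   /* free list linkage */
        uint16 start;              /* offset of first used byte */
        uint16 end;                /* offset past the last used byte */
    } pool_block;

    typedef struct pool {
        area_id     area;
        pool_block *free_list;
    } pool;

    /* Create one area and carve it into fixed-size blocks, all of
     * which start out on the free list. */
    static status_t pool_init(pool *p, size_t nblocks)
    {
        uint8 *base;
        size_t size = nblocks * POOL_BLOCK_SIZE;
        size_t i;

        /* create_area() wants a page multiple */
        size = (size + B_PAGE_SIZE - 1) & ~(size_t)(B_PAGE_SIZE - 1);

        p->area = create_area("net buffer pool", (void **)&base,
                              B_ANY_ADDRESS, size, B_FULL_LOCK,
                              B_READ_AREA | B_WRITE_AREA);
        if (p->area < B_OK)
            return p->area;

        p->free_list = NULL;
        for (i = 0; i + POOL_BLOCK_SIZE <= size; i += POOL_BLOCK_SIZE) {
            pool_block *b = (pool_block *)(base + i);
            b->next = p->free_list;
            p->free_list = b;
        }
        return B_OK;
    }

    /* O(1) get/put - just pop/push the free list.  (No locking shown;
     * the real thing needs a benaphore or spinlock here.) */
    static pool_block *pool_get(pool *p)
    {
        pool_block *b = p->free_list;
        if (b != NULL) {
            p->free_list = b->next;
            /* leave leading space so headers can be prepended by
             * moving 'start' down instead of memmove()ing the data */
            b->start = b->end = sizeof(pool_block) + POOL_LEAD_SPACE;
        }
        return b;
    }

    static void pool_put(pool *p, pool_block *b)
    {
        b->next = p->free_list;
        p->free_list = b;
    }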

I mean, for instance, there's no reason why the cards (some of which use
rings of buffers) can't have a pointer to an mbuf with the page attached.
That way we simply copy directly into the page and pass the mbuf into the
stack.  That's how Linux and BSD tend to do things.  It has some
advantages, but requires rewriting the drivers to take advantage of it,
so I'd been hoping to wait until we had a working stack before tackling
that as an "optimisation".
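
In sketch form (every name here is invented - we don't have any of this
yet), the receive side of such a driver would look something like:

    #include <stddef.h>

    /* Minimal stand-ins; the real mbuf and stack entry point don't
     * exist yet, so these declarations are purely illustrative. */
    struct mbuf { void *m_data; size_t m_len; };
    extern struct mbuf *m_get(void);         /* pool allocation */
    extern void if_input(struct mbuf *m);    /* entry into the stack */

    /* One ring slot: the card DMAs straight into the attached mbuf's
     * page, so a received frame never needs copying. */
    struct rx_slot {
        struct mbuf *m;
    };

    static void rx_complete(struct rx_slot *slot, size_t frame_len)
    {
        struct mbuf *m = slot->m;

        m->m_len = frame_len;
        if_input(m);     /* the stack takes ownership, no copy */

        /* rearm the slot with a fresh mbuf so the card never writes
         * into a buffer the stack still owns... */
        slot->m = m_get();
        /* ...and repoint the ring descriptor at slot->m->m_data */
    }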

> > Well, there is one big question remaining: where do we switch from
> > flat storage to mbufs?  The logical place is in the driver for the
> > card, but we're not planning on touching them straight away, so more
> > reasonably it will partly depend on how we interface with the cards.
> > I'd envisage at present it'll be done in an "if" layer that sits
> > between the encapsulation and the network card.
>
> There should definitely be a layer between the card driver and the rest
> of the stack - that way, we can change the underlying driver structure
> to directly support mbufs.

Exactly.
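
I'm picturing something BSD-ish: a per-interface struct of function
pointers that each driver (or, for now, a shim over the existing
flat-buffer drivers) fills in.  Again just a sketch, and none of these
names exist yet:

    #include <stddef.h>
    #include <string.h>

    struct mbuf;                             /* from the mbuf code */
    extern struct mbuf *m_get(void);         /* hypothetical pool alloc */
    extern void *m_data(struct mbuf *m);     /* hypothetical data pointer */

    /* Per-interface structure sitting between the card driver and the
     * rest of the stack. */
    typedef struct ifnet {
        char name[32];
        void (*input)(struct ifnet *ifp, struct mbuf *m);  /* driver -> stack */
        int  (*output)(struct ifnet *ifp, struct mbuf *m); /* stack -> driver */
        void *driver_cookie;                 /* driver's private state */
    } ifnet;

    /* The flat-to-mbuf boundary: today's drivers hand us a plain
     * buffer and we copy it into an mbuf here (length bookkeeping
     * omitted).  A rewritten driver could skip this and call
     * ifp->input() with an mbuf directly. */
    void if_receive_flat(ifnet *ifp, const void *data, size_t len)
    {
        struct mbuf *m = m_get();
        memcpy(m_data(m), data, len);   /* the one copy we accept, for now */
        ifp->input(ifp, m);
    }

The nice part is that the copy lives in exactly one function, so moving
a driver over to native mbufs later doesn't touch the stack at all.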

> I have also thought about a zero-copy network stack - this isn't
> possible with the standard POSIX API (as the internal buffers have to
> be copied into/back from the buffers provided by the read/write
> functions), but it would be possible to let the C++ Net API directly
> pass the internal buffers to the application (could be a special mode).
> The drawback is that the buffers may stay in user-land for a long time,
> and we'd have to allocate new ones (enlarge our pool) - which could be
> a stability problem.

Let's be honest.  We are SO far away from having a working stack that,
while these discussions are interesting, I'm more inclined to say, "let's
start getting something that works and worry about some of this later".

I mean how many of us have written a net stack from scratch before? Hands
up? No-one?  OK then, let's be honest and grown-up enough to admit that we
don't have all the answers. We'll get them by actually doing the work and
writing code. The stuff I did on newos was invaluable and I know a lot more
now than I did (in some cases way too much!) and I fully expect that process
to continue.  Will we get it right first time? Haha. Yeah right. So, let's
get something that we "think" will work and run with that.  Like everything
in software, once it's been done once, doing it the second/third/etc. time
gets easier and easier...

How about this?

Each card gets an rx and a tx queue, each managed by a thread.  The thread
basically just gets data from the card/stack and passes it on to the
stack/card, converting to/from mbufs as required.  From the exit/entry
point of that function up to the final handoff back to the user, it's
mbufs all the way.
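
In BeOS terms, roughly this (card_read() and stack_input() are made-up
placeholders for whatever the driver glue and stack entry end up being):

    #include <OS.h>

    struct ifnet;                              /* the "if" layer above */
    struct mbuf;
    extern struct mbuf *card_read(struct ifnet *ifp);  /* block until a
                                                          frame arrives,
                                                          return an mbuf */
    extern void stack_input(struct mbuf *m);   /* hand it to the stack */

    /* One rx thread per card: pull frames off the card - card_read()
     * hides whether that involves a copy or not - and push them up. */
    static int32 rx_thread(void *data)
    {
        struct ifnet *ifp = (struct ifnet *)data;
        for (;;)
            stack_input(card_read(ifp));
        return 0;
    }

    static thread_id start_rx_thread(struct ifnet *ifp)
    {
        thread_id tid = spawn_thread(rx_thread, "net rx",
                                     B_NORMAL_PRIORITY, ifp);
        if (tid >= B_OK)
            resume_thread(tid);
        return tid;
    }

The tx side is the mirror image: a thread blocking on the queue the
stack writes into, feeding the card.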

Works for me.

david


