> Solution to fix that issue needs more consideration.

Why do we not fix it by finishing your new netstack together? ;)
I would stall PPP until the netstack is usable. The current netstack is not
usable either, so I can only help finish one of the two netstacks in order
to finish and test PPP.

> I don't like the idea of a public variable exported by the kernel module
> API. Beside that issue, these are global variables aiming to be used *per*
> module. The max_xxxx variables won't have the same value for each network
> module using mbuffers...

Now I see the big problem with that. :(
As nobody really knows how our mbuf netstack works, there is only one
solution (I already named it ;).

> Did I say I don't like *mbuf*, by the way!?
> The BSD/Linux stack is monolithic, where public/non-public symbols aren't
> an issue. With our modular design, we can't do that; we need a public way
> to set and get these maximum header sizes. Per module using mbuf.o
> services, as a bonus, since not every network module will share the same
> maximum header sizes... :-(

We have an alternative in our repository. May I help? What needs to be done
to get ethernet support? OTOH, I cannot test the ethernet interface because
I do not have a network (only PPPoE). Would loopback be sufficient? :)

> > I think I do not understand. :)
> > I thought that mbinit() lives in the core. There is no copy of this
> > initialization living in _each_ network module like tcp, etc.
> > Could you please explain to me why this will work?
>
> You perfectly understood; I am the one replying too fast on this topic
> without enough consideration. Sorry.

No problem.

> > Should not the declarations go into a private header in the core's
> > directory?
>
> These variables are, currently ( :-( ), required by mbuf.o and by the
> non-core network modules tcp and ipv4. I guess that's why they ended up
> there in the first place. But it's a bug, clearly: the core module

Remove the bug (= the core module ;).

> > No to the first question?
:))

> Hum... I need to look at this issue better before being able to take a
> position on this question ;-)
> But whatever it will be, these maximum values should be handled better
> than through this mess.

IMHO, it would be unnecessary work to solve that bug. Let's get the new
netstack working. What is the new netstack's status?

Waldemar