Sitting in a cyber cafe in Philadelphia at the moment. Nice sunny day and I can feel lunch coming on in a short while (sushi!)

>> We can now build ourselves as a kernel driver. [...]
>>
>> This is the start of allowing ourselves to be very cool and do
>> all sorts of stuff, as well as simplifying our i/o with regard to
>> running as a process. Developing our own IPC so we could be
>> running the net stack in one team while other teams used the
>> services was looking like a PITA and would have been a performance
>> drag in the extreme. So, the solution?
>>
>> Basically we'll build the stack in a series of pieces that live in
>> the kernel.
>
> I can't agree more with this shift in stack design :-)

Well, let's not get too excited yet - we don't have it working, and as someone pointed out it took Be about a year to get this working correctly...

>> - socket driver - this will be built mainly from code that is already
>> in place in net-server/net_server.c, albeit it'll be removed from
>> there and added to a separate file that will build a simple socket
>> driver.
>> This is responsible for creating a /dev/net/socket entry and
>> handling requests to/from that socket.
>
> Hum, does nobody want to use my submitted (and committed to CVS) driver
> net_kit/source/kernel/net_stack_driver.c?!
> It does exactly what we need: a device driver wrapper between clients
> (via libnet.so) and our future kernel-living network stack.

Philippe, no offense, but I actually want to rip out all sorts of code and reorganise what we have anyway! Given that, your code is very useful, but I can't help thinking that a totally new, clean codebase will be cool and should allow us to have the best possible implementation.

> Just modify the NET_STACK_DRIVER_PATH constant in
> net_kit/source/server/include/stack_driver.h to have it publish the
> /dev/net/socket entry instead of /dev/net/stack...
>
> The only thing missing there is glue code to pass the calls it
> receives to the *core* network stack module.
> More on this "core network kernel module" point below...

This is trickier than it sounds, really.

>> When started for the first time it will load the net-server kernel
>> module.
>
> What about calling it the network stack kernel module(s) now, as the
> concept of "servers" may be misinterpreted by [Open]BeOS community
> members used to the well-known userland servers concept?

I've said repeatedly that I don't really care about naming at the moment, although given the stage we're at, maybe I should :) Why not simply call it net_stack?

>> - kernel modules - these will be everything else.
>> All the existing modules need to be rewritten to allow them to
>> be kernel modules and then we need to decide where they should be
>> installed.
>
> I suggest we put them here:
> /boot/home/config/add-ons/kernel/network/*

Hmmm, yes, this should work, but maybe net_stack to differentiate between the stack and drivers? Or maybe we have

  /network
  /network/stack
  /network/drivers

>
> get_module() and open_module_list() always look for a module_name in
> this order:
> /boot/home/config/add-ons/kernel/*
> /boot/beos/system/add-ons/kernel/*
>
> For example, get_module("network/core/v1") will look for a file named
>
> /boot/home/config/add-ons/kernel/network
> /boot/home/config/add-ons/kernel/network/core
> /boot/home/config/add-ons/kernel/network/core/v1
> /boot/beos/system/add-ons/kernel/network
> /boot/beos/system/add-ons/kernel/network/core
> /boot/beos/system/add-ons/kernel/network/core/v1
>
> with a "modules" data symbol publishing a "network/core/v1" module_info
> name.

I've been reading up on this and you're right.

>
> On a side note, this would also prevent collisions with the existing
> /boot/beos/system/add-ons/kernel/network/* on BONE-based BeOS systems:
> as all BONE modules are named bone_xxxx, we should not run into module
> name collisions as long as we don't prefix our modules with "bone_".

Namespace collisions suck.
>
> We can also create sub-directories for specific network sub-module
> types (interface, protocol, framing, whatever).
> BTW, I doubt we could have the same module interface (public functions)
> for all of our module types (protocols, interfaces, core module).

This is also something that appeals to me, as it should make the modules better and give us a lot more flexibility - something the way we were doing it didn't give us.

>> - What functions do we need in the core module? Basically what do we
>> want to export and how does the socket driver talk to the core
>> module? We need ioctl for sure as that's how most of the functions
>> work, and we'll need an init/uninit (or do we want to call it
>> start/stop) but can anyone think of other things we need?
>
> Here comes my so-called "core network kernel module" point:

Erm, how did we jump topics?

>
> The /dev/net/socket device driver doesn't need to do more than simply
> load the network/core kernel module at init_driver() time,
> pass each call it receives on any socket to the corresponding functions
> exported by the core module, and unload the core module at
> uninit_driver() time (which would do nothing, as the core module will
> ask to be kept loaded via the B_KEEP_LOADED module_info flag).

The driver isn't the core I was referring to. What you outline below is exactly what I was thinking, but there's one point I'm hazy on and maybe you can help with...

I call sendto(...) in my app. This passes a pointer to a buffer for the data. I wrap all the data up in a struct and pass the pointer to the struct into ioctl - how does the kernel module get the data? From my reading it seems like the kernel can't actually read userland memory?

Also, if I call read/write or any of that family there are actually 2 return values: how much data we read/wrote, and an error code. How do we handle those, as the kernel knows the errors but the ioctl simply returns a value saying whether it succeeded or not? Can you throw some light on this?
I have a possible solution in mind if this is an issue, but I'd like to hear from someone who knows this stuff :)

>
> The *link* between the driver
> and the core module would be an opaque thing, which I've called a
> net_endpoint pointer in my current
> net_kit/source/kernel/net_stack_driver.c implementation:

<snip>

I'm concerned about using the term endpoint as it's too much like the C++ types we'll be using. Otherwise I think you're on the right lines for sure.

> Same idea for all device driver hooks: pass them
> through to the corresponding network core module calls...

Yep.

>
> This way, we isolate the device driver code from the network core
> module code, with the net_endpoint * acting as the intermediary between
> the two. For the driver it's an opaque thing; for the network core
> module, it could be a struct socket *.
>
> In this pseudo-code, I use struct sock_fd * because I can't directly
> use the struct socket *: for one to be valid, socreate() requires
> that the family/type/protocol are valid, which I don't have in my
> endpoint() code. Or did I miss a create_uninited_socket_struct()
> somewhere?

OK. Noted with thanks.

>
> But using the sock_fd structure adds a useless layer, as the devfs
> already handles the file descriptor <-> socket pairing for us, with
> the help of the /dev/net/socket driver "cookies"...
>
> With this mechanism, there's no need to work on the driver code ever. ;-)

If we have to alter the driver code, so what? Not a concern, I think.

>
>> Todo
>>
>> Well, the above really! First things to be done I think are as
>> follows...
>> 1. create the new socket driver and get it working.
>> 2. get the core code we have working as a kernel module
>> 3. start getting the other modules working as kernel modules and
>> figure out where to install them
>
> Absolutely.
>
> Points 1 & 2: I can write a network core module skeleton tonight if
> you like, but as I'm not yet fluent with the current BSD-based
> net_kit/source/server/* code, I'm not sure I can, well, glue it all
> together to make the stack really work... not in that short a time. :-(
>
> For point 3, I guess the ethernet module is the first candidate.
> We'll have to define the network kernel sub-module interfaces.
> As I've previously written, I doubt we can have a generic interface
> for all kinds of network sub-modules.
> For example, interface sub-modules would have some up(), down(),
> input() and output(), but no resolve()/lookup(). For protocol
> sub-modules, up() and down() don't mean anything... well

Think this through and look at our current code. This is going to get messy and I'm not sure it'll be buildable :( Maybe someone can help.

If I'm a sub-module and I want to call mget(), then the allocation should take place at the "core" level, not in the module making the call. Also, if I want to free memory, that's a core function. So the core needs to export all sorts of extra functions. Keeping track of all this is likely to be a nightmare and I'm not sure it's possible, though I hope it is. Look also at the route code.

I think we just need to be clear that once a module has a valid pointer to a structure it can mess with it, but getting/freeing the pointer should be done centrally. Maybe.

>
>> get ready for some nasty debugging experience and dig out the old
>> serial debugging cable as this is a truly useful tool to use.
>
> Hey, there is another way:
> http://www.betips.net/chunga.php?ID=540
>
> Hint: sometimes tail -f blocks forever and you don't see any new
> messages when in fact there are some. Just stop it and re-run it.

Cool. I do most of my development via telnet and putty on a Windows 2K machine, so the serial debugging is better for me :)

david