[openbeosnetteam] skeleton sockets fd support driver

  • From: philippe.houdoin@xxxxxxx
  • To: openbeosnetteam@xxxxxxxxxxxxx
  • Date: Wed, 27 Feb 2002 15:51:55 +0100 (MET)

Hi all,

Yesterday, I was bored at work. :-)
So I started writing some code to shed some light on last week's ML thread 
about socket file descriptor support, the client-side library, and the 
stack-side interface.

Here is the result:
http://philippe.houdoin.free.fr/phil/beos/openbeos/net_kit.zip

This archive contains a very light libnet.so and a skeleton /dev/net/stack 
driver, as a proof of concept for fd socket support.
It compiles and runs on my BONified BeOS 5.0.3 system.

Don't hold your breath, it does nothing useful ;-)
Reason: there's no driver <-> net stack interface yet, 
so the sockets returned by this libnet.so::socket() are all, well, 
useless. 
It only shows one way fd support for sockets could 
be implemented under [Open]BeOS, nothing more.
Not that I know of another way...

Ugly ASCII diagram:

                    _______________
                   |               |
                   |   libnet.so   |
                   |_______________|
user land                  |
- - - - - - - - - - - - - -+- - - - - - - - - - - - - - 
kernel land                |
                   ________|_______
                  |                |
                  | /dev/net/stack |
                  |________________|


This lib/driver pair is heavily inspired by the BONE design, I confess.
Well, in fact, the ASCII diagram is too :-)

a) libnet.so exports the net_kit API to userland clients:

- the BSD sockets API: socket(), bind(), connect(), listen(), accept(), 
etc. Bonus: sockets are file descriptors too (welcome to inetd, 
select() polling, etc.)
- the BNet* classes from the BeOS Network Kit
- any new API (C++ or plain C) we would find cool to add
- a stack configuration/querying API (for the preferences app, an 
ifconfig-like tool, etc.)

BTW, for compatibility with the previous [Open]BeOS network API, BONE's 
libsocket.so and libbind.so, as well as libbnetapi.so, could easily be 
symlinked to this all-in-one libnet.so...

BTW bis, for kernel network needs, like network filesystems (nfs, cifs, 
etc.), the socket API should also be exported by a kernel *module*, as 
drivers/modules can't link against libraries...
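
Something like this, just to fix ideas (the module name, struct, and 
members are all invented for this sketch, not an existing API):

/* Hypothetical kernel socket module interface: all names here are
 * made up for illustration. In-kernel clients (an nfs add-on, say)
 * would get_module() this instead of linking against libnet.so. */
#include <module.h>
#include <sys/socket.h>

#define NET_SOCKET_MODULE_NAME "network/socket/v1"

struct net_socket_module_info {
	module_info info;	/* standard BeOS module header */

	status_t (*socket)(int family, int type, int protocol, void **endpoint);
	status_t (*bind)(void *endpoint, const struct sockaddr *addr, int addrlen);
	status_t (*connect)(void *endpoint, const struct sockaddr *addr, int addrlen);
	status_t (*send)(void *endpoint, const void *data, size_t *length, int flags);
	status_t (*recv)(void *endpoint, void *data, size_t *length, int flags);
	status_t (*close)(void *endpoint);
};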

b) the net_stack_driver, published under /dev/net/stack (or wherever, 
only libnet.so needs to know where it lives), offers the file descriptor 
support: for each open("/dev/net/stack"), it creates a new net_endpoint. 

libnet.so uses this mechanism to create a valid, file-descriptorized 
socket, and to do the real job behind connect(), etc...
libnet.so will also use the driver to offer a configuration API, via 
some special socket file descriptor... 
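
To fix ideas, here is roughly how libnet.so's socket() could sit on top 
of the driver. The NET_STACK_SOCKET opcode and the socket_args struct 
are invented for this sketch, not the actual names from the zip:

/* Sketch only: how libnet.so's socket() could wrap the driver. */
#include <fcntl.h>
#include <unistd.h>	/* on BeOS, ioctl() is declared here */

#define NET_STACK_DRIVER_PATH	"/dev/net/stack"
#define NET_STACK_SOCKET		0x2001	/* hypothetical ioctl opcode */

struct socket_args {
	int family;
	int type;
	int protocol;
};

int socket(int family, int type, int protocol)
{
	struct socket_args args;

	/* Each open() makes the driver create a fresh net_endpoint... */
	int fd = open(NET_STACK_DRIVER_PATH, O_RDWR);
	if (fd < 0)
		return -1;

	/* ...and this ioctl() would turn it into a real socket endpoint. */
	args.family = family;
	args.type = type;
	args.protocol = protocol;
	if (ioctl(fd, NET_STACK_SOCKET, &args) < 0) {
		close(fd);
		return -1;
	}

	/* The plain file descriptor *is* the socket: select()-able,
	 * inheritable by inetd children, closeable with close(). */
	return fd;
}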

In its current skeleton stage, this driver does nothing but allocate the 
"fake" endpoints asked for by libnet.so and log to syslog. 
All *real* calls will fail.
Non-blocking mode isn't there either, but that should start with 
implementing the setsockopt() stuff first... 
select() isn't implemented either; the notify_select_event() kernel call 
should be tested first...
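
For the curious, the whole skeleton boils down to the standard BeOS 
driver entry points plus an open() hook that allocates the fake 
endpoint. A condensed, illustrative version (not the exact zip contents):

/* Publish /dev/net/stack, allocate a fake endpoint per open(),
 * log everything, fail all real calls. */
#include <Drivers.h>
#include <KernelExport.h>
#include <stdlib.h>

int32 api_version = B_CUR_DRIVER_API_VERSION;

typedef struct net_endpoint {
	int family, type, protocol;	/* to be filled by a later ioctl() */
} net_endpoint;

static status_t stack_open(const char *name, uint32 flags, void **cookie)
{
	net_endpoint *ep = (net_endpoint *)malloc(sizeof(*ep));
	if (ep == NULL)
		return B_NO_MEMORY;
	dprintf("net_stack_driver: open() -> new endpoint %p\n", ep);
	*cookie = ep;
	return B_OK;
}

static status_t stack_close(void *cookie)
{
	return B_OK;
}

static status_t stack_free(void *cookie)
{
	dprintf("net_stack_driver: free() endpoint %p\n", cookie);
	free(cookie);
	return B_OK;
}

static status_t stack_control(void *cookie, uint32 op, void *data, size_t len)
{
	/* No stack behind us yet, so every *real* call fails. */
	dprintf("net_stack_driver: control(op %lu) -> B_ERROR\n", op);
	return B_ERROR;
}

static status_t stack_read(void *cookie, off_t pos, void *data, size_t *len)
{
	*len = 0;
	return B_ERROR;
}

static status_t stack_write(void *cookie, off_t pos, const void *data, size_t *len)
{
	return B_ERROR;
}

static const char *devices[] = { "net/stack", NULL };

static device_hooks hooks = {
	stack_open, stack_close, stack_free, stack_control,
	stack_read, stack_write,
	NULL, NULL,	/* select / deselect: not implemented yet */
	NULL, NULL	/* readv / writev */
};

status_t init_hardware(void) { return B_OK; }
status_t init_driver(void) { return B_OK; }
void uninit_driver(void) { }
const char **publish_devices(void) { return devices; }
device_hooks *find_device(const char *name) { return &hooks; }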

c) the socket driver <-> net stack interface: 
this is where the userland stack vs. kernel land stack design choice 
forks the code:

- With the kernel land way, the *stack* is just there, somewhere in the 
kernel. Under BeOS and NewOS, it would be *modules*, I guess. 
A hookup sketch follows the diagram below.

               ________________
              |                |
              |   libnet.so    |
              |________________|
user land              |
- - - - - - - - - - - -+- - - - - - -
kernel land            |
               ________|_______
              |                |
              | /dev/net/stack |
              |________________|
               ________|_______
              |                |
              | stack module   |
              |________________|
                       .
                       .
                       .
             __________|_______
            |                  |
            | /dev/net/tulip/0 |
            |__________________|
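
The hookup sketch mentioned above: the driver would pin the stack module 
at load time and forward every call to it. The module name and interface 
here are assumptions, not an existing API:

/* Sketch: driver <-> in-kernel stack module hookup. */
#include <KernelExport.h>
#include <module.h>

#define NET_STACK_MODULE_NAME "network/stack/v1"	/* hypothetical */

struct net_stack_module_info {
	module_info	info;
	/* ...one entry per socket operation, e.g.: */
	status_t	(*control)(void *endpoint, uint32 op, void *data, size_t len);
};

static struct net_stack_module_info *g_stack = NULL;

status_t init_driver(void)
{
	/* get_module() loads the stack module, or bumps its reference
	 * count if it's already loaded. */
	return get_module(NET_STACK_MODULE_NAME, (module_info **)&g_stack);
}

void uninit_driver(void)
{
	put_module(NET_STACK_MODULE_NAME);
}

/* The driver's control() hook would then become a thin shim:
 * return g_stack->control(cookie, op, data, len); */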


- With the userland way, the *stack* lives in another userland team.
Some communication needs to be implemented between the driver and this 
*net_server*, probably via a port and shared areas; a rough sketch 
follows the diagram below. Either way, client data would have to be 
either copied or clone_area()'d to be accessible from the net_server 
team...

                team A                team B
             _______________
            |               |   
            |    client     |
            |_______________|
             _______|_______      ____________ 
            |               |    |            |
            |   libnet.so   |    | net_server |
            |_______________|    |____________|
user land           |               |     |
- - - - - -- - - - -+- - - - - - - -+- - -+- - -
kernel land         |               |     |
            ________|_______        |     |
           |                |       |     |
           | /dev/net/stack |-------+     |
           |________________|             |
                                          |
                            ______________|___
                           |                  |
                           | /dev/net/tulip/0 |
                           |__________________|
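
The promised sketch, with all names (port name, message code, request 
layout) invented for illustration; real client data would travel through 
a shared, clone_area()'d area rather than inline in the port message:

/* Very rough sketch of driver <-> net_server IPC over a kernel port. */
#include <OS.h>

#define NET_SERVER_PORT_NAME	"net_server control"	/* hypothetical */
#define NS_MSG_SOCKET			'sock'

struct ns_socket_request {
	int		family, type, protocol;
	port_id	reply_port;		/* where net_server posts its answer */
};

static status_t forward_socket_to_net_server(int family, int type, int protocol)
{
	struct ns_socket_request req;
	status_t result;
	int32 code;

	port_id server = find_port(NET_SERVER_PORT_NAME);
	port_id reply = create_port(1, "socket reply");
	if (server < B_OK || reply < B_OK)
		return B_ERROR;

	req.family = family;
	req.type = type;
	req.protocol = protocol;
	req.reply_port = reply;
	write_port(server, NS_MSG_SOCKET, &req, sizeof(req));

	/* Block until the net_server team answers with a status code. */
	read_port(reply, &code, &result, sizeof(result));
	delete_port(reply);
	return result;
}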

In fact, even without the need for fd socket support, this library 
<-> net_server interface is still needed, as the client's libnet.so lives 
in *its* team's address space, not the net_server's:

       team A                team B
     _______________
    |               |   
    |    client     |
    |_______________|
     _______|_______      ____________
    |               |    |            |
    |   libnet.so   |----| net_server |
    |_______________|    |____________|
                                |
user land                       |       
- - - - - -- - - - - - - - - - -+- - - - -
kernel land                     |
                    ____________|_____
                   |                  |
                   | /dev/net/tulip/0 |
                   |__________________|


"Voila".
Any thoughts about this design?

Should we put this fd socket support aside until R2, 
as it's not the top priority? The technical design choice 
may have some impact on the net_server design, I think.
Any hints on the IPC design I should try, to connect this 
driver with the net_server in CVS?

-Philippe.
