[kismac] Re: Large files/nos of APs

  • From: Brad Knowles <brad@xxxxxxxxxxxxxxxxxxx>
  • To: kismac@xxxxxxxxxxxxx
  • Date: Sat, 22 Jan 2005 20:00:13 +0100

At 7:06 PM +0100 2005-01-22, Michael Rossberg wrote:

 but I think we are hitting another barrier here. With 3000 access points
 the data volume becomes too big to be handled entirely in memory in real
 time. This means we need a database-like function within KisMAC. This,
 on the other hand, would mean no instant updates, as we cannot execute a
 couple of database calls whenever a packet comes in.

Have you checked out Berkeley DB? You should be able to get transactional data rates and still handle very large databases. It is the technology underneath the transactional BDB storage engine shipped in the MySQL-Max builds of MySQL. It should be pretty simple to program against, too.
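
For what it's worth, the C API is fairly compact. Here is a minimal sketch of keeping one record per BSSID; the "aps.db" file name and the ap_record struct are made up for illustration only, and I'm assuming Berkeley DB 4.x linked with -ldb:

    /* A minimal sketch, assuming Berkeley DB 4.x and linking with -ldb.
     * The "aps.db" file name and the ap_record struct are made up for
     * illustration only; KisMAC would use its own record layout.
     */
    #include <db.h>
    #include <string.h>

    struct ap_record {
        char   ssid[33];      /* NUL-terminated network name   */
        double last_seen;     /* timestamp of the last packet  */
        int    channel;
    };

    int open_ap_db(DB **dbpp)
    {
        int ret = db_create(dbpp, NULL, 0);
        if (ret != 0)
            return ret;
        /* One B-tree file on disk, created on first use. */
        return (*dbpp)->open(*dbpp, NULL, "aps.db", NULL,
                             DB_BTREE, DB_CREATE, 0664);
    }

    int store_ap(DB *dbp, const unsigned char bssid[6],
                 const struct ap_record *rec)
    {
        DBT key, data;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data  = (void *)bssid;   /* the 6-byte MAC is the key */
        key.size  = 6;
        data.data = (void *)rec;
        data.size = sizeof(*rec);

        return dbp->put(dbp, NULL, &key, &data, 0);
    }

Updating a network when a packet comes in would then be a single put() keyed on the BSSID, and a lookup is a single get().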


 The only solution that I see is an engine which uses an in-memory
 solution for recently active networks and a database backend for the
 rest. Unfortunately, I am not able to program in my spare time.

Berkeley DB takes care of that for you, by caching as much of the actively used portion of the database in memory as possible. Throw more memory at it, and it can keep the whole thing in memory; if you're short on memory, it just runs a bit slower.
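
If I remember the API correctly, sizing that cache is a single call made before the database file is opened (again just a sketch, and the 32 MB figure below is an arbitrary example, not a tuned recommendation):

    #include <db.h>

    /* Sketch only: ask Berkeley DB for a 32 MB in-memory cache before
     * the file is opened.  set_cachesize() takes (gigabytes, bytes,
     * number_of_caches).
     */
    DB *open_cached_ap_db(void)
    {
        DB *dbp;

        if (db_create(&dbp, NULL, 0) != 0)
            return NULL;
        dbp->set_cachesize(dbp, 0, 32 * 1024 * 1024, 1);
        if (dbp->open(dbp, NULL, "aps.db", NULL,
                      DB_BTREE, DB_CREATE, 0664) != 0) {
            dbp->close(dbp, 0);
            return NULL;
        }
        return dbp;
    }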


But either way, you can handle millions of entries with relative ease.

 A project of this size requires a commercial development team. Sorry.

Maybe, but I'm not convinced. If I were a programmer, I'd go look at how Postfix and Sendmail interface with Berkeley DB, and rip off code from them. Or maybe go to the Sleepycat website and look at their programming examples.


--
Brad Knowles, <brad@xxxxxxxxxxxxxxxxxxx>

"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."

    -- Benjamin Franklin (1706-1790), reply of the Pennsylvania
    Assembly to the Governor, November 11, 1755

  SAGE member since 1995.  See <http://www.sage.org/> for more info.
