>                Pros:                      Cons:
>
> Approach 1: Array
>   Fast access                    Fixed size, space wasted
>   Simple code                    Poor insertion and deletion
>
> Approach 2: Linked List
>   Fast insertion & deletion      Slower access
>   No wasted slots                Wasted space (pointers to next)
>   Any number possible            (A little) more complex code
>
> Approach 3: Extensible Arrays
>   Fast access                    Complex code
>                                  Moderate waste (you don't want to
>                                  extend the array for every
>                                  insert/delete)
>
> I can't think of any others.

Sorted data structures: trees, skip lists, and other dynamically
allocated, self-balancing structures. They are not as fast as an
array (perhaps 10 times slower than an array lookup, which may sound
silly but may actually be worth it to get rid of the artificial
limits), and they have the advantages of a linked list. The key could
even be a small integer, used as a handle to the data, or a pointer
if the nodes in the data structure don't move around. The small
integer (used much like an array index, but never reused for
deallocated items; always incrementing) is better, since invalid
keys/handles are detected.

> I have many of the same issues in the VM work that I am doing. The
> *average* area will probably be <= 256k. So should I optimize for
> something that size?

And then there are areas which are hundreds of megabytes big -
possibly gigabytes if running on a 64-bit processor. Big images
(600 dpi scans of large things). Big RAM disks (my pet project;
128 MB areas aren't big enough!). Hours of sound or video in an
editor. Maybe even get rid of hard drives and just use RAM
(particularly if magnetic nonvolatile RAM becomes affordable).

- Alex
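
P.S. A minimal sketch of the "extensible array" idea above, in case it
helps: grow by doubling, so the cost of reallocation is amortized over
many inserts rather than paid on every one. The names and growth
factor here are my own assumptions, not anything from the quoted post.

```c
#include <stdlib.h>

/* Illustrative extensible array: capacity doubles when full, so you
   don't extend the array for every insert (the waste is the unused
   tail, at most half the allocation). */
typedef struct {
    int    *items;
    size_t  count;
    size_t  capacity;
} ExtArray;

static int ext_array_push(ExtArray *a, int value)
{
    if (a->count == a->capacity) {
        size_t new_cap = a->capacity ? a->capacity * 2 : 8;
        int *p = realloc(a->items, new_cap * sizeof(*p));
        if (p == NULL)
            return -1;            /* out of memory; old array intact */
        a->items = p;
        a->capacity = new_cap;
    }
    a->items[a->count++] = value;  /* fast O(1) access by index */
    return 0;
}
```

Indexing stays as fast as a plain array; only the occasional realloc
(and the "complex code" around it) is the price.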
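
P.P.S. And a sketch of the always-incrementing handle idea: because a
handle value is never reused after its item is freed, a stale handle
fails to look up instead of silently aliasing new data. The table
layout and fixed size below are assumptions for illustration; a real
implementation would key a tree or hash table on the handle rather
than scanning.

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SLOTS 64

typedef uint32_t handle_t;

/* Slots are reused; handle values never are (handle 0 = free slot). */
static struct {
    handle_t handle;
    void    *data;
} table[TABLE_SLOTS];

static handle_t next_handle = 1;   /* always incrementing */

static handle_t handle_alloc(void *data)
{
    for (int i = 0; i < TABLE_SLOTS; i++) {
        if (table[i].handle == 0) {
            table[i].handle = next_handle++;
            table[i].data   = data;
            return table[i].handle;
        }
    }
    return 0;                      /* table full */
}

static void *handle_lookup(handle_t h)
{
    for (int i = 0; i < TABLE_SLOTS; i++)
        if (h != 0 && table[i].handle == h)
            return table[i].data;
    return NULL;                   /* invalid or stale handle detected */
}

static void handle_free(handle_t h)
{
    for (int i = 0; i < TABLE_SLOTS; i++)
        if (table[i].handle == h)
            table[i].handle = 0;   /* slot reusable, handle value is not */
}
```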