That's excellent. Thanks, Jeremy. I'm a little curious about memory usage too. I stood up the muscled server and ran three clients: one client published 100,000 set-data messages as quickly as possible, over and over, while the other two subscribed to the data. I'm able to sustain about 20,000 messages per second between a dual-processor PIII 600 and a 12" PowerBook.

The receiving clients don't do anything with the incoming messages; they just take each message off the queue and throw it away (it's a MessageRef object, so I expect that doing nothing means eventual release/deallocation). Yet I see memory usage on the receiving processes grow from ~20MB to ~125MB over time. I don't think memory fragmentation should be an issue since you're using object/memory pools, and I'm not doing any heap allocation myself, so I'm a little surprised that memory usage grows much at all. Does it have something to do with the incoming message queue being jammed with data? Are the object pools being resized? If so, is there a recommended way to restrict the size of the object pools? I'll probably have a better idea of what to do as I become more familiar with the source, but I'm wondering if what I'm seeing is normal.

I'm planning to use the muscle system in a pretty mixed environment, possibly including some C# clients, so I'm thinking of writing a C# client API modeled after the Java client API. Is there interest from anyone else in such a thing?

Wilson