Filling up space for large chunks?

Anatoly Vorobey mellon@pobox.com
Sun, 30 Nov 2003 08:23:06 +0200


On Sat, Nov 29, 2003 at 10:17:15PM -0800, Brion Vibber wrote:
> Perhaps we'll start precaching a few big random data chunks to make 
> sure there's space in the future. It'd be real nice to see the 
> distribution of block sizes in the stats output.

Try 'stats slabs' for that.
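
For example, assuming a local instance on the default port (adjust host
and port as needed), something like

    $ printf 'stats slabs\r\nquit\r\n' | nc localhost 11211

should dump the per-slab-class counters, including the chunk sizes, so
you can see how items are spread across the size classes.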

We also have a plan to ditch classes of sizes in favor of storing
everything in small blocks (say 64 bytes) and chaining them together
for large objects. The advantages of this would be considerably less
wasteful memory use and a single LRU queue for all objects in the
system; the main drawbacks are increased CPU usage, and possibly
increased latency, due to having to read/write very many small
scattered buffers for a large object rather than one large contiguous
buffer as now. We're going to play with this soon and see whether this
change would be worth it.
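
To make the trade-off concrete, here is a minimal sketch in C of what
such chained-block storage could look like. This is not the planned
implementation; all structure and function names are invented for
illustration.

    /* Rough sketch (not actual memcached code) of the chained-block idea:
     * every value lives in a chain of fixed 64-byte blocks, and every item
     * sits on one global LRU list regardless of its size. */
    #include <stddef.h>
    #include <string.h>

    #define BLOCK_SIZE    64
    #define BLOCK_PAYLOAD (BLOCK_SIZE - sizeof(void *))

    typedef struct block {
        struct block *next;                 /* next block of the same item */
        char          data[BLOCK_PAYLOAD];  /* slice of the item's value */
    } block_t;

    typedef struct item {
        struct item *lru_prev, *lru_next;   /* single LRU list for all items */
        block_t     *first_block;           /* head of this item's chain */
        size_t       total_size;            /* value length in bytes */
    } item_t;

    /* Reading a value means walking the chain and copying each slice;
     * this scattered copying is the extra CPU/latency cost compared to
     * reading one contiguous buffer per object, as done now. */
    static void read_value(const item_t *it, char *out)
    {
        size_t left = it->total_size;
        for (const block_t *b = it->first_block; b != NULL && left > 0;
             b = b->next) {
            size_t n = left < BLOCK_PAYLOAD ? left : BLOCK_PAYLOAD;
            memcpy(out, b->data, n);
            out  += n;
            left -= n;
        }
    }

In a layout like this sketch, per-object waste is bounded by one partly
filled block at the end of the chain (plus the per-block pointer
overhead), rather than growing with the gap between size classes.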

--
avva