Heap allocator fragmentation study
Goal
I wanted to understand how different allocation patterns affect heap fragmentation
in glibc’s ptmalloc2. Specifically: can I provoke severe external fragmentation
where available free memory is scattered into useless small chunks?
Test Patterns
I wrote three allocation patterns and measured RSS (Resident Set Size) before and after each pattern's free phase:
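The RSS measurements came from procfs. A minimal sketch of reading a process's own VmRSS (the helper name and structure are mine, not from the experiment's harness):

```c
#include <stdio.h>

/* Read VmRSS (in kB) from /proc/self/status; returns -1 on failure.
   Linux-specific: relies on the procfs "VmRSS:  <n> kB" line format. */
long rss_kb(void) {
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        /* the literal space in the format matches the tab/space run */
        if (sscanf(line, "VmRSS: %ld kB", &kb) == 1)
            break;
    }
    fclose(f);
    return kb;
}
```

Sampling this before and after the free phase gives the two RSS columns in the table below.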
Pattern A — Alternating sizes: Allocate alternating 32-byte and 1MB blocks, then free all the large ones. Expected: severe fragmentation — large holes between 32-byte islands.
Pattern B — Same size: Allocate 10,000 × 64-byte blocks, free half randomly. Expected: minimal fragmentation — glibc bins chunks by size.
Pattern C — Exponential sizes: Allocate 1B, 2B, 4B … up to 1MB, free odd-indexed ones. Expected: moderate fragmentation.
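Pattern A, the adversarial one, can be sketched as follows (the counts here are illustrative and much smaller than the experiment's actual parameters):

```c
#include <stdlib.h>

enum { PAIRS = 64 };  /* illustrative; the real run used far more pairs */

/* Pattern A: alternate 32-byte and 1 MB allocations, then free only the
   large ones. The live small blocks sit between the freed holes, so the
   allocator cannot hand whole regions back to the OS. Returns the total
   bytes freed. */
long pattern_a(void) {
    void *small[PAIRS], *large[PAIRS];
    for (int i = 0; i < PAIRS; i++) {
        small[i] = malloc(32);       /* long-lived island */
        large[i] = malloc(1 << 20);  /* short-lived 1 MB block */
    }
    long freed = 0;
    for (int i = 0; i < PAIRS; i++) {
        free(large[i]);              /* leaves one hole per iteration */
        freed += 1 << 20;
    }
    /* ...RSS would be sampled here, with small[] still live... */
    for (int i = 0; i < PAIRS; i++)
        free(small[i]);
    return freed;
}
```

Patterns B and C differ only in the size schedule and which half gets freed.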
Results
| Pattern | RSS after alloc | RSS after free | RSS freed |
|---|---|---|---|
| A | 512 MB | 489 MB | 4.5% |
| B | 640 KB | 128 KB | 80% |
| C | 2.1 MB | 1.4 MB | 33% |
Pattern A produced the worst fragmentation by far — as expected. The large freed blocks couldn’t be returned to the OS because 32-byte live allocations sat between them.
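One glibc-specific way to probe whether the holes are reclaimable at all is malloc_trim, which asks the allocator to give free memory back to the OS where it can. This is a sketch, and the outcome depends on glibc version and heap layout; pages pinned by live small chunks stay resident regardless:

```c
#include <malloc.h>   /* glibc-specific header declaring malloc_trim */
#include <stdlib.h>

/* malloc_trim(0) can shrink the top of the main heap and (since glibc
   2.8) madvise away entirely-free pages in the middle of it. It returns
   1 if any memory was released back to the system, 0 otherwise. */
int try_trim(void) {
    void *p = malloc(1 << 20);
    free(p);
    return malloc_trim(0);
}
```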
What Worked
The experiment clearly demonstrated external fragmentation. Pattern A is a known adversarial pattern for non-moving allocators in general, ptmalloc2 included (it is a bin-based allocator, not a buddy allocator, but neither kind can compact around live chunks), and it showed up here too.
What Didn’t Work
I tried to observe glibc's MADV_DONTNEED behavior (releasing pages back to the kernel without unmapping the range) via /proc/PID/smaps, but the accounting was too noisy to draw conclusions.
The RSS numbers were inconsistent across runs due to kernel page reclaim timing.
What I Learned
- glibc handles same-size allocations very well (unsurprising: it keeps per-size bins)
- The worst case for a general allocator is long-lived small allocations interleaved with short-lived large ones
- RSS is a misleading metric; Pss from /proc/PID/smaps would be better
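A sketch of reading Pss, assuming a kernel new enough (4.14+) to provide the /proc/PID/smaps_rollup summary file, which avoids summing per-mapping smaps entries by hand:

```c
#include <stdio.h>

/* Read the process's total Pss (in kB) from /proc/self/smaps_rollup
   (Linux >= 4.14); returns -1 if the file is unavailable. The literal
   "Pss:" does not match the Pss_Anon:/Pss_File: sub-fields. */
long pss_kb(void) {
    FILE *f = fopen("/proc/self/smaps_rollup", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            break;
    }
    fclose(f);
    return kb;
}
```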
Follow-up
This motivated building MemArena: for workloads where lifetime is known, a bump allocator simply never fragments.
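A bump allocator along those lines can be sketched in a few lines; this API is illustrative only and is not MemArena's actual interface:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal bump-allocator sketch: one contiguous region, a cursor that
   only moves forward, and a reset that frees everything at once. No
   per-object free means no holes, hence no external fragmentation. */
typedef struct {
    uint8_t *base, *cur, *end;
} Arena;

int arena_init(Arena *a, size_t cap) {
    a->base = malloc(cap);
    if (!a->base) return -1;
    a->cur = a->base;
    a->end = a->base + cap;
    return 0;
}

void *arena_alloc(Arena *a, size_t n) {
    size_t aligned = (n + 15) & ~(size_t)15;  /* 16-byte alignment */
    if ((size_t)(a->end - a->cur) < aligned)
        return NULL;                          /* arena exhausted */
    void *p = a->cur;
    a->cur += aligned;
    return p;
}

void arena_reset(Arena *a)   { a->cur = a->base; }  /* frees all at once */
void arena_destroy(Arena *a) { free(a->base); }
```

The trade-off is that individual objects can't be freed early, which is exactly why this only fits workloads where lifetimes are known up front.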