We need to design a FIFO queue in which node allocations are pooled and reused, rather than allocated from and returned to the general heap on every push and pop. Additionally, the class should use smart pointers so that allocated resources are cleaned up automatically, with no memory leaks or other cleanup-related side effects.
What Is a Memory Pool? What Is a Freelist?
Memory Pool
A memory pool is a pre-allocated (or lazily grown) region from which allocations for same-sized objects are carved out. Advantages:
- Fewer calls to the general-purpose heap allocator.
- Lower fragmentation and better cache locality (objects of same type are packed together).
- Predictable latency, which is useful for real-time systems or tight loops.
Freelist
A freelist is a lightweight data structure (often a singly linked list) that tracks reusable objects/nodes that are currently not in use:
- On deallocation, an object is pushed onto the freelist.
- On allocation, an object is popped off the freelist (if available).
- If the freelist is empty, you fall back to a real allocation.
In the queue we build below, popped nodes are not deleted—they join the freelist and are reused on future pushes.
Use Cases for Memory Pools & Freelist-Based Queues
- High-throughput messaging: task queues, job schedulers, actor mailboxes.
- Game engines / real-time systems: stable frame times; avoid allocator hiccups mid-frame.
- Networking: packet buffers and descriptor rings where sizes are uniform and lifetimes are short.
- Embedded / latency-sensitive services: control loops, audio processing, telemetry pipelines.
- Object caches: frequent churn of identically sized objects (e.g., AST nodes, DOM-like structures).
When your workload shows frequent create/destroy cycles of the same small object type, a pool + freelist can materially improve performance.
Intuition
A singly linked queue typically has:
- `head` → front node,
- `tail` → last node,
- each node holds a `value` and a `next` pointer.
Naïve implementations call `new` for every push and `delete` for every pop. Under heavy traffic this becomes slow and fragments memory. Memory pooling alleviates this by recycling nodes:
- When a node is removed, we don't `delete` it.
- We put it on a freelist.
- On the next `push`, we first try to reuse a node from the freelist (just overwrite its `value` and reset `next`).

This cuts down calls to the general-purpose allocator and improves cache locality.
Approach
- Node structure: holds `T value` and `std::unique_ptr<Node> next`. The `next` pointer owns the following node, so when a node's destructor runs, the rest of the list is cleaned up automatically.
- Queue state: `std::unique_ptr<Node> head`, a raw `Node* tail`, and a `std::unique_ptr<Node> freelist` (head of the pool). The `head` pointer owns the list: when it goes out of scope, it deletes the first node, which in turn releases the next node (nothing points to it anymore), and so on down the chain. Since `head` already owns the list, `tail` doesn't need to, so it's a plain raw pointer.
- Push:
  - If `freelist` has a node, pop one and write the new value.
  - Otherwise allocate a fresh node.
  - Splice it onto `tail`.
- Pop:
  - Move `head` off the queue.
  - Splice the removed node onto `freelist`.
  - If the queue becomes empty, set `tail = nullptr`.
- Front / Empty: standard linked-queue operations.
The provided answer follows this design.
Correctness
The structure maintains a standard singly linked FIFO invariant:
- `head` points to the first enqueued, not-yet-popped node; `tail` points to the last node, or is `nullptr` when empty.
- On `push`, we append after `tail`; on `pop`, we remove from `head`.
- The freelist is separate from the active queue; nodes only move between the freelist and the active queue on push/pop.

Since nodes on the freelist are not reachable from `head`, they do not affect traversal or ordering. Reuse never changes the relative order of active nodes, preserving FIFO.
Complexity
- Time:
  - `push`: O(1) (constant-time append plus an optional freelist pop)
  - `pop`: O(1) (constant-time remove plus a freelist push)
  - `front`, `empty`: O(1)
- Space:
  - O(n) for n elements in the queue, plus any extra nodes retained in the freelist (bounded by historical peak usage).
- Allocation behavior:
  - Worst case (empty freelist) incurs one allocation; otherwise the fast path performs zero allocations.
Implementation
```cpp
#include <memory>

template <typename T>
class MemoryPooledQueue {
private:
    struct Node {
        T value;
        std::unique_ptr<Node> next;

        // Default constructor for freelist nodes
        Node() : value{}, next{nullptr} {}

        // Constructor for initialized nodes
        explicit Node(const T& val) : value(val), next{nullptr} {}

        // Move constructor for efficient transfers
        Node(Node&& other) noexcept
            : value(std::move(other.value)), next(std::move(other.next)) {}

        // Move assignment operator
        Node& operator=(Node&& other) noexcept {
            if (this != &other) {
                value = std::move(other.value);
                next = std::move(other.next);
            }
            return *this;
        }
    };

    std::unique_ptr<Node> head;
    Node* tail{nullptr};
    std::unique_ptr<Node> freelist{nullptr};  // Pool of recycled nodes

    // Helper function to get a node from the freelist or allocate a new one
    std::unique_ptr<Node> get_node(const T& val) {
        if (freelist) {
            // Reuse node from freelist
            auto node = std::move(freelist);
            freelist = std::move(node->next);
            node->value = val;
            node->next = nullptr;
            return node;
        } else {
            // Allocate new node
            return std::make_unique<Node>(val);
        }
    }

    // Helper function to recycle a node onto the freelist
    void recycle_node(std::unique_ptr<Node> node) {
        if (node) {
            node->next = std::move(freelist);
            freelist = std::move(node);
        }
    }

public:
    MemoryPooledQueue() = default;

    void push(const T& val) {
        auto new_node = get_node(val);
        auto* new_tail = new_node.get();
        if (!head) {
            // Empty queue
            head = std::move(new_node);
        } else {
            // Non-empty queue
            tail->next = std::move(new_node);
        }
        tail = new_tail;
    }

    void pop() {
        if (!head) return;  // Empty queue
        // Detach the head node
        auto old_head = std::move(head);
        head = std::move(old_head->next);
        // Recycle the old head node
        recycle_node(std::move(old_head));
        // Update tail if the queue becomes empty
        if (!head) {
            tail = nullptr;
        }
    }

    T& front() {
        // Assumes queue is not empty
        return head->value;
    }

    bool empty() const {
        return head == nullptr;
    }
};
```
Variations / Improvements
1) emplace and Move-Push for Better Exception Safety & Performance
Construct values in-place and support rvalues to avoid extra copies:
```cpp
void push(T&& val) {
    auto new_node = get_node(std::move(val));  // requires a get_node(T&&) overload
    auto* new_tail = new_node.get();
    if (!head) head = std::move(new_node);
    else tail->next = std::move(new_node);
    tail = new_tail;
}

template <class... Args>
void emplace(Args&&... args) {
    std::unique_ptr<Node> node;
    if (freelist) {
        node = std::move(freelist);
        freelist = std::move(node->next);
        node->next = nullptr;
        // Assign into existing storage (still an assignment; for true
        // in-place construction, Node would hold aligned storage):
        node->value = T(std::forward<Args>(args)...);
    } else {
        node = std::make_unique<Node>(T(std::forward<Args>(args)...));
    }
    auto* new_tail = node.get();
    if (!head) head = std::move(node);
    else tail->next = std::move(node);
    tail = new_tail;
}
```
For true in-place construction without an extra temporary or assignment, change `Node` to hold `std::aligned_storage_t<sizeof(T), alignof(T)>` and manually placement-`new`/destroy the `T`. That's more involved but yields the strongest guarantees and maximal reuse.
2) Add clear() and size()
Hold a `std::size_t sz{0};` and update it on push/pop. A `clear()` can move all active nodes to the freelist or destroy them, depending on your pooling strategy.
3) Custom Allocator
Combine with a slab/arena allocator for the `Node` objects themselves to avoid even the first-time `new` calls.
Alternatives & Trade-offs
- `std::deque<T>`: contiguous blocks; a great general-purpose choice, though it does no pooling of node objects.
- Intrusive containers: store the linkage inside `T` to avoid per-node allocation entirely (at the cost of coupling).
- Lock-free queues: MPMC/SPSC ring buffers (e.g., a circular array) eliminate node allocation altogether but require capacity management and different semantics.
- Custom slab allocator: backs nodes with pages/slabs; can be combined with a freelist for the best of both worlds.