RAII custom allocator

Problem Recap

Implement a RAII arena allocator that:

  • Allocates one large block in the constructor.
  • Provides allocate(size_t size) that returns aligned sub-blocks carved from that arena.
  • Tracks stats: total allocation count and peak usage.
  • Frees the entire block in the destructor.
  • (Nice-to-have) reset() to reuse the arena without freeing memory.

Intuition

A bump-pointer (arena) allocator is a fast strategy for many small allocations:

  • Keep a current offset into a pre-allocated buffer.
  • On each allocate(size), align the offset and bump it by size.
  • No per-allocation free; you either reset() (reuse the whole arena) or destroy the allocator (releases the whole block).

This design dramatically reduces allocator overhead and fragmentation in workloads with “allocate many → drop all at once” patterns.
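
The core bump step can be sketched in a few lines. This is a simplified illustration (the `bump` name is ours, and it skips the overflow checks the full implementation below adds):

```cpp
#include <cstddef>

// Minimal bump-pointer step: round the current offset up to 'align'
// (assumed to be a power of two), then advance it by 'size'.
// Returns the offset of the carved sub-block, or SIZE_MAX on exhaustion.
std::size_t bump(std::size_t& offset, std::size_t capacity,
                 std::size_t size, std::size_t align) {
    std::size_t aligned = (offset + (align - 1)) & ~(align - 1);
    if (aligned + size > capacity) return static_cast<std::size_t>(-1);
    offset = aligned + size;
    return aligned; // caller computes base_pointer + aligned
}
```

Freeing is what makes this O(1) overall: there is no per-allocation bookkeeping to undo, so "free everything" is just `offset = 0`.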

Correctness & Safety

  • RAII: Resource lifetime is tied to the allocator object; the destructor releases the big block exactly once.
  • Alignment: Always return pointers aligned to alignof(std::max_align_t) by default (sufficient for any standard type).
    We’ll implement a variant that can also accept a custom alignment.
  • Overflow checks: Guard against size_t overflow on internal computations.
  • No double-free: We make the allocator non-copyable (copy would duplicate owning pointer) and movable (transfer ownership cleanly).

Complexity

  • allocate(size): O(1) (an alignment round-up plus pointer arithmetic).
  • reset(): O(1) (zero the bump pointer).
  • Memory usage: at most the provided arena size.

C++ Implementation

The implementation below also covers alignment, move safety, and overflow checks. You don’t need to handle all of these for the code judge to mark the question correct, but they matter in production code.

#include <cstddef>  // std::size_t, std::max_align_t
#include <new>      // ::operator new[], ::operator delete[]
#include <utility>  // std::exchange

class CustomAllocator {
private:
    unsigned char* memory_block_ = nullptr;  // base
    std::size_t    block_size_    = 0;       // total capacity in bytes
    std::size_t    offset_        = 0;       // current bump offset
    std::size_t    peak_usage_    = 0;       // max offset observed
    std::size_t    total_allocs_  = 0;       // # of successful allocations

    static constexpr std::size_t kDefaultAlignment = alignof(std::max_align_t);

    // Helper: round up 'x' to multiple of 'align' (power-of-two align)
    static constexpr std::size_t round_up(std::size_t x, std::size_t align) noexcept {
        return (x + (align - 1)) & ~(align - 1);
    }

    // Helper: addition with overflow check (returns false on overflow)
    static bool add_overflow(std::size_t a, std::size_t b, std::size_t& out) noexcept {
#if defined(__has_builtin)
#  if __has_builtin(__builtin_add_overflow)
        return __builtin_add_overflow(a, b, &out);
#  endif
#endif
        out = a + b;    // portable fallback (only reached without the builtin)
        return out < a; // unsigned wrap-around implies overflow
    }

public:
    // Non-copyable (prevents double-free); movable.
    CustomAllocator(const CustomAllocator&) = delete;
    CustomAllocator& operator=(const CustomAllocator&) = delete;

    CustomAllocator() = default;

    // Constructor: allocate the arena
    explicit CustomAllocator(std::size_t total_size)
        : memory_block_(nullptr), block_size_(total_size),
          offset_(0), peak_usage_(0), total_allocs_(0) {
        if (total_size > 0) {
            // new (nothrow) could be used to avoid exceptions; here we allow bad_alloc to propagate
            memory_block_ = static_cast<unsigned char*>(::operator new[](total_size));
        }
    }

    // Move constructor
    CustomAllocator(CustomAllocator&& other) noexcept
        : memory_block_(std::exchange(other.memory_block_, nullptr)),
          block_size_(std::exchange(other.block_size_, 0)),
          offset_(std::exchange(other.offset_, 0)),
          peak_usage_(std::exchange(other.peak_usage_, 0)),
          total_allocs_(std::exchange(other.total_allocs_, 0)) {}

    // Move assignment
    CustomAllocator& operator=(CustomAllocator&& other) noexcept {
        if (this != &other) {
            // release my current buffer
            if (memory_block_) {
                ::operator delete[](memory_block_);
            }
            memory_block_ = std::exchange(other.memory_block_, nullptr);
            block_size_   = std::exchange(other.block_size_, 0);
            offset_       = std::exchange(other.offset_, 0);
            peak_usage_   = std::exchange(other.peak_usage_, 0);
            total_allocs_ = std::exchange(other.total_allocs_, 0);
        }
        return *this;
    }

    // Destructor: release the entire arena
    ~CustomAllocator() {
        if (memory_block_) {
            ::operator delete[](memory_block_);
        }
    }

    // Allocate with default alignment (suitable for any standard type)
    void* allocate(std::size_t size) {
        return allocate_aligned(size, kDefaultAlignment);
    }

    // Allocate with custom alignment (must be power of two)
    void* allocate_aligned(std::size_t size, std::size_t alignment) {
        if (!memory_block_ || size == 0) return nullptr;
        if (alignment == 0 || (alignment & (alignment - 1)) != 0) {
            // alignment must be a nonzero power of two
            // (alignment == 0 would otherwise pass the bit test below)
            return nullptr;
        }

        // Compute the aligned start offset (rounded up)
        std::size_t aligned_off = round_up(offset_, alignment);
        if (aligned_off < offset_) {
            return nullptr; // round_up wrapped around
        }

        // Check overflow on aligned_off + size
        std::size_t new_off;
        if (add_overflow(aligned_off, size, new_off)) {
            return nullptr; // overflow
        }
        if (new_off > block_size_) {
            return nullptr; // out of arena space
        }

        void* ptr = memory_block_ + aligned_off;
        offset_   = new_off;

        ++total_allocs_;
        if (offset_ > peak_usage_) peak_usage_ = offset_;
        return ptr;
    }

    // Reset bump pointer (reclaim all space; does NOT free memory)
    void reset() noexcept {
        offset_ = 0;
        peak_usage_ = 0;
        total_allocs_ = 0;
    }

    // Accessors
    std::size_t capacity()       const noexcept { return block_size_; }
    std::size_t used()           const noexcept { return offset_; }
    std::size_t peak_usage()     const noexcept { return peak_usage_; }
    std::size_t total_allocs()   const noexcept { return total_allocs_; }
    bool        owns_memory()    const noexcept { return memory_block_ != nullptr; }
};

Usage Example

#include <cstdio>
#include <cstdint>

int main() {
    CustomAllocator arena(1024); // 1 KiB

    // Allocate 24 bytes (default alignment)
    void* p1 = arena.allocate(24);
    // Allocate 64 bytes aligned to 32
    void* p2 = arena.allocate_aligned(64, 32);

    std::printf("used=%zu peak=%zu total_allocs=%zu\n",
                arena.used(), arena.peak_usage(), arena.total_allocs());

    // Reuse the same arena without reallocation
    arena.reset();
    std::printf("after reset: used=%zu peak=%zu total_allocs=%zu\n",
                arena.used(), arena.peak_usage(), arena.total_allocs());
}

Edge Cases & Pitfalls

  • Zero-size allocate: returns nullptr (by choice). You can instead return a unique non-null sentinel if needed.
  • Alignment must be power-of-two. If not, we fail the request (return nullptr).
  • Large sizes / overflow: guarded with overflow checks; requests that exceed capacity fail fast.
  • Object lifetime: This allocator returns raw storage. If you want to construct objects, use placement new:
    void* mem = arena.allocate_aligned(sizeof(T), alignof(T));
    T* obj = new (mem) T(args...);  // placement new
    // explicit destructor if you want to recycle: obj->~T();
    
    The arena itself doesn’t call destructors on reset() or destruction—this is standard for raw-storage arenas.

When to Use This

  • Frame/local arenas in games or real-time systems.
  • Batch processing where allocations share a similar lifetime.
  • Parsing / AST building with eventual wholesale discard.
  • Temporary scratch buffers in hot paths where general-purpose allocation is too slow.

Optional Extensions

  • Add a small header per allocation to support LIFO deallocate() (stack-allocator style).
  • Track largest single request or allocation histogram for profiling.
  • Provide a monotonic_buffer_resource-like adaptor to use with <memory_resource> APIs.
