Zakero's C++ Header Libraries
A collection of reusable C++ libraries
Zakero_MemoryPool.h File Reference

Zakero MemoryPool.


Classes

class  zakero::MemoryPool
 A pool of memory.
 

Macros

#define ZAKERO_MEMORYPOOL_IMPLEMENTATION
 Activate the implementation code.
 
#define ZAKERO_MEMORYPOOL_PROFILER
 Activate the profiling code.
 
#define ZAKERO_MEMORYPOOL_PROFILER_FILE
 Profile data file.
 

Detailed Description

Here you will find information about the Zakero MemoryPool and how to add it to your project.

See zakero::MemoryPool for the class API documentation.

Dependencies
TL;DR:
This library will provide a memory pool for your application. To use:
  1. Add the implementation to a source code file:
    #define ZAKERO_MEMORYPOOL_IMPLEMENTATION
    #include "path/to/Zakero_MemoryPool.h"
  2. Include the header wherever the MemoryPool is used:
    #include "path/to/Zakero_MemoryPool.h"
What Is It?

The Zakero MemoryPool library will create and manage a region of memory. From this pool of memory, sections of memory can be allocated and freed. When allocated, the memory is identified by an offset into the region of memory instead of a pointer. Programs are expected to be "good citizens" by using the offset and not writing outside of their allocated area.

The region of memory is anchored to an anonymous file descriptor. The benefit of using a file descriptor is that the Operating System can remap the file to a larger area as needed. And since all allocated memory uses an offset, no pointers end up pointing to a bad location.
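
For readers unfamiliar with the mechanism, the sketch below shows roughly what "anchored to an anonymous file descriptor" means on Linux. It illustrates the underlying OS facility (memfd_create, ftruncate, mmap), not the MemoryPool implementation itself, and all names in it are for illustration only.

#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>

// Illustration only: an anonymous, memory-backed file that the OS can
// resize, with data addressed by an offset from the mapping base.
int    pool_fd   = memfd_create("example_pool", 0);
size_t pool_size = 4096;
ftruncate(pool_fd, pool_size);        // set the size of the backing "file"
void*  base      = mmap(nullptr, pool_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, pool_fd, 0);

// An "allocation" is just an offset into this region.
off_t    offset = 0;
int64_t* value  = reinterpret_cast<int64_t*>(static_cast<uint8_t*>(base) + offset);
*value = 42;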

Why Use It?

As with many things, there are benefits and drawbacks to using a memory pool. For the MemoryPool object they are:

Benefits

  • SPEED!!! Much faster allocations than new or malloc
  • Data focused, allocations are based on size not object-type
  • The entire memory pool can be easily shared across process-space
  • Can automatically grow as needed [optional feature]

Drawbacks

  • Requires extra work to convert offsets to pointers
  • If the memory pool expands, pointers can be invalidated
  • Memory fragmentation has more of an impact
  • No bounds checking in memory writes

To put things into perspective, allocating memory is a very expensive operation. Using the MemoryPool means this operation only needs to happen once. Allocating memory from the MemoryPool only requires scanning the memory region for a free block that can hold the requested size. Even for extremely large memory pools, this is a very fast operation. Requiring the caller to specify the size (in bytes) of each allocation also contributes to the speed of the allocation.

Since the MemoryPool uses a Unix File Descriptor for the memory region, only that Unix File Descriptor must be shared between processes to access the entire MemoryPool data.
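
As a sketch of what the receiving side might look like, a process that receives the descriptor (for example over a Unix-domain socket with SCM_RIGHTS) can map it directly and then work with the same offsets. How the descriptor, pool size, and offsets are communicated is outside the MemoryPool API shown here, so the function and parameter names below are placeholders.

#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

// Placeholder function: 'shared_fd', 'pool_size', and 'some_offset' were
// received from the owning process by whatever IPC the application uses.
void read_shared_value(int shared_fd, size_t pool_size, off_t some_offset)
{
    // Map the shared descriptor; the same offsets used by the owning
    // process locate the same data here.
    void* base = mmap(nullptr, pool_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, shared_fd, 0);

    if (base != MAP_FAILED)
    {
        int64_t* value = reinterpret_cast<int64_t*>(
            static_cast<uint8_t*>(base) + some_offset);
        // ... read or write *value ...
        munmap(base, pool_size);
    }
}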

As a result of using a Unix File Descriptor, allocating memory returns an offset into the MemoryPool data. When the memory region expands in the MemoryPool, the location of the "file" may change. Using offsets into the data avoids the problem of pointing to invalid memory. Unfortunately, most code works with pointers, not offsets. MemoryPool does provide a way to be notified when the memory region moves so that the pointers an application is using can be updated.
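
One defensive pattern is to treat the offset as the canonical handle and recompute the pointer with addressOf() whenever it is needed, instead of caching raw pointers across allocations. A minimal sketch, where memory_pool is an already-initialized zakero::MemoryPool and Widget and use_widget are placeholders for application code:

// Store the offset, not the pointer.
off_t widget_offset = memory_pool.alloc(sizeof(Widget));

// Recompute the address from the offset every time it is needed, so a pool
// expansion cannot leave a stale pointer behind.
auto widget_ptr = [&memory_pool, widget_offset]()
{
    return memory_pool.addressOf(widget_offset);
};

// ... later, possibly after other allocations have expanded the pool ...
use_widget(widget_ptr());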

Unless C++'s placement new is being used extensively, the developer must be fully aware of where they are writing within their area of the memory region so that other data is not overwritten.
Placement new will break when the memory region moves.

Memory fragmentation already happens in most applications. The impact of the fragmentation is rarely felt due to the huge amounts of memory in today's computers. However, for small amounts of memory, fragmentation becomes a larger issue. Over the course of an application's lifetime, it may allocate and free memory many times. At some point, if a large block of memory is requested, that allocation may fail because a contiguous region of that size is not available. This is the problem that plagues memory pools.
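
The sketch below illustrates the failure mode with a fixed-size pool. The free() call and the way a failed allocation is reported are assumptions here; check the zakero::MemoryPool class API for the exact names and error handling.

zakero::MemoryPool pool("Fragmentation_Demo");
pool.init(1024);                   // a fixed-size 1 KiB pool

off_t block_1 = pool.alloc(256);
off_t block_2 = pool.alloc(256);
off_t block_3 = pool.alloc(256);
off_t block_4 = pool.alloc(256);   // the pool is now full

pool.free(block_1);                // (assumed API) free the 1st and 3rd blocks:
pool.free(block_3);                // 512 bytes are free, but not contiguous

off_t too_big = pool.alloc(512);   // likely fails: no contiguous 512-byte region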

If the benefits outweigh the drawbacks for your application, then the MemoryPool is what you need.

Note
This implementation is limited to signed 32-bit file sizes.
How To Use It?

Step 0

Your compiler must support at least the C++20 standard.
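For example, with GCC or Clang this means compiling with the -std=c++20 flag.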

Step 1

The first step is to select which C++ source code file will contain the Zakero MemoryPool implementation. Once the location has been determined, add the following to that file:

#define ZAKERO_MEMORYPOOL_IMPLEMENTATION
#include "path/to/Zakero_MemoryPool.h"

The macro ZAKERO_MEMORYPOOL_IMPLEMENTATION tells the header file to include the implementation of the MemoryPool.

All other files that use the MemoryPool only need to include the header.

#include "path/to/Zakero_MemoryPool.h"

Step 2

After creating a MemoryPool, it must be initialized before it can be used.
Once that is done, you can freely allocate and free memory from the MemoryPool.

This is an example of creating two std::array objects that are backed by the MemoryPool.

// Create a memory pool named "Array_Data" large enough for two arrays.
zakero::MemoryPool memory_pool("Array_Data");

constexpr size_t count = 100;
size_t array_size = sizeof(int64_t) * count;

memory_pool.init(array_size * 2);

// Allocate space for the first array and construct it in place.
off_t offset = memory_pool.alloc(array_size);
int64_t* ptr = memory_pool.addressOf(offset);
std::array<int64_t,count>* array_1 = new(ptr) std::array<int64_t,count>();

// Allocate and construct the second array the same way.
offset = memory_pool.alloc(array_size);
ptr = memory_pool.addressOf(offset);
std::array<int64_t,count>* array_2 = new(ptr) std::array<int64_t,count>();

// Do stuff with array_1 and array_2
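
Because the arrays were constructed with placement new, delete must not be used on them; run their destructors explicitly when they are no longer needed. (Keep each allocation's offset around if you intend to return its memory to the pool, which the example above does not do.)

#include <memory>  // std::destroy_at

// Explicitly destroy the placement-new'ed objects.
std::destroy_at(array_1);
std::destroy_at(array_2);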

Version
0.8.1
  • Bug fixes
  • API changes
0.8.0
  • Allocate and manage memory pool
  • Automatically expand the memory pool as needed
  • Share the region of memory using the file descriptor
Author
Andrew "Zakero" Moore
  • Original Author
Todo:

  • Add support for huge file sizes (64-bit / huge table fs)
    • Maybe toggled via a macro flag
  • Be able to defrag the memory pool
  • Pass a lambda to the resize() method so that how the memory is moved can be controlled. For example, if the memory was holding a texture, the texture could be "clipped" or have zeroed space added around the existing data.
  • Be able to initialize a MemoryPool with a Unix File Descriptor and the file size.

Macro Definition Documentation

◆ ZAKERO_MEMORYPOOL_IMPLEMENTATION

#define ZAKERO_MEMORYPOOL_IMPLEMENTATION

Defining this macro will cause the zakero::MemoryPool implementation to be included. This should only be done once, since compiler and/or linker errors will typically be generated if more than a single implementation is found.

Note
It does not matter if the macro is given a value or not, only its existence is checked.

◆ ZAKERO_MEMORYPOOL_PROFILER

#define ZAKERO_MEMORYPOOL_PROFILER

Defining this macro will cause the zakero::MemoryPool implementation to also include profiling data.

Be careful using this feature. The Zakero MemoryPool will create a new instance of the Zakero Profiler. If a zakero::MemoryPool is created before the intended Zakero Profiler, the profiler in zakero::MemoryPool will take precedence.

Note
It does not matter if the macro is given a value or not, only its existence is checked.
See also
ZAKERO_MEMORYPOOL_PROFILER_FILE

◆ ZAKERO_MEMORYPOOL_PROFILER_FILE

#define ZAKERO_MEMORYPOOL_PROFILER_FILE

Define this macro and set it to the file name that will contain the profiling data. If the profiler has already been set up elsewhere, then this macro will be ignored.

The default value is "./zakero_MemoryPool_profile.json".

Note
ZAKERO_MEMORYPOOL_PROFILER must be defined for this macro to have any effect.
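
Putting the two profiler macros together, the implementation file might look like the following sketch (the output path is illustrative):

// In the one source file that holds the implementation:
#define ZAKERO_MEMORYPOOL_IMPLEMENTATION
#define ZAKERO_MEMORYPOOL_PROFILER
#define ZAKERO_MEMORYPOOL_PROFILER_FILE "./my_app_memorypool_profile.json"
#include "path/to/Zakero_MemoryPool.h"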