Featured Blog | This community-written post highlights the best of what the game industry has to offer.
STL style vectors are convenient because they hide the details of internal buffer management, and present a simplified interface, but sometimes convenience can be a trap!
In my previous post I touched briefly on STL vectors with non-simple element types, and mentioned the ‘vector of vectors’ construct in particular as a specific source of memory woes.
In this post I’ll talk about this construct in more detail, explain a bit about why there is an issue, and go on to suggest a fairly straightforward alternative for many situations.
One of my standard interview questions for C++ programmers has been:
What can be done to optimize the following class?
```cpp
#include <vector>

class cLookup
{
    std::vector<std::vector<long> > _v;
public:
    long addLine(const std::vector<long>& entries)
    {
        _v.push_back(entries);
        return _v.size() - 1;
    }
    long getNumberOfEntriesInLine(long lineIndex) const
    {
        return _v[lineIndex].size();
    }
    long getEntryInLine(long lineIndex, long indexInLine) const
    {
        return _v[lineIndex][indexInLine];
    }
};
```
(Index type issues ignored for simplicity.)
Note that the question intentionally doesn’t tell you what needs to be optimised. Ideally, I want applicants to either:
ask what should be optimised, or
immediately spot what is ‘wrong’ with the code (and go on to make some suggestions for improvement)
It’s generally a good first reflex in any optimisation situation to find out more about exactly what’s going on (whether by profiling, measuring statistics, or just looking at how a piece of code is used) and, perhaps more fundamentally, what needs to be optimised, so asking for more information is a good initial response to pretty much any optimisation question.
But the vector of vectors in the above code is such an expensive and inefficient construct that if you’ve ever done something like this in the wild, or come across similar constructs during optimisations, then it should really stand out as a red flag, and I don’t think it’s unreasonable to just assume that this is what the question is asking about.
The class methods can be divided into a ‘lookup building’ part, which might only be used at preprocess or loading time, and a non-mutating ‘query’ part.
In real life we should usually be motivated and directed by some kind of performance measure, and there are then two issues potentially resulting from that vector of vectors, with our performance measures ideally telling us which (or both) of these issues we actually need to focus on:
inefficiencies during construction, or
run-time memory footprint
The construction part has the potential to be horribly inefficient, depending on the compilation environment.
It’s all about what the top level vector does with the contained objects when increasing buffer capacity, and, more specifically, whether or not C++11 ‘move semantics’ are being applied.
To increase buffer capacity, vectors allocate a new block of memory and then move the contents of the existing buffer across. Before C++11 move semantics the move part actually requires copy construction of elements in the new buffer from elements in the old buffer and then destruction of the old elements, but with C++11 move semantics this copy and destroy can be avoided.
We can see this in action with the following test code:
```cpp
#include <iostream>
#include <vector>

int i = 0;

class cSideEffector
{
    int _i;
public:
    cSideEffector()
    {
        _i = i++;
        std::cout << "element " << _i << " constructed\n";
    }
    cSideEffector(const cSideEffector& rhs)
    {
        _i = i++;
        std::cout << "element " << _i << " copy constructed (from element " << rhs._i << ")\n";
    }
    ~cSideEffector()
    {
        std::cout << "element " << _i << " destroyed\n";
    }
};

int main(int argc, char* argv[])
{
    std::vector<std::vector<cSideEffector> > v;
    v.resize(4);
    v[0].resize(1);
    v[1].resize(1);
    v[2].resize(1);
    v[3].resize(1);
    std::cout << "before resize(5), v.capacity() = " << v.capacity() << '\n';
    v.resize(5);
    return 0;
}
```
Building this with Clang 3.0, without the -std=c++0x option, I get:
element 0 constructed
element 1 copy constructed (from element 0)
element 0 destroyed
element 2 constructed
element 3 copy constructed (from element 2)
element 2 destroyed
element 4 constructed
element 5 copy constructed (from element 4)
element 4 destroyed
element 6 constructed
element 7 copy constructed (from element 6)
element 6 destroyed
before resize(5), v.capacity() = 4
element 8 copy constructed (from element 1)
element 9 copy constructed (from element 3)
element 10 copy constructed (from element 5)
element 11 copy constructed (from element 7)
element 1 destroyed
element 3 destroyed
element 5 destroyed
element 7 destroyed
element 8 destroyed
element 9 destroyed
element 10 destroyed
element 11 destroyed
But if I add the -std=c++0x option (enabling C++11 move semantics), I get:
element 0 constructed
element 1 constructed
element 2 constructed
element 3 constructed
before resize(5), v.capacity() = 4
element 0 destroyed
element 1 destroyed
element 2 destroyed
element 3 destroyed
So we can see, first of all, that without move semantics enabled each buffer reallocation triggers a copy and delete for all the existing elements. And then we can see that this is something that can potentially be resolved by turning on support for move semantics.
Note that this copy and delete usually won’t be an issue for basic types (where copy can be just a memory copy, and delete a null operation), but if the vector elements have their own internal buffer management (as is the case with vectors of vectors) then this results in a whole lot of essentially unnecessary heap operations, which can be both a significant performance hit and bad news for memory fragmentation.
Move semantics can be pretty cool if you can depend on the necessary compiler support, and you can then find a load of stuff on the web discussing this in more detail (e.g. here or here).
At PathEngine, however, we need to support clients building the SDK on a bunch of older compilers, and if there is some way to implement stuff like this lookup class without all this overhead, regardless of whether C++11 move semantics are actually available, then we definitely want to do this, and the lookup construction stage remains an optimisation target!
Actually, in practice we would never want to do something like cLookup::addLine() during loading. Something like this would normally only be called at preprocess generation time, with the possibility to store the resulting preprocess data to persistent files and load back in at runtime with much more direct data paths.
(In reality there would be some other stuff to support this in that cLookup class, this is a simplified example and doesn’t give the whole picture.)
And then, while we obviously want to avoid all those memory ops during preprocess generation, the real issue here is probably the implications of the vector of vectors construct on the SDK run-time, and in particular on run-time memory footprint.
It turns out that vectors of vectors are also very bad news for your system memory heap.
For something like PathEngine this is *worse* than the potential construction inefficiencies from unnecessary copying,
and this is an issue *whether or not* C++11 move semantics are being applied.
From the point of view of memory footprint there are two main problems here:
The fact that separate buffer allocations are made per entry in _v, and
Operating system and vector class memory overhead for each of those allocations
The situation will look something like this:
One buffer gets allocated at the top level, by _v, and filled in with the sub vector class data, and a bunch of other buffers then also get allocated by these sub vectors.
Note that each vector has four data members: a buffer pointer, a current size value (or end pointer), a current capacity value (or end pointer), and a pointer to an allocator object. It’s most common to see STL containers without allocation customisation, and so it can be a surprise to see the allocator pointer. When allocator customisation is not being used the allocator pointers will all be set to zero, but they nevertheless get stored once per lookup line, and help to bulk out the memory overhead.
If each of the four vector data members is a 32 bit value then this gives us a starting cost of 16 bytes per line in the lookup, most of which is not really necessary in this case.
And allocating an individual buffer for each line is also a bad thing, both because allocating a lot of buffers leads to memory fragmentation, and because there is a non-negligible amount of hidden overhead built in to each system memory allocation. Each individual buffer allocated from the system requires additional data for tracking the buffer in the system memory allocation heap, and will also most likely be aligned to some fixed value by the system (with memory buffer lengths actually being extended to a multiple of that fixed value).
As a result, for a lookup object created with short lines, the memory overhead for each line can easily exceed the amount of actual data being stored.
And we should also be aware of the possibility of vector overallocation.
In the code shown, the subvectors probably shouldn’t be overallocated because they are copy constructed from existing vectors (although I haven’t researched the ‘official’ situation for this in the standard!), but the top level vector will be overallocated (by a factor of 1.5, on average, as discussed in my previous post), and in the general case of vector of vector construction overallocation may not just mean buffers being larger than necessary, but also potentially that more buffers are allocated than actually needed.
In practice, in loading code, we set things up so that vectors get loaded without any overallocation, but it can be worth checking that this is definitely working as expected.
With or without overallocation it’s clear that there are some significant memory footprint and fragmentation issues, and these kinds of memory issues are then also usually performance issues, because of the need to keep memory caches working effectively, and because of heap management costs!
The key point, for this use case, is that we actually only ever need to add data to the end of the lookup, and so we just don’t need all that separate dynamic state tracking for each individual line.
And the solution is then to ‘collapse’ all those individual line buffers into a single buffer, giving us just one single buffer containing all the lookup entries concatenated together in order.
Since the lookup lines are not all the same length we also need some way to find the lookup entries for each line, and so we use a second buffer which indexes into the concatenated entries for this purpose.
The index buffer tells us exactly where each line starts in the concatenated entries buffer.
Note that we give the index buffer one entry per lookup line, plus one additional entry which points to the end of the concatenated entries. (This extra index entry enables us to treat all the lookup lines in the same way, and avoids an explicit test for first or last line.)
We could add code for this directly into the cLookup class fairly easily, but this is something we’re bound to come across with regards other data structures, and we should set things up to make it as easy as possible for us to avoid the vector of vectors construct, so why not create a new container class for this purpose, which can be used to replace vectors of vectors more generally?
We need to maintain a couple of dynamically resized buffers (with increasing size), and std::vector is actually just the thing for that (!), so we can start out with something like:
```cpp
#include <vector>

class cLookup
{
    cCollapsedVectorVector<long> _v;
public:
    long addLine(const std::vector<long>& entries)
    {
        _v.pushBackSubVector();
        for(long i = 0; i != static_cast<long>(entries.size()); ++i)
        {
            _v.pushBackToLastSubVector(entries[i]);
        }
        return _v.size() - 1;
    }
    void shrinkToFit()
    {
        _v.shrinkToFit();
    }
    long getNumberOfEntriesInLine(long lineIndex) const
    {
        return _v.subVectorSize(lineIndex);
    }
    long getEntryInLine(long lineIndex, long indexInLine) const
    {
        return _v[lineIndex][indexInLine];
    }
};
```
Note the addition of a shrinkToFit() method, which can be called when the calling code has finished adding lines, to avoid overallocation.
Vectors of vectors should be avoided if possible, and if data is only ever added at the end of the top level vector this is easy to achieve.
This is an example of an optimisation which should be applied pre-emptively, I think, because of the performance and memory footprint implications.
If you have vectors of vectors in your code, take a look at this right now and see if they can be replaced with something like cCollapsedVectorVector!
** This is a repost from upcoder.com, please check the existing comment thread for this post before commenting. **