malloc replacements?

We’ve built some great tools lately, including one to test fragmentation across different allocators.  I’m currently in the process of hooking up allocators such as tcmalloc, nedmalloc, Hoard, and jemalloc, as well as native platform-specific ones such as the Windows low-fragmentation heap.  I’m having to dig into some of their internals to pull out the data we need, which is taking a bit of time, but things are progressing well.

If anyone knows of other allocators we should be looking at, would you please leave a comment?  I would like to make sure we’re comparing all of our options.

15 thoughts on “malloc replacements?”

  1. Hugo Heden

    Maybe this article can be useful:

    Policy-Based Memory Allocation
    Fine-tuning your memory management
    Andrei Alexandrescu and Emery Berger

    I’ll quote part of the conclusion:

    “Configurable memory allocation is, as Emery’s research has shown, a practical, all-in-one alternative to both specialized and general-purpose allocators. Emery’s numbers (refer to the paper [6] for details) consistently show that allocators created with HeapLayers perform just as well as, or better than, monolithic allocators, be they general-purpose or specialized. Moreover, HeapLayers’ layered architecture encourages easier experimentation, simpler debugging, and unparalleled extensibility. Instead of being an oddball chapter of Modern C++ Design, memory allocation should have been one of the best success stories of policy-based design.”

  2. pd

    No idea if these are any good, just thought I’d try to help with a bit of Googling:

    A Memory Allocator

    A Comparison of Memory Allocators in Multiprocessors

    The Memory Fragmentation Problem: Solved? (1997)

    Keep up the great work

  3. Kelly

    This isn’t a drop-in replacement for malloc, but it could probably be adapted to do that relatively easily, though the algorithms sound similar to tcmalloc’s:

    Also based on some of the comments here:
    it might be possible to improve tcmalloc to better release mmapped blocks, at least on Linux and other Unix systems. Right now it uses madvise(MADV_DONTNEED), which, based on the man pages for madvise, probably just means the OS can swap it out rather than free it permanently, but I could be wrong.

  4. mirza

    Each alloc algorithm is trying to get the best deal between low fragmentation and speed. The native C++ lib on each platform chooses the one that should be best for general usage. I think the solution to fragmented memory lies in eliminating allocations of *single* objects on the heap. For example: std::vector<MyClass> is one big heap alloc, no fragmentation. std::vector<MyClass*> is many little allocs (fragmentation alert!). If MyClass contains string attribute(s), fragmentation alert again! In that case MyClass should contain only an (int) handle to the string, and the strings should be stored together in a string container (one implementation of which I sent to, but there are several available on the internet). A string container is, again, only two allocs (if done right), no matter how many strings, and therefore no fragmentation.

    So, what I am trying to say is that if C++ is used in a fragmentation-aware way, you don’t need to change the default alloc algorithm. If you allocate lots of individual objects on the heap, changing the alloc will not help you *much*. For example, people are crying that Firefox, open for 2 days and doing nothing, eats 1 GB or more of RAM. If you change the alloc, that number might be, say, 500 MB (albeit I don’t think so) … but that’s still bad! Not to mention that a less-fragmenting alloc will be slower by definition.

  5. mirza

    hm, html destroyed my example, I will change braces:

    “std::vector{MyClass} is one big heap alloc, no fragmentation. std::vector{MyClass*} is many little allocs (fragmentation alert!).”

  6. Steve Chapel

    mirza says: “For example, people are crying that Firefox, open for 2 days and doing nothing, eats 1 GB or more of RAM. If you change the alloc, that number might be, say, 500 MB (albeit I don’t think so) … but that’s still bad!”

    It sounds like those people are suffering from extreme memory leaks. After I’ve been using Firefox 2 for a week, it uses only 200 MB of RAM. After three days of using Firefox 3 beta 1, it’s still using only 126 MB. I can’t see how changing the allocator is going to reduce memory usage by hundreds of megabytes. It won’t do anything noticeable for people having the most severe problems.

  7. pavlov Post author

    Steve: I disagree. Given the growth patterns of our fragmentation, the app is going to keep taking up more and more memory over time without actually leaking. If you never get Firefox 2 over 200 MB, then I suspect you aren’t loading very many windows or tabs at once. I can get Firefox 2 over 200 MB _really_ quickly.

  8. Steve Chapel

    It’s not that Firefox never gets above 200 MB. I regularly see it reach 500 MB when I have many tabs open with pages that use different plugins. I usually close most of my tabs on a regular basis, so memory use then drops to under 200 MB. If I run the Browser Mem Buster Test, which always keeps ten tabs open, memory use stays under 200 MB the entire time and drops to about 100 MB after closing all tabs but one. In my experience, fragmentation and memory leaks combined never seem to be taking more than around 100 MB total, even after a week of use.

    I suppose I need to ask: How many tabs are we talking about? Dozens? Hundreds? And on what OS are you seeing this extreme fragmentation? I never see it on Windows XP. Let’s get specific.

