Miłosz Orzeł

.net, js, html, arduino, java... no rants or clickbaits.

How fast is .NET Garbage Collector? Part 1.

.NET GC is very fast! Well... I hope you need more than this reassuring statement. If so, read on :) I will show you some test results to prove that I’m not lying, but first a quick reminder about what GC is:

Garbage Collector is a fundamental component of the .NET CLR. It takes care of freeing managed heap memory so a programmer doesn’t have to think about deallocation. Contrary to popular belief, GC handles memory occupied by both reference and value types. Why? Because quite often space for things like structures or integers is allocated on the heap. It happens, for example, when a value type is an item of an array or a field in a class instance.
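
Here is a minimal illustration of value types ending up on the managed heap (the names are invented for the example):

using System;

struct Point
{
    public int X;
    public int Y;
}

class Shape
{
    public Point Origin; // a value-type field stored inline within the heap-allocated Shape instance
}

class Program
{
    static void Main()
    {
        int local = 42;              // plain local value – lives on the stack (or in a register)
        int[] numbers = new int[10]; // the ten ints live inside the array object on the heap
        Shape shape = new Shape();   // shape.Origin is part of the Shape object on the heap

        Console.WriteLine(local + numbers[0] + shape.Origin.X);
    }
}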

GC in .NET uses a mark and sweep algorithm: it walks through the object graph starting from root elements (such as statics, references on the stack or in registers) and marks every object it can reach. When the walk is done, GC knows it can safely free the memory of objects that have not been marked – because, being unreachable, they are useless for the application.
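
To make the reachability idea concrete, here's a tiny sketch (illustrative names, not the GC's actual bookkeeping):

class ReachabilityDemo
{
    static byte[] cache = new byte[1024]; // static field – a GC root, always reachable

    static void Demo()
    {
        byte[] temp = new byte[1024]; // reachable through the 'temp' local (a stack root)...
        temp = null;                  // ...until the reference is cleared – the buffer is now unreachable
        // On the next collection GC starts its walk from the roots; the unreferenced buffer
        // is never marked, so its memory can be reclaimed. The 'cache' array survives.
    }
}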

For efficiency purposes GC supports the notion of generations. When an object is created it belongs to Gen 0 (except for so called large objects). If a Gen 0 object survives a GC cycle it’s moved to Gen 1. If it survives one more cycle it goes to Gen 2 and stays there (there’s no Gen 3¹). Most objects are short lived – they don’t get past Gen 0 or Gen 1, so .NET GC tries to free memory from lower generations first. It does a Gen 2 collection (aka full collection) far less often than a Gen 0 collection. If an object is big – above 85000² bytes – it is allocated on the Large Object Heap (LOH) and goes straight to Gen 2. Treating big objects the same way as small objects would have a negative impact on heap defragmentation³, because moving such objects around is time-consuming.
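
You can watch this promotion happen yourself with GC.GetGeneration – a minimal sketch (run it as a Release build without a debugger, otherwise object lifetimes may be extended):

using System;

class GenerationsDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0 – freshly allocated
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // 1 – survived one collection
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // 2 – survived another one
        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // still 2 – there is no higher generation
    }
}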

GC supports different modes for workstation and server configurations, is able to do some work in background threads, has to consider special cases like finalizers or pinned memory buffers, and its settings vary between platforms (e.g. CLR on a PC is not the same as the one on Xbox)… well, let’s stop here – I’ve promised a “quick reminder”!
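
If you are curious which flavor your own process uses, here is a small sketch (server GC is normally switched on through configuration, e.g. a <gcServer enabled="true"/> element under <runtime> in app.config; the exact options available depend on the CLR version):

using System;
using System.Runtime;

class GcModeDemo
{
    static void Main()
    {
        // LatencyMode gives a hint about the GC flavor currently in use
        // (e.g. Interactive for concurrent workstation GC, Batch otherwise).
        Console.WriteLine("GC latency mode: " + GCSettings.LatencyMode);
    }
}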

Ok, time for the test! 

In this first post I will show you how fast .NET GC can handle large arrays of value (non-reference) types. The test will involve an array of bytes that occupies about two gigabytes of memory. Despite its large size, such an object does not put much pressure on the Garbage Collector. This is because the only reference GC has to check is the one to the array itself. If the array becomes unreachable, all memory occupied by its elements can be safely reused. Additionally, such an array, being part of the LOH, is not copied when it survives a collection cycle (moving such a large object would be too expensive)… In the second installment of this article I will show you how GC handles more complex scenarios. We will examine performance for an object array and, more importantly, a tree of objects with thousands of references…

Note about the test environment: I ran the test code on a desktop PC with an Intel Core i5-2400 3.1 GHz quad-core CPU and 12 GB of DDR3-1333 RAM, running Windows 7 Ultimate x64. The program was a .NET 4.0 console application compiled in Release mode and run without a debugger attached.

Here’s the test code:

static void TestSpeedForLargeByteArray()
{
    Stopwatch sw = new Stopwatch();

    Console.Write(string.Format("GC.GetTotalMemory before creating array: {0:N0}. Press any key to create array...", GC.GetTotalMemory(true)));
    Console.ReadKey();
    byte[] array = new byte[2000 * 1000 * 1000]; // About 2 GB
   
    sw.Start();
    for (int i = 0; i < array.Length; i++)
    {
        array[i] = 1; // Touch array elements to fill working set              
    }          
    Console.WriteLine("Setup time: " + sw.Elapsed);

    Console.WriteLine(string.Format("GC.GetTotalMemory after creating array: {0:N0}. Press Enter to set array to null...", GC.GetTotalMemory(true)));
    if (Console.ReadKey().Key == ConsoleKey.Enter)
    {
        Console.WriteLine("Setting array to null");
        array = null;
    }
               
    sw.Restart();
    GC.Collect();                 
    Console.WriteLine("Collection time: " + sw.Elapsed);
    Console.WriteLine(string.Format("GC.GetTotalMemory after GC.Collect: {0:N0}. Press any key to finish...", GC.GetTotalMemory(true)));

    Console.WriteLine(array); // To avoid compiler optimization...
    Console.ReadKey();
}

As you can see, the test is very simple. The code uses two methods of the static GC class: GC.GetTotalMemory and GC.Collect. The former returns the amount of allocated managed memory and the latter forces the Garbage Collector to do its job and free unused memory. The only thing that might be surprising is the loop that touches the array items. Without it you would witness a “strange” phenomenon: after the array is created, GC.GetTotalMemory reports about 2 GB, but you will not see the memory usage increase in Task Manager! This is because taskmgr.exe shows Working Set data. You can run a more advanced tool, Resource Monitor (resmon.exe), to see what’s happening:

This is the screenshot before the “touch”:

[Screenshot: Memory usage before access to array items]

And this is the one after the loop is executed:

[Screenshot: Memory usage after access to array items]
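
You can observe the same committed-vs-touched difference programmatically, without perfmon or resmon – a rough sketch (a smaller array is enough; exact numbers will differ on your machine):

using System;

class WorkingSetDemo
{
    static void Main()
    {
        byte[] array = new byte[200 * 1000 * 1000]; // ~200 MB committed, pages not touched yet

        Console.WriteLine("Managed heap: {0:N0}", GC.GetTotalMemory(false));
        Console.WriteLine("Working set:  {0:N0}", Environment.WorkingSet);

        for (int i = 0; i < array.Length; i++)
        {
            array[i] = 1; // touching the pages brings them into the working set
        }

        Console.WriteLine("Working set after touching pages: {0:N0}", Environment.WorkingSet);
    }
}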

Here are the test results (yeah, finally):

GC.GetTotalMemory before creating array: 41,568. Press any key to create array...
Setup time: 00:00:01.0633903
GC.GetTotalMemory after creating array: 2,000,053,864. Press Enter to set array to null...
Setting array to null
Collection time: 00:00:00.1443391
GC.GetTotalMemory after GC.Collect: 53,800. Press any key to finish...

What you can see here is that it took GC just around 150 milliseconds to free about 2 GB of memory. Nice, huh? You can appreciate the speed especially when you compare it with the 1 second it took just to iterate over the array!

Below is a screenshot from Performance Monitor (perfmon.exe) running a couple of .NET memory counters:

[Screenshot: Performance Monitor with managed memory counters]

Our array is a big object (well over the LOH threshold) – hence after it is created the Large Object Heap size increases while the Gen 0 heap size remains flat.
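
The counters perfmon displays come from the ".NET CLR Memory" performance counter category, so you can also read them from code – a hedged sketch (instance naming and refresh timing can vary between CLR versions; the values are typically updated on collections):

using System;
using System.Diagnostics;

class ClrMemoryCountersDemo
{
    static void Main()
    {
        string instance = Process.GetCurrentProcess().ProcessName;

        var lohSize  = new PerformanceCounter(".NET CLR Memory", "Large Object Heap size", instance, true);
        var gen0Size = new PerformanceCounter(".NET CLR Memory", "Gen 0 heap size", instance, true);

        byte[] big = new byte[100 * 1000 * 1000]; // large enough to land on the LOH
        GC.Collect();                             // counter values are refreshed on collections

        Console.WriteLine("Large Object Heap size: {0:N0}", lohSize.NextValue());
        Console.WriteLine("Gen 0 heap size:        {0:N0}", gen0Size.NextValue());

        GC.KeepAlive(big);
    }
}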

Below are the results of a GC.Collect run when the reference to the array was not set to null (i.e. a key other than Enter was pressed):

GC.GetTotalMemory before creating array: 41,568. Press any key to create array..
Setup time: 00:00:01.0385779
GC.GetTotalMemory after creating array: 2,000,053,864. Press Enter to set array to null...
Collection time: 00:00:00.0001287
GC.GetTotalMemory after GC.Collect: 2,000,053,824. Press any key to finish...

A fraction of a millisecond. Totally negligible! Why am I even mentioning a test for a situation where memory is not collected? Well, in part two you will see that GC usually has more work to do when heap items survive a collection cycle…

I hope to post the second part of the article in about a week or two. No promises though – you know, life… ;)

Update 26.07.2014: I've changed the screenshot from perfmon (Gen 0 and Gen 1 scale is bigger now).

Update 24.06.2014: I wrote the second part of this article a few days ago. Click here to read it.

1. You can use the GC.MaxGeneration property to check it.

2. The LOH threshold is an implementation detail. Most sources mention 85 KB as the limit but it's not always the case – an array of doubles with as few as 1000 items goes on the LOH (on x86)... (see the snippet after these footnotes).

3. .NET 4.5 introduces LOH improvements for better fragmentation prevention through balancing and enhanced free list usage. Future releases will probably have a compaction option too.
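
A quick way to verify footnote 2 about arrays of doubles – a sketch (the threshold is an implementation detail, so the result depends on CLR version and bitness; objects allocated on the LOH are reported as Gen 2 right away):

using System;

class LohThresholdDemo
{
    static void Main()
    {
        double[] small = new double[999];
        double[] large = new double[1000];

        Console.WriteLine(GC.GetGeneration(small)); // typically 0 – regular small object heap
        Console.WriteLine(GC.GetGeneration(large)); // typically 2 on x86 – the array landed on the LOH
    }
}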
