Michael Kopp About the Author

Michael is a Technical Product Manager at Compuware. Reach him at @mikopp

The impact of Garbage Collection on Java performance

In my last post I explained what a major Garbage Collection is. While a major collection certainly has a negative impact on performance, it is not the only thing we need to watch out for, and in the case of the CMS we might not always be able to distinguish between major and minor GCs. So before we start tuning the garbage collector, we first need to know what we want to tune for. From a high level there are two main tuning goals.

Execution Time vs. Throughput

The first thing we need to clarify is whether we want to minimize the time the application needs to respond to a request or to maximize throughput. As with every other optimization these are competing goals and we can only fully satisfy one of them. If we want to minimize response time, we care about the impact a GC has on the response time first and on resource usage second. If we optimize for throughput, we don't care about the impact on a single transaction. That gives us two main things to monitor and tune for: runtime suspension and Garbage Collection CPU usage. Regardless of which we tune for, we should always make sure that a GC run is as short as possible. But what determines the duration of a GC run?

What makes a GC slow?

Although it is called Garbage Collection, the amount of collected garbage has only an indirect impact on the speed of a run. What actually determines it is the number of live objects. To understand this, let's take a quick look at how Garbage Collection works.

Every GC will traverse all live objects beginning at the GC roots and mark them as alive. Depending on the strategy, it will then copy these objects to a new area (copying GC), move them (compacting GC) or put the free areas into a free list. This means that the more objects stay alive, the longer the GC takes. The same is true for the copy and compacting phases: the more objects stay alive, the longer they take. The fastest possible run is one in which all objects are garbage!
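To make the mark phase concrete, here is a minimal, purely illustrative sketch in plain Java (not JVM internals): it traverses everything reachable from a set of roots, and its cost is proportional to the number of live objects, not to the amount of garbage.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the mark phase: work done is proportional to live objects.
public class MarkPhaseDemo {

    static class Node {
        final List<Node> refs = new ArrayList<>();
    }

    // Traverse every object reachable from the roots and "mark" it.
    static Set<Node> mark(List<Node> roots) {
        Set<Node> marked = new HashSet<>();
        Deque<Node> pending = new ArrayDeque<>(roots);
        while (!pending.isEmpty()) {
            Node n = pending.pop();
            if (marked.add(n)) {          // mark each object once
                pending.addAll(n.refs);   // follow its outgoing references
            }
        }
        return marked;                    // everything NOT in this set is garbage
    }

    public static void main(String[] args) {
        Node root = new Node();
        Node child = new Node();
        root.refs.add(child);
        Node garbage = new Node();        // allocated but unreachable: never visited
        System.out.println("live objects: " + mark(List.of(root)).size()); // prints 2
    }
}
```

Note that the unreachable object costs the mark phase nothing at all; only the two reachable ones are visited.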

With this in mind let’s have a look at the impact of garbage collections.

Impact on Response Time

Whenever a GC is triggered, all application threads are stopped. In my last post I explained that this is true for all GCs to some degree, even for so-called minor GCs. As a rule, every GC except the CMS (and possibly the G1) will suspend the JVM for the complete duration of a run.

The easiest way to measure impact on the response time is to use your favorite tool to monitor for major and minor collections via JMX and correlate the duration with the response time of your application.

The problem with this is that we only look at aggregates, so the impact on a single transaction is unknown. In this picture it does seem as if there were no impact from the garbage collections. A better way is to use the JVM TI interface to get notified about stop-the-world events. This way the response-time correlation is 100% correct, whereas otherwise it depends on the JMX polling frequency. In addition, measuring the impact that the CMS has on response time is harder, as its runs do not stop the JVM for the whole time, and since Update 23 the JMX bean no longer reports the real major GCs. In this case we need to use either -verbose:gc or a solution like dynaTrace that can accurately measure runtime suspensions via native agent technology.
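As a sketch of what such a tool reads under the hood, the standard `java.lang.management` API exposes the aggregate collection counts and accumulated pause time per collector; this is exactly the JMX data whose polling-interval limitation is discussed above.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Polls the same JMX beans a monitoring tool would read. Caveat from the
// text: these are aggregates between two polls, so the impact on any
// single transaction stays invisible.
public class GcPollDemo {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Typically one bean for the young-generation collector and one
            // for the old-generation collector; names depend on the GC chosen.
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```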

Here we see a constant but small impact on average, but the impact on specific PurePaths is sometimes in the 10 percent range. Optimizing for minimal response-time impact has two sides. First we need to get the sizing of the young generation just right. Optimal would be that no object survives its first garbage collection, because then the GC would be fastest and the suspension the shortest possible. As this optimum cannot be achieved, we need to make sure that no object gets promoted to the old space and that an object dies as young as possible. We can monitor that by looking at the survivor spaces.

This chart shows the survivor space utilization. It always stays well above 50%, which means that a lot of objects survive each GC. If we were to look at the old generation, it would most likely be growing, which is obviously not what we want. Getting the sizing right also means using the smallest young generation possible. If it is too big, more objects will be alive and need to be checked, and thus a GC will take longer.
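A minimal way to watch the survivor spaces from inside the JVM is the `MemoryPoolMXBean` API. Note that pool names vary by collector (e.g. "PS Survivor Space", "G1 Survivor Space"), so matching on the substring "Survivor" is an assumption that happens to hold for the common HotSpot collectors.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

// Prints current survivor-space utilization; consistently high values
// indicate many objects surviving each young-generation collection.
public class SurvivorUsageDemo {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Survivor")) {
                MemoryUsage u = pool.getUsage();    // current usage snapshot
                // getMax() may be -1 (undefined), so guard the division
                double pct = u.getMax() > 0 ? 100.0 * u.getUsed() / u.getMax() : 0.0;
                System.out.printf("%s: %.1f%% used%n", pool.getName(), pct);
            }
        }
    }
}
```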

If after the initial warmup phase no more objects get promoted to old space, we will not need to do any special tuning of the old generation. If only a few objects get promoted over time and we can take a momentary hit on response time once in a while we should choose a parallel collector in the old space, as it is very efficient and avoids some problems that the CMS has. If we cannot take the hit in response time, we need to choose the CMS.

The Concurrent Mark and Sweep collector will attempt to have as little response-time impact as possible by working mostly concurrently with the application. There are only two scenarios where it will fail. Either we allocate too many objects too fast, in which case it cannot keep up and triggers an "old-style" major GC; or no object can be allocated due to fragmentation. In such a case a compaction or a full GC (serial old) must be triggered. Compaction cannot be done concurrently with the application and will suspend the application threads.

If we have to use a continuous heap and need to tune for response time we will always choose a concurrent strategy.


Impact on Throughput

Every GC needs CPU. In the young generation this is directly related to the number and duration of the collections. In the old space and in a continuous heap things are different. While the CMS is a good idea to achieve low pause times, it will consume more CPU due to its higher complexity. If we want to optimize throughput without having any SLA on a single transaction, we will always prefer a parallel GC to the concurrent one. There are two thinkable optimization strategies: either provide enough memory so that no objects get promoted to the old space and old generation collections never occur, or keep the number of live objects as small as possible at all times. It is important to note that the first option does not imply that increasing memory is a solution for GC-related problems in general. If the old space keeps growing or fluctuates a lot, then increasing the heap does not help; it will actually make things worse. While GC runs will occur less often, they will be that much longer, as more objects might need checking and moving. As a GC becomes more expensive with the number of live objects, we need to minimize that factor.
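A rough, first-order way to quantify this GC CPU cost is to relate the total reported GC time to JVM uptime. This measures accumulated stop-the-world wall-clock time rather than true CPU cycles (concurrent CMS work is not fully captured), so treat it as an approximation only.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Approximates GC overhead as total reported collection time divided by
// JVM uptime. For throughput tuning, lower is better.
public class GcOverheadDemo {
    public static void main(String[] args) {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();
            if (t > 0) gcMillis += t;   // -1 means "unsupported", so skip it
        }
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.printf("GC overhead: %.2f%% of %d ms uptime%n",
                100.0 * gcMillis / Math.max(uptime, 1), uptime);
    }
}
```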

Allocation Speed

The last and probably least known impact of a GC strategy is the allocation speed. While a young generation allocation will always be fast, this is not true in the old generation or in a continuous heap. In these two cases continued allocation and garbage collection lead to memory fragmentation.

To solve this problem the GC will do a compaction to defragment the area. But not all GCs compact every time, and some do it only incrementally. The reason is simple: compaction would again be a stop-the-world event, which GC strategies try to avoid. The Concurrent Mark and Sweep of the Sun JVM does not compact at all. Because of that, these GCs must maintain a so-called free list to keep track of free memory areas. This in turn has an impact on allocation speed. Instead of just allocating an object at the end of the used memory, the JVM has to go through this list and find a big enough free area for the newly allocated object.
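To see why free-list allocation is slower than simply bumping a pointer, here is a toy first-fit free list. This is purely illustrative, not how the JVM implements it; it also shows how freeing without compaction fragments the heap until a large allocation fails even though enough total memory is free.

```java
import java.util.ArrayList;
import java.util.List;

// Toy first-fit free list, as a non-compacting collector must maintain one.
public class FreeListDemo {

    static class Block {
        long offset;
        long size;
        Block(long offset, long size) { this.offset = offset; this.size = size; }
    }

    final List<Block> free = new ArrayList<>();

    FreeListDemo(long heapSize) {
        free.add(new Block(0, heapSize)); // initially one big hole
    }

    // First-fit: scan the list until a hole is big enough. Returns the
    // offset, or -1 on failure -- the point where a real JVM must compact.
    long allocate(long size) {
        for (Block b : free) {
            if (b.size >= size) {
                long result = b.offset;
                b.offset += size;
                b.size -= size;
                return result;
            }
        }
        return -1;
    }

    // Freeing just adds another hole; without compaction the list fragments.
    void free(long offset, long size) {
        free.add(new Block(offset, size));
    }

    public static void main(String[] args) {
        FreeListDemo heap = new FreeListDemo(100);
        long a = heap.allocate(40);            // at offset 0
        heap.allocate(40);                     // at offset 40
        heap.free(a, 40);                      // hole at the front again
        System.out.println(heap.allocate(30)); // prints 0: fits in the freed hole
        System.out.println(heap.allocate(25)); // prints -1: 30 bytes free, but fragmented
    }
}
```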

This impact is the hardest to diagnose, as it cannot be measured directly. One indicator is a slowdown of the application without any other apparent reason, only for it to become fast again after the next major GC. The only way to avoid fragmentation is to use a compacting GC, which in turn leads to more expensive GC runs. Beyond that, all we can do is avoid unnecessary allocations while keeping overall memory usage low.


Finally, allocate as much as you like, but forget as soon as possible, ideally before the next GC run. Still, don't overdo it either; there is a reason why using StringBuilder is more efficient than simple String concatenation. And finally, keep your overall memory footprint, and especially your old generation, as small as possible. The more objects you keep alive, the worse the GC will perform.
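The StringBuilder remark can be made concrete: concatenation in a loop produces a fresh intermediate String (plus its backing array) on every iteration, all of it garbage one iteration later, while a StringBuilder reuses one growing buffer. A small sketch:

```java
// Both methods produce the same String, but with very different allocation
// behavior -- the "don't overdo it" advice from the text in code form.
public class ConcatDemo {

    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i;             // new String object every iteration
        }
        return s;
    }

    static String build(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);          // reuses one buffer, growing it occasionally
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same result, far fewer short-lived allocations:
        System.out.println(concat(5).equals(build(5))); // prints true
    }
}
```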


  1. I like the article except the conclusion

    > After all, doing less allocations is by far the most
    > effective optimization of your garbage collector
    > performance.

    Doing fewer allocations means to store objects in member variables for reuse. But this way you build a graph of live objects. A lot of live objects slow down the GC as you said. And those old generation objects are expensive to collect as you said.

    So the conclusion for me is: instantiate as you like but forget early. Objects should not leave the local scope if possible.

    Also, as you know, in newer versions of Java short-living objects are allocated on the stack. This means zero allocation and destruction costs, no GC involved.

  2. Thanks for the hint, I would say both are true and therein lies the crux of an optimal application.

    The conclusion maybe should have been, “allocate as you like, but only as much so that the object is forgotten before the next GC run.” Which means before your space or TLA is full.

    As for allocations on the stack having no allocation and destruction cost, that is not entirely true. The TLA which is used for thread local allocation is neither unlimited in size nor can we have TLAs for 100s of threads. It is correct to say that the allocation for a limited number of objects per thread is very very cheap. The TLA will still be garbage collected, but in the optimum contains no live objects at that time. Which is what the G1 will be all about.

    So fewer allocations do help. Fewer allocations do not mean caching objects or having more objects. But instead of allocating the same thing multiple times in the iterations of a loop or the recursions of a function, you should allocate it once for the duration of the loop or method chain.
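    The point about loops can be sketched as follows (a contrived example; the `StringBuilder` reuse here stands in for any allocation that is identical across iterations):

    ```java
    import java.util.List;

    // Hoist an allocation that is identical in every iteration out of the
    // loop. The object still dies young and is not cached beyond the method;
    // it is simply allocated once instead of n times.
    public class HoistDemo {

        // Allocates a fresh StringBuilder per element.
        static int wasteful(List<String> items) {
            int total = 0;
            for (String item : items) {
                StringBuilder sb = new StringBuilder(); // n allocations
                sb.append(item).append('!');
                total += sb.length();
            }
            return total;
        }

        // One allocation, reused and reset inside the loop.
        static int hoisted(List<String> items) {
            int total = 0;
            StringBuilder sb = new StringBuilder();     // 1 allocation
            for (String item : items) {
                sb.setLength(0);                        // reuse the buffer
                sb.append(item).append('!');
                total += sb.length();
            }
            return total;
        }

        public static void main(String[] args) {
            List<String> items = List.of("a", "bb");
            System.out.println(wasteful(items) == hoisted(items)); // prints true
        }
    }
    ```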

  3. Hi shogg,

    I updated the blog according to your hint, again thanks for pointing it out.

  4. thx for the article.

    btw, StringBuilder is more efficient than StringBuffer, the latter of which is synchronized for multi-threaded operation…

  5. Yeah I know, fixed it. This is turning into a community effort. Keep them coming…

  6. Michael,

    Thank you for this very informative article. I have been looking into tuning my production JVM which uses CMS. One thing that baffles me most and the explanation of which I can find nowhere in Sun documentation is that even though I allocated 2 GB heap space using Xmx option, the young generation by default gets only 16 MB! For the life of me, I can’t figure out why the default allocation to young generation by CMS is so ridiculously low. Is there a good reason for it or should I up the young generation size by using NewRatio? Would you have any idea about this?


  7. Hi Danny,

    You might want to read the memory white paper (http://www.oracle.com/technetwork/java/javase/tech/memorymanagement-whitepaper-1-150020.pdf) it is the best base for garbage collector related information in the sun documentation.

    As for your predicament: I cannot fathom how this would depend on CMS, but did you set Xms as well? Even if you give it 2 GB max, it will still start at the lowest default value. If you use the default settings you could end up with 16 MB Eden in the end, but the young space should have up to 256 MB overall.

    If you want, send me your complete Java options and the platform that you use, and I will take a look at it.

    // Mike

  8. Mike,

    Thank you for the link to the white paper. I will go through it. That 16 MB does seem to depend on CMS. If I don’t specify -XX:+UseConcMarkSweepGC in JVM options, then Eden space allocated by default is 256MB. It is only when I specify CMS, that it defaults to 16 MB, and never seems to rise. The server does not have very heavy traffic but it is heavy enough that I would expect JVM to use more than 16 MB of Eden space. I specified Xms same as Xmx which is 2 GB (to avoid heap resizing). Xms=2048m Xmx=2048m and -XX:+UseConcMarkSweepGC are the only JVM options explicitly specified.

    To me it seems what you said is right that Eden space usage should rise but I don’t see it even when the server has been in continuous use for one month. It is permanently stuck at 16MB (unless I specify NewRatio).


  9. The platform is Red Hat Enterprise Linux 5.

  10. Andrew Rothwell says:

    In these days of multi-core devices, why doesn't someone provide a dedicated processor for GC, and make sure there is a concomitant shared-memory sub-system designed to handle applications running full tilt on 1..N-1 cores, with GC running on the Nth core? Or some architecture like this.

    Hi Danny,

    I have verified that and it seems to be a quirk in 1.5. 1.5 has a ratio (found by experiment) of 1:128, whereas 1.6 has 1:64 with CMS, and 1:3 if I leave the CMS out. I cannot find any documentation for that right now.

    It kind of makes sense to have a smaller young generation with the CMS, as the usage of the CMS suggests having "middle-lived objects" in the old space.

    I would suggest setting the NewRatio explicitly. As I stated in the article, getting the sizing right is the most important tuning mechanism anyway, so you should not leave it up to chance unless you have a very simple application.

    Sorry that I could not help you more here.

    // Mike


  13. Another approach is store objects off-heap and avoid the Garbage collector.
    Depending on the performance goals you can use a disk store or something like Ehcache’s off heap store (aka BigMemory)


  14. Very good article with lots of important information and graphics. It's quite useful for anyone who wants to understand GC.




  17. Garbage collection is not free. The program has to be suspended while the garbage collector examines all of the program’s memory, and this happens unpredictably. For programs that need to work in real-time, such as games, this can be problematic. Garbage collection of the substantial amount of memory that a game might use takes a long time, so to the player, it would seem like the game had frozen.


