About the Author: Wolfgang Gottesheim

Wolfgang has several years of experience as a software engineer and research assistant in the Java enterprise space. He currently contributes to the strategic development of the Compuware dynaTrace enterprise solution as a Technology Strategist in the Compuware APM division’s Center of Excellence. He focuses on monitoring and optimizing applications in production. Find him at @gottesheim.

The DevOps Way to Solving JVM Memory Issues

The killer in any IT operation is unplanned work. Unplanned work goes by many names: firefighting, war rooms, Sev 1 incidents. The bottom line is that Operations must stop whatever planned work it was doing to deal with the incident, which means little or no normal work gets done. It is a scenario most of you will be familiar with: your application servers are humming along happily until suddenly, without an obvious reason, memory usage starts to increase, soon followed by longer garbage collection suspensions that finally force you to restart the application.

The operations team is typically unaware of the actual impact on end users (other than a service being down), and it lacks both the data and the time to investigate the issue further. Because communication between the traditional silos of operations, testing and development is often less than ideal, a scheduled restart in a “low impact” timeframe is usually the easiest solution and, over time, turns into something resembling a “Production Best Practice”. This adds to the operations team’s workload, because unplanned work becomes unnecessary preventive work, and the restart itself becomes a suspect every time there is a problem with the application. Wouldn’t it be better to actually fix the issues instead of just working around them? Shouldn’t there be a shared understanding across all teams responsible for an application that problems get fixed as fast as possible and prevented in the future?

In this blog post we walk you through a case study in which a memory leak in a 3rd party plugin impacted end user performance. Instead of hiding the problem with preventive JVM restarts, we applied DevOps best practices that fostered collaboration between Ops, Test and Dev.

The Rise of 3rd Party Plugins

While applications in the early days of computing were monolithic behemoths, their modern successors, whether desktop or browser based, usually provide extension points that allow developers to extend their functionality with plugins. Such plugins can be used on both the client and the server side. Familiar examples from everyday use include browser plugins for IE and Firefox such as Skype, Flash or Java, and add-ons for Outlook or Excel. A popular server-side example is plugins for WordPress, the platform we use for this blog. We use plugins that automatically filter out spam comments and provide various ways for you to share our posts on your social network of choice.

From an application owner’s perspective, the biggest benefit of a plugin-based architecture is the increase in flexibility – you can meet changing needs by adding new plugins instead of worrying about upgrading a much larger system. But by using plugins, you grant a (more or less well-known) third party access to your data and systems, which frequently raises privacy concerns as well as security issues. An example of this is the Java browser plugin’s gaping security holes. While best practices such as sandboxing help with these risks, these discussions typically focus on the client side; the performance impact of plugins on your application’s server side is often missed. We have covered the possible effects of client-side plugins before, and will focus on the server side in this blog post.

The trigger for Operations

Let’s get back to the memory issue that forced us – the R&D lab of Compuware APM – to get Dev and Ops together and work on an alternative to regularly scheduled restarts of our application servers. Across all Compuware APM product lines, we use a Salesforce-based case management solution called Case360 to support our customers. Within our R&D organization we internally use Atlassian JIRA, a popular Java-based bug tracking solution that has grown into a platform for agile software development through its plugin ecosystem. New issues raised by customers, as well as changes to existing issues, have to be synchronized between Case360 and JIRA. Since this is not an out-of-the-box capability of JIRA, we went looking and found a plugin that met our requirements and worked well for us for the first several months.

Fast forward a couple of months! Seemingly out of the blue, we began to see performance issues with JIRA. Our production monitoring alerted Ops to decreased end user performance, with some users aborting actions due to very long response times. Nobody had called in yet – but the early warning system indicated that users would soon complain.

Ops and Dev working together

Looking at the infrastructure monitoring data showed Ops that the root cause of the slower performance was high garbage collection time on the JIRA server. The pattern of GC times, together with the JVM heap consumption, indicated a “classic” memory leak.
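
If you want to watch for this signature yourself, the minimal sketch below (not the monitoring setup we used; the class name and sampling interval are arbitrary) polls the JVM’s standard JMX beans for heap usage and accumulated garbage collection time. Heap usage that keeps climbing even after collections, combined with steadily growing GC time, is the classic signature of a leak.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class GcWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Current heap usage as reported by the JVM
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

                // Total time (in ms) spent in all garbage collectors since JVM start
                long gcTimeMs = 0;
                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcTimeMs += gc.getCollectionTime();
                }

                System.out.printf("heap used: %d MB (committed: %d MB), total GC time: %d ms%n",
                        heap.getUsed() / (1024 * 1024), heap.getCommitted() / (1024 * 1024), gcTimeMs);

                Thread.sleep(60_000); // sample once per minute
            }
        }
    }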

Ops started to investigate and worked with our performance engineering team to establish a connection between the start of the issues and other changes, but came up blank. No new plugins had been installed recently, and no updates to the underlying operating system, the Java runtime, or JIRA itself had been applied in the preceding weeks.

Given the ever-increasing memory usage, an out-of-memory error (OOM) was unavoidable, and since we were still in business hours, a “controlled” restart was not a good option either. The OOM duly happened. In this case our monitoring solution automatically triggered a full memory dump that allowed us to inspect the heap’s contents at the time of the error. When analyzing the dump, we noticed a number of large object instances, as shown in the following screenshot:
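
By the way, capturing a heap dump on OOM does not strictly require a monitoring product: the standard HotSpot options below (the dump path and application jar are placeholders) write an .hprof heap dump the moment an OutOfMemoryError is thrown, which you can then open in a heap analyzer of your choice.

    # Illustrative JVM options; the dump path and jar name are placeholders
    java -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/dumps/jira \
         -jar your-application.jar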

Automatically triggered Memory Dump shows objects that consumed most of the heap space

Looking at the class names of these instances, we were able to identify the actual culprit: the Salesforce synchronization plugin! The plugin had been in use for over half a year without any problems. It comes with a cache for the tickets being synchronized, and as the number of tickets grew over time, this cache grew as well. Unfortunately, the cache was not limited in size, and when we finally reached a critical number of tickets and attachments, it caused JIRA to run out of memory.
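
We do not know how the vendor ultimately implemented their fix, but a common way to bound such a cache in Java is an LRU map built on LinkedHashMap, sketched below; the class name and the size limit are purely illustrative.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal LRU cache: once MAX_ENTRIES is reached, the least recently
    // accessed entry is evicted instead of letting the map grow without bound.
    public class BoundedTicketCache<K, V> extends LinkedHashMap<K, V> {
        private static final int MAX_ENTRIES = 10_000; // arbitrary illustrative limit

        public BoundedTicketCache() {
            // accessOrder = true makes iteration order reflect access order (LRU)
            super(16, 0.75f, true);
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > MAX_ENTRIES;
        }
    }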

The very high number of HashMap and HashMap Entry objects filled up JIRA’s heap.

With this information, we were able to pinpoint the root cause and reach out to the developers at the plugin’s 3rd party vendor. The detailed data we had available – both the memory dumps and the measured impact on end user response time – avoided the collaboration and communication problems you typically run into: there was no finger pointing and no going back and forth multiple times to provide more detailed log files. Within days (before another OOM could occur), we had a fixed version, first deployed in our staging environment and tested by our performance team, then deployed in production, giving Ops the confidence that it would solve the problem.

Don’t do it the “Easy Way”: Preventive JVM Reboots

What would have happened without proactive monitoring of the application? The trigger would have been end users calling in, not early warning alerts, and our teams would have lost their head start on the analysis. JVM metrics and log messages alone would not have been enough to analyze this problem, as no log output pointed to an endlessly growing cache; without that insight, there was no obvious connection between the plugin and the start of the memory issues.
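
For context, this is roughly all the JVM gives you out of the box: GC logging options like the ones below (Java 7/8 syntax; log path and jar name are placeholders) record pause times and heap sizes, but they cannot tell you which objects are filling the heap, let alone which plugin allocated them.

    # Java 7/8-style GC logging: shows pauses and heap sizes, not which objects leak
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
         -Xloggc:/var/log/jira/gc.log -jar your-application.jar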

Reaching out to Atlassian and supplying the team with log files would have been the next step, adding further turnaround time – time our users would have spent dealing with sporadic outages. Even if they had been able to point us in the right direction (to the plugin vendor), we would have lost more time there, as the usual process of trying to reproduce the problem on their systems would have begun.

The most common “solution”, the one we talked about in the introduction, is to schedule JVM restarts during low-traffic hours to prevent a major impact, and to keep doing so until the problem is finally fixed – or simply forever. This is not “proactive”. It is just the easy way to do damage control.

Do it the “DevOps Way”: Foster Collaboration and Be Proactive

As we learned from our own example, preventive restarts of application servers are not the only measure Ops has available to fight application problems that impact end users.

It takes a performance culture within the organization to put the right people, processes and priorities in place – supported by tooling that makes collaboration and root cause analysis easy. Having the data readily available allows us to overcome the typical collaboration and communication problems between those who are impacted by the problem (Ops) and those who have to fix it (Dev). With that, you can ensure higher availability of your systems, resulting not only in happier users but also in Ops resources freed up from troubleshooting.
