JVM Memory Management: How to Find and Prevent Memory Leaks

The JVM’s creators designed it with automatic memory management in mind, which means programmers don’t need to worry about memory allocation and deallocation. Unused objects are released automatically and transparently, which is really convenient, especially when you’re new to the JVM. But even in general, there’s less code to write and it’s less error-prone than the traditional approach, which requires you to do everything manually.

The reality, however, is not as ideal as it might sound, especially when you’re developing long-lived apps with huge traffic. Although it’s much harder to cause a memory leak on the JVM than in, for example, C, it’s still possible. Choosing a GC algorithm and tuning it can also have a big influence on performance. And, as with any abstraction or automation, if you want to code intentionally (and that’s the professional approach) you need to understand what kind of work is done behind the scenes, so that you can prevent or diagnose problems. Let’s take a look at some useful tools and techniques which will help you find the reason why your application is crashing or slowing down instead of running fast and doing what it was created for.

OutOfMemoryError

The first thing we’ll need is a solid piece of code that causes an OutOfMemoryError. OutOfMemoryError is an error thrown by the JVM which informs us that we need more memory than we have available. There are many possible reasons why it might be thrown, and you can look at its message and cause to see what’s going on. Right now, let’s write an app that keeps allocating memory until it exceeds the limit:

// file Application.scala
object Application {
  def main(args: Array[String]): Unit = {
    // Keep forcing elements of an infinite LazyList into a strict List
    // until the heap is exhausted.
    LazyList.from(0).toList
  }
}

Compile it:

scalac Application.scala

And run the app, setting the heap size to a fixed 10 MB:

scala -J-Xmx10m -J-Xms10m Application

Xms and Xmx are JVM flags that specify the heap size of our application (or simply, how much heap memory our application can use), where Xms stands for the initial size of the heap and Xmx for the maximum size.
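
To sanity-check that the flags were actually picked up, you can print the JVM’s own view of the heap from inside the application. A minimal sketch (the HeapInfo object is just for illustration; maxMemory usually reports slightly less than the -Xmx value):

// file HeapInfo.scala
object HeapInfo {
  def main(args: Array[String]): Unit = {
    val rt = Runtime.getRuntime
    // maxMemory roughly corresponds to -Xmx, totalMemory to the currently committed heap.
    println(s"max heap:         ${rt.maxMemory() / 1024 / 1024} MB")
    println(s"committed heap:   ${rt.totalMemory() / 1024 / 1024} MB")
    println(s"free (committed): ${rt.freeMemory() / 1024 / 1024} MB")
  }
}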

In our case, 10 MB is a small enough value to experience a lack of memory pretty quickly. We can see that the application crashes with the following error:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

Looking for the reasons

In this case, it’s obvious what the problem is: we have one thread and only one line of code. Real applications are, of course, much more complicated. When we see an OutOfMemoryError in production, looking at the stack trace won’t help us much, because the line which triggered the error will be fairly random, while what we are looking for is the code that allocates memory and doesn’t release it. We need to look inside the JVM to find the source of the problem.

Let’s add the -XX:+HeapDumpOnOutOfMemoryError flag, which causes a heap dump to be generated on OutOfMemoryError.

scala -J-Xmx10m -J-Xms10m -J-XX:+HeapDumpOnOutOfMemoryError Application

We can see that when our app crashed, a file with the .hprof extension and the PID in its name was generated. This file is binary, so we need a tool to see what’s inside. There are many tools that can do the job, even online ones like HeapHero (https://heaphero.io/), which you can use if your data isn’t sensitive. To begin with, I would recommend VisualVM.

After importing the file, the most useful thing you can check is the list of all the allocated objects, together with the percentage of memory each of them uses. We can see objects from scala.collection.immutable (:: and LazyList.State.Cons) and Integers, which make up almost the whole heap. This is consistent with what we did in our program. Lists and Integers are obvious, since we’re creating a List[Int] as a result. More interesting is the presence of LazyList.State.Cons. Scala’s LazyList uses memoization, so it keeps references to all the elements it has already computed. This is why we see so many LazyList.State.Cons objects in our dump.
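
To see the effect of memoization in isolation, compare a retained LazyList with a plain Iterator, which doesn’t memoize. A small sketch (the MemoizationDemo name is just for illustration):

object MemoizationDemo {
  def main(args: Array[String]): Unit = {
    // Every element forced through a retained LazyList reference stays in memory.
    val xs: LazyList[Int] = LazyList.from(0)
    println(xs(1000000)) // forces (and keeps) elements 0..1000000 via xs

    // An Iterator does not memoize, so already-consumed elements can be collected.
    println(Iterator.from(0).drop(1000000).next())
  }
}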

Be careful when using -XX:+HeapDumpOnOutOfMemoryError option in an environment where disk space is significantly limited (e.g. in a cloud). This is a full heap dump which means that in the case of OutOfMemoryError it is at least as big as the maximum heap size.  In the case of a large heap, this might be much bigger than the disk space assigned to your image (because you probably don’t need much space as it’s good practice to never write directly to disk from your services).
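
If you’d rather not wait for a crash, you can also take a heap dump of a running JVM on demand from the command line. A sketch assuming the JDK’s jcmd tool is on your PATH and <pid> is your application’s process id:

jcmd <pid> GC.heap_dump /tmp/manual-dump.hprof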

Watching how a running app allocates objects using VisualVM

In many cases, it can be useful to see how a running application allocates objects. You can use VisualVM for this purpose as well. This tool connects to any locally running JVM out of the box. If you need to inspect a deployed application, you can connect to it via JMX. All you need to do is set a flag which enables JMX connections on application start.

scala -J-Xmx10m -J-Xms10m -J-Dcom.sun.management.jmxremote Application

After that, you are ready to connect with VisualVM. If you deploy the application in a cloud, make sure that the port you are using is open. 
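
Note that for a remote (as opposed to local) connection you typically also need to expose a JMX port explicitly. A hedged example with authentication and SSL disabled, which is only acceptable on a trusted network (<public-host> is a placeholder for the address clients will use):

scala -J-Xmx10m -J-Xms10m \
  -J-Dcom.sun.management.jmxremote.port=9010 \
  -J-Dcom.sun.management.jmxremote.rmi.port=9010 \
  -J-Dcom.sun.management.jmxremote.authenticate=false \
  -J-Dcom.sun.management.jmxremote.ssl=false \
  -J-Djava.rmi.server.hostname=<public-host> \
  Application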

Investigating your application this way gives you more options than simply looking at a heap dump. You can watch many useful statistics, such as GC activity and used heap size. You can also ask the JVM to create a heap dump at any time, so you don’t need to wait until your application crashes. A particularly useful tool is the Sampler, which lets you take a snapshot of memory usage. This is similar to a heap dump, but here you can also see the allocation rate per thread, so you can track down the thread which is allocating memory more greedily than it should. That can be a very useful piece of information in your investigation.
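
You can also trigger the same kind of dump programmatically, which is handy for building your own tooling. A minimal sketch using the HotSpot diagnostic MXBean (available on HotSpot-based JVMs; the DumpHeap name is just for illustration):

import java.lang.management.ManagementFactory
import com.sun.management.HotSpotDiagnosticMXBean

object DumpHeap {
  def main(args: Array[String]): Unit = {
    val bean = ManagementFactory.newPlatformMXBeanProxy(
      ManagementFactory.getPlatformMBeanServer,
      "com.sun.management:type=HotSpotDiagnostic",
      classOf[HotSpotDiagnosticMXBean])
    // The second argument restricts the dump to live (reachable) objects.
    bean.dumpHeap("manual-dump.hprof", true)
  }
}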

Garbage collection

We’ve already learned how to see what happens with memory allocation while the application is running or right after it has crashed. Now let’s take a look at garbage collection. Sometimes an OutOfMemoryError is caused not by a memory leak but simply because we haven’t given our application enough memory to work with. This can also happen when our application starts using more memory than it usually does because of increased traffic. But whatever the reason, in order to see the whole picture it’s really useful to be able to look at the GC logs, as they may contain the missing pieces of our story. Analyzing GC logs is trickier than simply looking at a heap dump because it requires an understanding of how garbage collection algorithms work.

If you need a good introduction to the subject or deeper knowledge, I can recommend a really great book: Java Performance by Charlie Hunt.
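
The collector itself is also chosen via flags. For example, to run the sample app with G1 explicitly (a hedged example; the set of available collectors and their defaults depend on your JDK version and vendor):

scala -J-Xmx10m -J-Xms10m -J-XX:+UseG1GC Application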

Analyzing Garbage Collection logs

GC runs transparently inside the JVM. However, we can tell the JVM to generate GC logs for us. You can do this with the following additional parameter:

scala -J-Xlog:gc:file=gc-log Application

Keep in mind that, in contrast to the heap dump, this is, as the name suggests, continuous logging rather than a one-time dump. Every significant GC operation is logged, and you can’t configure the JVM to write logs only when the application crashes, so it’s important to make sure that there is enough disk space for them. You can also keep the size of the logs under control with log rotation: the legacy -Xloggc option uses the -XX:NumberOfGCLogFiles and -XX:GCLogFileSize flags for this, while unified logging (-Xlog, available since JDK 9) uses the filecount and filesize output options.

scala -J-Xlog:gc:file=gc-log::filecount=10,filesize=5m Application

GC logs are regular text files so you can read them in whatever text editor you like, but in order to be able to quickly analyze them, you need a special tool. You can use one of the many free online tools (e.g. https://gceasy.io/). It’s worth noting that GC logs don’t contain any application data, only data related to GC work, so you can safely upload your logs to external services without any fear of exposing sensitive data. 

There are many useful statistics you might want to take a look at. One of these is the percentage of time your application spends doing GC instead of working directly for you. Any significant growth in this metric should be alarming. You should also look at the heap usage graph and check how often a full GC runs in comparison to a minor one. After some practice, the data provided by this kind of tool should give you a good overall picture of the memory utilization of your application.
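
If you want a quick, in-process approximation of the “time spent in GC” number without parsing logs, the standard management beans already expose it. A minimal sketch (the GcStats name is just for illustration):

import java.lang.management.ManagementFactory
import scala.jdk.CollectionConverters._

object GcStats {
  def main(args: Array[String]): Unit = {
    val uptimeMs = ManagementFactory.getRuntimeMXBean.getUptime
    ManagementFactory.getGarbageCollectorMXBeans.asScala.foreach { gc =>
      // getCollectionTime is the (approximate) accumulated GC time in milliseconds for this collector.
      val share = 100.0 * gc.getCollectionTime / uptimeMs
      println(f"${gc.getName}: ${gc.getCollectionCount} collections, ${gc.getCollectionTime} ms ($share%.2f%% of uptime)")
    }
  }
}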

 

Prevention is better than cure

So far we have discussed what to do when our application crashes or is unstable. Even though this is useful, we need to be able to react earlier. We definitely don’t want to have to work under pressure, trying desperately to fix memory-management-related problems while our application is down. As professionals we should do better than this: we need to monitor our apps in order to catch the moment when things start to go wrong, before our application actually crashes. We can fix most memory-management-related problems without a significant impact on users if we check the right metrics in advance.

Monitoring is beyond the scope of this article; check out Graphite if you are looking for a place to start. However, whatever monitoring system and alerting tool you are using (because you are using one, right?), you might want to add metrics related to memory utilization and GC. It will make your life way easier.
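
As a starting point, the numbers worth exporting are already available in-process. A minimal sketch that reads them (reportGauge is hypothetical and stands in for whatever metrics client you actually use):

import java.lang.management.ManagementFactory
import scala.jdk.CollectionConverters._

object MemoryMetrics {
  // Hypothetical sink; replace it with your real metrics client.
  def reportGauge(name: String, value: Long): Unit = println(s"$name=$value")

  def main(args: Array[String]): Unit = {
    val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage
    reportGauge("jvm.heap.used", heap.getUsed)
    reportGauge("jvm.heap.committed", heap.getCommitted)
    reportGauge("jvm.heap.max", heap.getMax)
    ManagementFactory.getGarbageCollectorMXBeans.asScala.foreach { gc =>
      reportGauge(s"jvm.gc.${gc.getName}.count", gc.getCollectionCount)
      reportGauge(s"jvm.gc.${gc.getName}.time-ms", gc.getCollectionTime)
    }
  }
}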

Summary

Dealing with memory-related issues is hard, and this article is only an introduction to the subject. We should never assume that using the JVM removes all responsibility for memory management. Furthermore, it’s worth noting that each GC cycle uses CPU, so optimizing your code might not only help you avoid crashes but could also reduce your cloud costs.


Authors

Dorian Sarnowski

Passionate software engineer with ten years of professional experience specialized in creating scalable and high traffic web applications using new technologies. Enthusiast of clean code, automated testing, and agile methodology. Personally passionate about rock’n’roll history, lyricist, and bass player.
