Difference Between Stack Memory and Heap Memory in Java

Every application running on a Java virtual machine (JVM) is allocated memory at startup. The JVM divides this memory into two parts: stack memory and heap memory.

Stack memory

Stack memory holds data tied to the execution of code and is reclaimed by the JVM as soon as that execution completes. For example, method calls are pushed on top of each other and popped when the method returns. The memory region that backs this behavior works like a stack data structure, and we call it the call stack. Alongside method frames, stack memory holds short-lived items such as local primitives, references to objects, and intermediate results. The default stack size depends on the operating system and can be adjusted with the -Xss flag.
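The behavior above can be sketched in a short program. This is a minimal illustration (class and method names are ours, not from the article): each call pushes a frame holding its locals, and because the stack has a fixed size, unbounded recursion eventually exhausts it.

```java
// Each method call pushes a frame onto the call stack; local primitives
// such as n and result live inside that frame and vanish when it is popped.
public class StackDemo {
    static int square(int n) {
        int result = n * n;  // local variable, stored in this stack frame
        return result;       // frame is popped as soon as the method returns
    }

    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();           // frames pile up until the fixed-size stack is full
    }

    public static void main(String[] args) {
        System.out.println(square(7)); // prints 49

        try {
            recurse();
        } catch (StackOverflowError e) {
            // the stack has a fixed size, so deep recursion exhausts it
            System.out.println("stack exhausted after " + depth + " frames");
        }
    }
}
```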

Heap memory

Heap memory, in contrast, is managed by the JVM through garbage collection (GC). Choosing an appropriate garbage collector and designing the application with memory in mind are crucial to managing the heap. Heap allocations are dynamic, which makes heap access slower than stack access.
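The key contrast with the stack is that heap objects can outlive the method that created them. A minimal sketch (names are illustrative): the array is allocated on the heap with `new`, so it survives after the creating method's stack frame is popped.

```java
// Objects created with `new` live on the heap; only the reference
// to them sits in a stack frame.
public class HeapDemo {
    static int[] makeArray() {
        int[] data = new int[]{1, 2, 3}; // array object allocated on the heap
        return data; // the reference escapes; the object outlives this frame
    }

    public static void main(String[] args) {
        int[] a = makeArray();
        // the frame for makeArray is long gone, but the heap object remains
        System.out.println(a[2]); // prints 3
    }
}
```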

Heap memory: A deep dive

Understanding how heap memory works and how it impacts GC is essential to building a high-performance application. Heap memory in general is segmented into two parts:

  • Young generation
  • Old generation

Young generation

This section is further divided into two more spaces:

  • Eden space

    Any new object created by the application is first allocated in the eden space, provided memory is available there. When eden runs out of space, a minor GC is triggered: objects that are no longer used or referenced anywhere in the application are removed, and objects that are still referenced are moved to the other section of the young generation, the survivor space.

  • Survivor space

    Objects are moved to this space after a minor GC. The survivor space is itself split into two parts, S1 and S2. During each minor GC, objects that are still referenced are copied from one survivor space to the other, so one of the two is always empty. This frees the eden space quickly and keeps the surviving objects compacted together. The roles of the two survivor spaces switch on every minor GC, and each time an object survives one of these copies from S1 to S2 or vice versa, its age is incremented by one.
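The eden-space behavior described above can be observed with a small allocation loop. This is a sketch (class name and heap size are our assumptions): the temporary arrays die immediately, so they fill eden and are reclaimed by minor GCs, which you can watch in the GC log.

```java
// Run with: java -Xmx64m -Xlog:gc AllocationChurn
// The log shows "Pause Young" entries (minor GCs) as eden repeatedly fills
// with short-lived allocations and is cleaned.
public class AllocationChurn {
    static long churn(int iterations) {
        long checksum = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] temp = new byte[1024]; // allocated in eden; dead by the next iteration
            checksum += temp.length;
        }
        return checksum;
    }

    public static void main(String[] args) {
        System.out.println(churn(1_000_000));
    }
}
```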

Old generation

Objects that live long enough in the young generation are moved to the old generation when a user-configurable threshold is hit. For example, once an object is moved to the survivor space, it is expected to undergo multiple minor GC cycles, and each minor GC it survives increments its age by one. If the tenuring threshold is set to 16 and the object survives 16 minor GC cycles, it is promoted into the old generation automatically. The default tenuring threshold varies between garbage collectors and can be configured with the -XX:MaxTenuringThreshold JVM flag. Objects in the old generation remain there until they are no longer referenced anywhere in the running application.
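Promotion can be made visible with the JVM's age tracing. A sketch under assumed flag values (the class name and the threshold of 10 are illustrative): the retained arrays stay referenced, so they age through the survivor spaces and are eventually promoted, while the unreferenced arrays die young.

```java
import java.util.ArrayList;
import java.util.List;

// Run with, e.g.:
//   java -Xmx64m -XX:MaxTenuringThreshold=10 -Xlog:gc+age=trace LongLived
// The age trace shows retained objects aging through the survivor spaces
// until they cross the threshold and are promoted to the old generation.
public class LongLived {
    static int allocate(int count) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            retained.add(new byte[1024]);    // stays referenced: ages, then promoted
            byte[] garbage = new byte[4096]; // dies young: reclaimed by minor GC
        }
        return retained.size();
    }

    public static void main(String[] args) {
        System.out.println(allocate(10_000)); // prints 10000
    }
}
```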

When the space allotted to the old generation fills up, a major GC is triggered. Objects that are no longer referenced, in both the old generation and the young generation, are then cleaned up.

Out of memory

If there isn't enough space to allocate a new object even after a major GC, the application fails with an out-of-memory error (java.lang.OutOfMemoryError). This can happen for various reasons, such as:

  • A random application bug
  • Poor application design
  • An under-provisioned resource
  • Inefficient GC configuration


Random application bug

This is the most common cause of memory problems. There is no one-size-fits-all approach to solving random application bugs, but they can be mitigated by leveraging a test platform and a robust monitoring solution that provides component-level visibility into each application. The most common scenario is a memory leak, such as object references unintentionally retained in a long-lived collection.

Poor application design

Poor application design is a top-down issue that surfaces gradually: a weak design balloons small issues over time, so it is essential to revisit these areas periodically to keep technical debt low. If not tracked properly, code-level debt piles up on top of the design debt. The most common scenario is an expedited code change pushed live to meet a deadline.

Under-provisioned resource

In most cases, this problem occurs when adequate stress testing is not done. An application designed for n users will not scale for 10n users. When there is uncontrolled growth in the number of users, the memory won’t be sufficient and frequent GC pauses could slow down the entire application. The best way to avoid this is to adopt an infrastructure monitoring solution to check the memory usage at the application level on a regular basis.

Inefficient GC configuration

Either an inefficient heap size allocation or an incorrect choice of garbage collector leads to ineffective GC configuration. A common scenario is omitting the -Xms (initial heap size) and -Xmx (maximum heap size) options at startup.
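The effect of -Xms and -Xmx can be checked from inside the application. A minimal sketch (the class name and the 256 MB sizing are illustrative): with -Xms equal to -Xmx, the heap is fully committed at startup and the two numbers printed should match.

```java
// Run with explicit sizing, e.g.: java -Xms256m -Xmx256m HeapSize
public class HeapSize {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory():   the -Xmx ceiling the heap may grow to
        // totalMemory(): the heap currently committed by the JVM
        System.out.println("max heap (bytes):       " + rt.maxMemory());
        System.out.println("committed heap (bytes): " + rt.totalMemory());
    }
}
```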

Based on the scenarios above, it's evident that most memory management issues start at the application layer. That’s why it is essential to have a memory-driven thought process during the development of an application. A few common approaches are:

  • Limit the scope of references as much as possible in your code. Short-lived references let objects die in the young generation, where they are reclaimed by cheap minor GCs, resulting in fewer long GC pauses.
  • Explicitly make an object eligible for GC.
  • Cache what is necessary and avoid recreating objects repeatedly, so that heap memory isn't wasted on duplicate allocations.
  • Analyze your application requirements and choose the appropriate garbage collector based on that.
  • Tune JVM flags based on how the application is actually used. Monitor and capture metrics like throughput, latency, and CPU usage to arrive at appropriate values. Because flags interact, verify that the flags you combine work well together and understand the effect of each one.
  • Enable GC logging (-verbose:gc, or -Xlog:gc on modern JVMs) so that details are written after every GC. These logs can be rotated, and their overhead is negligible.
  • Collect GC logs regularly, and set alerts for various thresholds. Also, enable the heap dump setting (-XX:+HeapDumpOnOutOfMemoryError) for out-of-memory scenarios.
  • Keep track of CPU bumps and correlate them with memory issues whenever possible.
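Two of the tips above, limiting reference scope and caching instead of recreating objects, can be sketched together. The class and method names here are illustrative, not from the article:

```java
import java.util.HashMap;
import java.util.Map;

public class MemoryHabits {
    // Cached once and reused across calls, rather than rebuilt every time.
    private static final Map<String, Integer> CACHE = new HashMap<>();

    static int lookup(String key) {
        // computeIfAbsent: the value is computed at most once per key
        return CACHE.computeIfAbsent(key, k -> k.length());
    }

    public static void main(String[] args) {
        {
            // Block scope: buffer becomes unreachable as soon as the block
            // exits, so it can die young in a minor GC.
            byte[] buffer = new byte[8192];
            System.out.println("buffer size: " + buffer.length);
        }

        Object big = new byte[8192];
        big = null; // explicitly make the object eligible for GC

        System.out.println(lookup("heap")); // prints 4
    }
}
```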

An end-to-end Java monitoring tool is crucial for managing the memory used for an application. Since there is no single approach to solving memory problems, you need to connect multiple dots across platforms, from a web request to the application layer.
