Identify and eliminate bottlenecks in your application for optimized performance.
Every application that runs on a Java virtual machine (JVM) is allocated its share of memory at startup. The JVM divides this memory into two parts: stack memory and heap memory.
Stack memory holds the references used by executing code and is maintained by the JVM only for the duration of that code's execution. For example, each method call pushes a new frame onto the stack, and that frame is removed when the method returns. The memory portion allocated for this purpose works like a stack, which is why it is called the call stack. Along with method frames, stack memory holds temporary data such as local primitive values, references to objects, and intermediate results. The size of the stack memory depends on the operating system.
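The minimal sketch below illustrates this behavior: each call pushes a new frame onto the thread's call stack, and unbounded recursion eventually exhausts the stack memory. The class name and the -Xss value in the comment are illustrative assumptions, not values from this article.

```java
public class CallStackDemo {

    // Each invocation pushes a new frame that holds the parameter `depth`.
    static void recurse(int depth) {
        recurse(depth + 1); // no base case: frames pile up until the stack is exhausted
    }

    public static void main(String[] args) {
        try {
            recurse(1);
        } catch (StackOverflowError e) {
            // Thrown once the thread's stack memory is used up.
            System.out.println("Call stack exhausted");
        }
        // The per-thread stack size can be tuned at startup, for example:
        //   java -Xss512k CallStackDemo
    }
}
```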
Heap memory, on the other hand, is managed by the JVM through garbage collection (GC). Choosing an appropriate garbage collector and designing the application carefully are crucial for managing heap memory. Because heap allocations are dynamic, heap memory access is slower than stack memory access.
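As a rough sketch of the split, the example below (class and variable names are illustrative) shows that references live on the stack while the objects they point to are allocated on the heap, and that objects with no remaining references become eligible for garbage collection.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapDemo {

    public static void main(String[] args) {
        // `buffer` is a reference stored on the stack (in main's frame);
        // the byte array it points to is allocated on the heap.
        byte[] buffer = new byte[1_000_000];

        // Objects created in a loop and never stored become unreachable
        // as soon as the iteration ends, making them eligible for GC.
        for (int i = 0; i < 10_000; i++) {
            byte[] temp = new byte[10_000]; // garbage after each iteration
        }

        // Objects kept in a live collection stay reachable and are NOT collected.
        List<byte[]> retained = new ArrayList<>();
        retained.add(buffer);

        System.out.println("Retained " + retained.size() + " heap object(s)");
    }
}
```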
Understanding how heap memory works and how it impacts GC is essential to building a high-performance application. Heap memory is generally segmented into two parts: the young generation and the old generation.
The young generation is further divided into two spaces: the eden space and the survivor space.
Eden space: Any new object created by the application is first allocated in the eden space, provided there is room for it. When the eden space runs out of room, a minor GC is triggered. During a minor GC, objects that are no longer used or referenced by the application are removed, and the objects that are still referenced are moved to the other section of the young generation, the survivor space (see the sketch below).
Survivor space: Objects are moved into this space after a minor GC. The survivor space is split into two parts, S1 and S2. On each minor GC, the objects that are still referenced are copied from one survivor space to the other, so that one of the two is always empty. This frees up the eden space quickly and keeps the surviving objects packed together. The roles of the two survivor spaces switch on every minor GC, and each time an object survives one of these cycles, its age is incremented by one.
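The sketch below shows minor GC activity in the young generation: short-lived arrays are allocated in eden, become garbage immediately, and are reclaimed by minor GCs. It uses the standard GarbageCollectorMXBean API to count collections; the class name and allocation sizes are illustrative assumptions.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class MinorGcDemo {

    static long totalCollections() {
        long count = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            count += gc.getCollectionCount();
        }
        return count;
    }

    public static void main(String[] args) {
        long before = totalCollections();

        // Short-lived allocations: each array is created in the eden space
        // and becomes garbage right away, so eden fills up repeatedly and
        // the JVM runs minor GCs to reclaim it.
        for (int i = 0; i < 100_000; i++) {
            byte[] shortLived = new byte[16 * 1024];
        }

        long after = totalCollections();
        System.out.println("GC cycles observed: " + (after - before));
    }
}
```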
Old generation: Objects that live long enough in the young generation are promoted to the old generation once a user-configurable threshold is reached. For example, an object moved to the survivor space is expected to undergo multiple minor GC cycles, and its age is incremented each time it survives one. If the tenuring threshold is set to 15 and the object survives 15 minor GC cycles, it is promoted to the old generation automatically. The default tenuring threshold varies between garbage collectors and can be configured using JVM flags. Objects in the old generation remain there until they are no longer referenced by any part of the running application.
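A minimal sketch of promotion, assuming a long-lived object kept referenced for the life of the program: it survives repeated minor GCs and is eventually tenured into the old generation. The flag values in the trailing comment are standard HotSpot options, but the specific numbers are illustrative, not recommendations.

```java
public class PromotionDemo {

    // A long-lived object: it stays referenced for the life of the program,
    // survives the configured number of minor GCs, and is eventually
    // promoted ("tenured") to the old generation.
    static final byte[] LONG_LIVED = new byte[8 * 1024 * 1024];

    public static void main(String[] args) {
        // Short-lived garbage keeps the eden space churning so minor GCs occur.
        for (int i = 0; i < 200_000; i++) {
            byte[] temp = new byte[8 * 1024];
        }
        System.out.println("Long-lived object size: " + LONG_LIVED.length);
    }
}

// Example HotSpot launch flags (values are illustrative):
//   java -Xmn256m                     (young generation size)
//        -XX:SurvivorRatio=8          (eden-to-survivor-space ratio)
//        -XX:MaxTenuringThreshold=15  (age at which objects are promoted)
//        -Xlog:gc                     (print GC events on JDK 9 and later)
//        PromotionDemo
```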
When the space allotted to the old generation fills up, a major GC is triggered. During a major GC, objects that are no longer referenced are cleaned out of both the old generation and the young generation.
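To see how much of the GC work is minor versus major, you can inspect the per-collector MXBeans exposed by the JVM, as in the sketch below. The bean names depend on the collector in use (for example, "G1 Young Generation" and "G1 Old Generation" with G1); the class name is an illustrative assumption.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcBreakdown {

    public static void main(String[] args) {
        // One MXBean is registered per collector; the names depend on the
        // garbage collector in use (e.g., "G1 Young Generation" and
        // "G1 Old Generation" when running with G1).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-25s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```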
If there isn't enough space to allocate a new object after a major GC, the application crashes with an out-of-memory error. This could happen for various reasons, like:
Random application bugs: These are the most common source of memory management problems. There is no one-size-fits-all approach to solving random application bugs, but they can be caught early by leveraging a test platform and a robust monitoring solution that provides component-level visibility into each application. The most common scenario is a deadlock.
Poor application design: This is a top-down issue that shows up gradually. A poor design inflates a problem over time, so it is essential to revisit these areas periodically to keep technical debt low. If not tracked properly, code-level debt piles up on top of the poor design. The most common scenario is an expedited code update pushed live.
Insufficient capacity for user growth: In most cases, this problem occurs when adequate stress testing has not been done. An application designed for n users will not scale to 10n users. When the number of users grows uncontrollably, memory becomes insufficient and frequent GC pauses can slow down the entire application. The best way to avoid this is to adopt an infrastructure monitoring solution and check memory usage at the application level on a regular basis.
Ineffective GC configuration: Either an inefficient heap size allocation or the wrong choice of garbage collector can make GC ineffective. A common scenario is missing the -Xms and -Xmx settings at startup, as illustrated in the sketch below.
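The following sketch shows both ideas together: the heap is sized explicitly with -Xms and -Xmx, and the program deliberately retains every object it allocates, so no GC can free memory and the JVM eventually throws java.lang.OutOfMemoryError. The class name and the 256 MB heap size are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapExhaustionDemo {

    public static void main(String[] args) {
        // Every array stays referenced by the list, so nothing can be
        // reclaimed; once the old generation is full and a major GC frees
        // nothing, the JVM throws java.lang.OutOfMemoryError: Java heap space.
        List<byte[]> retained = new ArrayList<>();
        try {
            while (true) {
                retained.add(new byte[1024 * 1024]); // 1 MB per iteration
            }
        } catch (OutOfMemoryError e) {
            retained.clear(); // release the references so we can still print
            System.out.println("Heap exhausted: " + e.getMessage());
        }
    }
}

// Run with an explicit initial and maximum heap size, for example:
//   java -Xms256m -Xmx256m HeapExhaustionDemo
```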
Based on the scenarios above, it's evident that most memory management issues start at the application layer. That's why it is essential to maintain a memory-driven thought process during the development of an application, and a few common practices can help.
Chief among them is an end-to-end Java monitoring tool, which is crucial for managing the memory an application uses. Since there is no single approach to solving memory problems, you need to connect multiple dots across platforms, from the web request down to the application layer.
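A full monitoring solution goes much further, but as a starting point the standard java.lang.management API can report heap usage and cumulative GC activity from inside the application itself. The sketch below uses only that standard API; the class name is an illustrative assumption.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemorySnapshot {

    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // Current heap usage versus the configured maximum (-Xmx).
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // Cumulative GC activity since JVM startup, per collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```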