
Understanding Java Object Memory Layout and the Size of new Object()

This article explains how Java objects are stored in memory, analyzes the heap layout, object header, instance data, and padding, demonstrates the byte size of a new Object() with and without compressed OOPs, and discusses object access methods and garbage‑collection regions.

Architect's Tech Stack

In this technical guide we explore how the Java Virtual Machine (JVM) organizes objects in memory, starting with the distinction between the method area (where class metadata resides) and the heap (where object instances are allocated). The article first presents a simple class HeapMemory with an Object instance field obj1 and an Object local variable obj2, illustrating the difference between heap-allocated objects and stack-based references.
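The HeapMemory class might look like the following. This is a hypothetical reconstruction: the class and variable names (HeapMemory, obj1, obj2) come from the article, but the method name and body are assumptions.

```java
public class HeapMemory {
    // obj1 is an instance field: this reference is stored inside the
    // HeapMemory object on the heap, and the Object it points to is
    // also allocated on the heap.
    private Object obj1 = new Object();

    // demonstrate() is a hypothetical method name for illustration.
    public Object demonstrate() {
        // obj2 is a local variable: the reference lives in the current
        // stack frame, while the Object it refers to still lives on the heap.
        Object obj2 = new Object();
        return obj2;
    }

    public static void main(String[] args) {
        HeapMemory hm = new HeapMemory();
        System.out.println(hm.demonstrate() != null); // true
    }
}
```

When demonstrate() returns, the stack frame holding obj2 is popped; the Object it referenced becomes unreachable (unless returned, as here) and is eligible for garbage collection, whereas obj1 stays reachable as long as the HeapMemory instance does.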

We then examine the JVM's memory layout on a 64-bit system without compressed ordinary object pointers (CompressedOops). An object consists of three parts: the object header (Mark Word and class pointer), the instance data, and optional padding that aligns the total size to an 8-byte boundary. With compressed OOPs enabled (the default on 64-bit HotSpot), the class pointer shrinks from 8 bytes to 4, and padding may be added to keep the total size of a bare Object at 16 bytes.
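The arithmetic behind these numbers can be sketched as a small program. This is an illustration of the layout rules described above, not code from the article; the align helper is a hypothetical name.

```java
public class ObjectSizeMath {
    // Round size up to the next multiple of boundary (the JVM's default
    // object alignment boundary is 8 bytes).
    static int align(int size, int boundary) {
        return (size + boundary - 1) / boundary * boundary;
    }

    public static void main(String[] args) {
        // Compressed OOPs ON (64-bit default):
        // 8-byte Mark Word + 4-byte compressed class pointer = 12-byte header.
        // new Object() has no instance data, so 4 bytes of padding bring it to 16.
        System.out.println(align(8 + 4 + 0, 8)); // 16

        // Compressed OOPs OFF:
        // 8-byte Mark Word + 8-byte class pointer = 16-byte header.
        // Already 8-byte aligned, so no padding is needed.
        System.out.println(align(8 + 8 + 0, 8)); // 16
    }
}
```

Both configurations land on 16 bytes for a bare Object, which is exactly what the JOL measurements later in the article confirm.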

To verify the theoretical size, the article introduces the org.openjdk.jol library. After adding the Maven dependency:

<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.10</version>
</dependency>

a demo program prints the layout of a newly created Object instance, confirming a 16-byte footprint when compressed OOPs are active. The article then shows that toggling compressed OOPs with the JVM flags -XX:+UseCompressedOops (enable) and -XX:-UseCompressedOops (disable) still yields a 16-byte size for a bare Object: without compression the header alone is 16 bytes (8-byte Mark Word plus 8-byte class pointer), so no padding is needed, while with compression the 12-byte header is padded up to 16.
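A minimal sketch of such a demo is shown below. ClassLayout.parseInstance(...).toPrintable() is JOL's real API; the surrounding class is a reconstruction, not the author's exact code, and it requires the jol-core dependency above on the classpath.

```java
import org.openjdk.jol.info.ClassLayout;

public class JolDemo {
    public static void main(String[] args) {
        // Prints one row per slot of the object: the header words, each
        // instance field, and any alignment padding, plus the total size.
        System.out.println(ClassLayout.parseInstance(new Object()).toPrintable());
    }
}
```

Running it with -XX:+UseCompressedOops and again with -XX:-UseCompressedOops lets you compare the two layouts directly; in both cases the reported instance size for a bare Object is 16 bytes.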

Next, a class MyItem containing a single byte field is used to demonstrate the impact of compressed OOPs on object size. With compression the object occupies 16 bytes; without compression it grows to 24 bytes, highlighting the memory benefit of compressed references when many objects are allocated.
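The 16-versus-24 difference follows directly from the header sizes. The sketch below reconstructs MyItem (the class name and single byte field come from the article; everything else is an assumption) and works through the arithmetic.

```java
public class MyItemSize {
    // Reconstruction of the article's MyItem: one byte of instance data.
    static class MyItem {
        byte value;
    }

    // Round n up to the next multiple of 8 (the JVM's alignment boundary).
    static int alignTo8(int n) {
        return (n + 7) & ~7;
    }

    public static void main(String[] args) {
        // Compressed OOPs ON: 12-byte header + 1 byte field = 13,
        // padded up to 16 bytes.
        System.out.println(alignTo8(12 + 1)); // 16

        // Compressed OOPs OFF: 16-byte header + 1 byte field = 17,
        // padded up to 24 bytes.
        System.out.println(alignTo8(16 + 1)); // 24
    }
}
```

So a single extra byte of instance data is "free" under compression (it fits in the existing padding), but the wider uncompressed header pushes the object into the next 8-byte bucket.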

The article also covers object access strategies: handle‑based access (where a handle pool stores object metadata and the reference points to a handle) versus direct pointer access (used by HotSpot, where the reference points straight to the object). Diagrams illustrate the extra indirection of handle access and its advantage when objects move in memory.

Finally, the guide explains the JVM heap's organization into Young and Old generations, describing Eden, the Survivor spaces (S0/S1), and the processes of Minor GC, Full GC, and object promotion based on object age. It clarifies concepts such as Stop-the-World pauses, space fragmentation, contiguous allocation in Eden, the role of the survivor spaces, and the allocation guarantee that borrows space from the Old generation when a survivor space cannot hold all surviving objects.

In summary, the article provides a comprehensive view of Java object memory layout, demonstrates how to measure object size with JOL, compares compressed versus uncompressed pointer configurations, and outlines the JVM's generational garbage‑collection strategy.

Java · JVM · Garbage Collection · memory layout · object-size · compressed-oops
Written by

Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.
