
Implementation of EROFS Tail‑Packing Inline Compression in Linux Kernel 5.17

This article explains the design and implementation of the EROFS tail‑packing inline compression feature merged in Linux kernel 5.17, covering modifications to the on‑disk format, mkfs and kernel code paths, new inode structures, map‑block handling, and performance considerations with detailed code excerpts.

Coolpad Technology Team

EROFS ztailpacking is a new feature introduced in Linux kernel 5.17 that adds tail‑packing inline support for compressed files, saving space and improving performance. The article walks through the implementation on both the mkfs side and the kernel side.

mkfs

Starting from the disk format, the inode data layout enumeration is extended to include new values for inline data handling:

/*
 * erofs inode datalayout (i_format in on-disk inode):
 * 0 - inode plain without inline data A:
 * inode, [xattrs], ... | ... | no-holed data
 * 1 - inode VLE compression B (legacy):
 * inode, [xattrs], magic, extents ... | ...
 * 2 - inode plain with inline data C:
 * inode, [xattrs], last_inline_data, ... | ... | no-holed data
 * 3 - inode compression D:
 * inode, [xattrs], map_header, extents ... | ...
 * 4 - inode chunk-based E:
 * inode, [xattrs], chunk indexes ... | ...
 */
enum {
    EROFS_INODE_FLAT_PLAIN            = 0,
    EROFS_INODE_FLAT_COMPRESSION_LEGACY    = 1,
    EROFS_INODE_FLAT_INLINE            = 2,
    EROFS_INODE_FLAT_COMPRESSION        = 3,
    EROFS_INODE_CHUNK_BASED            = 4,
};

The new h_idata_size field is added to struct z_erofs_map_header to record the size of inline data:

struct z_erofs_map_header {
    __le16    h_reserved1;
    /* record the size of tail‑packing data */
    __le16    h_idata_size;
    __le16    h_advise;
    __u8      h_algorithmtype;
    __u8      h_clusterbits;
};

During file creation, erofs_write_compressed_file() sets the appropriate compression flags and layout, and the new flag Z_EROFS_ADVISE_INLINE_PCLUSTER is introduced to indicate inline data:

#define Z_EROFS_ADVISE_INLINE_PCLUSTER        0x0008

A helper z_erofs_fill_inline_data() is added to store the inline data in the inode:

static int z_erofs_fill_inline_data(struct erofs_inode *inode, void *data,
                    unsigned int len, bool raw)
{
    inode->idata_size = len;
    inode->compressed_idata = !raw;
    inode->idata = malloc(inode->idata_size);
    if (!inode->idata)
        return -ENOMEM;
    erofs_dbg("Recording %u %scompressed inline data",
          inode->idata_size, raw ? "un" : "");
    memcpy(inode->idata, data, inode->idata_size);
    return len;
}

Kernel

The kernel locates and decompresses the inline data via z_erofs_map_blocks_iter(). The read path starts in z_erofs_do_read_page(), which builds a map record for each logical cluster and uses the new inline‑data flags to decide whether to fetch data directly from the inode.

static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
                struct page *page, struct list_head *pagepool)
{
    ...
    err = z_erofs_map_blocks_iter(inode, map, 0);
    if (err)
        goto err_out;
    ...
}

The map iterator checks for EOF, loads the inode lazily, and then reads the appropriate index structures (struct z_erofs_vle_decompressed_index) to resolve the physical location of the data, handling three cluster types: PLAIN, HEAD, and NONHEAD.

enum {
    Z_EROFS_VLE_CLUSTER_TYPE_PLAIN        = 0,
    Z_EROFS_VLE_CLUSTER_TYPE_HEAD        = 1,
    Z_EROFS_VLE_CLUSTER_TYPE_NONHEAD    = 2,
};

When the inline flag is set, the kernel uses the newly added members in struct erofs_inode (z_tailextent_headlcn, z_idataoff, z_idata_size) to locate the inline payload without accessing the disk.

struct erofs_inode {
    ...
#ifdef CONFIG_EROFS_FS_ZIP
    unsigned short z_advise;
    unsigned char  z_algorithmtype[2];
    unsigned char  z_logical_clusterbits;
    unsigned long  z_tailextent_headlcn;
    erofs_off_t    z_idataoff;
    unsigned short z_idata_size;
#endif
};

During write‑back, erofs_write_tail_end() is updated to handle padding for inline data, ensuring correct decompression later.

static int erofs_write_tail_end(struct erofs_inode *inode)
{
    ...
    erofs_off_t pos, zero_pos;
    ...
    if (erofs_sb_has_lz4_0padding() && inode->compressed_idata) {
        zero_pos = pos;
        pos += EROFS_BLKSIZ - inode->idata_size;
    } else {
        zero_pos = pos + inode->idata_size;
    }
    ret = dev_write(inode->idata, pos, inode->idata_size);
    ...
    ret = dev_fillzero(zero_pos,
            EROFS_BLKSIZ - inode->idata_size);
    ...
}

The patchset also updates the FUSE implementation (erofsfuse) to mirror the kernel logic, and performance measurements on Linux 5.10.87 show noticeable space savings and speed improvements when EOF lcluster inlining is enabled.

Patchset

erofs-utils: mkfs – support tail‑packing inline compressed data

erofs-utils: fuse – support tail‑packing inline compressed data

erofs: support inline data decompression

erofs: add on‑disk compressed tail‑packing inline support

References

PATCH v3 – https://lkml.org/lkml/2021/12/25/34

EOF lcluster inlining – https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git/commit/?h=dev&id=a7c1f0575ef881f15bfd54f2116d471d0ad30cef

erofs-utils: mkfs – support tail‑packing inline compressed data – https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git/commit/?h=dev&id=ddea76ad592f2a05d53fb229dff19b65f44753a0

erofs-utils: fuse – support tail‑packing inline compressed data – https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs-utils.git/commit/?h=dev&id=cb2f110e73680ab91857914c239abf99c4933852

erofs: support inline data decompression – https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/fs/erofs?id=cecf864d3d76d50e3d9c58145e286a0b8c284e92

erofs: add on‑disk compressed tail‑packing inline support – https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/fs/erofs?id=ab92184ff8f12979f3d3dd5ed601ed85770d81ba

Tags: Linux Kernel, kernel development, Filesystem, EROFS, mkfs, inline compression, tail‑packing