
Understanding the Android Binder Driver: Memory Allocation, mmap, and Data Transfer

This article provides an in‑depth analysis of Android's Binder driver, explaining why the driver’s core concepts are simple yet involve complex kernel interactions, and detailing the mmap stage, physical page allocation, and per‑page data copying using kmap/kunmap.

Coolpad Technology Team

Introduction

The Binder driver appears simple at a high level but touches many kernel concepts. After reading an earlier Binder tutorial, the author spent two weekends putting this article together to work through the details.

Key Questions

Why was Binder introduced and why not use shared memory?

What does "process isolation" mean in Binder?

When does the "single copy" happen and in which process?

What are the data‑size limits of Binder and where are they enforced?

Is the copy really performed only once and is it page‑wise?

Where does memory mapping occur – sender or receiver?

When are physical pages actually allocated?

Is the mapping done in the mmap step or during data transfer?

What is the purpose of allocating a kernel virtual address in Binder?

How is the receiver’s kernel virtual space allocated and managed?

Key Structures

The following structures are central to Binder’s operation.

/**
 * struct binder_alloc - per‑binder proc state for binder allocator
 * @vma:                vm_area_struct passed to mmap_handler (invariant after mmap)
 * @tsk:                tid for task that called init for this proc (invariant after init)
 * @vma_vm_mm:          copy of vma->vm_mm (invariant after mmap)
 * @buffer:             base of per‑proc address space mapped via mmap
 * @buffers:            list of all buffers for this proc
 * @free_buffers:       rb tree of buffers available for allocation (sorted by size)
 * @allocated_buffers:  rb tree of allocated buffers sorted by address
 * @free_async_space:   VA space available for async buffers, initialized to half of the full VA space
 * @pages:              array of binder_lru_page
 * @buffer_size:        size of address space specified via mmap
 * @pid:                pid for associated binder_proc (invariant after init)
 * @pages_high:         high watermark of offset in @pages
 * @oneway_spam_detected: %true if one‑way spam detection fired
 */
struct binder_alloc {
    struct mutex mutex;
    struct vm_area_struct *vma;
    struct mm_struct *vma_vm_mm;
    void __user *buffer;
    struct list_head buffers;
    struct rb_root free_buffers;
    struct rb_root allocated_buffers;
    size_t free_async_space;
    struct binder_lru_page *pages;
    size_t buffer_size;
    uint32_t buffer_free;
    int pid;
    size_t pages_high;
    bool oneway_spam_detected;
};
/**
 * struct vm_area_struct - describes a virtual memory area
 * One per VM‑area/task. A VM area is any part of the process virtual memory
 * space that has a special rule for the page‑fault handlers (e.g., shared library,
 * executable area, etc).
 */
struct vm_area_struct {
    unsigned long vm_start; /* start address within vm_mm */
    unsigned long vm_end;   /* first byte after end address */
    struct vm_area_struct *vm_next, *vm_prev; /* linked list sorted by address */
    struct rb_node vm_rb;
    /* ... */
};
/**
 * struct binder_buffer - buffer used for binder transactions
 * @entry:               entry alloc->buffers
 * @rb_node:             node for allocated_buffers/free_buffers rb trees
 * @free:                %true if buffer is free
 * @clear_on_free:       %true if buffer must be zeroed after use
 * @allow_user_free:     %true if user is allowed to free buffer
 * @async_transaction:   %true if buffer is in use for an async txn
 * @oneway_spam_suspect: %true if total async allocate size just exceeds threshold
 * @debug_id:            unique ID for debugging
 * @transaction:         pointer to associated struct binder_transaction
 * @target_node:         struct binder_node associated with this buffer
 * @data_size:           size of @transaction data
 * @offsets_size:        size of array of offsets
 * @extra_buffers_size:  size of space for other objects (e.g., sg lists)
 * @user_data:           user pointer to base of buffer space
 * @pid:                 pid to attribute the buffer to (caller)
 */
struct binder_buffer {
    struct list_head entry;      /* free and allocated entries by address */
    struct rb_node rb_node;    /* free entry by size or allocated entry */
    unsigned free:1;
    unsigned clear_on_free:1;
    unsigned allow_user_free:1;
    unsigned async_transaction:1;
    unsigned oneway_spam_suspect:1;
    unsigned debug_id:27;
    struct binder_transaction *transaction;
    struct binder_node *target_node;
    size_t data_size;
    size_t offsets_size;
    size_t extra_buffers_size;
    void __user *user_data;
    int pid;
};
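The free_buffers tree is keyed by size so that allocation can pick the smallest free buffer that still fits a request. As a rough userspace illustration of that best-fit rule (a plain array stands in for the rb tree; fake_buffer and best_fit are made-up names, not driver code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace sketch of best-fit selection, mirroring how the
 * allocator walks the size-sorted free_buffers rb tree. An array stands
 * in for the tree, so lookup here is O(n) instead of O(log n). */
struct fake_buffer { size_t size; };

/* Return the index of the smallest free buffer that fits `need`, or -1. */
int best_fit(const struct fake_buffer *bufs, int n, size_t need)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (bufs[i].size < need)
            continue;                       /* too small, skip */
        if (best < 0 || bufs[i].size < bufs[best].size)
            best = i;                       /* smaller fit found */
    }
    return best;
}
```

In the real driver the rb tree makes this lookup logarithmic, and a buffer larger than the request is split, with the remainder re-inserted into the free tree.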

Binder mmap

After a process opens the Binder device, it calls mmap on the file descriptor to set up the virtual address space. The core function is binder_mmap, which validates the caller, adjusts the vm flags, assigns binder_vm_ops, and delegates to binder_alloc_mmap_handler to initialize the per-process address space.

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    struct binder_proc *proc = filp->private_data;
    if (proc->tsk != current->group_leader)
        return -EINVAL;
    vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
    vma->vm_flags &= ~VM_MAYWRITE;
    vma->vm_ops = &binder_vm_ops;
    vma->vm_private_data = proc;
    return binder_alloc_mmap_handler(&proc->alloc, vma);
}

The real setup work happens in binder_alloc_mmap_handler, which records the buffer size (capped at 4 MB), stores the start address of the user-space region (alloc->buffer = (void __user *)vma->vm_start), allocates one binder_lru_page entry per page, and inserts a single free buffer spanning the whole region into the free-buffers tree.

int binder_alloc_mmap_handler(struct binder_alloc *alloc,
                              struct vm_area_struct *vma)
{
    int ret;
    const char *failure_string;
    struct binder_buffer *buffer;
    mutex_lock(&binder_alloc_mmap_lock);
    if (alloc->buffer_size) {
        ret = -EBUSY;
        failure_string = "already mapped";
        goto err_already_mapped;
    }
    alloc->buffer_size = min_t(unsigned long,
                               vma->vm_end - vma->vm_start, SZ_4M);
    mutex_unlock(&binder_alloc_mmap_lock);
    alloc->buffer = (void __user *)vma->vm_start;
    alloc->pages = kcalloc(alloc->buffer_size / PAGE_SIZE,
                           sizeof(alloc->pages[0]), GFP_KERNEL);
    if (alloc->pages == NULL) {
        ret = -ENOMEM;
        failure_string = "alloc page array";
        goto err_alloc_pages_failed;
    }
    buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
    if (!buffer) {
        ret = -ENOMEM;
        failure_string = "alloc buffer struct";
        goto err_alloc_buf_struct_failed;
    }
    buffer->user_data = alloc->buffer;
    list_add(&buffer->entry, &alloc->buffers);
    buffer->free = 1;
    binder_insert_free_buffer(alloc, buffer);
    alloc->free_async_space = alloc->buffer_size / 2;
    binder_alloc_set_vma(alloc, vma);
    mmgrab(alloc->vma_vm_mm);
    return 0;
    /* error handling omitted */
}
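Two size decisions above are easy to miss: the mapping is capped at SZ_4M no matter how much the process asked mmap for, and half of the resulting space is reserved for asynchronous (one-way) transactions. A minimal sketch of that arithmetic (the helper names are made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

#define SZ_4M (4u * 1024 * 1024)

/* Mirror of the min_t(..., vma->vm_end - vma->vm_start, SZ_4M) cap. */
size_t binder_buffer_size(size_t requested)
{
    return requested < SZ_4M ? requested : SZ_4M;
}

/* Mirror of alloc->free_async_space = alloc->buffer_size / 2. */
size_t binder_async_space(size_t buffer_size)
{
    return buffer_size / 2;
}
```

This split is why one-way transactions hit size limits sooner than synchronous ones: they are accounted against only half of the mapped region.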

Physical Page Allocation

During a transaction, binder_alloc_new_buf eventually calls binder_update_page_range to allocate physical pages on demand and map them into the receiver’s user address space.

static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
                                   void __user *start, void __user *end)
{
    void __user *page_addr;
    unsigned long user_page_addr;
    struct binder_lru_page *page;
    struct vm_area_struct *vma = NULL;
    struct mm_struct *mm = NULL;
    bool need_mm = false;
    for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
        page = &alloc->pages[(page_addr - alloc->buffer) / PAGE_SIZE];
        if (!page->page_ptr) {
            need_mm = true;
            break;
        }
    }
    if (need_mm && mmget_not_zero(alloc->vma_vm_mm))
        mm = alloc->vma_vm_mm;
    if (mm) {
        mmap_read_lock(mm);
        vma = alloc->vma;
    }
    for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
        size_t index = (page_addr - alloc->buffer) / PAGE_SIZE;
        page = &alloc->pages[index];
        if (page->page_ptr) {
            /* page already allocated – move it to the front of LRU */
            continue;
        }
        if (WARN_ON(!vma))
            goto err_page_ptr_cleared;
        page->page_ptr = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
        if (!page->page_ptr)
            goto err_alloc_page_failed;
        page->alloc = alloc;
        INIT_LIST_HEAD(&page->lru);
        user_page_addr = (uintptr_t)page_addr;
        vm_insert_page(vma, user_page_addr, page[0].page_ptr);
        if (index + 1 > alloc->pages_high)
            alloc->pages_high = index + 1;
    }
    if (mm) {
        mmap_read_unlock(mm);
        mmput(mm);
    }
    return 0;
    /* error handling omitted */
}
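The essential behavior is that the pages array starts out empty and only the range a transaction actually touches gets backing pages. A userspace model of that lazy allocation (fake_alloc and update_page_range are illustrative stand-ins, with calloc in place of alloc_page and offsets in place of user addresses):

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SZ 4096u

/* Stand-in for alloc->pages: NULL until the page is really allocated. */
struct fake_alloc {
    void *pages[16];
};

/* Allocate backing pages for [start, end), skipping ones that already
 * exist -- the same index arithmetic as binder_update_page_range. */
int update_page_range(struct fake_alloc *a, size_t start, size_t end)
{
    for (size_t off = start; off < end; off += PAGE_SZ) {
        size_t index = off / PAGE_SZ;   /* (page_addr - buffer) / PAGE_SIZE */
        if (a->pages[index])
            continue;                   /* already populated */
        a->pages[index] = calloc(1, PAGE_SZ);
        if (!a->pages[index])
            return -1;
    }
    return 0;
}
```

In the driver, each allocated page is additionally inserted into the receiver's page tables with vm_insert_page, which is what makes the data visible in the receiver's user address space without a second copy.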

Data Copy

Copying data from the sender into the receiver's buffer is done by the dedicated helper binder_alloc_copy_user_to_buffer. For each page touched, it obtains the target physical page, maps it temporarily into kernel virtual address space with kmap, copies the user data with copy_from_user, and then releases the mapping with kunmap.

unsigned long binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
                                                struct binder_buffer *buffer,
                                                binder_size_t buffer_offset,
                                                const void __user *from,
                                                size_t bytes)
{
    if (!check_buffer(alloc, buffer, buffer_offset, bytes))
        return bytes;
    while (bytes) {
        unsigned long size;
        unsigned long ret;
        struct page *page;
        pgoff_t pgoff;
        void *kptr;
        page = binder_alloc_get_page(alloc, buffer, buffer_offset, &pgoff);
        size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
        kptr = kmap(page) + pgoff;
        ret = copy_from_user(kptr, from, size);
        kunmap(page);
        if (ret)
            return bytes - size + ret;
        bytes -= size;
        from += size;
        buffer_offset += size;
    }
    return 0;
}

This per-page loop borrows kernel virtual address space (via kmap) only for the single page currently being copied and releases it (via kunmap) immediately afterward, rather than reserving a large contiguous kernel virtual region for the whole buffer, as earlier kernels did.
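The chunking rule in the loop is worth spelling out: each iteration copies at most up to the next page boundary (PAGE_SIZE - pgoff bytes), so a payload that starts mid-page is split into a head fragment, some whole pages, and a tail. A small userspace model of just that arithmetic (split_copy is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SZ 4096u

/* Model of the chunking in binder_alloc_copy_user_to_buffer: record the
 * size of each per-page chunk in sizes[] and return the chunk count.
 * Each chunk corresponds to one kmap / copy_from_user / kunmap round. */
int split_copy(size_t buffer_offset, size_t bytes, size_t *sizes)
{
    int n = 0;
    while (bytes) {
        size_t pgoff = buffer_offset & (PAGE_SZ - 1);  /* offset in page */
        size_t size = bytes < PAGE_SZ - pgoff ? bytes : PAGE_SZ - pgoff;
        sizes[n++] = size;
        bytes -= size;
        buffer_offset += size;
    }
    return n;
}
```

For example, 5000 bytes starting at offset 4000 split into a 96-byte head (up to the page boundary), one full 4096-byte page, and an 808-byte tail.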

Conclusion

The Binder data‑transfer path works page by page: physical pages are allocated on demand, mapped into the receiver’s user space, copied from the sender via a temporary kernel mapping, and then the kernel mapping is immediately released. Consequently, the kernel virtual address space is only a transient placeholder during the copy operation.

Tags: Memory Management, Android, kernel, mmap, IPC, Binder, kmap
Written by Coolpad Technology Team

Committed to advancing technology and supporting innovators. The Coolpad Technology Team regularly shares forward‑looking insights, product updates, and tech news. Tech experts are welcome to join; everyone is invited to follow us.
