How to Bind KVM vCPU to Specific Host CPUs with taskset
This guide explains processor affinity, shows how to isolate host CPUs using the isolcpus kernel parameter, and provides step‑by‑step commands to launch a KVM guest and bind its vCPU threads to dedicated CPUs with taskset, including verification techniques and useful tips.
Background and Processor Affinity
In SMP Linux systems the scheduler may move a process between CPUs, which can cause cache misses. Setting processor affinity (binding a process or thread to specific CPUs) can improve cache locality but may disrupt load balancing, especially on NUMA architectures.
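Affinity can be exercised directly from user space. As a minimal Linux-only sketch (Python's os module wraps the sched_setaffinity(2)/sched_getaffinity(2) syscalls), pin the current process to one CPU and read the mask back:

```python
import os

# Linux-only sketch: restrict the calling process (pid 0 means "self")
# to logical CPU 0, then read the affinity set back.
print("before:", sorted(os.sched_getaffinity(0)))
os.sched_setaffinity(0, {0})
print("after:", sorted(os.sched_getaffinity(0)))  # -> [0]
```

taskset performs the same operation from the command line, either on a running PID or when launching a command.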
Step 1 – Isolate Two Logical CPUs for a Guest
Add the isolcpus=2,3 parameter to the kernel command line (e.g., in GRUB) so that normal processes are not scheduled on CPUs 2 and 3.
title Red Hat Enterprise Linux Server (3.5.0)
root (hd0,0)
kernel /boot/vmlinuz-3.5.0 ro root=UUID=... isolcpus=2,3
initrd /boot/initramfs-3.5.0.img

After reboot, verify the isolation with:
# ps -eLo psr | grep 0 | wc -l # → 106 threads on cpu0
# ps -eLo psr | grep 1 | wc -l # → 107 threads on cpu1
# ps -eLo psr | grep 2 | wc -l # → 4 threads on cpu2
# ps -eLo psr | grep 3 | wc -l # → 4 threads on cpu3

The output shows only a handful of kernel helper threads on the isolated CPUs, confirming that the isolation succeeded.
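Note that counting with grep can over-match on larger machines ("1" also matches "10" and "11"). A small exact-match sketch that tallies threads per CPU from the same ps output (assumes a Linux ps supporting -eLo, and numeric PSR values):

```python
import subprocess
from collections import Counter

# Tally threads per CPU with exact matching, avoiding grep's
# substring hits; skip the "PSR" header line.
out = subprocess.run(["ps", "-eLo", "psr"],
                     capture_output=True, text=True).stdout
counts = Counter(line.strip() for line in out.splitlines()[1:] if line.strip())
for cpu in sorted(counts, key=int):
    print(f"cpu{cpu}: {counts[cpu]} threads")
```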
Step 2 – Launch a VM with Two vCPUs and Bind Them
Start the guest (example image rhel6u3.img) with two virtual CPUs:
# qemu-system-x86_64 rhel6u3.img -smp 2 -m 512 -daemonize

Identify the QEMU process and its threads:
# ps -eLo ruser,pid,ppid,lwp,psr,args | grep qemu | grep -v grep
root 3963 1 3963 0 qemu-system-x86_64 ...
root 3963 1 3967 0 qemu-system-x86_64 ...
root 3963 1 3968 1 qemu-system-x86_64 ...

Bind the main QEMU process to CPU 2 and each vCPU thread to its dedicated CPU with taskset. The argument is a hexadecimal bitmask in which bit n selects CPU n, so 0x4 means CPU 2 and 0x8 means CPU 3:
# taskset -p 0x4 3963 # bind QEMU process to cpu2
# taskset -p 0x4 3967 # bind first vCPU thread to cpu2
# taskset -p 0x8 3968 # bind second vCPU thread to cpu3

Confirm the new affinity masks:
pid 3963's new affinity mask: 4
pid 3967's new affinity mask: 4
pid 3968's new affinity mask: 8

Re-run the ps command to see that the QEMU threads now run on the isolated CPUs:
# ps -eLo ruser,pid,ppid,lwp,psr,args | grep qemu
root 3963 1 3963 2 qemu-system-x86_64 ...
root 3963 1 3967 2 qemu-system-x86_64 ...
root 3963 1 3968 3 qemu-system-x86_64 ...

Inspecting vCPU-Thread Mapping in the QEMU Monitor
Enter the QEMU monitor (Ctrl‑Alt‑2) and run info cpus:
(qemu) info cpus
* CPU #0: pc=0xffffffff810375ab thread_id=3967
CPU #1: pc=0xffffffff812b2594 thread_id=3968

The asterisk marks the CPU currently selected by the monitor; here that is CPU 0, which is also the BSP (Boot Strap Processor). Note that the thread_id values match the LWP numbers reported by ps above.
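The hexadecimal arguments given to taskset -p are plain bitmasks in which bit n grants CPU n. A tiny helper (a sketch; the cpu_mask name is made up here) shows the arithmetic:

```python
# Build a taskset-style affinity mask: bit n set means the task
# may run on logical CPU n.
def cpu_mask(*cpus):
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(hex(cpu_mask(2)))     # 0x4  -> CPU 2 only
print(hex(cpu_mask(3)))     # 0x8  -> CPU 3 only
print(hex(cpu_mask(2, 3)))  # 0xc  -> CPUs 2 and 3
```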
Additional Notes on CPU Affinity
Quick CPU‑Affinity Tips:
Typical reasons to limit affinity: heavy computation, scalability testing, real‑time workloads.
Child processes inherit the parent’s affinity; using taskset to launch a process is effectively a fork‑exec with the mask set.
Programmatically set affinity with sched_setaffinity() and query with sched_getaffinity().
NGINX can bind workers via worker_cpu_affinity (e.g., worker_cpu_affinity 0010 0100 1000;).
Windows Task Manager also offers a “Set affinity” option.
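The inheritance rule in the tips above can be demonstrated directly. In this Linux-only sketch, a child process launched after narrowing the parent's mask starts with the same mask, which is exactly what taskset relies on when launching a command:

```python
import os
import subprocess
import sys

# Narrow the parent's mask to CPU 0, then spawn a child; the child
# inherits the affinity mask across fork/exec.
os.sched_setaffinity(0, {0})
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(sorted(os.sched_getaffinity(0)))"],
    capture_output=True, text=True,
)
print("child affinity:", child.stdout.strip())  # -> [0]
```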
Useful commands to query CPU topology:
Logical CPUs: cat /proc/cpuinfo | grep "processor" | wc -l
Physical CPUs (sockets): cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
Cores per physical CPU: cat /proc/cpuinfo | grep "cpu cores" | uniq
Hyper-threading detection: if two logical CPUs report the same physical id and core id, they are hyper-threading siblings sharing one core.
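The same topology facts can be pulled out of /proc/cpuinfo programmatically. A Linux-only sketch (the helper name is an assumption, and fields such as "cpu cores" are absent on some platforms, where the function returns None for that value):

```python
# Linux sketch: summarize CPU topology from /proc/cpuinfo.
def cpu_topology(path="/proc/cpuinfo"):
    logical, physical_ids, cores = 0, set(), None
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "processor":
                logical += 1
            elif key == "physical id":
                physical_ids.add(value)
            elif key == "cpu cores":
                cores = int(value)
    return logical, len(physical_ids), cores

logical, sockets, cores = cpu_topology()
print(f"{logical} logical CPUs, {sockets} socket(s), {cores} core(s) per socket")
```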
While manual affinity tuning is not generally recommended for KVM guests, it can be beneficial when the hardware layout is well understood and exclusive CPU resources are required for performance or isolation.
ITPUB
Official ITPUB account sharing technical insights, community news, and exciting events.