Generating Synthetic Face Masks with OpenCV and dlib
This tutorial explains how to create synthetic masked faces by using OpenCV and dlib to detect facial landmarks, draw various mask shapes, and overlay them on images, providing a practical pipeline for mask‑aware face recognition research.
Mask wearing is an effective defense against COVID‑19 but severely degrades the performance of facial‑recognition algorithms that rely on nose, mouth, and chin features. To address this, the article presents a reproducible method for generating synthetic masked faces using Python, OpenCV, and dlib.
Installation – A requirements_facemask.txt file lists the necessary libraries, which are installed in a Python 3.7 virtual environment.
# requirements_facemask.txt
numpy == 1.18.5
pip == 20.2.2
imutils == 0.5.3
# requires Python >= 3.7 (not installable via pip)
dlib == 19.21.0
cmake == 3.18.0
opencv-python == 4.4.0
Imports – The script imports the required modules.
# Required imports
import cv2
import dlib
import numpy as np
import os
import imutils
Mask colors are defined in BGR order, and the working directory is set to the folder containing the input images.
# Set the working directory
os.chdir('PATH_TO_DIR')
path = 'IMAGE_PATH'
# Initialize colors: [color_type] = (Blue, Green, Red)
color_blue = (239, 207, 137)
color_cyan = (255, 200, 0)
color_black = (0, 0, 0)
Image preprocessing – The input image is loaded, resized to a width of 500 px, and converted to grayscale.
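When only a target width is given, imutils.resize preserves the aspect ratio and derives the height from it. A small stand-alone sketch of that computation (the helper name is mine, not part of the original script):

```python
def resized_height(width, height, new_width=500):
    """Height after an aspect-ratio-preserving resize to new_width,
    mirroring what imutils.resize(img, width=500) computes."""
    return int(height * (new_width / float(width)))

# A 1000 x 800 input resized to width 500 keeps the 5:4 ratio:
print(resized_height(1000, 800))  # → 400
```

Resizing to a fixed width keeps the face detector's runtime predictable regardless of the source resolution.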
# Load the image, resize it, and convert it to grayscale
img = cv2.imread(path)
img = imutils.resize(img, width=500)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Face detection and landmark extraction – dlib’s HOG‑based frontal‑face detector is initialized, and the 68‑point shape predictor ( shape_predictor_68_face_landmarks.dat ) is loaded to obtain facial landmarks.
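Before the detection code, it helps to know how the 68 points are grouped. The standard annotation scheme assigns a fixed index range to each facial region; the lookup below is a pure-Python sketch (helper and constant names are mine, not from the script):

```python
# Index ranges of dlib's standard 68-point facial landmark scheme.
LANDMARK_REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_points(points, region):
    """Select the (x, y) pairs that belong to one facial region."""
    return [points[i] for i in LANDMARK_REGIONS[region]]

# With synthetic points, the nose region starts at index 27:
pts = [(i, i) for i in range(68)]
print(region_points(pts, "nose")[0])  # → (27, 27)
```

The mask polygons built later draw only on the jaw (1–15) and nose (27–35) ranges, which is why a mask hides exactly the features that recognition models rely on.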
# Initialize dlib's face detector
detector = dlib.get_frontal_face_detector()
faces = detector(gray, 1)
# Load the 68-point landmark predictor
p = 'shape_predictor_68_face_landmarks.dat'
predictor = dlib.shape_predictor(p)
for face in faces:
    landmarks = predictor(gray, face)
    # Example: coordinates of the n-th landmark
    # x = landmarks.part(n).x
    # y = landmarks.part(n).y
Using the landmark coordinates, the script constructs point lists for three mask types (wide‑high, wide‑medium, wide‑low) by concatenating jaw‑line points with additional mask‑specific points defined in NIST IR 8311.
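That construction can also be expressed as a pure function over the 68 (x, y) pairs, which makes the three polygons easy to check without an image; the script's inline version follows. This is a sketch with my own function and variable names:

```python
import numpy as np

def build_mask_polygons(pts):
    """Build the three mask polygons from a list of 68 (x, y) pairs,
    mirroring the script's point lists."""
    # Jaw-line points 1..15 form the lower outline of every mask.
    jaw = [[pts[i][0], pts[i][1]] for i in range(1, 16)]
    # Wide-high: close the polygon up at the nose bridge (point 27).
    top_a = [[pts[42][0], pts[15][1]],
             [pts[27][0], pts[27][1]],
             [pts[39][0], pts[1][1]]]
    # Wide-medium: a single top point near the nose tip (point 29).
    top_c = [[pts[29][0], pts[29][1]]]
    # Wide-low: follow the nostril line, points 35 down to 31.
    top_e = [[pts[i][0], pts[i][1]] for i in range(35, 30, -1)]
    return {1: np.array(jaw + top_a, dtype=np.int32),
            2: np.array(jaw + top_c, dtype=np.int32),
            3: np.array(jaw + top_e, dtype=np.int32)}

# With synthetic points, check the vertex count of each polygon:
pts = [(i, 2 * i) for i in range(68)]
polys = build_mask_polygons(pts)
print([p.shape for p in polys.values()])  # → [(18, 2), (16, 2), (20, 2)]
```

Returning np.int32 arrays matters because OpenCV's polygon-drawing functions expect integer point arrays.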
# Build the mask point sets (using the landmarks of the last detected face)
points = []
for i in range(1, 16):
    point = [landmarks.part(i).x, landmarks.part(i).y]
    points.append(point)
mask_a = [(landmarks.part(42).x, landmarks.part(15).y),
          (landmarks.part(27).x, landmarks.part(27).y),
          (landmarks.part(39).x, landmarks.part(1).y)]
mask_c = [(landmarks.part(29).x, landmarks.part(29).y)]
mask_e = [(landmarks.part(35).x, landmarks.part(35).y),
          (landmarks.part(34).x, landmarks.part(34).y),
          (landmarks.part(33).x, landmarks.part(33).y),
          (landmarks.part(32).x, landmarks.part(32).y),
          (landmarks.part(31).x, landmarks.part(31).y)]
fmask_a = np.array(points + mask_a, dtype=np.int32)
fmask_c = np.array(points + mask_c, dtype=np.int32)
fmask_e = np.array(points + mask_e, dtype=np.int32)
mask_type = {1: fmask_a, 2: fmask_c, 3: fmask_e}
The user selects the mask color (blue or black) and coverage type (high, medium, low) via input(). The chosen mask polygon is outlined with cv2.polylines, filled with cv2.fillPoly, and the final image is displayed and saved.
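Note that int(input()) raises on non-numeric text, and the script silently maps any answer other than 1 to black. A validated variant of the choice handling could look like this (a hypothetical helper, not part of the original):

```python
# Map menu answers to a BGR color and a coverage key, rejecting
# anything outside the advertised options.
COLORS = {1: (239, 207, 137), 2: (0, 0, 0)}  # 1 = blue, 2 = black

def parse_choices(color_choice, coverage_choice):
    if color_choice not in COLORS:
        raise ValueError('color choice must be 1 (blue) or 2 (black)')
    if coverage_choice not in (1, 2, 3):
        raise ValueError('coverage choice must be 1, 2, or 3')
    return COLORS[color_choice], coverage_choice

print(parse_choices(2, 3))  # → ((0, 0, 0), 3)
```

Failing fast on an out-of-range answer is preferable to drawing a mask the user did not ask for.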
# Get the user's choices
choice1 = int(input('Please select the mask color\nEnter 1 for blue\nEnter 2 for black:\n'))
choice1 = color_blue if choice1 == 1 else color_black
choice2 = int(input('Please select the mask coverage\nEnter 1 for high\nEnter 2 for medium\nEnter 3 for low:\n'))
img2 = cv2.polylines(img, [mask_type[choice2]], True, choice1, thickness=2, lineType=cv2.LINE_8)
img3 = cv2.fillPoly(img2, [mask_type[choice2]], choice1, lineType=cv2.LINE_AA)
cv2.imshow('image with mask', img3)
cv2.waitKey(0)  # wait for a key press so the window actually appears
os.makedirs('output', exist_ok=True)  # cv2.imwrite fails silently if the folder is missing
output_path = 'output/imagetest.jpg'
print('Saving output image to', output_path)
cv2.imwrite(output_path, img3)
The resulting images show original faces (e.g., Barack Obama) with five different synthetic mask styles, as well as examples on crowd scenes and non‑frontal views, demonstrating that the pipeline can generate realistic masked faces for downstream tasks such as mask‑aware attendance systems or face‑recognition model evaluation.
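To augment a whole dataset rather than a single file, the per-image pipeline can be wrapped in simple path planning. The helper below only computes input/output path pairs, leaving the masking itself to the script above; the directory layout and _masked suffix are my own convention:

```python
from pathlib import Path

IMAGE_SUFFIXES = {'.jpg', '.jpeg', '.png'}

def plan_outputs(input_dir, output_dir):
    """Pair every image under input_dir with an output path that adds
    a _masked suffix, skipping non-image files."""
    out_dir = Path(output_dir)
    return [(p, out_dir / f'{p.stem}_masked{p.suffix}')
            for p in sorted(Path(input_dir).iterdir())
            if p.suffix.lower() in IMAGE_SUFFIXES]
```

Keeping a deterministic source-to-output mapping makes it straightforward to pair each masked face with its unmasked original when building evaluation sets.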
Conclusion – The script successfully reproduces the five mask types described in the NIST report and can be used to augment datasets for training or testing mask‑robust facial‑recognition algorithms.
References to the original dlib landmark tutorial, facial‑point annotation dataset, and OpenCV drawing documentation are provided at the end of the article.