Image Encryption, Watermarking, Detection & Green Screen Removal in Python
This tutorial walks through Python computer-vision techniques: XOR-based image encryption (full-image, mask, and ROI variants), digital watermark embedding via bit-plane/LSB and visible alpha blending, sensitivity-driven human-shape detection, and HSV-based green-screen removal, with complete code snippets and practical guidance for rapid AI-assisted learning.
1. Introduction
I used to avoid topics outside my main field, worried they would cost too much time for too little learning. With the rise of AI, I was able to revisit and learn these techniques quickly, and I have organized the material here for sharing.
Purpose
Round out the overall system of computer vision, with a focus on practical usage.
Provide demo code and usage examples.
Structure
Basic image encryption/decryption (masking).
Image watermarking.
Object detection.
Green‑screen invisibility.
Future: deeper image recognition and AI processing.
2. Prerequisite Articles on Computer Vision
Follow me to start learning computer vision. Recommended prior reads:
- "Computer Vision Basics and Introduction" – for a solid foundation.
- "Five-minute Python OCR for Beginners".
- "Write an AI Object Detection Script in Minutes with Python".
3. Case Studies
3.1 Image Encryption/Decryption
Simple Encryption: XOR with a generated key – decode_001.py
import cv2
import numpy as np

original_img = cv2.imread("original.jpg")  # example input path

"""Generate or read key image"""
key = np.random.randint(0, 256, size=original_img.shape, dtype=np.uint8)
"""Encrypt image"""
encrypted_img = cv2.bitwise_xor(original_img, key)
"""Decrypt image"""
decrypted_img = cv2.bitwise_xor(encrypted_img, key)

Mask Method: Encrypt specific regions using a mask – decode_002.py
original_copy = original_img.copy()
mask_3d = np.stack([mask] * 3, axis=2)
encrypted_img = np.where(mask_3d == 1, 0, original_img)
decrypted_img = np.where(mask_3d == 1, original_copy, encrypted_img)
def create_mask(image_shape, x1, y1, x2, y2):
    """Create a mask for a specified region"""
    r, c = image_shape[:2]
    mask = np.zeros((r, c), dtype=np.uint8)
    y1 = max(0, y1)
    y2 = min(r, y2)
    x1 = max(0, x1)
    x2 = min(c, x2)
    mask[y1:y2, x1:x2] = 1
    return mask

ROI Method: Direct XOR on a region of interest – decode_003.py
key = np.random.randint(0, 256, size=original_img.shape, dtype=np.uint8)
encrypted_full = cv2.bitwise_xor(original_img, key)
encrypted_roi = encrypted_full[y1:y2, x1:x2]
encrypted_img = original_img.copy()
encrypted_img[y1:y2, x1:x2] = encrypted_roi

decrypted_full = cv2.bitwise_xor(encrypted_img, key)
decrypted_roi = decrypted_full[y1:y2, x1:x2]
result_img = encrypted_img.copy()
result_img[y1:y2, x1:x2] = decrypted_roi

Principle and Comparison:
# XOR operation
- Encryption: original image XOR key → encrypted image
- Decryption: encrypted image XOR key → original image
# Mask vs ROI
- Mask: create a mask image, extract and replace specified area
- ROI: directly process and replace a fixed region (e.g., face)

Comparison Item | Mask Method                                  | ROI Method
Implementation  | Create mask image, extract and replace region | Directly operate on ROI, replace face area
Flexibility     | Applicable to arbitrary shapes                | Applicable to fixed shapes like faces
Complexity      | Higher, requires mask creation                | Lower, direct ROI processing
Use Cases       | Masking any shaped area                       | Masking fixed shapes such as faces
3.2 Two Common Watermark Techniques
Basic digital watermark – bit‑plane embedding – watermark_001.py
# Prepare transparent rotated watermark
rotated_watermark = cv2.warpAffine(binary_watermark, rotation_matrix, (width, height))
normalized_watermark = rotated_watermark.astype(float) / 255.0 * alpha

Visible watermark – higher visibility – watermark_002.py
# Create gradient alpha mask
for i in range(roi_height):
    for j in range(roi_width):
        center_x, center_y = roi_width / 2, roi_height / 2
        dist_x = abs(j - center_x) / center_x
        dist_y = abs(i - center_y) / center_y
        alpha = 1.0 - max(dist_x, dist_y) * 0.7
        mask[roi_y1 + i, roi_x1 + j] = max(0.3, alpha)
# Blend watermark per channel using the gradient mask
alpha_roi = mask[roi_y1:roi_y2, roi_x1:roi_x2]
for c in range(3):
    roi = watermarked[roi_y1:roi_y2, roi_x1:roi_x2, c]
    watermarked[roi_y1:roi_y2, roi_x1:roi_x2, c] = (
        alpha_roi * watermark_resized[:, :, c] + (1 - alpha_roi) * roi
    )

Digital watermark process:
# Embedding
1. Convert carrier and watermark to binary.
2. Clear LSB of carrier.
3. Embed watermark bits into carrier LSB.
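The embedding steps above, together with the extraction steps that follow, come down to a few bitwise operations. A minimal sketch with a synthetic grayscale carrier (the names are illustrative, not from the original watermark_001.py):

```python
import numpy as np

carrier = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # grayscale carrier image
watermark = (np.random.rand(64, 64) > 0.5).astype(np.uint8)      # binary watermark, values 0/1

# Embedding: clear the carrier's least significant bit, then write the watermark into it
embedded = (carrier & 0xFE) | watermark

# Extraction: read the LSB plane back out
recovered = embedded & 0x01

assert np.array_equal(recovered, watermark)
assert np.all((embedded ^ carrier) <= 1)  # each pixel changes by at most 1, invisible to the eye
```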
# Extraction
1. Convert watermarked image to binary.
2. Extract LSB to recover watermark.

3.3 Human Shape Detection (Outline.py)
# Sensitivity‑controlled parameters
min_area = int(1000 * (1 - sensitivity**2))
max_area_ratio = 0.3 + sensitivity * 0.4
min_aspect = 0.2 - sensitivity * 0.15
max_aspect = 5 + sensitivity * 10
overlap_threshold = 0.8 - sensitivity * 0.3

These formulas link a single sensitivity parameter to multiple detection thresholds, allowing unified control over detection strictness.
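To see how one knob drives all thresholds at once, the mapping can be wrapped in a small helper (detection_params is an illustrative name, not part of the original Outline.py):

```python
def detection_params(sensitivity: float) -> dict:
    """Map one sensitivity value in [0, 1] to all detection thresholds."""
    return {
        "min_area": int(1000 * (1 - sensitivity**2)),
        "max_area_ratio": 0.3 + sensitivity * 0.4,
        "min_aspect": 0.2 - sensitivity * 0.15,
        "max_aspect": 5 + sensitivity * 10,
        "overlap_threshold": 0.8 - sensitivity * 0.3,
    }

# Low sensitivity: large minimum area, strict overlap -> fewer, safer detections
print(detection_params(0.2))
# High sensitivity: small minimum area, loose overlap -> more candidate detections
print(detection_params(0.8))
```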
# Multi‑threshold binarization
binary_methods = []
ret, otsu = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
binary_methods.append(otsu)
if sensitivity > 0.3:
    block_size = int(25 - sensitivity * 10)
    block_size = block_size if block_size % 2 == 1 else block_size + 1
    c_value = int(5 - sensitivity * 3)
    adaptive_gaussian = cv2.adaptiveThreshold(...)
    binary_methods.append(adaptive_gaussian)

Higher sensitivity enables more complex algorithms to capture additional targets.
# Morphological kernel size adjusts with sensitivity
morph_size = int(7 - sensitivity * 4)
morph_size = max(3, morph_size)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (morph_size, morph_size))

3.4 Green-Screen Invisibility (Green-Screen Matting for a Specific Color)
import cv2
import numpy as np

# Example inputs (green-screen frame, same-size background); typical HSV green bounds, tune per footage
fore, back = cv2.imread("fore.jpg"), cv2.imread("back.jpg")
green_lower, green_upper = [35, 43, 46], [77, 255, 255]
# Convert to HSV
hsv = cv2.cvtColor(fore, cv2.COLOR_BGR2HSV)
# Define green range and create mask
lower_green = np.array(green_lower)
upper_green = np.array(green_upper)
mask = cv2.inRange(hsv, lower_green, upper_green)
# Denoise mask
mask = cv2.medianBlur(mask, 5)
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=2)
# Remove small regions
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
min_area = 500
clean_mask = np.zeros_like(mask)
for contour in contours:
    if cv2.contourArea(contour) > min_area:
        cv2.drawContours(clean_mask, [contour], 0, 255, -1)
mask = clean_mask
# Edge smoothing
edge_mask = np.zeros_like(mask)
for contour in contours:
    if cv2.contourArea(contour) > min_area:
        cv2.drawContours(edge_mask, [contour], 0, 255, 2)
edge_kernel = np.ones((3, 3), np.uint8)
edge_mask = cv2.dilate(edge_mask, edge_kernel, iterations=2)
blurred_fore = cv2.GaussianBlur(fore, (9, 9), 0)
fore_with_blur = fore.copy()
fore_with_blur[edge_mask == 255] = blurred_fore[edge_mask == 255]
# Composite result
back_region = cv2.bitwise_and(back, back, mask=mask)
mask_inv = cv2.bitwise_not(mask)
fore_region = cv2.bitwise_and(fore_with_blur, fore_with_blur, mask=mask_inv)
result = cv2.add(back_region, fore_region)

4. Upcoming Topics
Future detailed demos will cover advanced image‑to‑image principles, image information recognition, and AI‑driven use cases.
Conclusion
Reference Book: "Computer Vision: 40 Cases from Beginner to Deep Learning".
Learning Tools: Trae + GPT.
AI makes previously hard‑to‑enter topics accessible; the provided cases are AI‑generated but fully debugged and runnable. Interested readers can explore the source code.
Code repository: juejin.cn/post/6941642435189538824