Guided Image Filtering Explained: Insights from He Kaiming’s Classic Paper

This article reviews the Guided Image Filtering technique introduced by He Kaiming, compares it with other edge‑preserving filters, provides detailed OpenCV C++ and Python implementations, discusses the fast variant, analyzes computational complexity, and showcases visual results.

AIWalker

Guided Image Filtering, originally proposed by He Kaiming et al. in a 2010 ECCV paper and expanded in 2013, builds a filter based on a local linear model that uses a guidance image—either the input itself or a different image—to compute the output while preserving edges.

Edge‑Preserving Filter Landscape

The article lists the three major edge‑preserving filters: Guided Filter, Bilateral Filter (Tomasi & Manduchi, ICCV 1998), and Weighted Least Squares (WLS) from the "Edge‑Preserving Decompositions for Multi‑Scale Tone and Detail Manipulation" paper (Farbman et al., SIGGRAPH 2008). It notes that while guided filtering can act as an edge‑preserving smoother when the guidance image equals the input, its scope is broader, enabling structure transfer for applications such as dehazing and guided feathering.

Guided Filter Principle

The filter assumes a linear relationship between the guidance image I and the output q within each local window: q = a·I + b. The coefficients a and b are obtained by least squares from the window means, variances, and covariances of I and the input p, all of which can be computed with box filters whose cost is independent of the window radius (akin to integral images).
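Concretely, each window yields a = cov(I, p) / (var(I) + eps) and b = mean(p) − a·mean(I). A minimal NumPy sketch for a single window (the helper name and toy values are illustrative, not from the paper):

```python
import numpy as np

def window_coeffs(I_win, p_win, eps):
    """Least-squares solution for one window of the model q = a*I + b."""
    mean_I = I_win.mean()
    mean_p = p_win.mean()
    cov_Ip = (I_win * p_win).mean() - mean_I * mean_p
    var_I = (I_win * I_win).mean() - mean_I ** 2
    a = cov_Ip / (var_I + eps)   # near 1 on edges (high variance), near 0 in flat areas
    b = mean_p - a * mean_I
    return a, b

# Self-guidance (p == I): a reduces to var/(var+eps), b to (1-a)*mean
I = np.array([[0.1, 0.2, 0.1],
              [0.2, 0.9, 0.8],
              [0.1, 0.8, 0.9]])
a, b = window_coeffs(I, I, eps=0.01)
```

With self-guidance, eps acts as the edge/flat threshold: windows whose variance is well above eps keep their structure (a close to 1), while low-variance windows are averaged toward their mean.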

OpenCV Implementation

To deepen understanding, the article implements the guided filter using OpenCV’s core functions.

cv::Mat GuidedFilter(cv::Mat I, cv::Mat p, int r, double eps) {
    // Work in double precision, normalized to [0, 1].
    I.convertTo(I, CV_64FC1, 1.0/255);
    p.convertTo(p, CV_64FC1, 1.0/255);
    int R = 2*r + 1;  // window size for radius r
    // Window means of I, p, I*p, and I*I via box filtering.
    cv::Mat mean_I, mean_p, mean_Ip, mean_II;
    cv::boxFilter(I, mean_I, CV_64FC1, cv::Size(R,R));
    cv::boxFilter(p, mean_p, CV_64FC1, cv::Size(R,R));
    cv::boxFilter(I.mul(p), mean_Ip, CV_64FC1, cv::Size(R,R));
    cv::boxFilter(I.mul(I), mean_II, CV_64FC1, cv::Size(R,R));
    // Per-window covariance of (I, p) and variance of I.
    cv::Mat cov_Ip = mean_Ip - mean_I.mul(mean_p);
    cv::Mat var_I = mean_II - mean_I.mul(mean_I);
    // Linear coefficients of q = a*I + b; eps regularizes flat regions.
    cv::Mat a = cov_Ip / (var_I + eps);
    cv::Mat b = mean_p - a.mul(mean_I);
    // Average the coefficients over all windows covering each pixel.
    cv::Mat mean_a, mean_b;
    cv::boxFilter(a, mean_a, CV_64FC1, cv::Size(R,R));
    cv::boxFilter(b, mean_b, CV_64FC1, cv::Size(R,R));
    cv::Mat q = mean_a.mul(I) + mean_b;
    q.convertTo(q, CV_8UC1, 255);
    return q;
}

The equivalent Python version uses NumPy and OpenCV’s boxFilter (each mean must be taken over the matching image, and the averaged coefficients are applied to the guidance I):

import cv2 as cv

def guided_filter(I, P, r, eps):
    """I: guidance image, P: filtering input, r: window radius, eps: regularization."""
    scale = 255.0
    I = I/scale
    P = P/scale
    R = 2*r + 1
    I_mean = cv.boxFilter(I, -1, (R,R))
    P_mean = cv.boxFilter(P, -1, (R,R))
    I_sq_mean = cv.boxFilter(I*I, -1, (R,R))
    I_P_mean = cv.boxFilter(I*P, -1, (R,R))
    var_I = I_sq_mean - I_mean*I_mean
    cov_I_P = I_P_mean - I_mean*P_mean
    a = cov_I_P/(var_I + eps)
    b = P_mean - a*I_mean
    a_mean = cv.boxFilter(a, -1, (R,R))
    b_mean = cv.boxFilter(b, -1, (R,R))
    dst = a_mean*I + b_mean   # apply averaged coefficients to the guidance image
    dst = dst*scale
    return dst

Fast Guided Filter

He’s 2015 "Fast Guided Filter" paper introduces a down‑sampling strategy: the input and guidance are reduced by a factor s, the coefficients a and b are computed on the low‑resolution images, and the averaged coefficient maps are then up‑sampled and applied to the full‑resolution guidance. This reduces the complexity from O(N) to O(N/s²), where N is the pixel count, with little visible loss in edge quality.

Computational Complexity

Both the original and fast versions rely on box filters, which run in time linear in the pixel count and independent of the window radius, thanks to running-sum (integral-image-style) accumulation. The fast variant adds the cost of resizing, which is modest compared to the savings from processing fewer pixels.
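The radius independence is easiest to see in a prefix-sum box filter; a minimal 1-D sketch (illustrative, not OpenCV's actual implementation):

```python
import numpy as np

def box_mean_1d(x, r):
    """Mean over a window of radius r via one cumulative sum.

    Cost is O(len(x)) regardless of r: each output needs only two
    lookups into the prefix-sum table.
    """
    n = len(x)
    # S[i] = sum of x[0:i]; a window sum is S[hi] - S[lo].
    S = np.concatenate(([0.0], np.cumsum(x)))
    lo = np.clip(np.arange(n) - r, 0, n)
    hi = np.clip(np.arange(n) + r + 1, 0, n)
    return (S[hi] - S[lo]) / (hi - lo)
```

The 2-D case applies the same trick along rows and then columns, which is why doubling the guided filter's radius leaves its running time essentially unchanged.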

Visual Results

The article includes example images where the guidance image equals the input, demonstrating edge‑aware smoothing, detail enhancement, and dehazing effects.

Conclusion

Guided filtering offers a versatile, linear‑time edge‑preserving tool that extends beyond simple smoothing to structure transfer tasks. Its simplicity, speed, and the ability to incorporate arbitrary guidance images make it a valuable component in many computer‑vision pipelines. Future work may explore integrating more sophisticated local models or learned features.

References

He, K., Sun, J., & Tang, X. (2010). Guided Image Filtering. ECCV.

He, K., Sun, J., & Tang, X. (2013). Guided Image Filtering. IEEE TPAMI, 35(6).

He, K. (2015). Fast Guided Filter. arXiv:1505.00996.

Tomasi, C., & Manduchi, R. (1998). Bilateral Filtering for Gray and Color Images. ICCV.

Farbman, Z., Fattal, R., Lischinski, D., & Szeliski, R. (2008). Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation. ACM SIGGRAPH.

Additional tutorials: https://github.com/Sundrops/fast-guided-filter, OpenCV docs.

Tags: Image Processing, OpenCV, Edge Preserving, Guided Filtering