How We Boosted Product Pair Recommendations with LLM Scoring and BERT Distillation
This article describes a two‑stage pipeline for recommending well‑matched product combinations. In the first stage, we collect and process product pair data and use a large language model to score pair compatibility; in the second, we fine‑tune Qwen‑7B on those scores and distill its knowledge into a BERT model fast enough for online serving.
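The distillation step can be sketched as a standard soft‑label knowledge‑distillation loss: the BERT student is trained to match the temperature‑softened output distribution of the fine‑tuned Qwen‑7B teacher. This is a generic sketch, not necessarily the exact objective used in the pipeline; the function name, temperature value, and toy logits below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label KD loss: KL divergence between the temperature-scaled
    teacher and student distributions, scaled by T^2 so gradients keep
    a comparable magnitude across temperatures."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)

# Toy batch: 4 product pairs, binary (compatible / not compatible) logits.
teacher = torch.tensor([[2.0, -1.0], [0.5, 0.5], [-1.5, 2.5], [3.0, 0.0]])
student = torch.tensor([[1.0, 0.0], [0.0, 1.0], [-1.0, 2.0], [2.0, 0.5]])
loss = distillation_loss(student, teacher)
```

In training, `teacher_logits` would come from the frozen fine‑tuned Qwen‑7B scorer and `student_logits` from the BERT model; only the student receives gradients.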
