Why Explicit vs Implicit Feedback Matters in Recommender Systems

This article explains the difference between explicit and implicit user feedback, discusses their advantages and pitfalls, and shows how collaborative‑filtering techniques such as user‑based, item‑based, adjusted cosine similarity, and Slope One can be applied to build accurate recommendation engines.

StarRing Big Data Open Lab

Explicit Feedback

User feedback can be explicit, such as likes, dislikes, thumbs‑up/down, or star ratings. Examples include the "like" button on Pandora or YouTube and Amazon’s star rating system.

Implicit Feedback

Implicit feedback is derived from user behavior without explicit ratings, such as click logs, page views, or purchase history. Examples include click records on the New York Times site or Amazon’s purchase logs, which can reveal preferences like interest in technology news or a specific product.

Amazon uses such data for "people who viewed this also viewed" and "customers who bought this also bought" recommendations.

Problems with Explicit Feedback

1. Users are often lazy and do not leave ratings.
2. Users may lie or be biased in their ratings.
3. Users rarely update their reviews when their opinions change.

User‑Based Collaborative Filtering

This method compares a target user with all other users to find the most similar ones and recommends items those similar users liked. It suffers from scalability and sparsity problems as the number of users grows, because every prediction requires comparing against the full user base.
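
The comparison step can be sketched as follows. This is a minimal illustration, not the article's own code; the `ratings` dictionary and user names are hypothetical, and plain cosine similarity over co-rated items stands in for whatever measure a production system would use.

```python
import math

# Hypothetical ratings matrix: user -> {item: rating}
ratings = {
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5},
    "carol": {"a": 1, "b": 5},
}

def cosine_similarity(u, v):
    """Cosine similarity restricted to the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def most_similar_users(target, ratings):
    """Rank all other users by similarity to the target user."""
    return sorted(
        ((other, cosine_similarity(ratings[target], ratings[other]))
         for other in ratings if other != target),
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Note that this scan touches every user on every query, which is exactly the scalability problem described above.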

Item‑Based Collaborative Filtering

Instead of comparing users, this approach computes similarity between items and recommends items similar to those the user already liked. It is more scalable because the item‑similarity matrix can be pre‑computed.
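
Given a pre-computed item-similarity table, prediction is a similarity-weighted average of the user's own ratings. The sketch below assumes the table is supplied from an offline job; the `item_sims` values are hypothetical.

```python
def predict_item_based(user_ratings, item_sims, target_item):
    """Predict a rating for target_item as a similarity-weighted
    average of the user's ratings on items similar to it."""
    num = den = 0.0
    for item, rating in user_ratings.items():
        sim = item_sims.get((target_item, item), 0.0)
        if sim > 0:
            num += sim * rating
            den += sim
    return num / den if den else None

# Hypothetical pre-computed similarities: (target, neighbour) -> score
item_sims = {("c", "a"): 0.9, ("c", "b"): 0.4}
prediction = predict_item_based({"a": 5, "b": 2}, item_sims, "c")
```

Because `item_sims` is computed offline, the online step is a cheap lookup over only the items the user has rated.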

Adjusted Cosine Similarity

To account for different rating scales, the adjusted cosine similarity subtracts each user’s mean rating before computing the cosine of the rating vectors.

The formula sums over the set U of users who rated both items i and j, and normalises by the norms of each user's mean-centred rating vector (the square roots of the summed squared deviations from that user's mean), so that generous and harsh raters contribute on the same scale.
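
A direct translation of that definition, as a sketch (the `ratings` layout of user-to-item dictionaries is an assumption carried over from the earlier examples):

```python
import math

def adjusted_cosine(ratings, item_i, item_j):
    """Adjusted cosine similarity between two items.

    ratings: user -> {item: rating}. Each rating is centred on that
    user's mean before the cosine is taken, so users with different
    rating scales become comparable.
    """
    num = den_i = den_j = 0.0
    for user_ratings in ratings.values():
        # Only users who rated BOTH items contribute (the set U).
        if item_i in user_ratings and item_j in user_ratings:
            mean = sum(user_ratings.values()) / len(user_ratings)
            di = user_ratings[item_i] - mean
            dj = user_ratings[item_j] - mean
            num += di * dj
            den_i += di ** 2
            den_j += dj ** 2
    if den_i == 0 or den_j == 0:
        return 0.0
    return num / (math.sqrt(den_i) * math.sqrt(den_j))
```

The result lies in [-1, 1]; two items rated in opposite directions by every shared user come out near -1.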

Slope One Algorithm

Slope One is a simple item‑based collaborative filtering method that predicts a rating by adding the average difference between two items to the user’s rating of one of them. A weighted version multiplies each difference by the number of co‑ratings.

Weighted Slope One predicts a rating using the formula:

PWS1(u)_j = ( Σ_{i ∈ S(u)\{j}} (dev_{j,i} + r_{u,i}) · c_{j,i} ) / ( Σ_{i ∈ S(u)\{j}} c_{j,i} )

where dev_{j,i} is the average rating difference between items j and i, r_{u,i} is the user’s rating for item i, and c_{j,i} is the count of users who rated both items.
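That formula translates into a short two-phase implementation: an offline pass that accumulates the deviations and co-rating counts, and an online prediction that applies the weighted average. A minimal sketch, with a tiny made-up ratings table for illustration:

```python
from collections import defaultdict

def slope_one_train(ratings):
    """Pre-compute dev[j][i] (average of r_j - r_i over users who
    rated both) and count[j][i] (number of such users)."""
    diff = defaultdict(lambda: defaultdict(float))
    count = defaultdict(lambda: defaultdict(int))
    for user_ratings in ratings.values():
        for j, rj in user_ratings.items():
            for i, ri in user_ratings.items():
                if i == j:
                    continue
                diff[j][i] += rj - ri
                count[j][i] += 1
    dev = {j: {i: diff[j][i] / count[j][i] for i in diff[j]} for j in diff}
    return dev, count

def weighted_slope_one(user_ratings, dev, count, j):
    """PWS1(u)_j: each (dev_{j,i} + r_{u,i}) term is weighted by the
    co-rating count c_{j,i}, then normalised by the total count."""
    num = den = 0.0
    for i, ri in user_ratings.items():
        if i == j or i not in dev.get(j, {}):
            continue
        c = count[j][i]
        num += (dev[j][i] + ri) * c
        den += c
    return num / den if den else None

# Hypothetical data: predict item "a" for lucy.
ratings = {
    "john": {"a": 5, "b": 3, "c": 2},
    "mark": {"a": 3, "b": 4},
    "lucy": {"b": 2, "c": 5},
}
dev, count = slope_one_train(ratings)
prediction = weighted_slope_one(ratings["lucy"], dev, count, "a")  # ≈ 4.33
```

Here dev_{a,b} = 0.5 with c_{a,b} = 2 and dev_{a,c} = 3 with c_{a,c} = 1, so the prediction is ((0.5 + 2)·2 + (3 + 5)·1) / 3 = 13/3.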

Takeaways

Adjusted cosine similarity and Slope One are item‑based collaborative‑filtering techniques whose item‑to‑item statistics can be pre‑computed, so they scale well to large datasets. They mitigate rating‑scale bias, sparsity, and computational cost, making them suitable for real‑world recommender systems.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: collaborative filtering, Recommender Systems, implicit feedback, explicit feedback, adjusted cosine similarity, Slope One
Written by

StarRing Big Data Open Lab

Focused on big data technology research, exploring the Big Data era | [email protected]
