
Resource Evaluation Model: Defining Metrics, Data Collection Methods, and Quantification

This article explains how to build a resource evaluation model by defining assessment dimensions, selecting metrics for attracting and retaining users, choosing objective and subjective data collection methods, and quantifying each indicator with thresholds and scoring rules, using an O2O food‑delivery example.

Baidu Intelligent Testing

In the previous installment we introduced a product quality competitiveness model that includes four dimensions: resource quality, basic quality, experience quality, and operation quality. Over the next four issues we will detail how each dimension’s evaluation model is constructed; this issue focuses on resource evaluation.

1. What is a resource?

A resource is the substantive content a product provides to users, such as online videos for a streaming service, merchants for a group‑buying platform, or tickets for a ticketing service.

2. How to establish the resource evaluation model

Step 1 – Determine evaluation indicators: Identify the dimensions for measuring resource quality; these typically follow the path from “attracting users” to “retaining users”.

Attracting users assesses the richness of resources (coverage breadth and depth). Retaining users evaluates resource quality (price advantage, high quality) and effectiveness (timely updates).

Using an O2O food‑delivery service as an example, resources are the merchants and dishes offered. Attractiveness is measured by coverage breadth (number of cities, types of merchants, districts) and depth (total merchants, merchants per city, total dishes, dishes per category).

Retention focuses on resource quality (online: price advantage, popularity, novelty; offline: delivery timeliness, consistency with description, service attitude) and effectiveness (timely updates).
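The indicator hierarchy above can be sketched as a simple data structure. This is only an illustration of the taxonomy for the food‑delivery example; the identifier names are assumptions, not taken from the original model.

```python
# Hypothetical indicator taxonomy for the O2O food-delivery example.
# All names are illustrative assumptions, not the original model's terms.
INDICATORS = {
    "attract": {                      # resource richness
        "breadth": ["cities_covered", "merchant_types", "districts_covered"],
        "depth":   ["total_merchants", "merchants_per_city",
                    "total_dishes", "dishes_per_category"],
    },
    "retain": {                       # resource quality and effectiveness
        "quality_online":  ["price_advantage", "popularity", "novelty"],
        "quality_offline": ["delivery_timeliness", "description_match",
                            "service_attitude"],
        "effectiveness":   ["update_timeliness"],
    },
}

def all_indicators(tree):
    """Flatten the two-level taxonomy into a flat list of indicator names."""
    return [name
            for group in tree.values()
            for names in group.values()
            for name in names]
```

Flattening the tree gives the full checklist of indicators that Steps 2 and 3 then assign statistical methods and scoring rules to.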

Step 2 – Define statistical methods for each indicator: Use objective methods (full‑scale or sampled counts) and subjective methods (internal feedback, or external user crowdsourcing via questionnaires and interviews).

Objective data includes total resource counts or sampled top‑N items. Subjective data gathers qualitative feedback; internal feedback is high‑quality but limited in volume, while external crowdsourcing offers broader coverage but may be affected by questionnaire design.
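The two objective methods can be sketched as follows: a full‑scale statistic counts every resource, while a sampled statistic keeps only the top‑N items for closer inspection. The helper names and the merchant data are illustrative assumptions.

```python
def full_count(items):
    """Full-scale objective statistic: count every resource."""
    return len(items)

def top_n_sample(items, key, n=100):
    """Sampled objective statistic: keep only the N highest-ranked items
    (e.g. by order volume) for deeper, possibly manual, inspection."""
    return sorted(items, key=key, reverse=True)[:n]

# Hypothetical merchant data for the food-delivery example.
merchants = [{"name": f"m{i}", "orders": i * 10} for i in range(500)]

total = full_count(merchants)                                  # 500
sample = top_n_sample(merchants, key=lambda m: m["orders"], n=50)
```

Full counts suit breadth metrics such as total merchants per city; top‑N sampling suits quality metrics where each item needs individual review.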

Step 3 – Quantify indicator results: Assign importance levels and scoring thresholds. For example, “coverage breadth – number of covered cities” is a high‑importance objective metric with a 60% coverage threshold set as the passing score (3 points). Scores adjust above or below this baseline.
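The 60%‑threshold rule can be sketched as a piecewise scoring function. The linear scaling above and below the baseline, and the 5‑point maximum, are assumptions added for illustration; the source only fixes the passing point (60% coverage = 3 points).

```python
def score_coverage(covered, total, passing_ratio=0.6, passing_score=3,
                   max_score=5):
    """Score an objective coverage metric against a passing threshold.

    Hitting the passing ratio (e.g. 60% of target cities covered) earns
    the passing score of 3 points; the score scales linearly up to
    max_score at full coverage and down to 0 at zero coverage
    (assumed scaling rule, not from the original).
    """
    ratio = covered / total
    if ratio >= passing_ratio:
        # Interpolate between the passing score and the maximum score.
        return passing_score + (max_score - passing_score) * \
            (ratio - passing_ratio) / (1 - passing_ratio)
    # Below the threshold, scale proportionally toward zero.
    return passing_score * ratio / passing_ratio
```

With this rule, covering 60 of 100 target cities scores exactly 3 points, full coverage scores 5, and 30 of 100 scores 1.5.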

Another example is “resource novelty” (subjective). Users rate the novelty of newly added restaurants; this metric is medium importance, and its scoring thresholds are derived from questionnaire satisfaction percentages.
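For a subjective metric like novelty, the questionnaire satisfaction percentage can be bucketed into score bands. The band boundaries below are illustrative assumptions; the source states only that thresholds come from satisfaction percentages.

```python
def score_novelty(satisfied_ratio):
    """Map the share of users who rated newly added restaurants as
    'novel' to a 0-5 score. Band boundaries are assumed thresholds,
    not taken from the original model."""
    bands = [(0.90, 5), (0.75, 4), (0.60, 3), (0.40, 2), (0.20, 1)]
    for threshold, score in bands:
        if satisfied_ratio >= threshold:
            return score
    return 0
```

Under these bands, 60% satisfaction is again the passing point (3 points), keeping subjective and objective metrics on a comparable scale.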

By quantifying each indicator, a complete resource evaluation model is formed. The model is then applied to collect data, compare against competitors, previous versions, and target values to assess resource quality.
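Putting the pieces together, an overall resource score can be computed as an importance‑weighted average of the per‑indicator scores and then compared against a competitor, a previous version, or a target value. The numeric weights for the importance levels are illustrative assumptions.

```python
# Hypothetical importance weights: high = 3, medium = 2, low = 1.
WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def overall_score(indicator_scores):
    """indicator_scores: list of (score, importance) pairs.
    Returns the importance-weighted average score."""
    total = sum(WEIGHTS[imp] * s for s, imp in indicator_scores)
    weight = sum(WEIGHTS[imp] for _, imp in indicator_scores)
    return total / weight

# Compare our product against a competitor on the same indicator set.
ours = overall_score([(4.0, "high"), (3.0, "medium"), (5.0, "low")])
rival = overall_score([(3.5, "high"), (4.0, "medium"), (2.0, "low")])
```

The same function applied to a previous version's scores or to target values gives the remaining two comparisons the model calls for.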

This concludes the current discussion; the next issue will delve into the experience quality dimension of the competitiveness model. Stay tuned.
