
Dynamic Programming Essentials: Concepts, Conditions, and a Fractional Knapsack Demo

This article introduces dynamic programming as an optimization technique from operations research. It covers the fundamental concepts (the optimality principle, the no‑after‑effect property, and overlapping subproblems), outlines the typical problem‑modeling steps, and demonstrates a practical C++ solution to the fractional knapsack problem.

AI Code to Success

Introduction

Dynamic programming (DP) is a branch of operations research that solves multi‑stage decision problems by breaking them into simpler sub‑problems and solving them sequentially.

Fundamental Concepts

In DP each decision depends on the current state and causes a state transition. A sequence of decisions forms a decision sequence, and the whole process is called a multi‑stage decision process; dynamic programming is the method for solving such processes optimally.

Basic Idea

Similar to divide‑and‑conquer, DP decomposes a problem into overlapping sub‑problems, solves each once, stores the results, and reuses them to avoid redundant computation. This “store‑and‑reuse” principle leads to a table‑filling algorithm.

Applicability Conditions

DP is applicable only when the problem satisfies the first two properties below; the third is what makes DP pay off compared with plain divide‑and‑conquer:

Optimal substructure (optimality principle): the sub‑solutions contained in an optimal solution are themselves optimal.

No after‑effect (independence of stages): future decisions depend only on the current state, not on how that state was reached.

Overlapping sub‑problems: the same sub‑problem appears many times in the recursion.

Because DP stores intermediate results, it trades memory for speed and usually requires more space than algorithms that do not cache sub‑results.

Typical DP Model

The standard steps to formulate a DP problem are:

Identify the decision objects.

Divide the decision process into stages.

Define state variables for each stage.

Formulate the cost function and objective function.

Derive the state transition equations.

For a forward recursion the transition can be written as:

f[U_k] = opt{ f[U_{k-1}] + L[U_{k-1}, X_{k-1}] }

For a backward recursion:

f[U_k] = opt{ f[U_{k+1}] + L[U_k, X_k] }

Example: Fractional Knapsack Problem

Given a knapsack of capacity M = 180 and seven items with weights and values, the goal is to maximize total value without exceeding the capacity. Because items can be divided, the optimal strategy is to sort items by value‑per‑weight ratio and fill the knapsack greedily.

Item data:

Item A: weight 35, value 10

Item B: weight 30, value 40

Item C: weight 60, value 30

Item D: weight 50, value 50

Item E: weight 40, value 35

Item F: weight 10, value 40

Item G: weight 25, value 30

For comparison, the DP recurrence for the 0‑1 knapsack (where items cannot be split) is:

f[i][v] = max{ f[i-1][v], f[i-1][v - w[i]] + val[i] }

where f[i][v] is the best value achievable using the first i items within capacity v, and w[i] and val[i] are the weight and value of item i.

The following C++ program implements the greedy fractional‑knapsack solution:

#include <iostream>
#include <algorithm>
using namespace std;

struct Bag {
    int weight;   // item weight
    int value;    // item value
    float ratio;  // value per unit weight
    float rate;   // fraction taken (1 = whole item)
} bags[50];

bool compare(const Bag &bag1, const Bag &bag2) {
    return bag1.ratio > bag2.ratio;
}

int main() {
    int n;
    float M;
    double totalValue = 0;
    cout << "Enter knapsack capacity and number of items:" << endl;
    cin >> M >> n;
    for (int i = 0; i < n; ++i) {
        cin >> bags[i].weight >> bags[i].value;
        bags[i].ratio = (float)bags[i].value / bags[i].weight;
        bags[i].rate = 0;
    }
    sort(bags, bags + n, compare);   // best value-per-weight first
    int j = 0;
    for (j = 0; j < n; ++j) {
        if (bags[j].weight <= M) {   // the whole item fits
            bags[j].rate = 1;
            totalValue += bags[j].value;
            M -= bags[j].weight;
            cout << "Weight: " << bags[j].weight
                 << " Value: " << bags[j].value
                 << " placed in knapsack" << endl;
            cout << "Fraction: " << bags[j].rate << endl;
        } else break;
    }
    if (j < n && M > 0) {            // take a fraction of the next item
        bags[j].rate = M / bags[j].weight;
        totalValue += bags[j].rate * bags[j].value;
        cout << "Weight: " << bags[j].weight
             << " Value: " << bags[j].value
             << " partially placed" << endl;
        cout << "Fraction: " << bags[j].rate << endl;
    }
    cout << "Total value: " << totalValue << endl;
    return 0;
}

Conclusion

Dynamic programming is widely used in production management, scheduling, optimal control, and many other fields. While it excels at time‑indexed multi‑stage problems, it can also be adapted to static optimization problems by artificially introducing a stage dimension.
