ALGORITHM ANALYSIS

Dynamic Programming Patterns

Comprehensive guide to dynamic programming patterns including memoization, tabulation, common problems, and optimization techniques.

Tags: dynamic-programming, memoization, tabulation, optimization, recursion, overlapping-subproblems

Cheat Sheet Overview

Difficulty: Advanced · Reading time: 30 minutes · Sections: 7 · Category: algorithm-analysis

Dynamic Programming

🎯 Core Principles

The Three Requirements

  1. Optimal Substructure: Solution can be constructed from optimal solutions of subproblems
  2. Overlapping Subproblems: Same subproblems appear multiple times
  3. Memoization Opportunity: We can store and reuse results

Key Insight: DP = Recursion + Memoization + Optimal Substructure
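Before adding memoization, it helps to see requirement 2 directly. A minimal sketch (the call counter is purely illustrative) showing how naive recursion keeps revisiting the same arguments:

from collections import Counter

calls = Counter()

def fib_naive(n):
    # Plain recursion: exponential time, because the same subproblems recur
    calls[n] += 1
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(10)
print(calls[3])  # fib(3) is recomputed many times; memoization collapses this to once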

🔄 Top-Down vs Bottom-Up

Top-Down (Memoization)

def fibonacci_memo(n, memo=None):
    if memo is None:  # avoid Python's shared mutable-default-argument pitfall
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fibonacci_memo(n-1, memo) + fibonacci_memo(n-2, memo)
    return memo[n]

Bottom-Up (Tabulation)

def fibonacci_dp(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

📝 Problem Recognition Patterns

Pattern 1: Decision Making

Keywords: "choose", "pick", "select", "include/exclude" Template: At each step, decide whether to include current item

# 0/1 Knapsack Pattern
def knapsack(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(capacity + 1):
            # Don't include item i-1
            dp[i][w] = dp[i-1][w]

            # Include item i-1 if possible
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i][w],
                              dp[i-1][w - weights[i-1]] + values[i-1])

    return dp[n][capacity]

Pattern 2: Optimal Path/Sequence

Keywords: "minimum/maximum path", "longest sequence", "edit distance" Template: Build solution by extending optimal solutions

# Longest Common Subsequence
def lcs(text1, text2):
    m, n = len(text1), len(text2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]

    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if text1[i-1] == text2[j-1]:
                dp[i][j] = dp[i-1][j-1] + 1
            else:
                dp[i][j] = max(dp[i-1][j], dp[i][j-1])

    return dp[m][n]

Pattern 3: Counting Ways

Keywords: "number of ways", "how many ways", "count" Template: Sum all possible ways to reach each state

# Coin Change (counting ways)
def coin_change_ways(coins, amount):
    dp = [0] * (amount + 1)
    dp[0] = 1  # One way to make amount 0

    for coin in coins:
        for i in range(coin, amount + 1):
            dp[i] += dp[i - coin]

    return dp[amount]

Pattern 4: Game Theory

Keywords: "optimal play", "minimax", "both players play optimally" Template: Consider all possible moves, choose optimal

# Stone Game
def stone_game(piles):
    # dp[i][j] = best score difference (current player minus opponent)
    # achievable on piles[i..j]
    n = len(piles)
    dp = [[0] * n for _ in range(n)]

    # Base case: single pile
    for i in range(n):
        dp[i][i] = piles[i]

    # Fill for all lengths
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = max(piles[i] - dp[i+1][j],
                          piles[j] - dp[i][j-1])

    return dp[0][n-1] > 0

📊 Classic Problems Comparison

| Problem | Time | Space | Optimization | Pattern |
|---|---|---|---|---|
| Fibonacci | O(n) | O(n) → O(1) | Rolling variables | Sequence |
| 0/1 Knapsack | O(nW) | O(nW) → O(W) | 1D array | Decision |
| LCS | O(mn) | O(mn) → O(min(m, n)) | Rolling array | Matching |
| Edit Distance | O(mn) | O(mn) → O(min(m, n)) | Rolling array | Transform |
| Coin Change | O(nS) | O(S) | Natural 1D | Counting |
| LIS | O(n²) → O(n log n) | O(n) | Binary search | Sequence |
| Matrix Chain | O(n³) | O(n²) | Interval DP | Optimization |
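The LIS row above relies on the binary-search (patience sorting) optimization rather than the quadratic DP. A minimal sketch, with an illustrative function name:

from bisect import bisect_left

def lis_length(nums):
    # tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    tails = []
    for x in nums:
        i = bisect_left(tails, x)  # first position whose tail is >= x
        if i == len(tails):
            tails.append(x)        # x extends the longest subsequence found so far
        else:
            tails[i] = x           # x is a smaller tail for subsequences of length i + 1
    return len(tails)

print(lis_length([10, 9, 2, 5, 3, 7, 101, 18]))  # 4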

🚀 Space Optimization Techniques

Rolling Array Technique

Before: O(mn) space

dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(1, m + 1):
    for j in range(1, n + 1):
        dp[i][j] = func(dp[i-1][j], dp[i][j-1], dp[i-1][j-1])

After: O(n) space

prev = [0] * (n + 1)
curr = [0] * (n + 1)
for i in range(1, m + 1):
    for j in range(1, n + 1):
        curr[j] = func(prev[j], curr[j-1], prev[j-1])
    prev, curr = curr, prev

In-Place Optimization (Knapsack)

Standard: O(nW) space

dp = [[0] * (W + 1) for _ in range(n + 1)]

Optimized: O(W) space

dp = [0] * (W + 1)
for i in range(n):
    for w in range(W, weights[i] - 1, -1):  # Reverse order!
        dp[w] = max(dp[w], dp[w - weights[i]] + values[i])

🎨 State Design Patterns

Pattern 1: Index-Based States

# dp[i] = optimal solution using first i elements
def rob_houses(nums):
    if not nums: return 0
    if len(nums) == 1: return nums[0]

    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])

    for i in range(2, len(nums)):
        dp[i] = max(dp[i-1], dp[i-2] + nums[i])

    return dp[-1]

Pattern 2: Multi-Dimensional States

# dp[i][j] = optimal solution for first i items with constraint j
def knapsack_2d(weights, values, capacity):
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i-1][w]  # Don't take item
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i][w],
                              dp[i-1][w - weights[i-1]] + values[i-1])

    return dp[n][capacity]

Pattern 3: Range-Based States

# dp[i][j] = optimal solution for range [i, j]
def matrix_chain_multiplication(p):
    # Matrix i has dimensions p[i] x p[i+1];
    # dp[i][j] = minimum scalar multiplications to compute the product of matrices i..j
    n = len(p) - 1
    dp = [[0] * n for _ in range(n)]

    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            dp[i][j] = float('inf')
            for k in range(i, j):
                cost = dp[i][k] + dp[k+1][j] + p[i] * p[k+1] * p[j+1]
                dp[i][j] = min(dp[i][j], cost)

    return dp[0][n-1]

🎯 Advanced Techniques

Bitmask DP

Use when: Small set of items (≤ 20), need to track subsets

def traveling_salesman(dist):
    n = len(dist)
    dp = {}

    def solve(mask, pos):
        if mask == (1 << n) - 1:  # All cities visited
            return dist[pos][0]  # Return to start

        if (mask, pos) in dp:
            return dp[(mask, pos)]

        ans = float('inf')
        for city in range(n):
            if mask & (1 << city) == 0:  # City not visited
                new_mask = mask | (1 << city)
                ans = min(ans, dist[pos][city] + solve(new_mask, city))

        dp[(mask, pos)] = ans
        return ans

    return solve(1, 0)  # Start from city 0

Digit DP

Use when: Counting numbers with constraints

def count_numbers_with_digit_sum(n, target_sum):
    s = str(n)
    memo = {}

    def dp(pos, sum_so_far, tight, started):
        # tight: the digits chosen so far match n's prefix, so s[pos] caps the next digit
        # started: a nonzero digit has been placed (used to skip leading zeros)
        if pos == len(s):
            return 1 if started and sum_so_far == target_sum else 0

        if (pos, sum_so_far, tight, started) in memo:
            return memo[(pos, sum_so_far, tight, started)]

        limit = int(s[pos]) if tight else 9
        result = 0

        for digit in range(0, limit + 1):
            new_tight = tight and (digit == limit)
            new_started = started or (digit > 0)
            new_sum = sum_so_far + digit if new_started else 0

            if new_sum <= target_sum:
                result += dp(pos + 1, new_sum, new_tight, new_started)

        memo[(pos, sum_so_far, tight, started)] = result
        return result

    return dp(0, 0, True, False)

🛠️ Problem-Solving Framework

Step 1: Identify DP Signals

  • Optimization: "minimum", "maximum", "optimal"
  • Counting: "number of ways", "how many"
  • Decision: "can we", "is it possible"

Step 2: Define State

  • What information do we need to solve subproblems?
  • What are the changing parameters?

Step 3: State Transition

  • How do we build current state from previous states?
  • What choices do we have at each step?

Step 4: Base Cases

  • What are the simplest subproblems?
  • What happens when parameters reach boundaries?

Step 5: Order of Computation

  • Which states depend on which others?
  • Can we compute in a simple loop order?
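To tie the five steps together, here is a minimal sketch working through the classic Climbing Stairs problem (count the ways to reach step n taking 1 or 2 steps at a time); the comments map back to the steps above:

def climb_stairs(n):
    # Step 1 (signal):     "number of ways" -> counting DP
    # Step 2 (state):      dp[i] = number of ways to reach step i
    # Step 3 (transition): dp[i] = dp[i-1] + dp[i-2] (last move was 1 or 2 steps)
    # Step 4 (base cases): dp[0] = dp[1] = 1
    # Step 5 (order):      dp[i] depends only on smaller indices, so one forward loop suffices
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0] = dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]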

⚡ Performance Tips

When to Use DP

Good for DP:

  • Overlapping subproblems exist
  • Optimal substructure property
  • Exponential brute force can be optimized

Not good for DP:

  • No overlapping subproblems
  • Greedy algorithm works
  • Problem has special structure (graph algorithms, etc.)

Common Optimizations

  1. Space optimization: Use rolling arrays when possible
  2. Early termination: Stop when answer is found
  3. Pruning: Skip impossible states
  4. Precomputation: Calculate expensive operations once
  5. State compression: Use bitmasks for small sets
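As an example of optimization 1, the Fibonacci entry in the comparison table drops from O(n) to O(1) space by keeping only the last two values instead of the whole table. A minimal sketch of that rolling-variables version:

def fibonacci_rolling(n):
    # Same recurrence as fibonacci_dp above, but only the previous two values are kept
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr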
