Using what we have covered so far, you can analyze a dynamic programming problem and solve it using DP techniques and concepts. But after solving a few problems, you might still feel like you have to invent a new solution each time. In the introduction to this year’s project, I promised to explain a process for designing Dynamic Programming algorithms. This week, I’ll fill in a few details that make solving DP problems more of a repeatable process.

## States

We have seen that solving a dynamic programming problem requires breaking the original problem into subproblems and using the solutions to those subproblems to solve the original problem. Each subproblem can be expressed as a function call with particular parameters. This leads naturally to the top-down solution implementation, which you can convert to bottom-up form if necessary.

To formalize this process and make it more repeatable, we’ll use the concept of a *state*. Think of a state as a way to uniquely identify a subproblem using a set of variables, the *state variables*. In a top-down implementation, state variables are parameters you pass to your recursive function.

For basic dynamic programming problems on LeetCode, there is often just one state variable. For example, the Coin Change problem represents the state of each subproblem by $a$, the amount we need to make for that subproblem. So the solution to a subproblem is $f(a)$.
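To make the idea concrete, here is a minimal top-down sketch of Coin Change in Python. The function and variable names are my own choices, not from the problem statement; the key point is that the recursive helper `f` takes only the state variable `a` as a parameter, while the coin denominations are shared input captured from the enclosing scope.

```python
from functools import lru_cache

def coin_change(coins: list[int], amount: int) -> int:
    """Minimum number of coins needed to make `amount`, or -1 if impossible."""

    @lru_cache(maxsize=None)
    def f(a: int) -> float:
        # The single state variable `a` uniquely identifies each subproblem.
        if a == 0:
            return 0             # base case: zero coins make amount 0
        if a < 0:
            return float("inf")  # unreachable state
        # The coin values are common input data, not part of the state.
        return min(f(a - c) for c in coins) + 1

    result = f(amount)
    return -1 if result == float("inf") else int(result)
```

Because `f` is memoized on its single state variable, each subproblem is solved at most once.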

To solve Coin Change, we also need the set of coin denominations from the input data. But these coin values are the same for every subproblem. This means they aren’t part of the state, since they don’t help distinguish one subproblem from another. Dynamic programming problems often have parameters that are common to all states, so it’s important to learn to distinguish state variables from other input data.

## State Transition

Once you identify the state variables, the next step is to figure out how different states relate to each other. The optimal substructure property of a dynamic programming problem says we can use the solutions to subproblems to find the solution to the original problem. But it doesn’t tell us *how* to use those subproblem solutions.

In the top-down approach, we calculate the value of each subproblem using the values of one or more smaller subproblems. Similarly, if we’re using a bottom-up implementation, we combine the solutions to one or more subproblems to make a larger subproblem, eventually reaching the original problem. Either way, we can think of combining subproblems as a process of moving between states using a *state transition* equation.

For Coin Change, $f(a)$ represents the minimum number of coins required to make the amount $a$. Each subproblem of $f(a)$ has the form $f(a - c_i)$, where $c_i$ is one of the coin denominations. So there are as many subproblems of $f(a)$ as there are different coins. To calculate $f(a)$, we need to select the optimal $f(a - c_i)$ value, meaning the subproblem that uses the minimum number of coins. Then we add $1$ to that value, since we use a single coin, the one with denomination $c_i$, to get from $f(a - c_i)$ to $f(a)$. Putting that together, we get the state transition equation for Coin Change:

$$f(a) = \min(f(a - c_1), f(a - c_2), \dots, f(a - c_n)) + 1$$
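This equation translates almost directly into a bottom-up implementation. The sketch below is one way to write it (names are my own); it fills a table `f[0..amount]` in increasing order of the state variable, applying the transition at each step and skipping denominations larger than the current amount.

```python
def coin_change_bottom_up(coins: list[int], amount: int) -> int:
    """Bottom-up Coin Change: apply the state transition for each amount."""
    INF = float("inf")
    f = [0] + [INF] * amount  # f[0] = 0: zero coins make amount 0
    for a in range(1, amount + 1):
        # f(a) = min(f(a - c_1), ..., f(a - c_n)) + 1, over coins c_i <= a
        candidates = [f[a - c] for c in coins if c <= a]
        if candidates:
            f[a] = min(candidates) + 1
    return -1 if f[amount] == INF else f[amount]
```

Each table entry depends only on smaller amounts, so processing states from $1$ up to the target amount guarantees every subproblem is solved before it is needed.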

There’s no cookbook process to find a state transition equation. Practice helps, since each dynamic programming problem illustrates a pattern that you can use for future problems. But even when a new problem demands a unique approach, the concepts of states and state transitions give you a way to break it down into familiar pieces.

(Image credit: DALL·E 3)

*For an introduction to this year’s project, see A Project for 2024*.