To solve a problem using dynamic programming, you have to break it into subproblems, solve those subproblems, and use the results to solve the original problem. So the first step in solving a dynamic programming problem is defining what subproblems you need to solve.
The standard way to define a dynamic programming problem and subproblems is by using a recursive function. As we saw with the Coin Change problem, we can calculate the target value $f(a)$ if we know the values $f(a-c_i)$ for each of the coin denominations $c_i$. And we can calculate each value $f(a-c_i)$ the same way, repeating this process until we get to one of the base cases: $f(0)=0$, or $f(x)=-1$ for a negative $x$.
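Spelled out as a recurrence (this particular notation, including how the $-1$ case is folded in, is my own shorthand rather than a quote from the earlier article):

$$
f(a) =
\begin{cases}
0 & \text{if } a = 0 \\
-1 & \text{if } a < 0 \\
1 + \min\{\, f(a - c_i) : f(a - c_i) \neq -1 \,\} & \text{otherwise}
\end{cases}
$$

with the convention that $f(a) = -1$ in the last case if every subproblem returns $-1$ (i.e., the amount can't be made with the given coins).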
Top-Down
Once you define a problem recursively using mathematical notation, it translates naturally into a recursive function in your target programming language. For Coin Change, the recursive function works as follows (a code sketch follows the list):
If the amount is a base case, return the corresponding constant: 0 when the amount is zero, or -1 when it's negative.
Otherwise, for each coin denomination, make a recursive call.
Once all the recursive calls complete, combine their results: for Coin Change, take the smallest valid result, add one for the current coin, and return that as the answer.
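Here is a minimal Python sketch of that plain recursion (the function and parameter names are my own, not taken from the original implementation):

```python
def coin_change(coins: list[int], amount: int) -> int:
    """Plain recursion, no memoization: practical only for small amounts."""
    if amount == 0:          # base case: amount 0 needs no coins
        return 0
    if amount < 0:           # base case: this amount can't be made
        return -1
    best = -1
    for coin in coins:
        result = coin_change(coins, amount - coin)   # solve the subproblem
        if result != -1 and (best == -1 or result + 1 < best):
            best = result + 1                        # one more coin than the subproblem
    return best
```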
You could implement a recursive solution to Coin Change exactly like this, but it would only be practical for small amounts. Dynamic programming comes to the rescue by adding one more idea: memoization. When you calculate the optimal result for a subproblem, save it in an array indexed by the amount. Then, after checking the two base cases, check the array to see if you have already saved the answer for the current amount, and return it immediately if so. This way, you won’t make an exponential number of recursive calls and take an exponential amount of time to return the result.
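With memoization added, a top-down version might look like the sketch below (again my own names rather than the article's exact code; the nested helper lets the memo array persist across recursive calls):

```python
def coin_change_memo(coins: list[int], amount: int) -> int:
    """Top-down recursion with memoization: each amount is solved at most once."""
    memo = [None] * (amount + 1)   # memo[a] caches the answer for amount a

    def solve(a: int) -> int:
        if a == 0:
            return 0
        if a < 0:
            return -1
        if memo[a] is not None:    # already solved this subproblem
            return memo[a]
        best = -1
        for coin in coins:
            result = solve(a - coin)
            if result != -1 and (best == -1 or result + 1 < best):
                best = result + 1
        memo[a] = best             # save the answer before returning
        return best

    return solve(amount)
```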
A recursive dynamic programming implementation is called top-down because we start with the original problem (the top) and make recursive calls using smaller and smaller parameter values until we get to a base case (the bottom). The recursive call graph is often represented as a tree, with the root (the original problem) at the top and the leaves (the base cases) at the bottom. The child nodes of a node $A$ show which subproblems need to be solved to calculate the answer for the value of $A$.
Top-down is usually the easiest way to implement a dynamic programming solution because it follows directly from the mathematical definition of a subproblem.
Bottom-Up
The top-down approach starts with the largest input value, recursively calculates the answer for smaller and smaller inputs, and uses those results to find the answer to the original problem.
The alternative approach is to start with the base cases and calculate the answers for larger and larger inputs until you get to the target input. This approach is called bottom-up because it proceeds from the leaves of the tree upwards to the root.
A bottom-up dynamic programming solution uses iteration instead of recursion. For Coin Change, we can use two nested loops: one iterates through the coin denominations, and the other iterates through amounts. In the implementation shown in my article, the coin denominations form the outer loop, and the inner loop iterates through each amount from the current coin's value up to the target amount. At each step, we decide whether it’s better to use the current coin to make the amount or to stick with a previously computed result that uses fewer coins.
Although memoization is often associated with the top-down approach, it may also be necessary for bottom-up implementations. For Coin Change, we need an array of size amount + 1, just as we did in the top-down implementation. For other dynamic programming problems, we may only need the last one or two values, in which case we don’t need a full memo table.
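A bottom-up sketch along those lines might look like this (the UNREACHABLE sentinel and the names are mine; the article's actual code may differ in details):

```python
def coin_change_bottom_up(coins: list[int], amount: int) -> int:
    """Bottom-up iteration: coins in the outer loop, amounts in the inner loop."""
    UNREACHABLE = amount + 1                 # larger than any possible answer
    dp = [UNREACHABLE] * (amount + 1)        # dp[a] = fewest coins to make amount a
    dp[0] = 0                                # base case: amount 0 needs no coins

    for coin in coins:                       # outer loop: coin denominations
        for a in range(coin, amount + 1):    # inner loop: current coin value up to target
            # Either keep the previous best for a, or use this coin plus the best for a - coin.
            if dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1

    return dp[amount] if dp[amount] != UNREACHABLE else -1
```

For example, coin_change_bottom_up([1, 2, 5], 11) returns 3, since 11 = 5 + 5 + 1.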
Top-down and bottom-up implementations generally have the same asymptotic time complexity. So if LeetCode accepts one implementation, it should also accept the other. If you measure the actual running times, a bottom-up implementation might be faster because it avoids the overhead of recursion and function calls. Or the top-down implementation might be faster if it avoids evaluating subproblems that aren’t necessary to solve the original problem. In performance-critical real-world programming, you would carry out performance tests to find out which is faster in practice. In an interview, you might be expected to know the trade-offs for the two approaches, in terms of code complexity, storage requirements, and runtime performance.
(Image credit: DALL·E 3)
For an introduction to this year’s project, see A Project for 2024.