As we have seen in previous weeks, a key step in analyzing a dynamic programming problem is selecting state variables that identify the subproblems to be solved. The solutions to these subproblems are stored in a table, which allows us to retrieve them efficiently when they come up again. So far, we have looked at problems with only one state variable. But the dynamic programming process also works when more than one state variable is required.
LeetCode 300: Longest Increasing Subsequence
Last week, we looked at a problem involving a subarray. A subarray is a contiguous sequence of elements from an array, meaning we can’t skip any elements when building the answer. A subsequence, on the other hand, allows us to skip elements, though the elements in the subsequence still need to stay in their original order.
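The distinction is easy to check in code. Here is a small helper (the function name is my own) that tests the subsequence property: elements may be skipped, but the ones that remain must keep their original order.

```python
def is_subsequence(candidate, nums):
    """Return True if candidate can be formed from nums by deleting
    zero or more elements without reordering the ones that remain."""
    it = iter(nums)
    # Each `x in it` consumes the iterator up to the first match,
    # so later elements must appear after earlier ones.
    return all(x in it for x in candidate)

# [2, 3, 7] skips elements of [2, 5, 3, 7] but keeps their order,
# so it is a subsequence (though not a subarray).
```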
LeetCode 300: Longest Increasing Subsequence says:
Given an integer array nums, return the length of the longest strictly increasing subsequence.
So we’re allowed to skip elements, but we can’t change the order of the elements, and for this problem, the elements in the solution must be strictly increasing.
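As a sketch, here is the standard O(n²) dynamic programming approach, where the state dp[i] is the length of the longest strictly increasing subsequence ending at index i. (An O(n log n) solution also exists, but this version shows the DP state most clearly.)

```python
def length_of_lis(nums):
    # dp[i] = length of the longest strictly increasing
    # subsequence that ends at index i.
    n = len(nums)
    dp = [1] * n
    for i in range(n):
        for j in range(i):
            # Extend any shorter increasing subsequence that
            # ends in a strictly smaller value.
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp, default=0)

print(length_of_lis([10, 9, 2, 5, 3, 7, 101, 18]))  # 4 ([2, 3, 7, 101])
```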
LeetCode 53: Maximum Subarray
The maximum subarray problem is famous enough to have its own Wikipedia page, and a named algorithm you can use to solve it (Kadane’s Algorithm). But rather than looking up the algorithm, we’ll derive the solution to LeetCode 53: Maximum Subarray using principles of dynamic programming.
The problem statement is simple. Here’s the LeetCode version:
Given an integer array nums, find the subarray with the largest sum, and return its sum.
A subarray is a contiguous and non-empty sequence of elements from the input array. So we’re looking for the largest sum of all values from nums[i] to nums[j] (inclusive), where i <= j.
Dynamic Programming States and State Transitions
Using what we have covered so far, you can analyze a dynamic programming problem and solve it using DP techniques and concepts. But after solving a few problems, you might still feel like you have to invent a new solution each time. In the introduction to this year’s project, I promised to explain a process for designing Dynamic Programming algorithms. This week, I’ll fill in a few details that make solving DP problems more of a repeatable process.
Dynamic Programming vs. Greedy Algorithms
Last week, we looked at a dynamic programming problem for the Jump Game problem. If you implement that solution and run it on LeetCode, you’ll notice that your runtime and memory scores are very low compared to other users. Let’s see why that is.
LeetCode 55: Jump Game
Let’s solve LeetCode problem 55 (Jump Game) using dynamic programming. Next week, we’ll consider a sneaky aspect of this problem.
The problem:
You are playing a game on a number line where the aim is to reach position n-1 starting from position 0. As input, you get an array nums of size n containing non-negative integers. From position i, you can move forward any distance between 0 and nums[i] (inclusive). Can you reach the last position to win the game?
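One dynamic programming formulation, sketched below, uses the state "is position i reachable from position 0?". This O(n²) version is correct but slow compared to a greedy solution, which is part of what makes this problem sneaky.

```python
def can_jump(nums):
    # reachable[i] = True if position i can be reached from position 0.
    n = len(nums)
    reachable = [False] * n
    reachable[0] = True
    for i in range(1, n):
        # Position i is reachable if some earlier reachable
        # position j can jump at least as far as i.
        reachable[i] = any(
            reachable[j] and j + nums[j] >= i for j in range(i)
        )
    return reachable[n - 1]

print(can_jump([2, 3, 1, 1, 4]))  # True
print(can_jump([3, 2, 1, 0, 4]))  # False
```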
Top-Down and Bottom-Up Dynamic Programming
To solve a problem using dynamic programming, you have to break it into subproblems, solve those subproblems, and use the results to solve the original problem. So the first step in solving a dynamic programming problem is defining what subproblems you need to solve.
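There are two standard ways to organize those subproblems: top-down (start from the original problem, recurse, and cache results) and bottom-up (solve the smallest subproblems first and build a table). As a minimal sketch using Fibonacci numbers, where the subproblem for n depends on the subproblems for n-1 and n-2:

```python
from functools import lru_cache

# Top-down: recurse from the original problem, caching subproblem
# results so each one is computed only once.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: solve the smallest subproblems first and fill a table.
def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_top_down(10), fib_bottom_up(10))  # 55 55
```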
Dynamic Programming Elements for LeetCode 322: Coin Change
To see how the elements of dynamic programming come together in a real problem, let’s explore the classic dynamic programming problem Coin Change (LeetCode 322).
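For reference, here is a sketch of the standard bottom-up solution to Coin Change (find the fewest coins that sum to a target amount, or -1 if it can't be made). The state dp[a] is the fewest coins needed to make amount a.

```python
def coin_change(coins, amount):
    # dp[a] = fewest coins needed to make amount a (INF if impossible).
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # Try using coin c as the last coin toward amount a.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(coin_change([1, 2, 5], 11))  # 3 (5 + 5 + 1)
```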
What Is Dynamic Programming?
Binary search is an algorithm. It provides step-by-step instructions for efficiently finding a value in a sorted list. A hash table is a data structure. It provides an efficient way to map keys to values. Dynamic programming is an algorithmic technique, or more formally, an algorithmic paradigm. Like divide and conquer, backtracking, or greedy, it refers to a class of algorithms, not a specific algorithm.
Although the term “dynamic programming” doesn’t refer to a specific algorithm, dynamic programming solutions use algorithms and data structures. To solve a LeetCode problem using dynamic programming, you have to write an algorithm to process the input, and you need a data structure to store intermediate results. But the specific algorithm and data structure are not identified for you. This differs from solving a problem tagged as binary search or hash table, where the algorithm or data structure is named in the tag itself. However, dynamic programming problems have some similarities with each other. We will learn about strategies, guidelines, and patterns for solving them.
Dynamic programming (the word programming refers not to coding, but to mathematical optimization) allows us to efficiently solve certain types of problems that can be broken down into subproblems. If you find when solving a problem that you are repeatedly calculating the same value or passing the same parameters to the same function, that’s a sign that dynamic programming may apply. The efficiency of dynamic programming comes from retrieving the results of previous calculations rather than recalculating them.
In technical terms, we say that dynamic programming is a good choice when a problem has these two characteristics:
Overlapping subproblems: When you break the problem into subproblems, you find the same subproblems coming up repeatedly.
Optimal substructure: You can use the optimal solution to the subproblems to find the optimal solution to the overall problem.
Some problems have only overlapping subproblems or only optimal substructure. Dynamic programming is only efficient when a problem has both.
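Overlapping subproblems are easy to see by counting function calls. This sketch computes Fibonacci numbers two ways: plain recursion recomputes the same subproblems again and again, while a memoized version solves each distinct subproblem only once.

```python
from functools import lru_cache

calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    # Plain recursion: the same subproblems are recomputed many times.
    calls["naive"] += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized: the counter increments only on a cache miss,
    # so each distinct subproblem is counted once.
    calls["memo"] += 1
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(20)
fib_memo(20)
print(calls)  # naive makes 21891 calls; memo makes only 21
```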
Our goal in studying dynamic programming this year is to learn to solve most Easy and Medium LeetCode problems tagged with that topic. To reach that goal, we will:
Understand the elements of dynamic programming.
Apply that understanding to solve specific LeetCode problems.
Use the model problem/solution approach to select model dynamic programming problems, write model solutions for them, and use spaced repetition to understand and remember what we learn.
Reference: Introduction to Algorithms (CLRS) Chapter 14 covers dynamic programming.
(Image credit: DALL·E 3. The knapsack problem can be solved using dynamic programming.)
For an introduction to this year’s project, see A Project for 2024.
A Project for 2024
During 2023, I published 50 tips for effective LeetCode practice.
The tips cover LeetCode practice fundamentals, ideas for fine-tuning the practice process, and how results from learning research can apply to learning algorithms. I think they cover all the ideas you would need to create a detailed, personalized learning plan.
The fundamental tips are organized around a model problem/solution process. The basic process: find a good problem; write a detailed explanation and a code solution for that problem; and use the solution as a model for solving similar problems.
When you’re first learning the model problem/solution process, you might rely too much on applying a solution template to solve problems. That’s fine for Easy and Easy-Medium problems, which are straightforward knowledge tests. But it can break down on harder problems, which require more creativity.
To address this, I suggested in Tip 40 that studying dynamic programming is a way to learn the process of finding the solution rather than just learning the solution. Unlike other LeetCode algorithm topics, Dynamic Programming is a process for designing algorithms, not a step-by-step process for finding a solution.
So we’ll start the year with an overview of Dynamic Programming, and in the coming weeks, we’ll cover the background required to solve DP problems. Our goal will be to learn enough about Dynamic Programming to solve most Easy and Medium problems, and to practice the tips with specific problems from the LeetCode problem library.