Let’s solve LeetCode problem 55 (Jump Game) using dynamic programming. Next week, we’ll consider a sneaky aspect of this problem.

The problem:

You are playing a game on a number line where the aim is to reach position `n-1` starting from position `0`. As input, you get an array `nums` of size `n` containing non-negative integers. From position `i`, you can move forward any distance between `0` and `nums[i]` (inclusive). Can you reach the last position to win the game?

## Subproblems

To solve a dynamic programming problem, we need to break it down into subproblems and use the subproblems to solve the original problem.

The original problem asks if we can get to position `n-1` from position `0`. We could answer this by calculating the largest position we can reach from position `0` and checking if it’s at least `n-1`. Consider a function $f(x)$ that gives us the largest position we can reach from position $x$. Then, if $f(0) \geq n-1$, we win.

If `nums[0]` is `0`, then it’s impossible to win the game: we can’t leave the starting position. But if `nums[0]` is anything else, we can get from position `0` to positions `1`, `2`, `3`, up to position `nums[0]`. Using our function $f$, let $a_x =$ `nums[x]`. Then, if $a_x = 0$, $f(x) = x$. Otherwise, $f(x) = \max(f(x+1), f(x+2), \ldots, f(x+a_x))$, where we cap $x + a_x$ at $n-1$ so we never look past the end of the array.
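To make the recurrence concrete, here is a direct and deliberately naive Python translation of $f$, with no memoization yet; the function name `f` and the explicit cap at `n-1` are my own choices for this sketch:

```python
def f(x, nums):
    """Largest position reachable from position x (naive recursion)."""
    n = len(nums)
    if x == n - 1:    # already at the last position
        return n - 1
    if nums[x] == 0:  # a_x = 0: we're stuck at x
        return x
    end = min(x + nums[x], n - 1)  # don't look past the end of the array
    return max(f(i, nums) for i in range(x + 1, end + 1))

# f(0, nums) >= n - 1 means the game is winnable
```

For instance, `f(0, [3, 2, 1, 0, 4])` returns `3`: every path gets stuck at the `0` in position `3`, which is short of `n-1 = 4`, so that game is unwinnable.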

This definition of $f$ shows us the subproblems: $f(x+1), f(x+2)$, and so on. The subproblems are *overlapping* because we will need some of the same $f(x)$ values more than once. For example, if we can get from position `0` to positions `1` and `2`, then we’ll need the solutions to $f(1)$ and $f(2)$. And if we can get from position `1` to position `2`, then we’ll need the solution to $f(2)$ again.

Jump Game has *optimal substructure* because we can use optimal solutions to the subproblems to find an optimal solution to the original problem. Specifically, we need the largest member of a set of optimal subproblem solutions. Together, *overlapping subproblems* and *optimal substructure* are the characteristics of a dynamic programming problem.

## Design

As usual, a top-down DP implementation comes naturally from the subproblem analysis. Let’s take the top-down approach.

Let `farthest(pos, nums)` be a recursive function that returns the farthest position reachable from `pos`, and let `n` be the length of `nums`. There are two base cases to check: if `nums[pos] == 0`, we return `pos`, since we can’t leave position `pos`; and if `pos == n-1`, we return `n-1`, since we can always get to the last position from itself. After the base cases, we check subproblems `farthest(pos+1, nums)` through `farthest(pos+nums[pos], nums)`, capping `pos+nums[pos]` at `n-1` so we never index past the last element of `nums`.

So far, this is just a recursive solution, not a dynamic programming solution. To get acceptable performance, we will save each `farthest` result in an array `dp` of the same size as `nums`. When possible, we’ll look up the result in `dp` instead of calling `farthest` again.

## Implementation

Here’s a pseudocode implementation:

```
n = length of nums
define an array dp of size n
dp[n-1] = n-1 // if we get to the last position, we're done
if farthest(0, nums) >= n-1, return true
else return false

farthest(pos, nums):
    if dp[pos] is defined, return dp[pos] // memoization
    if nums[pos] == 0, set dp[pos] = pos and return pos // base case
    dp[pos] = pos // at worst, we stay where we are
    end = min(n-1, pos + nums[pos]) // stop if we get to the end
    for each i from pos+1 to end:
        // update dp[pos] if we can get farther
        dp[pos] = max(dp[pos], farthest(i, nums))
    return dp[pos]
```
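As a sketch of how the pseudocode might look in a real language, here is one possible Python translation; the names `can_jump` and `farthest` are illustrative choices, and `None` stands in for "not yet defined" in `dp`:

```python
def can_jump(nums):
    n = len(nums)
    dp = [None] * n    # dp[pos] = farthest position reachable from pos
    dp[n - 1] = n - 1  # if we get to the last position, we're done

    def farthest(pos):
        if dp[pos] is not None:  # memoization
            return dp[pos]
        if nums[pos] == 0:       # base case: we can't leave pos
            dp[pos] = pos
            return pos
        best = pos               # at worst, we stay where we are
        end = min(n - 1, pos + nums[pos])  # stop if we get to the end
        for i in range(pos + 1, end + 1):
            best = max(best, farthest(i))  # take the farthest subproblem result
        dp[pos] = best
        return best

    return farthest(0) >= n - 1
```

With memoization, each position’s result is computed once, though each call may still scan up to `nums[pos]` subproblems, so this version runs in $O(n^2)$ time in the worst case.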

## Reference

For a Java implementation with some optimizations and a preview of next week’s topic, see Prachi Sharma’s solution on LeetCode.

Thanks to Daniel Zingaro for suggesting an improvement to this article.

(Image credit: DALL·E 3)

*For an introduction to this year’s project, see A Project for 2024*.