Last week, we looked at a dynamic programming solution for the Jump Game problem. If you implement that solution and run it on LeetCode, you’ll notice that your runtime and memory scores are very low compared to other users. Let’s see why that is.

## Simplifying the Solution

As we learned earlier, dynamic programming problems can be solved using two approaches: top-down (using recursion) and bottom-up (using iteration). As a rule, both approaches have the same asymptotic time complexity, but one or the other may be faster in practice.

Besides improving real-world performance, another reason to prefer one approach over the other is to better understand the problem and look for more significant performance improvements. Let’s do that for Jump Game.

In last week’s solution, we used a recursive function `farthest` that returned the farthest position we could reach from a given position. Let’s simplify that. The problem only asks whether we can get to the last position, so the main function `CanJump` returns a boolean. Let’s use a new recursive function `CanJumpFrom(start)` that returns `true` if we can get to the last position from a given starting position. Then the main function can just return `CanJumpFrom(0)`.

For our memo table, `dp`, we’ll need one more state for cases where we don’t know yet whether we can get to the last position from a given position `i`. To avoid having to explicitly initialize `dp`, we can use `0` for “don’t know”. Then `success` (can get to the end) can be `1`, and `failure` (can’t get to the end) can be `2`. We can initialize `dp[n-1]` to `1`, since we can always get from the end position to itself.

As we fill out the `dp` array, we’ll find starting positions we can mark as `success`. Using these positions, we can find more positions: if we can get from a position `i` to any position `j` where `dp[j] == 1`, then we can set `dp[i] = 1`, since we know we can get from `j` to the end position.
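The article describes this top-down solution only in prose, so here is one possible sketch in Go, following the article’s `CanJump`/`CanJumpFrom` names and the three `dp` states described above (the builtin `min` requires Go 1.21+):

```go
package main

const (
	unknown = 0 // dp value not yet computed (the zero value)
	success = 1 // can reach the last position from here
	failure = 2 // cannot reach the last position from here
)

// CanJump reports whether we can reach the last position of nums
// starting from position 0, using top-down memoized recursion.
func CanJump(nums []int) bool {
	n := len(nums)
	dp := make([]int, n) // zero-initialized, so every entry starts as unknown
	dp[n-1] = success    // we can always get from the end position to itself

	var canJumpFrom func(start int) bool
	canJumpFrom = func(start int) bool {
		if dp[start] != unknown {
			return dp[start] == success // already solved this subproblem
		}
		end := min(n-1, start+nums[start]) // farthest we can reach in one jump
		for j := start + 1; j <= end; j++ {
			if canJumpFrom(j) {
				dp[start] = success
				return true
			}
		}
		dp[start] = failure
		return false
	}
	return canJumpFrom(0)
}
```

This is a sketch, not LeetCode-submission code; the site’s signature uses a lowercase `canJump`.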

## The Bottom-Up Approach

We can transform our simplified top-down solution into a bottom-up solution. Bottom-up solutions are often more difficult to design than top-down solutions, because they require you to specify the order in which the subproblems will be solved. For the top-down version of Jump Game, we just pass `0` to the recursive function, since the problem tells us we always start at position `0`. The recursion takes it from there, making the recursive calls in whatever order the program logic dictates.

A top-down solution starts with the “largest” problem, the original problem. A bottom-up solution starts from the other end, meaning the “smallest” problem. Here, the smallest subproblem occurs when we’re already at the end position, `n-1`. But we’re hard-coding the solution to that problem by initializing the result to `success`. So our real starting point is the next smallest problem, `n-2`. That will be the start index of our outer loop. Our inner loop will be the same as in the top-down solution.

Here’s the updated pseudocode, with everything now in the main function and no recursive calls:

```
n = length of nums
define an array dp of size n
dp[n-1] = SUCCESS // if we get to the end position, we're done
for i from n-2 to 0
    end = min(n-1, i + nums[i]) // how far can we get from i in one jump
    for j from i+1 to end
        if dp[j] == SUCCESS
            set dp[i] = SUCCESS
            break // as soon as we find a valid position, we can stop
return dp[0] == SUCCESS // did we make it from the start position?
```
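For reference, here is one way the pseudocode above might be rendered in Go, keeping the same structure (the builtin `min` requires Go 1.21+):

```go
package main

// CanJump is a bottom-up version: fill dp from the end position
// backwards toward position 0, then check whether 0 is a winner.
func CanJump(nums []int) bool {
	const success = 1 // dp entries start at 0, meaning "don't know"
	n := len(nums)
	dp := make([]int, n)
	dp[n-1] = success // if we get to the end position, we're done
	for i := n - 2; i >= 0; i-- {
		end := min(n-1, i+nums[i]) // how far can we get from i in one jump
		for j := i + 1; j <= end; j++ {
			if dp[j] == success {
				dp[i] = success
				break // as soon as we find a valid position, we can stop
			}
		}
	}
	return dp[0] == success // did we make it from the start position?
}
```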

## The Greedy Approach

Like a dynamic programming algorithm, a greedy algorithm is applicable when a problem has optimal substructure, meaning we can use optimal solutions to subproblems to find the optimal solution to the original problem. But unlike dynamic programming, the greedy approach only uses the information it has for the current subproblem. It never goes back to use or update the results for previous subproblems, as we did with `dp[pos] = max(dp[pos], farthest(i, nums))` from last week. Therefore, there is no need to maintain a memo table in a greedy solution. But this also means the greedy approach can’t solve every problem that dynamic programming can.

For a new problem, it’s not always obvious that a greedy solution will work. Dynamic programming is a safer approach because it considers the optimal solution to every subproblem, so it won’t miss any. But let’s consider Jump Game from a greedy perspective, using our bottom-up solution as a starting point.

For the bottom-up approach, we start at the destination and move backwards toward the start, keeping track of which positions are valid starting positions (i.e., positions where we can make it to the end). In addition to the outer backwards loop, we have an inner forwards loop that searches for one of these valid starting positions. If we find one, that gives us a new valid starting position that’s closer to position `0`.

But do we really have to remember every valid starting position? In the bottom-up code, we have an outer loop that goes in reverse order from the end to the start, and an inner loop that goes forward from the current position. But as soon as we find a winning position, we can exit the inner loop. So we aren’t using the full `dp` array. This should give us a clue that dynamic programming, though it is fairly efficient and produces the correct result, is overkill for this problem. Maybe there’s a more efficient solution.

What if we get rid of the `dp` array and the inner loop, keeping only the backward-running outer loop? Instead of the array, we just maintain a single integer representing the best (lowest) starting position we have found so far. At first, it’s the end position, `n-1`. But if `n-2` can get to `n-1`, then `n-2` becomes the best starting position, and now we only have to get to `n-2`. At each position, we check if we can get at least as far as the best starting position; from there, we know we can get to the end. At the end of the process, if our best starting position is position `0`, then we win.

This makes the solution very simple (though not easy to think of):

```
n = length of nums
startPos = n-1
for i from n-2 to 0
    if i + nums[i] >= startPos
        startPos = i
return startPos == 0
```
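Translated into Go, the greedy solution is just a few lines:

```go
package main

// CanJump is the greedy version: scan backwards from n-2, tracking the
// lowest position known to reach the end. No memo table is needed.
func CanJump(nums []int) bool {
	n := len(nums)
	startPos := n - 1 // best (lowest) known winning position so far
	for i := n - 2; i >= 0; i-- {
		if i+nums[i] >= startPos { // can we jump to the current best position?
			startPos = i
		}
	}
	return startPos == 0 // did the best winning position reach the start?
}
```

Note that this runs in O(n) time and O(1) extra space, compared with the O(n²) worst-case time and O(n) space of the dynamic programming versions.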

## References

For the progression from less to more efficient implementations, see Prachi Sharma’s solution.

The short greedy solution above is based on Stefan Pochmann’s solution.

Thanks to Daniel Zingaro for suggesting an improvement to this article.

(Image credit: DALL·E 3)

*For an introduction to this year’s project, see A Project for 2024*.