How to Tell if Dynamic Programming Should Be Used
Before we get into the heuristics for deciding whether or not to apply dynamic programming, it is helpful to understand where the term came from.
If you've already read other sources, then you probably know that the term comes from Richard Bellman's work on mathematical optimization in the 1950s. Optimization requires an objective function that is either maximized or minimized. An optimization problem takes some input, such as a list, array, or set, and reduces it to a single number: the best achievable value of the objective. Whenever you see a problem description that asks for "the most", "the biggest", "the least amount", etc., you should immediately think of dynamic programming.
Greedy Algorithm vs Dynamic Programming
Greedy algorithms seek to find a solution by making the best local choice at each step, without comparing against previous solutions. For a comparison between the two methods, see this article.
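To make this concrete, here is a minimal greedy sketch for the 0/1 knapsack problem. It assumes items are represented as (value, weight) tuples and uses a "highest value first" heuristic; the function name and representation are just for illustration, not a standard API.

```python
def greedy_knapsack(items, capacity):
    """Greedily pack the most valuable items first; return the total value packed."""
    total_value = 0
    remaining = capacity
    # Sorting the (value, weight) tuples in reverse order considers the
    # highest-value items first.
    for value, weight in sorted(items, reverse=True):
        if weight <= remaining:
            total_value += value
            remaining -= weight
    return total_value
```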
Disproving a Greedy Approach
Greedy algorithms are quite useful when they are feasible and correct, but proving either property is difficult. For this reason it is generally easier to show that a greedy approach is incorrect by producing a counter-example. We already gave an example in the previous section, but here are some guidelines to help you produce a counter-example quickly:
- An input length of 3 should be sufficient
- The constraint (in the case of the 0/1 knapsack problem it's the knapsack capacity) should accommodate 2 items from the input
- Choose input values such that the greedy approach can only take one (or two) items, whereas the optimal solution can take two whose combined value is greater than the greedy choice (see the sketch after this list).
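Applying these guidelines to the greedy sketch above, here is one possible counter-example. The specific values are mine, chosen to satisfy the three guidelines:

```python
# Three items, a capacity that fits two of them, and values chosen so that
# the single greedy pick loses to the two smaller items combined.
items = [(10, 5), (6, 3), (6, 2)]  # (value, weight) pairs
capacity = 5

print(greedy_knapsack(items, capacity))  # 10: greedy grabs only the (10, 5) item
# The optimal solution takes (6, 3) and (6, 2) for a combined value of 12.
```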
If your main goal is passing interviews, then as a matter of strategy the odds of being given a problem that is optimally solved by a greedy algorithm are quite low. Dynamic programming problems are perennial favorites because they tend to be conceptually difficult. Moreover, proving a greedy algorithm's feasibility and correctness and solving the problem, all within 45 minutes to an hour, is probably not going to happen, so you can be fairly confident that an optimization problem in an interview will require dynamic programming.
Problems That Do Not Sound Like an Optimization Problem
Check out this article. The problem is as follows:
- You’re given a flat runway with a bunch of spikes in it. The runway is represented by a boolean array which indicates whether a particular (discrete) spot is clear of spikes: True for clear, False for not clear.
Example array representation:
[T, F, T, T, T, F, T, T, F, T, T]
- You’re given a starting speed S. Your speed is a non-negative integer at any given point, and it indicates how far forward you will move with the next jump.
- Every time you land on a spot, you can adjust your speed by up to 1 unit before the next jump.
- You want to safely stop anywhere along the runway (it does not need to be at the end of the array). You stop when your speed becomes 0. However, if you land on a spike at any point, your crazy bouncing ball bursts and it’s game over.
- The output of your function should be a boolean indicating whether we can safely stop anywhere along the runway.
Perhaps this doesn't sound like a problem suitable for dynamic programming. Another way to check is to ask yourself if you can say something like this about the problem:
For each item in the input, I can either do X, or do Y, or do Z, etc ...
In this problem, we can either increase the speed by 1, decrease it by 1, or leave it unchanged. In the 0/1 knapsack problem, we can either take an item or leave it.
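To show how those three choices turn into a solution, here is a memoized sketch of the runway problem. It assumes the ball starts at index 0, that jumping past the end of the runway counts as a failure, and that adjusting the speed to 0 stops the ball on its current spot; `can_stop` and these conventions are my own reading of the problem, not necessarily the linked article's.

```python
from functools import lru_cache

def can_stop(runway, start_speed):
    """Return True if the ball can safely stop anywhere along the runway."""
    n = len(runway)

    @lru_cache(maxsize=None)
    def solve(position, speed):
        # Jumping past the runway or landing on a spike ends this path
        # (out-of-bounds as failure is an assumption, see above).
        if position >= n or not runway[position]:
            return False
        # Branch on the three choices: slow down, keep the speed, or speed up.
        for new_speed in (speed - 1, speed, speed + 1):
            if new_speed < 0:
                continue
            if new_speed == 0:
                return True  # speed hits 0 on a clear spot: safely stopped
            if solve(position + new_speed, new_speed):
                return True
        return False

    return solve(0, start_speed)

# The example runway from above:
runway = [True, False, True, True, True, False, True, True, False, True, True]
print(can_stop(runway, 2))  # True: jump 2 to index 2, jump 1 to index 3, then stop
```

The memoization via `lru_cache` is what makes this dynamic programming rather than plain recursion: different adjustment sequences can land on the same (position, speed) state, and each state is solved only once.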
Discussion
In this section we went over some properties common to most problems that are solved by dynamic programming; by and large, they tend to be optimization problems. Some problems, however, don't appear to be optimization problems at first glance. For those, we gave a handy technique to determine whether dynamic programming is suitable.