The Dynamic Programming Workbook
By blank
Please submit any comments, suggestions, etc at the_github_repo
Introduction
Welcome to The Dynamic Programming Workbook. The purpose of this book is to collect disparate resources from the internet and printed materials and present them in a way that allows readers to learn[1] how to solve dynamic programming problems and check their work without having to search around or seek help on forums.
Motivation
Problems that require, or are easily solved by, dynamic programming tend to be difficult. There are many reasons why this could be so. My personal experience has been that unless you know to use dynamic programming from the outset, the realization that you should use it does not come easily. It is entirely possible to take an approach that will not illuminate the presence of either optimal substructure[2] or overlapping subproblems[3].
What this book is not
This book is not meant to teach data structure and algorithm fundamentals. While relevant concepts will be discussed, there are many insights and connections to other areas of computer science that will not be covered here. These are best learned from a textbook.
Language
I have chosen to use Python for the code snippets because of its readability and expressiveness. With that said, I will make an effort to keep things accessible to a wide audience by noting when a Python-specific idiom or feature is used.
Textbooks
Put links to textbooks here
[1] Hopefully, the experience should be similar to that described by the Moore method.
[2] When an optimal solution can be assembled from optimal solutions of subproblems.
[3] When a subproblem is solved more than once, instead of generating only new subproblems.
What is Dynamic Programming?
In this chapter we will:
- Review the 0/1 knapsack problem as a motivating example
- Identify classes of problems well suited for a dynamic programming solution
- Come up with strategies to detect a dynamic programming problem from its description
The 0/1 Knapsack Problem
The General Problem
The general knapsack problem is a problem in combinatorial optimization. The premise goes like this: given a set of items, where each item has an associated weight and value, and a knapsack that can hold a given amount of weight, how do we choose items to get the highest total value while still fitting in the knapsack? Put another way:
We seek to maximize the objective of total value from a combination of items, subject to the constraint that the items must fit in the knapsack.
There is a reason why I wrote that description in that way. It will become clear why later on in the chapter.
Specializing to the 0/1 Case
Now that we have formulated the general knapsack problem, let us place an additional constraint on it: there is only a single copy of any given item available. This is known as the 0/1 variation of the knapsack problem, the reason being that we may choose to include either 0 or 1 of a given item.
Let's look at a couple of examples to put this into more concrete terms. Suppose we had a set of items where the values are
values = [1, 6, 10, 16]
and the weights are
weights = [1, 2, 3, 5]
If we have a knapsack with a capacity of 7, then the maximum value is 22. This is a simple example because the input size is small, so it is easy to enumerate the combinations and figure out the answer by inspection.
So how do we proceed to solve it? The brute force way is straightforward, so let's go with that: we enumerate all the combinations of items, sum up their values and weights, and pick out the maximum value that fits within the capacity. Python provides convenience functions for this kind of task in itertools (see the documentation).
The first one is itertools.combinations(iterable, r), where r is an integer. It returns an iterable that yields tuples that are combinations of r items from the input iterable. Let's try it out in the Python console
>>> import itertools as it
>>> s = list(range(5))
>>> s
[0, 1, 2, 3, 4]
>>> list(it.combinations(s, 4))
[(0, 1, 2, 3), (0, 1, 2, 4), (0, 1, 3, 4), (0, 2, 3, 4), (1, 2, 3, 4)]
We need to generate the combinations for all r. There is a convenient function called chain, with an alternate constructor called chain.from_iterable, which allows us to chain all the generated iterables into a single iterable.
>>> list(it.chain.from_iterable(it.combinations(s, r) for r in range(len(s) + 1)))
[(), (0,), (1,), (2,), (3,), (4,), (0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), ...
Now that we have these figured out, let's write out the solution:
def knapsack_01_brute_combinatorial(values, weights, weight):
    from itertools import chain, combinations
    from functools import reduce
    # Pair each value with its weight; make it a list so we can call len(s) below.
    s = list(zip(values, weights))
    combos = chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))
    result = filter(lambda x: x[1] <= weight,
                    map(lambda z: reduce(lambda a, y: (a[0] + y[0], a[1] + y[1]),
                                         z), combos))
    result = max(result, key=lambda x: x[0])[0]
    return result
Functional programmers will be right at home with this kind of code. For those who aren't, let's go over it and make it comprehensible.
First, we zip values and weights together, so s is a list whose items are tuples of two integers, i.e. (int, int).
Now, combos is an iterable whose items are tuples of varying length, and the items of those tuples are the same as in s. Note the range for r is 1..len(s) + 1. We do not need the empty tuple from r=0, but we do need the combination from r=len(s), so we need len(s) + 1 in range().
Now we arrive at map, filter, and reduce, the holy trinity (of sorts) of functional programming. They have a trait in common: they all accept a function and an iterable as input and apply that function to each element of the iterable. Let's go over what they do:
- map(func, iterable): returns an iterable that maps each element x of iterable to func(x). Easily re-written with a list comprehension: [func(x) for x in iterable]
- filter(func, iterable): returns an iterable containing the elements x of iterable for which func(x) returns True. Similar to [x for x in iterable if func(x)]
- reduce(func, iterable, initializer=None): here, func needs to accept two arguments. The first is the accumulator value, which can optionally be given an initial value with initializer. The second argument is an element of iterable. The output of func is used as the accumulator value for the next element of iterable. The output of reduce is the final value of the accumulator.
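If you have not used these before, here is a quick console session to get a feel for them (a small illustrative example, separate from the knapsack code):
>>> from functools import reduce
>>> list(map(lambda x: x * 2, [1, 2, 3]))
[2, 4, 6]
>>> list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))
[2, 4]
>>> reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
10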
It looks a little difficult to follow, but we can make sense of it by looking at where combos is passed in. It is passed as an argument to map. Let's look at that line
map(lambda z: reduce(lambda a, y: (a[0] + y[0], a[1] + y[1]), z), combos)
Recall that the elements of combos are tuples of tuples. The inner tuples represent combinations of items, so we need to sum up their values and weights respectively. That's what the reduce call is for.
reduce(lambda a, y: (a[0] + y[0], a[1] + y[1]), z)
where z is an element of combos. This reduce gives a tuple of type (int, int). So the result of the lambda in the map will be one of those tuples. The result of the map will then be an iterable whose elements are the sums of the values and weights for each combination of items. We therefore just need to discard the elements whose total weight exceeds the capacity. That is what filter does for us. We then find the tuple with the highest value, and that is our solution. Note that max returns the element, which is a tuple, and we only care about the value, which is at index 0.
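As a quick sanity check (a minimal check rather than a proper test suite), we can run the function on the example from earlier in the chapter:
>>> values = [1, 6, 10, 16]
>>> weights = [1, 2, 3, 5]
>>> knapsack_01_brute_combinatorial(values, weights, 7)
22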
TODO: maybe add a python unittest test case
Discussion
In this section, we looked at the knapsack problem as an example to set the stage for other dynamic programming problems. We solved the 0/1 variation with a brute force naive approach. In the next section we will take a closer look at the 0/1 knapsack variant and see how we can try and optimize our solution.
Breaking the 0/1 Knapsack Problem Further Down
Complexity of our Naive Solution
Let's take another look at our naive solution:
def knapsack_01_brute_combinatorial(values, weights, weight):
    from itertools import chain, combinations
    from functools import reduce
    s = list(zip(values, weights))
    combos = chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))
    result = filter(lambda x: x[1] <= weight,
                    map(lambda z: reduce(lambda a, y: (a[0] + y[0], a[1] + y[1]),
                                         z), combos))
    result = max(result, key=lambda x: x[0])[0]
    return result
Proving the time complexity of this solution in rigorous detail is not really in the scope of this tutorial. Suffice it to say, generating all combinations of r items out of n takes on the order of "n choose r" steps, and we do that for every r from 1 to n. Summing n choose r over all r gives 2^n, so the overall complexity is O(2^n), as per Wikipedia.
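If you want to convince yourself of that identity empirically, a quick console check (using math.comb, available in Python 3.8+) does the trick:
>>> import math
>>> n = 20
>>> sum(math.comb(n, r) for r in range(n + 1))
1048576
>>> 2 ** n
1048576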
Properties of Dynamic Programming Problems
Now that we know the time complexity of the naive solution, we should start thinking about how to optimize it. Before we do, let's review the salient properties of dynamic programming problems that are typically presented in an algorithms class. I'm talking, of course, about overlapping subproblems and optimal substructure.
Overlapping Subproblems
According to Wikipedia:
In computer science, a problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems.
The simplest class of problems that exhibit this are those that are defined recursively. The prototypical example is the Fibonacci sequence. Here is a simple Python solution:
def fib_basic_recursive(n):
    if n == 0 or n == 1:
        return n
    return fib_basic_recursive(n - 1) + fib_basic_recursive(n - 2)
If we write Fib(n) = Fib(n-1) + Fib(n-2), then Fib(n-1) = Fib(n-2) + Fib(n-3). But we're already computing Fib(n-2) when we're computing Fib(n), and so forth. This is a very obvious example, of course.
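To see just how much repeated work there is, here is a small sketch (the fib_counted helper is mine, not part of the workbook's solutions) that counts how many times each subproblem gets solved:
from collections import Counter

calls = Counter()

def fib_counted(n):
    calls[n] += 1  # record every time this subproblem is solved
    if n == 0 or n == 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(10)
print(calls[2])  # fib(2) alone is solved 34 times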
Optimal Substructure
Again, according to Wikipedia:
In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems.
If you've studied dynamic programming in an algorithms class, or somewhere else, then you know that dynamic programming can be used to compute the Fibonacci sequence. In the Fibonacci case, there's only one way to achieve a solution to any given subproblem, so the overall solution is somewhat trivially constructed from "optimal" solutions of the subproblems.
Applying to our Naive Solution
Ok, we've reminded ourselves of the two properties relevant to dynamic programming. It's 10 minutes into the tech interview and we need to optimize our solution or we won't get the job offer. Where do we start?
Let's start with trying to identify the overlapping subproblems. Given that our time complexity is exponential, and as stated above that is due to generating all the combinations, we should start there. Let's look at this line a little closer
combos = chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))
Here, we're calling combinations multiple times. In fact, we're calling it len(s) times, i.e. N times, where N is the input size. Since we're calling it multiple times, it has to generate the combinations from scratch without using the results from the r-1 iteration.
But what do we do with that insight if we're calling a standard library function? We're pretty much going to have to write that out ourselves, aren't we? Let's hold that thought for now. Instead let's see how we can exploit optimal substructure.
Optimal substructure means we're computing an optimal solution from optimal solutions of subproblems. But what about our approach? What we're actually doing is computing the solution to every subproblem and then searching for the optimal one.
At this point we should be beginning to realize that our approach is hopeless, as far as optimizing goes. If you've studied dynamic programming before, and you've solved the 0/1 knapsack problem that way, then you might have been wondering why I even bothered presenting this solution if I knew it was going to lead to an optimization dead end.
The reason is simply to illustrate the conceptual trap of dynamic programming problems in tech interviews. If you don't already realize it, it's very possible to waste time coming up with a solution that will need to be thrown away.
Discussion
In this section we saw how taking the wrong approach can lead to a dead end when solving problems that are best suited to dynamic programming. In doing so, we reviewed the key properties of problems that lend themselves to dynamic programming solutions: overlapping subproblems and optimal substructure.
In the next section, we will go over the canonical way of solving the 0/1 knapsack problem.
Solving the 0/1 Knapsack Problem with Dynamic Programming
In the last section we saw that our initial naive approach was no good for further optimizations. Let's start from the beginning and think more carefully about the problem to come up with a better approach that will be more amenable to optimizing.
Thinking About the Problem
It's always a good idea to ask some questions that will allow us to understand the problem better.
- Do we need to iterate over every item? Put another way, would a greedy algorithm work here? The answer is no, we need to iterate over all items. All we need to do is provide a counter-example. A simple one would be values = [60, 100, 120] and weights = [10, 20, 30]. If the knapsack has a capacity of 50, then a greedy algorithm choosing the items with the highest value per unit weight would choose the first two items for a total value of 160, but the optimal solution has a total value of 220.
- Do we care about the items themselves? Recalling our naive solution, we generated all the combinations of items and carried those around until the very end. But really, we don't care about that. We only care about the total value and the remaining capacity of the knapsack as we go through each item.
- What is the decision process as we iterate through each item? Now that we've established we will be visiting each item at least once, what do we actually do when we visit an item? It's simple really: we have two choices, we either take the item (and put it in the knapsack) or we don't.
- When do we stop? We stop when we've either run out of items or the capacity of our knapsack is at or below 0.
So we have a terminating condition and a decision process for getting to the next step. Does this approach sound familiar? I hope so, it's recursion.
Defining the Recursion Relation
Let's define m[i, c] to be the maximum value obtainable using items 0 through i with capacity c. We have m[-1, c] = 0: simply, if we don't have any items to consider, our maximum value is 0. Let's also denote the value and weight of item i by v[i] and w[i] respectively. If w[i] > c, then m[i, c] = m[i-1, c]. That is, if the weight of item i exceeds the capacity, we do not include it in the knapsack, and our maximum value is the same as when we visited item i-1. On the other hand, if w[i] ≤ c, then m[i, c] = max(m[i-1, c], m[i-1, c-w[i]] + v[i]).
Put another way, before we take the item we need to know which is greater: the total value we have accumulated up until now, or the total value we can get with the reduced capacity c-w[i] plus v[i]. The greater of those two values determines whether we take item i or not.
TODO: enable mathjax and write it out in a more compact form
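In the meantime, here is the recurrence written out compactly in LaTeX (assuming MathJax eventually gets enabled, as per the TODO above):
m[i, c] =
\begin{cases}
0 & \text{if } i < 0, \\
m[i-1, c] & \text{if } w[i] > c, \\
\max\left(m[i-1, c],\ m[i-1, c - w[i]] + v[i]\right) & \text{if } w[i] \le c.
\end{cases}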
A First Solution
Now that we have defined the recursion relation, let's translate that to code. Here is a candidate solution:
def knapsack_01_brute_recursive(the_values, the_weights, the_capacity):
    def aux(n, values, weights, capacity):
        if capacity <= 0 or n < 0:
            return 0
        q = aux(n - 1, values, weights, capacity)
        if weights[n] <= capacity:
            q = max(q, aux(n - 1, values, weights, capacity - weights[n]) + values[n])
        return q

    return aux(len(the_values) - 1, the_values, the_weights, the_capacity)
Here I defined the function that gets recursively called inside another function. It's simply a way of keeping things organized in my IDE. If you decide to go this route, you might get asked about it in an interview. You should be aware that there is a performance penalty and be prepared to talk about it.
Next, it's generally not a good idea to shadow outer variables in a nested function, hence why I prepended the_ to the outer variables to keep things organized. Finally, I iterate in reverse instead of from 0 forward. Doing so helps me visualize the recursion from a top-down perspective a bit better, but it's trivial to switch around; just remember to change the n < 0 check to n > len(values) - 1.
So let's see how this code implements our recursion relation. First we check whether we have run out of capacity or items; if we have, then we cannot gain any more value, so we return 0. Next, we need the total value without including the current item we are visiting. This is aux(n - 1, ...), which we store in q. We do this first since, if the current item we are visiting, item n, does not fit, we simply return q and keep going.
If it does fit, we need to check which gives us more value: q (we don't take the item), or aux(n - 1, values, weights, capacity - weights[n]) + values[n] (we take the item). When we take the item, our capacity goes from capacity to capacity - weights[n]. In a certain sense, every time a decision is made for a given item we have to backtrack to see how that decision would affect previous decisions.
The time complexity of this solution is also O(2^n), because each call can spawn two recursive calls. The space complexity is O(n): the recursion proceeds depth first, so there can be at most n calls on the stack at once.
You should ingrain this solution into your head. Just about every dynamic programming problem will be solvable in this manner.
A First Pass at Optimization
I am going to assume that you know what memoization, top-down, and bottom-up programming mean. If you don't, you should read up on those concepts either from the internet, or one of the linked textbooks at the beginning of this book.
An easy way to optimize our simple recursive solution is to use memoization. Note that this is different from memorization. The two words are cognates, but memoization derives from "memo", itself short for the Latin "memorandum", hence memoization.
The whole point of memoization is to cache the results of subproblems along the way and then look them up when they are about to be recomputed. Let's consider a quick example where weights = [10, 20, 30] and capacity = 50. We don't care about values right now, as I just want to show what the tree of recursive calls looks like.
c:50, n:2
+ +
| |
| |
| |
+------------+ c:20, n:1 <------------+ +-----------> c:50, n:1 +------------------+
| + + |
| | | |
| | +--+ |
v v v v
c:0, n:0 ++ c:20, n:0 ++ +-+ c:30, n:0 +----+ c:50, n:0 +--------+
| | | | + |
| | | | | |
v v v v v v
c:10, n:-1 c:20, n:-1 c:20, n:-1 c:30, n:-1 c:40, n:-1 c:50, n:-1
Note that this is a small example, as enumerating all the combinations would get quite tedious. We can see that c:20, n:-1 gets called twice. We have n:-1 because we're starting at n=2 and proceeding in reverse. Granted, this isn't the greatest of examples to show that we are saving time, because n:-1 is a terminating condition, so the actual amount of work in those steps is minimal.
Now, memoization is essentially caching the output of a function based on its input. We need to know which inputs vary as we go through each item. The changing inputs are n and capacity.
If we didn't know what types the inputs are, we would store them in a dictionary, or a hash map in some other language, and be done with it. Here, though, we know that both n and capacity are integers, so we can use lists, or arrays.
The strategy is simple: after we check the termination condition, we look up memo[n][capacity] to see if we have already computed that combination. If we have, we return the stored value; otherwise we proceed as before. Then, prior to returning the value, we store it in the memo for future computations.
def knapsack_01_brute_recursive_memoized(the_values, the_weights, the_capacity):
    the_memo = [[None] * (the_capacity + 1) for x in range(len(the_values))]

    def aux(n, values, weights, capacity, memo):
        if capacity <= 0 or n < 0:
            return 0
        val_get = memo[n][capacity]
        if val_get is not None:
            return val_get
        q = aux(n - 1, values, weights, capacity, memo)
        if weights[n] <= capacity:
            q = max(q, aux(n - 1, values, weights, capacity - weights[n], memo) + values[n])
        memo[n][capacity] = q
        return q

    return aux(len(the_values) - 1, the_values, the_weights, the_capacity, the_memo)
In Python, the is operator tests identity, as opposed to equality. Since None is a singleton object, it is idiomatic to test for identity rather than equality. Also, objects can implement __eq__, which could lead to a situation where testing for equality with None evaluates to True.
As we can see, this is quite straightforward and doesn't really change the solution much. In fact, we could use functools.lru_cache from the standard library to memoize automatically via Python decorators.
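Here is a minimal sketch of that idea (the name knapsack_01_lru_cached is mine; note that the cached helper only takes the hashable arguments n and cap, while the lists are captured from the enclosing scope):
from functools import lru_cache

def knapsack_01_lru_cached(values, weights, capacity):
    @lru_cache(maxsize=None)
    def aux(n, cap):
        # Same recursion as before; lru_cache plays the role of the memo table.
        if cap <= 0 or n < 0:
            return 0
        q = aux(n - 1, cap)
        if weights[n] <= cap:
            q = max(q, aux(n - 1, cap - weights[n]) + values[n])
        return q

    return aux(len(values) - 1, capacity)

print(knapsack_01_lru_cached([1, 6, 10, 16], [1, 2, 3, 5], 7))  # 22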
Converting to Bottom-Up
Why is it even called top-down or bottom-up? In the bottom-up approach, which usually involves tabulation, we start at the bottom state n=0 and solve subproblems until we reach the desired n. Tabulation is exactly what it sounds like: literally filling out a table. In our case, we will represent that table by a list of lists.
In the top-down approach, which usually involves memoization, we start at the desired state n and solve the subproblems that will let us reach that state.
So how do we go from the top-down to the bottom-up approach? Instead of a memo cache, we will have a table; let's call it dp[n][c]. It's customary to use that as the variable name. I like to call the memo cache memo[n][c] to differentiate the two; dp seems to have settled in as the accepted name for the bottom-up table on the internet. Any given entry in dp represents the maximum value for a capacity c from the first n items. This is also the case for memo in the top-down approach.
We're starting at n=0 and we're visiting every item, but we also have to start at capacity=0. Since we don't already know which path through the states gets us to the final answer, we have to iterate through all values of capacity while filling in the table.
In the recursive solution, if we didn't take the item, our value was aux(n-1, values, weights, capacity); if we did take the item, it was aux(n-1, values, weights, capacity - weights[n]) + values[n]. Here in the bottom-up approach, all our values are already in dp, so the analogs are dp[n-1][capacity] and dp[n-1][capacity - weights[n]] + values[n].
def knapsack_01_bottomup_naive(values, weights, capacity):
    val_len = len(values)
    dp = [[values[0] if (item_idx == 0 and weights[0] <= cap) else 0 for cap in range(capacity + 1)]
          for item_idx in range(val_len)]
    for n in range(1, val_len):
        for cap in range(1, capacity + 1):
            val_take, val_no_take = 0, 0
            if weights[n] <= cap:
                val_take = dp[n - 1][cap - weights[n]] + values[n]
            val_no_take = dp[n - 1][cap]
            dp[n][cap] = max(val_take, val_no_take)
    return dp[val_len - 1][capacity]
This is fairly straightforward, but let's quickly go over this line
dp = [[values[0] if (item_idx == 0 and weights[0] <= cap) else 0 for cap in range(capacity + 1)] for item_idx in range(val_len)]
Normally, you would initialize all values in the table to 0. But we can do a quick optimization ahead of time by realizing that when we're filling in the first row, i.e. the row for the first item whose index is 0, if we have enough capacity to hold its weight, then we will automatically take it.
Similarly, in the first column of our table, where capacity = 0, the values will also be 0, because if we don't have any capacity, we can't take an item. Of course, we initialize all other values of the table to 0, so we don't need to do anything about this specifically. In other problems the base row or column may need different values, so do not forget to think about it; this is a common source of error in these kinds of problems.
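If the nested comprehension is hard to parse, here is an equivalent, more explicit way to build the initial table (a small sketch using the example values from earlier in the chapter):
values = [1, 6, 10, 16]
weights = [1, 2, 3, 5]
capacity = 7
val_len = len(values)

# Start with everything at 0, then fill in the first row:
# with only the first item available, take it whenever it fits.
dp = [[0] * (capacity + 1) for _ in range(val_len)]
for cap in range(capacity + 1):
    if weights[0] <= cap:
        dp[0][cap] = values[0]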
Let W be the knapsack capacity; then our space complexity is O(NW), and so is the time complexity. This is referred to as pseudo-polynomial time. For a great explanation see this SO post.
The simple explanation is that time complexity is formally defined in terms of the number of bits in the input. If you are sorting an array of 32-bit integers, then the size of the input is 32n bits, where n is the number of entries in the array.
Now, we're using a number, here W, as an input. It takes log(W) bits to store that number, and if you add a single significant bit, you can roughly double the number. For example, writing the least significant bit first, 101 is 5, and adding a significant bit gives 1011, which is 13.
If we let s = log(W), then our time complexity is actually O(2^s), since 2^s = 2^log(W) = W.
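Once MathJax is enabled, the same argument can be stated compactly as:
s = \log_2 W \quad\Longrightarrow\quad 2^{s} = W, \qquad\text{so}\qquad O(NW) = O\!\left(N \cdot 2^{s}\right),
which is exponential in the number of bits s needed to encode the capacity.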
Optimizing the Bottom-Up solution
The first immediate optimization is the following: for any given n, we only care about the nth row in the table and the (n-1)th row. This allows us to reduce our space complexity to O(W). But as it turns out, we can make another space optimization and reduce it down to a single row with the following observation: the entries we are interested in from the previous row are either directly above, dp[n-1][cap], or to the left, dp[n-1][cap - weights[n]].
How do we take advantage of this? We cannot iterate from left to right, otherwise we would be overwriting values we still need to use later on. If we iterate right to left, then we don't overwrite values we still need. This leads us to this very succinct solution:
def knapsack_01_bottomup_optimized_succinct(values, weights, capacity):
    vals = [0] * (capacity + 1)
    for n in range(len(values)):
        for cap in range(capacity, weights[n] - 1, -1):
            vals[cap] = max(vals[cap], vals[cap - weights[n]] + values[n])
    return vals[capacity]
The line range(capacity, weights[n] - 1, -1) means: start from capacity inclusive, go down to weights[n] - 1 exclusive, with a step of -1. Let's look at a concrete example in the Python console
>>> list(range(5,0,-1))
[5, 4, 3, 2, 1]
which is exactly what we want. We don't need to consider cap < weights[n], because at that point we no longer have enough capacity to take the item, so the entry will simply keep the value it already has.
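As a final sanity check, the single-row version agrees with the example from the beginning of the chapter:
>>> knapsack_01_bottomup_optimized_succinct([1, 6, 10, 16], [1, 2, 3, 5], 7)
22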
Top-down vs Bottom-up
We've seen both approaches, but which one is "better"? Like a lot of things, there is no clear answer, but we can list some general advice.
- Strive for the bottom-up approach in an interview. If you can't, then apply memoization to the recursive solution. If you can't get the recursive solution, then a brute force approach, like the one at the beginning of the chapter, is the bare minimum.
- Top-down + memoization can lead to fewer subproblems being solved, because it solves only those that are needed to reach the nth state.
- If all subproblems must be solved at least once, then bottom-up will usually be faster.
- In bottom-up approaches, the table is filled in one entry at a time, whereas with top-down approaches, the memo is filled in as needed.
- Bottom-up has the perception of being faster than top-down due to the overhead associated with function calls in the latter.
Discussion
In this section we solved the 0/1 knapsack problem in the canonical way. We started with a recursive solution, which is usually referred to as the top-down approach. We applied memoization to reduce the number of subproblems being solved along the way. We converted to a bottom-up approach and optimized it as much as we could reasonably do so.
In the next section we will take what we learned and see what kinds of problems are suited for dynamic programming and how to detect whether or not to use dynamic programming from a problem description.
How to Tell if Dynamic Programming Should be Used
Before we get into the heuristics for deciding whether or not to apply dynamic programming, it is actually quite helpful to understand where the term came from.
If you've already read different sources, then you probably know that the term came about from the work of Richard Bellman in the 1950s on mathematical optimization. Optimization requires an objective function which is either maximized or minimized. An optimization problem takes in a function, or a list/array/set of inputs, and reduces it to a single number. Whenever you see a problem description that asks for "the most", "the biggest", "the least amount", etc., you should immediately think of dynamic programming.
Greedy Algorithm vs Dynamic Programming
Greedy algorithms seek to find a solution by choosing the best local solution without comparing against previous solutions. For a comparison between both methods see this article.
Disproving a Greedy Approach
Greedy algorithms are quite useful when they are feasible and correct, but proving either property is quite difficult. For this reason it is generally easier to show that a greedy approach is not correct by producing a counter-example. We already gave an example in the previous section, but we can come up with some guidelines to help produce a counter-example quickly:
- An input length of 3 should be sufficient
- The constraint (in the case of the 0/1 knapsack problem it's the knapsack capacity) should accommodate 2 items from the input
- Choose input values such that the greedy approach can only choose one (or two) items, whereas the optimal solution can choose two whose combined values are greater than the largest greedy choice (see the sketch after this list).
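Following these guidelines, a short sketch makes the earlier counter-example concrete. The greedy_by_ratio helper below is hypothetical (not something we built earlier), while knapsack_01_bottomup_optimized_succinct is the solution defined earlier in the chapter:
def greedy_by_ratio(values, weights, capacity):
    # Take items in order of value per unit weight, keeping each one that still fits.
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    total = 0
    for i in order:
        if weights[i] <= capacity:
            capacity -= weights[i]
            total += values[i]
    return total

values, weights = [60, 100, 120], [10, 20, 30]
print(greedy_by_ratio(values, weights, 50))                          # 160
print(knapsack_01_bottomup_optimized_succinct(values, weights, 50))  # 220, the true optimum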
If your main goal is passing interviews, then the odds of being given a problem that is optimally solved by a greedy algorithm are, in my estimation, quite low. Dynamic programming problems are perennial favorites because they tend to be conceptually difficult. Also, proving a greedy algorithm's feasibility and correctness and solving the problem, all in 45 minutes to an hour, is probably not going to happen, so you can be fairly sure that an optimization problem is going to require dynamic programming.
Problems That Do Not Sound Like an Optimization Problem
Check out this article. The problem there is as follows:
- You’re given a flat runway with a bunch of spikes in it. The runway is represented by a boolean array which indicates whether a particular (discrete) spot is clear of spikes: True for clear and False for not clear. Example array representation: [T, F, T, T, T, F, T, T, F, T, T].
- You’re given a starting speed S. S is a non-negative integer at any given point and it indicates how much you will move forward with the next jump.
- Every time you land on a spot, you can adjust your speed by up to 1 unit before the next jump.
- You want to safely stop anywhere along the runway (it does not need to be at the end of the array). You stop when your speed becomes 0. However, if you land on a spike at any point, your crazy bouncing ball bursts and it’s game over.
- The output of your function should be a boolean indicating whether we can safely stop anywhere along the runway.
Perhaps this doesn't sound like a problem suitable for dynamic programming. Another way to check is to ask yourself if you can say something like this about the problem:
For each item in the input, I can either do X, or do Y, or do Z, etc.
In this problem, at each landing we can either increase the speed by 1, decrease the speed by 1, or not change the speed. In the 0/1 knapsack problem, we can either take an item or not take it.
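To make that decision framing concrete, here is a minimal recursive sketch of the runway problem. It assumes we start at index 0 (which must be clear) and that jumping past the end of the runway counts as a failure; the linked article may define these details differently.
from functools import lru_cache

def can_stop(runway, start_speed):
    n = len(runway)

    @lru_cache(maxsize=None)
    def aux(pos, speed):
        if pos >= n or not runway[pos]:
            return False  # off the runway, or we landed on a spike
        if speed == 0:
            return True   # we stopped safely on a clear spot
        # On each landing we may slow down by 1, keep the speed, or speed up by 1.
        return any(aux(pos + s, s) for s in (speed - 1, speed, speed + 1) if s >= 0)

    return aux(0, start_speed)

runway = [True, False, True, True, True, False, True, True, False, True, True]
print(can_stop(runway, 2))  # True: jump to index 2, then 3, then stop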
Discussion
In this section we went over some properties common to most problems that are solved by dynamic programming; by and large, they tend to be optimization problems. Some problems don't appear to be optimization problems at first glance. For those, we gave a handy technique to determine whether dynamic programming is suitable.
Looking Ahead
In this chapter we looked at the 0/1 knapsack problem as a model problem for dynamic programming. We saw how a naive solution can lead to difficulties when trying to optimize further. These difficulties can be almost impossible to overcome in an interview unless the right approach is taken from the start. We went over the canonical method for solving the 0/1 knapsack problem with both top-down and bottom-up approaches. Finally, we extrapolated some of the properties of the 0/1 knapsack problem and used them as heuristics for detecting dynamic programming problems from their description.
In the next chapter, we will go over common variations of the 0/1 knapsack problem. The most obvious one, for example, is the case when values == weights. This is known as the subset sum problem.
Variations of the 0/1 Knapsack Problem
In this chapter, we will look at several variations of the 0/1 knapsack problem. As we will see, they mostly revolve around questions involving properties of subsets subject to a knapsack constraint. We will examine the following variations:
- Subset sum: can we find a subset whose items sum to a certain value?
- Existence of integer partitions
- Subsets that sum to the same value
- Subsets that minimize their difference
- Counting subsets