Breaking the 0/1 Knapsack Problem Down Further

Complexity of our Naive Solution

Let's take another look at our naive solution:

def knapsack_01_brute_combinatorial(values, weights, weight):
    from itertools import chain, combinations
    from functools import reduce
    # Pair each value with its weight. Materializing the zip as a list
    # matters: in Python 3 a bare zip object has no len() and can only
    # be iterated once.
    s = list(zip(values, weights))
    # Generate every non-empty combination of items.
    combos = chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))
    # Total each combination's values and weights, then keep only the
    # combinations whose total weight fits within the capacity.
    result = filter(lambda x: x[1] <= weight,
                    map(lambda z: reduce(lambda a, y: (a[0] + y[0], a[1] + y[1]),
                                         z), combos))

    # The answer is the largest total value among the feasible combinations.
    result = max(result, key=lambda x: x[0])[0]
    return result
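
For a quick sanity check, here's the function on a small test case of our own choosing (the inputs are illustrative, not from any particular source): with items worth 60, 100, and 120 at weights 10, 20, and 30, and a capacity of 50, the best we can do is take the last two items for a total value of 220.

print(knapsack_01_brute_combinatorial([60, 100, 120], [10, 20, 30], 50))  # 220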

Proving the time complexity of this solution in rigorous detail is not really in the scope of this tutorial. Suffice it to say that generating all combinations of r items out of n produces n choose r combinations, and we do that for every r from 1 to n. Summing n choose r over all r gives 2^n - 1, so the whole thing is O(2^n), as Wikipedia will confirm.
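
If you want to convince yourself of that count, here's a quick sanity check (a small sketch of ours, using math.comb, which requires Python 3.8+):

import math

# Non-empty combinations of n items: sum of (n choose r) for r = 1..n.
def total_combinations(n):
    return sum(math.comb(n, r) for r in range(1, n + 1))

for n in (5, 10, 20):
    print(n, total_combinations(n), 2 ** n - 1)
# 5 31 31
# 10 1023 1023
# 20 1048575 1048575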

Properties of Dynamic Programming Problems

Now that we know the time complexity of the naive solution, we should start thinking about how to optimize it. Before we do, let's review the salient properties of dynamic programming problems that are typically presented in an algorithms class. I'm talking, of course, about overlapping subproblems and optimal substructure.

Overlapping Subproblems

According to Wikipedia:

In computer science, a problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems.

The simplest class of problems that exhibit this are those that are defined recursively. The prototypical example is the Fibonacci sequence. Here is a simple Python solution:

def fib_basic_recursive(n):
    # Base cases: fib(0) = 0, fib(1) = 1.
    if n == 0 or n == 1:
        return n
    # Recursive case: each call spawns two more calls.
    return fib_basic_recursive(n - 1) + fib_basic_recursive(n - 2)

If we write Fib(n) = Fib(n-1) + Fib(n-2), then Fib(n-1) = Fib(n-2) + Fib(n-3). But we're already computing Fib(n-2) when we're computing Fib(n), so the Fib(n-1) call repeats that work, and so forth down the recursion tree. This is a very obvious example, of course.
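
To make the overlap concrete, here's a minimal memoized variant (our own sketch, using functools.lru_cache; not code we'll build on later). Each distinct subproblem is now solved exactly once, and repeated calls hit the cache, which takes the runtime from exponential down to linear in n:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memoized(n):
    # Same recursion as before, but results are cached, so each
    # subproblem is computed only once.
    if n == 0 or n == 1:
        return n
    return fib_memoized(n - 1) + fib_memoized(n - 2)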

Optimal Substructure

Again, according to Wikipedia:

In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems.

If you've studied dynamic programming in an algorithms class, or somewhere else, then you know that dynamic programming can be used to compute the Fibonacci sequence. In the Fibonacci case, there's only one way to achieve a solution to any given subproblem, so the overall solution is somewhat trivially constructed from the "optimal" solutions of the subproblems.
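
As a quick illustration of building a solution from subproblem solutions, here's a bottom-up Fibonacci (again our own sketch): each Fib(k) is constructed directly from Fib(k-1) and Fib(k-2), the solutions to its two subproblems.

def fib_bottom_up(n):
    if n == 0 or n == 1:
        return n
    # prev and curr hold the solutions to the two most recent subproblems;
    # each new solution is constructed directly from them.
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr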

Applying to our Naive Solution

Ok, we've reminded ourselves of the two properties relevant to dynamic programming. It's 10 minutes into the tech interview and we need to optimize our solution or we won't get the job offer. Where do we start?

Let's start by trying to identify the overlapping subproblems. Given that our time complexity is exponential, and, as stated above, that is due to generating all the combinations, we should start there. Let's look at this line a little closer:

combos = chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

Here, we're calling combinations multiple times. In fact, we're calling it len(s) times, i.e., N times, where N is the input size. Since each call is independent, it has to generate the combinations from scratch instead of reusing the results from the r - 1 iteration.
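
For illustration only, here's one hypothetical way that reuse could look: build each size-r level of combinations by extending the size-(r - 1) level (the function name and representation are our own invention). Note that this only avoids regenerating prefixes; we would still enumerate all 2^n combinations, which foreshadows the dead end discussed below.

def combos_by_extension(items):
    # Each entry pairs a combination with the index of its last element,
    # so we only extend with later items and never emit duplicates.
    level = [((item,), i) for i, item in enumerate(items)]
    while level:
        for combo, _ in level:
            yield combo
        # Build the size-(r + 1) level from the size-r level.
        level = [(combo + (items[j],), j)
                 for combo, last in level
                 for j in range(last + 1, len(items))]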

But what do we do with that insight if we're calling a standard library function? We're pretty much going to have to write that out ourselves, aren't we? Let's hold that thought for now. Instead let's see how we can exploit optimal substructure.

Optimal substructure means we're computing an optimal solution from optimal solutions of subproblems. But what about our approach? What we're actually doing is computing the solution to every subproblem and then searching for the optimal one.

At this point we should begin to realize that our approach is hopeless, as far as optimization goes. If you've studied dynamic programming before, and you've solved the 0/1 knapsack problem that way, then you might have been wondering why I even bothered presenting this solution, knowing it would lead to an optimization dead end.

The reason is simply to illustrate the conceptual trap that dynamic programming problems pose in tech interviews. If you don't recognize the problem type up front, it's very possible to waste time coming up with a solution that will need to be thrown away.

Discussion

In this section we saw how taking the wrong approach can lead to a dead end when solving problems that are best suited to dynamic programming. In doing so, we reviewed the key properties of problems that lend themselves to dynamic programming solutions: overlapping subproblems and optimal substructure.

In the next section, we will go over the canonical way of solving the 0/1 knapsack problem.