## 5.7 The 0–1 Knapsack Problem

We solved this problem using dynamic programming in [Section 4.5](LiB0036.html#427). Here we solve it using backtracking. After that we compare the backtracking algorithm with the dynamic programming algorithm.

### 5.7.1 A Backtracking Algorithm for the 0–1 Knapsack Problem

Recall that in this problem we have a set of items, each of which has a weight and a profit. The weights and profits are positive integers. A thief plans to carry off stolen items in a knapsack, and the knapsack will break if the total weight of the items placed in it exceeds some positive integer *W*. The thief's objective is to determine a set of items that maximizes the total profit under the constraint that the total weight cannot exceed *W*.

We can solve this problem using a state space tree exactly like the one in the Sum-of-Subsets problem (see [Section 5.4](LiB0072.html#848)). That is, we go to the left from the root to include the first item, and we go to the right to exclude it. Similarly, we go to the left from a node at level 1 to include the second item, and we go to the right to exclude it, and so on. Each path from the root to a leaf is a candidate solution.

This problem is different from the others discussed in this chapter in that it is an optimization problem. This means that we do not know if a node contains a solution until the search is over. Therefore, we backtrack a little differently. If the items included up to a node have a greater total profit than the best solution so far, we change the value of the best solution so far. However, we may still find a better solution at one of the node's descendants (by stealing more items). Therefore, for optimization problems we always visit a promising node's children. The following is a general algorithm for backtracking in the case of optimization problems.

```
void checknode (node v)
{
  node u;

  if (value(v) is better than best)
     best = value(v);
  if (promising(v))
     for (each child u of v)
        checknode(u);
}
```

The variable *best* has the value of the best solution found so far, and *value*(*v*) is the value of the solution at the node. After *best* is initialized to a value that is worse than the value of any candidate solution, the root is passed at the top level. Notice that a node is promising only if we should expand to its children. Recall that our other algorithms also call a node promising if there is a solution at the node.

Next we apply this technique to the 0–1 Knapsack problem. First let's look for signs telling us that a node is nonpromising. An obvious sign that a node is nonpromising is that there is no capacity left in the knapsack for more items. Therefore, if *weight* is the sum of the weights of the items that have been included up to some node, the node is nonpromising if

$$\mathit{weight} \ge W$$

It is nonpromising even if *weight* equals *W* because, in the case of optimization problems, "promising" means that we should expand to the children.

We can use considerations from the greedy approach to find a less obvious sign. Recall that this approach failed to give an optimal solution to this problem in [Section 4.5](LiB0036.html#427). Here we will only use greedy considerations to limit our search; we will not develop a greedy algorithm. To that end, we first order the items in nonincreasing order according to the values of $p_i/w_i$, where $w_i$ and $p_i$ are the weight and profit, respectively, of the *i*th item.
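Algorithm 5.7 below assumes that its inputs are already sorted this way. If they are not, a preprocessing pass such as the following C++ sketch can impose the order; the function name `sortByRatio` and the use of parallel vectors are our own choices, not part of the book's pseudocode. Comparing cross-products avoids floating-point division.

```
#include <algorithm>
#include <iostream>
#include <vector>

// Reorder parallel profit/weight vectors so that p[i]/w[i] is nonincreasing.
void sortByRatio(std::vector<int>& p, std::vector<int>& w)
{
    std::vector<std::size_t> idx(p.size());
    for (std::size_t i = 0; i < idx.size(); i++) idx[i] = i;

    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) {
                  // p[a]/w[a] > p[b]/w[b], written without division
                  return static_cast<long long>(p[a]) * w[b] >
                         static_cast<long long>(p[b]) * w[a];
              });

    std::vector<int> p2, w2;
    for (std::size_t i : idx) { p2.push_back(p[i]); w2.push_back(w[i]); }
    p = p2;
    w = w2;
}

int main()
{
    // The items of Example 5.6 below, deliberately scrambled.
    std::vector<int> p = {10, 40, 50, 30};
    std::vector<int> w = {5, 2, 10, 5};
    sortByRatio(p, w);
    for (std::size_t i = 0; i < p.size(); i++)
        std::cout << p[i] << "/" << w[i] << " ";   // prints 40/2 30/5 50/10 10/5
    std::cout << "\n";
}
```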
Suppose we are trying to determine whether a particular node is promising. No matter how we choose the remaining items, we cannot obtain a higher profit than we would obtain if we were allowed to use the restrictions in the Fractional Knapsack problem from this node on. (Recall that in this problem the thief can steal any fraction of an item.) Therefore, we can obtain an upper bound on the profit that could be obtained by expanding beyond that node as follows. Let *profit* be the sum of the profits of the items included up to the node. Recall that *weight* is the sum of the weights of those items. We initialize variables *bound* and *totweight* to *profit* and *weight*, respectively. Next we greedily grab items, adding their profits to *bound* and their weights to *totweight*, until we get to an item that if grabbed would bring *totweight* above *W*. We grab the fraction of that item allowed by the remaining weight, and we add the value of that fraction to *bound*. If we are able to get only a fraction of this last weight, this node cannot lead to a profit equal to *bound*, but *bound* is still an upper bound on the profit we could achieve by expanding beyond the node. Suppose the node is at level *i*, and the node at level *k* is the one that would bring the sum of the weights above *W*. Then

$$\mathit{totweight} = \mathit{weight} + \sum_{j=i+1}^{k-1} w_j$$

$$\mathit{bound} = \left(\mathit{profit} + \sum_{j=i+1}^{k-1} p_j\right) + (W - \mathit{totweight}) \times \frac{p_k}{w_k}$$

If *maxprofit* is the value of the profit in the best solution found so far, then a node at level *i* is nonpromising if

$$\mathit{bound} \le \mathit{maxprofit}$$

We are using greedy considerations only to obtain a bound that tells us whether we should expand beyond a node. We are not using it to greedily grab items with no opportunity to reconsider later (as is done in the greedy approach). Before presenting the algorithm, we show an example.

**Example 5.6**

Suppose that *n* = 4, *W* = 16, and we have the following items:

| $i$ | $p_i$ | $w_i$ | $p_i/w_i$ |
|:---:|:-----:|:-----:|:---------:|
| 1 | $40 | 2 | $20 |
| 2 | $30 | 5 | $6 |
| 3 | $50 | 10 | $5 |
| 4 | $10 | 5 | $2 |

We have already ordered the items according to $p_i/w_i$. For simplicity, we chose values of $p_i$ and $w_i$ that make $p_i/w_i$ an integer. In general, this need not be the case. [Figure 5.14](#ch05fig14) shows the pruned state space tree produced by using the backtracking considerations just discussed. The total profit, total weight, and bound are specified from top to bottom at each node. These are the values of the variables *profit*, *weight*, and *bound* mentioned in the previous discussion. The maximum profit is found at the node shaded in color. Each node is labeled with its level and its position from the left in the tree. For example, the shaded node is labeled (3, 3) because it is at level 3 and it is the third node from the left at that level. Next we present the steps that produced the pruned tree. In these steps we refer to a node by its label.

[![Figure 5.14](https://box.kancloud.cn/a4d2b6bddca32dd865581897de70d3f5_350x305.jpg)](fig5-14_0.jpg)

Figure 5.14: The pruned state space tree produced using backtracking in [Example 5.6](#ch05ex12). Stored at each node from top to bottom are the total profit of the items stolen up to the node, their total weight, and the bound on the total profit that could be obtained by expanding beyond the node. The optimal solution is found at the node shaded in color. Each nonpromising node is marked with a cross.
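Before the step-by-step walkthrough, it may help to see the bound computation in isolation. The following C++ helper is our own sketch of the formula above, written in the spirit of the book's pseudocode; the function name `fractionalBound` and the 1-indexed vectors (index 0 unused) are our choices. Called at the root of Example 5.6 (level 0, profit 0, weight 0), it returns 115, matching Figure 5.14.

```
#include <iostream>
#include <vector>

// Upper bound on the profit obtainable by expanding beyond a node at level i
// that has accumulated the given profit and weight. Items are 1-indexed and
// sorted by nonincreasing p[j]/w[j].
double fractionalBound(int i, int profit, int weight,
                       const std::vector<int>& p, const std::vector<int>& w,
                       int n, int W)
{
    double bound = profit;
    int totweight = weight;
    int j = i + 1;
    while (j <= n && totweight + w[j] <= W) {   // grab whole items greedily
        totweight += w[j];
        bound += p[j];
        j++;
    }
    int k = j;                                  // first item that does not fit
    if (k <= n)                                 // grab a fraction of item k
        bound += (W - totweight) * (static_cast<double>(p[k]) / w[k]);
    return bound;
}

int main()
{
    std::vector<int> p = {0, 40, 30, 50, 10};   // Example 5.6, 1-indexed
    std::vector<int> w = {0, 2, 5, 10, 5};
    std::cout << fractionalBound(0, 0, 0, p, w, 4, 16) << "\n";   // prints 115
}
```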
1. Set *maxprofit* to $0.

2. Visit node (0, 0) (the root).

    1. Compute its profit and weight: *profit* = $0 and *weight* = 0.

    2. Compute its bound. Because 2 + 5 + 10 = 17, and 17 > 16, the value of *W*, the third item would bring the sum of the weights above *W*. Therefore, *k* = 3, and we have

       $$\mathit{totweight} = \mathit{weight} + w_1 + w_2 = 0 + 2 + 5 = 7$$

       $$\mathit{bound} = \mathit{profit} + p_1 + p_2 + (W - \mathit{totweight}) \times \frac{p_3}{w_3} = 0 + 40 + 30 + (16 - 7) \times \frac{50}{10} = 115$$

    3. Determine that the node is promising because its weight 0 is less than 16, the value of *W*, and its bound $115 is greater than $0, the value of *maxprofit*.

3. Visit node (1, 1).

    1. Compute its profit and weight: *profit* = $0 + $40 = $40 and *weight* = 0 + 2 = 2.

    2. Because its weight 2 is less than or equal to 16, the value of *W*, and its profit $40 is greater than $0, the value of *maxprofit*, set *maxprofit* to $40.

    3. Compute its bound. Because 2 + 5 + 10 = 17, and 17 > 16, the value of *W*, the third item would bring the sum of the weights above *W*. Therefore, *k* = 3, and we have

       $$\mathit{totweight} = \mathit{weight} + w_2 = 2 + 5 = 7$$

       $$\mathit{bound} = \mathit{profit} + p_2 + (W - \mathit{totweight}) \times \frac{p_3}{w_3} = 40 + 30 + (16 - 7) \times \frac{50}{10} = 115$$

    4. Determine that the node is promising because its weight 2 is less than 16, the value of *W*, and its bound $115 is greater than $40, the value of *maxprofit*.

4. Visit node (2, 1).

    1. Compute its profit and weight: *profit* = $40 + $30 = $70 and *weight* = 2 + 5 = 7.

    2. Because its weight 7 is less than or equal to 16, the value of *W*, and its profit $70 is greater than $40, the value of *maxprofit*, set *maxprofit* to $70.

    3. Compute its bound:

       $$\mathit{bound} = \mathit{profit} + (W - \mathit{weight}) \times \frac{p_3}{w_3} = 70 + (16 - 7) \times \frac{50}{10} = 115$$

    4. Determine that the node is promising because its weight 7 is less than 16, the value of *W*, and its bound $115 is greater than $70, the value of *maxprofit*.

5. Visit node (3, 1).

    1. Compute its profit and weight: *profit* = $70 + $50 = $120 and *weight* = 7 + 10 = 17.

    2. Because its weight 17 is greater than 16, the value of *W*, *maxprofit* does not change.

    3. Determine that it is nonpromising because its weight 17 is greater than or equal to 16, the value of *W*.

    4. The bound for this node is not computed, because its weight has determined it to be nonpromising.

6. Backtrack to node (2, 1).

7. Visit node (3, 2).

    1. Compute its profit and weight. Because we are not including item 3, *profit* = $70 and *weight* = 7.

    2. Because its profit $70 is less than or equal to $70, the value of *maxprofit*, *maxprofit* does not change.

    3. Compute its bound. The fourth weight would not bring the sum of the items above *W*, and there are only four items. Therefore, *k* = 5, and

       $$\mathit{bound} = \mathit{profit} + p_4 = 70 + 10 = 80$$

    4. Determine that the node is promising because its weight 7 is less than 16, the value of *W*, and its bound $80 is greater than $70, the value of *maxprofit*.

    (From now on we leave the computations of profits, weights, and bounds as exercises. Furthermore, when *maxprofit* does not change, we will not mention it.)

8. Visit node (4, 1).

    1. Compute its profit and weight to be $80 and 12.

    2. Because its weight 12 is less than or equal to 16, the value of *W*, and its profit $80 is greater than $70, the value of *maxprofit*, set *maxprofit* to $80.

    3. Compute its bound to be $80.

    4. Determine that it is nonpromising because its bound $80 is less than or equal to $80, the value of *maxprofit*.
    (Leaves in the state space tree are automatically nonpromising because their bounds are always less than or equal to *maxprofit*.)

9. Backtrack to node (3, 2).

10. Visit node (4, 2).

    1. Compute its profit and weight to be $70 and 7.

    2. Compute its bound to be $70.

    3. Determine that the node is nonpromising because its bound $70 is less than or equal to $80, the value of *maxprofit*.

11. Backtrack to node (1, 1).

12. Visit node (2, 2).

    1. Compute its profit and weight to be $40 and 2.

    2. Compute its bound to be $98.

    3. Determine that it is promising because its weight 2 is less than 16, the value of *W*, and its bound $98 is greater than $80, the value of *maxprofit*.

13. Visit node (3, 3).

    1. Compute its profit and weight to be $90 and 12.

    2. Because its weight 12 is less than or equal to 16, the value of *W*, and its profit $90 is greater than $80, the value of *maxprofit*, set *maxprofit* to $90.

    3. Compute its bound to be $98.

    4. Determine that it is promising because its weight 12 is less than 16, the value of *W*, and its bound $98 is greater than $90, the value of *maxprofit*.

14. Visit node (4, 3).

    1. Compute its profit and weight to be $100 and 17.

    2. Determine that it is nonpromising because its weight 17 is greater than or equal to 16, the value of *W*.

    3. The bound for this node is not computed because its weight has determined it to be nonpromising.

15. Backtrack to node (3, 3).

16. Visit node (4, 4).

    1. Compute its profit and weight to be $90 and 12.

    2. Compute its bound to be $90.

    3. Determine that it is nonpromising because its bound $90 is less than or equal to $90, the value of *maxprofit*.

17. Backtrack to node (2, 2).

18. Visit node (3, 4).

    1. Compute its profit and weight to be $40 and 2.

    2. Compute its bound to be $50.

    3. Determine that the node is nonpromising because its bound $50 is less than or equal to $90, the value of *maxprofit*.

19. Backtrack to node (2, 2).

20. Visit node (1, 2).

    1. Compute its profit and weight to be $0 and 0.

    2. Compute its bound to be $82.

    3. Determine that the node is nonpromising because its bound $82 is less than or equal to $90, the value of *maxprofit*.

21. Backtrack to root.

    1. Root has no more children. We are done.

There are only 13 nodes in the pruned state space tree, whereas the entire state space tree has 31 nodes.

Next we present the algorithm. Because this is an optimization problem, we have the added task of keeping track of the current best set of items and the total value of their profits. We do this in an array *bestset* and a variable *maxprofit*. Unlike the other problems in this chapter, we state this problem so as to find just one optimal solution.

**Algorithm 5.7** The Backtracking Algorithm for the 0–1 Knapsack Problem

**Problem:** Let *n* items be given, where each item has a weight and a profit. The weights and profits are positive integers. Furthermore, let a positive integer *W* be given. Determine a set of items with maximum total profit, under the constraint that the sum of their weights cannot exceed *W*.

**Inputs:** Positive integers *n* and *W*; arrays *w* and *p*, each indexed from 1 to *n*, and each containing positive integers sorted in nonincreasing order according to the values of *p*[*i*]/*w*[*i*].

**Outputs:** An array *bestset* indexed from 1 to *n*, where the value of *bestset*[*i*] is "yes" if the *i*th item is included in the optimal set and is "no" otherwise; an integer *maxprofit* that is the maximum profit.
```
void knapsack (index i, int profit, int weight)
{
  if (weight <= W && profit > maxprofit) {          // This set is best so far.
     maxprofit = profit;
     numbest = i;                                   // Set numbest to the number of
     bestset = include;                             // items considered. Set bestset
  }                                                 // to this solution.
  if (promising(i)) {
     include[i + 1] = "yes";                        // Include w[i + 1].
     knapsack(i + 1, profit + p[i + 1], weight + w[i + 1]);
     include[i + 1] = "no";                         // Do not include w[i + 1].
     knapsack(i + 1, profit, weight);
  }
}

bool promising (index i)
{
  index j, k;
  int totweight;
  float bound;

  if (weight >= W)                                  // Node is promising only if we
     return false;                                  // should expand to its children.
  else {                                            // There must be some capacity
     j = i + 1;                                     // left for the children.
     bound = profit;
     totweight = weight;
     while (j <= n && totweight + w[j] <= W) {      // Grab as many items as possible.
        totweight = totweight + w[j];
        bound = bound + p[j];
        j++;
     }
     k = j;                                         // Use k for consistency with
     if (k <= n)                                    // the formula in the text.
        bound = bound + (W - totweight) * p[k]/w[k];   // Grab fraction of kth item.
     return bound > maxprofit;
  }
}
```

Following our usual convention, *n*, *w*, *p*, *W*, *maxprofit*, *include*, *bestset*, and *numbest* are not inputs to either routine. If these variables were defined globally, the following code would produce the maximum profit and a set that has that profit:

```
numbest = 0;
maxprofit = 0;
knapsack(0, 0, 0);
cout << maxprofit;                  // Write the maximum profit.
for (j = 1; j <= numbest; j++)      // Show an optimal set of items.
   cout << bestset[j];
```

Recall that leaves in the state space tree are automatically nonpromising because their bounds cannot be greater than *maxprofit*. Therefore, we should not need a check for the terminal condition that *i* = *n* in function *promising*. Let's confirm that our algorithm does not need to do this check. If *i* = *n*, *bound* does not change from its initial value *profit*. Because *profit* is less than or equal to *maxprofit*, the expression *bound* > *maxprofit* is false, which means that function *promising* returns false.

Our upper bound does not change value as we repeatedly proceed to the left in the state space tree until we reach the node at level *k*. (This can be seen by looking again at the first few steps in [Example 5.6](#ch05ex12).) Therefore, each time a value of *k* is established, we can save its value and proceed to the left without calling function *promising* until we reach the node at the (*k* − 1)st level. We know that the left child of this node is nonpromising because including the *k*th item would bring the value of *weight* above *W*. Therefore, we proceed only to the right from this node. It is only after a move to the right that we need to call function *promising* and determine a new value of *k*. In the exercises you are asked to write this improvement.

The state space tree in the 0–1 Knapsack problem is the same as that in the Sum-of-Subsets problem. As shown in [Section 5.4](LiB0072.html#848), the number of nodes in that tree is

$$2^{n+1} - 1$$

[Algorithm 5.7](#ch05ex12) checks all nodes in the state space tree for the following instances. For a given *n*, let *W* = *n*, and

$$p_i = 1, \; w_i = 1 \qquad \text{for } 1 \le i \le n - 1$$

$$p_n = n, \; w_n = n$$

The optimal solution is to take only the *n*th item, and this solution will not be found until we go all the way to the right to a depth of *n* − 1 and then go left.
Before the optimal solution is found, however, every nonleaf will be found to be promising, which means that all nodes in the state space tree will be checked. Because the Monte Carlo technique applies to this problem, it can be used to estimate the efficiency of the algorithm for a particular instance.

### 5.7.2 Comparing the Dynamic Programming Algorithm and the Backtracking Algorithm for the 0–1 Knapsack Problem

Recall from [Section 4.4](LiB0036.html#427) that the worst-case number of entries computed by the dynamic programming algorithm for the 0–1 Knapsack problem is in $O(\mathit{minimum}(2^n, nW))$. In the worst case, the backtracking algorithm checks $\Theta(2^n)$ nodes. Owing to the additional bound of $nW$, it may appear that the dynamic programming algorithm is superior. However, in backtracking algorithms the worst case gives little insight into how much checking is usually saved by backtracking. With so many considerations, it is difficult to analyze theoretically the relative efficiencies of the two algorithms. In cases such as this, the algorithms can be compared by running them on many sample instances and seeing which algorithm usually performs better. Horowitz and Sahni (1978) did this and found that the backtracking algorithm is usually more efficient than the dynamic programming algorithm.

Horowitz and Sahni (1974) coupled the divide-and-conquer approach with the dynamic programming approach to develop an algorithm for the 0–1 Knapsack problem that is $O(2^{n/2})$ in the worst case. They showed that this algorithm is usually more efficient than the backtracking algorithm.
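To tie the pieces together, here is one possible self-contained C++ rendering of Algorithm 5.7, using global variables in the spirit of the book's convention. The node counter and the listing of included item numbers are our own additions, not part of the book's pseudocode. Run on the data of Example 5.6, it reports a maximum profit of $90 using items 1 and 3 and visits 13 nodes, in agreement with the pruned tree of Figure 5.14.

```
#include <iostream>
#include <vector>

int n, W, maxprofit, numbest, nodesVisited;
std::vector<int> p, w;                 // 1-indexed; index 0 unused
std::vector<bool> include_, bestset;   // include_[i] == true means item i taken

bool promising(int i, int profit, int weight)
{
    if (weight >= W) return false;     // no capacity left for the children
    double bound = profit;
    int totweight = weight;
    int j = i + 1;
    while (j <= n && totweight + w[j] <= W) {        // grab whole items greedily
        totweight += w[j];
        bound += p[j];
        j++;
    }
    if (j <= n)                                      // grab a fraction of item j
        bound += (W - totweight) * (static_cast<double>(p[j]) / w[j]);
    return bound > maxprofit;
}

void knapsack(int i, int profit, int weight)
{
    nodesVisited++;
    if (weight <= W && profit > maxprofit) {         // best solution so far
        maxprofit = profit;
        numbest = i;
        bestset = include_;
    }
    if (promising(i, profit, weight)) {
        include_[i + 1] = true;                      // include item i + 1
        knapsack(i + 1, profit + p[i + 1], weight + w[i + 1]);
        include_[i + 1] = false;                     // exclude item i + 1
        knapsack(i + 1, profit, weight);
    }
}

int main()
{
    // Instance of Example 5.6, already sorted by nonincreasing p[i]/w[i].
    n = 4; W = 16;
    p = {0, 40, 30, 50, 10};
    w = {0, 2, 5, 10, 5};
    include_.assign(n + 1, false);
    bestset.assign(n + 1, false);
    maxprofit = 0; numbest = 0; nodesVisited = 0;

    knapsack(0, 0, 0);

    std::cout << "maxprofit = " << maxprofit << "\n";        // 90
    std::cout << "items:";
    for (int j = 1; j <= numbest; j++)
        if (bestset[j]) std::cout << " " << j;               // 1 3
    std::cout << "\nnodes visited = " << nodesVisited << "\n";   // 13
}
```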