## Exercises
#### Section 4.1
1. Show that the greedy approach always finds an optimal solution for the Change problem when the coins are in the denominations *D*^0, *D*^1, *D*^2, …, *D*^*i*, for some integers *i* > 0 and *D* > 0.
2. Use Prim's algorithm ([Algorithm 4.1](LiB0033.html#378)) to find a minimum spanning tree for the following graph. Show the actions step by step.
![Graph for Exercise 2](https://box.kancloud.cn/dae1a05b7cf9faabd872411bdf05ef27_350x252.jpg)
3. Draw a graph that has more than one minimum spanning tree.
4. Implement Prim's algorithm ([Algorithm 4.1](LiB0033.html#378)) on your system, and study its performance using different graphs.
5. Modify Prim's algorithm ([Algorithm 4.1](LiB0033.html#378)) to check if an undirected, weighted graph is connected. Analyze your algorithm and show the results using order notation.
6. Use Kruskal's algorithm ([Algorithm 4.2](LiB0033.html#389)) to find a minimum spanning tree for the graph in Exercise 2. Show the actions step by step.
7. Implement Kruskal's algorithm ([Algorithm 4.2](LiB0033.html#389)) on your system, and study its performance using different graphs.
8. Do you think it is possible for a minimum spanning tree to have a cycle? Justify your answer.
9. Assume that in a network of computers any two computers can be linked. Given a cost estimate for each possible link, should [Algorithm 4.1](LiB0033.html#378) (Prim's algorithm) or [Algorithm 4.2](LiB0033.html#389) (Kruskal's algorithm) be used? Justify your answer.
10. Apply [Lemma 4.2](LiB0033.html#393) to complete the proof of [Theorem 4.2](LiB0033.html#395).
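Exercises 2, 4, and 5 all revolve around Prim's algorithm. As a starting point for the implementation exercises, here is a minimal adjacency-matrix sketch in Python; it follows the same nearest/distance bookkeeping idea as Algorithm 4.1, but it is a sketch, not the book's pseudocode, and the function name is my own:

```python
import math

def prim(W):
    """Prim's algorithm on an adjacency matrix W (math.inf = no edge).
    Returns a list of tree edges (u, v) and the total weight."""
    n = len(W)
    in_tree = [False] * n
    in_tree[0] = True                          # grow the tree from vertex 0
    nearest = [0] * n                          # nearest tree vertex to each i
    distance = [W[0][i] for i in range(n)]     # weight of edge to nearest[i]
    edges, total = [], 0
    for _ in range(n - 1):
        # pick the non-tree vertex closest to the tree
        v = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: distance[i])
        edges.append((nearest[v], v))
        total += distance[v]
        in_tree[v] = True
        for i in range(n):                     # relax distances through v
            if not in_tree[i] and W[v][i] < distance[i]:
                distance[i] = W[v][i]
                nearest[i] = v
    return edges, total
```

The two nested scans give the Θ(*n*²) behavior analyzed in Section 4.1; for Exercise 4, instrument this loop and compare sparse versus dense inputs.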
#### Section 4.2
11. Use Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)) to find the shortest paths from the vertex *v*4 to all the other vertices of the graph in Exercise 2. Show the actions step by step. Assume that each undirected edge represents two directed edges with the same weight.
12. Implement Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)) on your system, and study its performance using different graphs.
13. Modify Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)) so that it computes the lengths of the shortest paths. Analyze the modified algorithm and show the results using order notation.
14. Modify Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)) so that it checks if a directed graph has a cycle. Analyze your algorithm and show the results using order notation.
15. Can Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)) be used to find the shortest paths in a graph with some negative weights? Justify your answer.
16. Use induction to prove the correctness of Dijkstra's algorithm ([Algorithm 4.3](LiB0034.html#403)).
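For the implementation and modification exercises above, a compact heap-based sketch can serve as a reference. Note that this is a priority-queue variant, not the array-based scheme of the book's Algorithm 4.3, and the adjacency-dict representation is my own choice:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest path lengths in a directed weighted graph.
    adj: {u: [(v, weight), ...]} with nonnegative weights.
    Returns {vertex: shortest distance from source}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                           # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                   # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist
```

Returning the path lengths directly is exactly the modification asked for in the third exercise of this section; the stale-entry check is what makes the lazy-deletion heap approach correct.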
#### Section 4.3
17. Consider the following jobs and service times. Use the algorithm in [Section 4.3.1](LiB0035.html#406) to minimize the total amount of time spent in the system.

    | Job | Service Time |
    |-----|--------------|
    | 1   | 7            |
    | 2   | 3            |
    | 3   | 10           |
    | 4   | 5            |
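The greedy rule in Section 4.3.1 (serve jobs in nondecreasing service time) is easy to check by hand on the table above. A small sketch, with a function name of my own choosing:

```python
def total_time_in_system(service_times):
    """Sort jobs shortest-first, then sum each job's time in the system
    (its own service time plus all the waiting before it starts)."""
    order = sorted(service_times)
    total = waited = 0
    for t in order:
        waited += t          # completion time of this job
        total += waited      # its total time in the system
    return order, total

total_time_in_system([7, 3, 10, 5])   # → ([3, 5, 7, 10], 51)
```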
18. Implement the algorithm in [Section 4.3.1](LiB0035.html#406) on your system, and run it on the instance in Exercise 17.
19. Write an algorithm for the generalization of the Single-Server Scheduling problem to the Multiple-Server Scheduling problem in [Section 4.3.1](LiB0035.html#406). Analyze your algorithm and show the results using order notation.
20. Consider the following jobs, deadlines, and profits. Use the Scheduling with Deadlines algorithm ([Algorithm 4.4](LiB0035.html#420)) to maximize the total profit.

    | Job | Deadline | Profit |
    |-----|----------|--------|
    | 1   | 2        | 40     |
    | 2   | 4        | 15     |
    | 3   | 3        | 60     |
    | 4   | 2        | 20     |
    | 5   | 3        | 10     |
    | 6   | 1        | 45     |
    | 7   | 1        | 55     |
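The greedy scheme behind the table above can be sketched with a simple slot array: consider jobs in nonincreasing profit order and place each in the latest free slot at or before its deadline. This is the standard slot-array variant, not the book's *feasible*/*schedule* procedures verbatim, and the function name is my own:

```python
def schedule_with_deadlines(jobs):
    """jobs: list of (job_id, deadline, profit) with integer deadlines >= 1.
    Greedily consider jobs by nonincreasing profit; put each one in the
    latest still-free unit time slot <= its deadline, else reject it."""
    max_d = max(d for _, d, _ in jobs)
    slot = [None] * (max_d + 1)               # slot[t] = job run at time t
    for jid, d, p in sorted(jobs, key=lambda j: -j[2]):
        for t in range(min(d, max_d), 0, -1):
            if slot[t] is None:               # latest free slot found
                slot[t] = (jid, p)
                break
    chosen = [s for s in slot[1:] if s is not None]
    return chosen, sum(p for _, p in chosen)
```

Scheduling each job as late as possible is exactly the refinement that the next exercise asks you to support with a disjoint-set structure, replacing the inner linear scan.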
21. Consider procedure *schedule* in the Scheduling with Deadlines algorithm ([Algorithm 4.4](LiB0035.html#420)). Let *d* be the maximum of the deadlines for *n* jobs. Modify the procedure so that it adds a job as late as possible to the schedule being built, but no later than its deadline. Do this by initializing *d* + 1 disjoint sets containing the integers 0, 1, …, *d*. Let *small*(*S*) be the smallest member of set *S*. When a job is scheduled, find the set *S* containing the minimum of its deadline and *n*. If *small*(*S*) = 0, reject the job. Otherwise, schedule it at time *small*(*S*), and merge *S* with the set containing *small*(*S*) − 1. Assuming we use Disjoint Set Data Structure III in [Appendix C](LiB0108.html#1464), show that this version is θ(*n* lg *m*), where *m* is the minimum of *d* and *n*.
22. Implement the algorithm developed in Exercise 21.
23. Suppose we minimize the average time to store *n* files of lengths *l*1, *l*2, …, *ln* on a tape. If the probability of requesting file *k* is given by *pk*, the expected access time to load these *n* files in the order *k*1, *k*2, …, *kn* is given by the formula
$$C \sum_{i=1}^{n} \left( p_{k_i} \sum_{j=1}^{i} l_{k_j} \right)$$
The constant *C* represents parameters such as the speed of the drive and the recording density.
    1. In what order should a greedy approach store these files to guarantee minimum average access time?
    2. Write the algorithm that stores the files, analyze your algorithm, and show the results using order notation.
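The expected-access-time formula above can be evaluated directly for any storage order, which is useful for experimenting with part (a). A small sketch; the function name and 0-based file indices are my own conventions:

```python
def expected_access_time(lengths, probs, order, C=1.0):
    """Evaluate C * sum_i p_{k_i} * (l_{k_1} + ... + l_{k_i}) for files
    stored on the tape in the given order (0-based file indices)."""
    total = prefix = 0.0
    for k in order:
        prefix += lengths[k]       # position where file k ends on the tape
        total += probs[k] * prefix
    return C * total
```

Comparing the value for different permutations of the same files makes it easy to test whatever greedy ordering rule you propose.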
#### Section 4.4
24. Use Huffman's algorithm to construct an optimal binary prefix code for the letters in the following table.

    | Letter    | A  | B | I  | M  | S | X | Z |
    |-----------|----|---|----|----|---|---|---|
    | Frequency | 12 | 7 | 18 | 10 | 9 | 5 | 2 |
25. Use Huffman's algorithm to construct an optimal binary prefix code for the letters in the following table.

    | Letter      | c    | e    | i    | r    | s    | t    | x    |
    |-------------|------|------|------|------|------|------|------|
    | Probability | 0.11 | 0.22 | 0.16 | 0.12 | 0.15 | 0.10 | 0.14 |
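For the exercises that build and use Huffman codes, a compact heap-based sketch may help as a reference implementation. The representation (internal nodes as `(left, right)` tuples) is my own choice, not the book's:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """freq: {symbol: weight}. Returns {symbol: codeword string}.
    Internal nodes are (left, right) tuples; symbols must not be tuples."""
    tick = count()                             # tie-breaker for equal weights
    heap = [(w, next(tick), s) for s, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)      # two lowest-weight trees
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tick), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):            # internal node: recurse
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:
            codes[node] = prefix or '0'        # single-symbol corner case
    walk(heap[0][2], '')
    return codes
```

Because ties may be broken differently from a hand construction, the individual codewords can differ from yours, but the weighted code length of any correct Huffman code for a given table is the same.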
26. Decode each bit string using the binary code in Exercise 24.
    1. 01100010101010
    2. 1000100001010
    3. 11100100111101
    4. 1000010011100
27. Encode each word using the binary code in Exercise 25.
    1. rise
    2. exit
    3. text
    4. exercise
28. A code for *a*, *b*, *c*, *d*, *e* is given by *a*: 00, *b*: 01, *c*: 101, *d*: *x*10, *e*: *yz*1, where *x*, *y*, *z* are in {0, 1}. Determine *x*, *y*, and *z* so that the given code is a prefix code.
29. Implement Huffman's algorithm, and run it on the problem instances of Exercises 24 and 25.
30. Show that a binary tree corresponding to an optimal binary prefix code must be full. A *full binary tree* is a binary tree in which every node is either a leaf or has two children.
31. Prove that for an optimal binary prefix code, if the characters are ordered so that their frequencies are nonincreasing, then their codeword lengths are nondecreasing.
32. Given the binary tree corresponding to a binary prefix code, write an algorithm that determines the codewords for all the characters. Determine the time complexity of your algorithm.
#### Section 4.5
33. Write the dynamic programming algorithm for the 0-1 Knapsack problem.
34. Use a greedy approach to construct an optimal binary search tree by considering the most probable key, *Keyk*, for the root, and constructing the left and right subtrees for *Key*1, *Key*2, …, *Keyk*−1 and *Keyk*+1, *Keyk*+2, …, *Keyn* recursively in the same way.
    1. Assuming the keys are already sorted, what is the worst-case time complexity of this approach? Justify your answer.
    2. Use an example to show that this greedy approach does not always find an optimal binary search tree.
35. Suppose we assign *n* persons to *n* jobs. Let *Cij* be the cost of assigning the *i*th person to the *j*th job. Use a greedy approach to write an algorithm that finds an assignment that minimizes the total cost of assigning all *n* persons to all *n* jobs. Analyze your algorithm and show the results using order notation.
36. Use the dynamic programming approach to write an algorithm for the problem of Exercise 35. Analyze your algorithm and show the results using order notation.
37. Use a greedy approach to write an algorithm that minimizes the number of record moves in the problem of merging *n* files. Use a two-way merge pattern. (Two files are merged during each merge step.) Analyze your algorithm, and show the results using order notation.
38. Use the dynamic programming approach to write an algorithm for the problem of Exercise 37. Analyze your algorithm and show the results using order notation.
39. Prove that the greedy approach to the Fractional Knapsack problem yields an optimal solution.
40. Show that the worst-case number of entries computed by the refined dynamic programming algorithm for the 0-1 Knapsack problem is in Ω(2^*n*). Do this by considering the instance in which *W* = 2^*n* − 2 and *wi* = 2^(*i*−1) for 1 ≤ *i* ≤ *n*.
41. Show that in the refined dynamic programming algorithm for the 0-1 Knapsack problem, the total number of entries computed is about (*W* + 1) × (*n* + 1)/2, when *n* = *wi* = 1 for all *i*.
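The first exercise of this section asks for the 0-1 Knapsack dynamic programming algorithm. A standard table-filling sketch (this is the unrefined Θ(*nW*) table, not the refined version analyzed in the last two exercises):

```python
def knapsack_01(weights, profits, W):
    """Classic DP table: P[i][w] = best profit using the first i items
    with capacity w; the answer is P[n][W]."""
    n = len(weights)
    P = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            P[i][w] = P[i - 1][w]                    # skip item i
            if weights[i - 1] <= w:                  # or take item i
                take = P[i - 1][w - weights[i - 1]] + profits[i - 1]
                if take > P[i][w]:
                    P[i][w] = take
    return P[n][W]
```

Tracing which of the two branches produced each `P[i][w]` entry recovers the optimal item set, and computing only the entries actually needed leads to the refined algorithm discussed in Section 4.5.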
#### Additional Exercises
42. Show with a counterexample that the greedy approach does not always yield an optimal solution for the Change problem when the coins are U.S. coins and we do not have at least one of each type of coin.
43. Prove that a complete graph (a graph in which there is an edge between every pair of vertices) has *n*^(*n*−2) spanning trees. Here *n* is the number of vertices in the graph.
44. Use a greedy approach to write an algorithm for the Traveling Salesperson problem. Show that your algorithm does not always find a minimum-length tour.
45. Prove that the algorithm for the Multiple-Server Scheduling problem of Exercise 19 always finds an optimal schedule.
46. Without constructing a Huffman tree, generate the Huffman code for a given set of characters.
47. Generalize Huffman's algorithm to ternary codewords and prove that it yields optimal ternary codes.
48. Show that if the characters are sorted according to their frequencies, then the Huffman tree can be constructed in linear time.
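For the Traveling Salesperson exercise above, the usual greedy candidate is the nearest-neighbor heuristic, sketched below; constructing a distance matrix on which it misses the optimal tour is the counterexample half of the exercise. The function name is mine:

```python
def nearest_neighbor_tour(W, start=0):
    """Greedy 'nearest neighbor' heuristic: repeatedly hop to the closest
    unvisited vertex, then close the tour back to the start.
    W is a symmetric distance matrix."""
    n = len(W)
    tour, visited = [start], {start}
    while len(tour) < n:
        u = tour[-1]
        v = min((i for i in range(n) if i not in visited),
                key=lambda i: W[u][i])      # greedy choice: nearest unvisited
        tour.append(v)
        visited.add(v)
    length = (sum(W[tour[i]][tour[i + 1]] for i in range(n - 1))
              + W[tour[-1]][start])         # edge closing the cycle
    return tour, length
```

Each greedy choice is locally cheapest, but the forced closing edge back to the start is what typically ruins optimality.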