## 7.8.2 Lower Bounds for Worst-Case Behavior
To obtain a bound for the worst-case number of comparisons of keys, we need the following lemma.
Lemma 7.2 The worst-case number of comparisons done by a decision tree is equal to its depth.
Proof: Given some input, the number of comparisons done by a decision tree is the number of internal nodes on the path followed for that input, which is the same as the length of that path. Therefore, the worst-case number of comparisons done by a decision tree is the length of the longest path to a leaf, which is the depth of the decision tree.
By Lemmas [7.1](LiB0064.html#722) and [7.2](LiB0064.html#723), we need only find a lower bound on the depth of a binary tree containing *n*! leaves to obtain our lower bound for the worst-case behavior. The required lower bound on depth is found by means of the following lemmas and theorems.
Lemma 7.3 If *m* is the number of leaves in a binary tree and *d* is the depth, then

$$d \ge \lceil \lg m \rceil.$$
Proof: Using induction on *d*, we first show that

$$2^d \ge m. \qquad (7.1)$$
Induction base: A binary tree with depth 0 has one node that is both the root and the only leaf. Therefore, for such a tree, the number of leaves *m* equals 1, and

$$2^0 = 1 \ge m.$$
Induction hypothesis: Assume that, for any binary tree with depth *d*,

$$2^d \ge m,$$

where *m* is the number of leaves.
Induction step: We need to show that, for any binary tree with depth *d* + 1,

$$2^{d+1} \ge m_1,$$
where *m*₁ is the number of leaves. If we remove all the leaves from such a tree, we have a tree with depth *d* whose leaves are the parents of the leaves in our original tree. If *m* is the number of these parents, then, by the induction hypothesis,

$$2^d \ge m.$$
Because each parent can have at most two children,
$$m_1 \le 2m.$$
Combining these last two inequalities yields
$$2^{d+1} = 2 \cdot 2^d \ge 2m \ge m_1,$$
which completes the induction proof.
Taking the lg of both sides of Inequality [7.1](#ch07eqn01) yields

$$d \ge \lg m.$$
Because *d* is an integer, this implies
$$d \ge \lceil \lg m \rceil.$$
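Lemma 7.3 can be spot-checked numerically: ⌈lg *m*⌉ is precisely the smallest depth *d* satisfying 2^*d* ≥ *m*. The following Python sketch (the function name is ours, for illustration only) confirms the equivalence for small *m*:

```python
import math

def min_depth_for_leaves(m):
    # Smallest d with 2**d >= m, i.e. the least possible depth of
    # a binary tree having m leaves (Inequality 7.1 in Lemma 7.3).
    d = 0
    while 2 ** d < m:
        d += 1
    return d

# The minimum depth equals ceil(lg m), exactly as Lemma 7.3 states.
for m in range(1, 1001):
    assert min_depth_for_leaves(m) == math.ceil(math.log2(m))
```

The loop checks every *m* from 1 to 1,000; the assertion never fires.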
Theorem 7.2 Any deterministic algorithm that sorts *n* distinct keys only by comparisons of keys must in the worst case do at least

$$\lceil \lg\,(n!) \rceil \text{ comparisons of keys.}$$
Proof: By [Lemma 7.1](LiB0062.html#709), to any such algorithm there corresponds a pruned, valid, binary decision tree containing *n*! leaves. By [Lemma 7.3](LiB0064.html#727), the depth of that tree is greater than or equal to ⌈lg (*n*!)⌉. The theorem now follows, because [Lemma 7.2](#ch07ex10) says that any decision tree's worst-case number of comparisons is given by its depth.
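To get a feel for the bound in Theorem 7.2, we can evaluate ⌈lg (*n*!)⌉ for small *n*. A minimal Python sketch (the function name is ours, for illustration):

```python
import math

def comparison_lower_bound(n):
    # Theorem 7.2's worst-case lower bound: ceil(lg(n!)).
    return math.ceil(math.log2(math.factorial(n)))

# With n = 3 there are 3! = 6 orderings, so any comparison sort
# needs at least ceil(lg 6) = 3 comparisons in the worst case.
print(comparison_lower_bound(3))  # -> 3
print(comparison_lower_bound(5))  # -> 7
```

For *n* = 5 the bound ⌈lg 120⌉ = 7 is in fact achievable, so Theorem 7.2 is tight there, but in general the bound need not be attainable.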
How does this bound compare with the worst-case performance of Mergesort, namely *n* lg *n* – (*n* – 1)? [Lemma 7.4](LiB0064.html#729) enables us to compare the two.
Lemma 7.4 For any positive integer *n*,

$$\lg\,(n!) \ge n \lg n - 1.45n.$$
Proof: The proof requires knowledge of integral calculus. Because lg is nondecreasing, the sum of lg *i* is bounded below by the corresponding integral:

$$\lg\,(n!) = \sum_{i=1}^{n} \lg i \ \ge\ \int_{1}^{n} \lg x \, dx \ =\ \frac{1}{\ln 2}\int_{1}^{n} \ln x \, dx \ =\ \frac{n \ln n - n + 1}{\ln 2} \ =\ n \lg n - \frac{n-1}{\ln 2} \ \ge\ n \lg n - 1.45n,$$

where the last step uses 1/ln 2 ≈ 1.443 < 1.45.
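Although the proof of Lemma 7.4 uses calculus, the inequality is easy to spot-check numerically. A short Python sketch verifying lg (*n*!) ≥ *n* lg *n* – 1.45*n* over a range of *n*:

```python
import math

# Spot-check Lemma 7.4: lg(n!) >= n lg n - 1.45 n for positive n.
for n in range(1, 171):  # 170! still fits in a float's exponent range
    lhs = math.log2(math.factorial(n))
    rhs = n * math.log2(n) - 1.45 * n
    assert lhs >= rhs, n
print("Lemma 7.4 holds for n = 1, ..., 170")
```

The slack in the inequality is small: 1/ln 2 ≈ 1.443, so replacing 1.45 by a smaller constant would eventually fail.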
Theorem 7.3 Any deterministic algorithm that sorts *n* distinct keys only by comparisons of keys must in the worst case do at least

$$\lceil n \lg n - 1.45n \rceil \text{ comparisons of keys.}$$
Proof: The proof follows from [Theorem 7.2](LiB0064.html#727) and [Lemma 7.4](#ch07ex13).
We see that Mergesort's worst-case performance of *n* lg *n* – (*n* – 1) is close to optimal. Next we show that this also holds for its average-case performance.
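How close Mergesort comes to the bound of Theorem 7.3 can be illustrated numerically: the ratio of the two quantities tends to 1 as *n* grows. A sketch (function names are ours, for illustration):

```python
import math

def mergesort_worst(n):
    # Mergesort's worst-case comparison count, n lg n - (n - 1).
    return n * math.log2(n) - (n - 1)

def lower_bound(n):
    # Theorem 7.3's lower bound, ceil(n lg n - 1.45 n).
    return math.ceil(n * math.log2(n) - 1.45 * n)

# The ratio approaches 1 as n grows, so Mergesort is near-optimal.
for n in (2 ** 4, 2 ** 10, 2 ** 20):
    print(n, round(mergesort_worst(n) / lower_bound(n), 3))
```

Both quantities are *n* lg *n* minus a term linear in *n*, so the ratio is 1 + O(1/lg *n*).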