## 7.5 Quicksort Revisited
As a refresher, let's first repeat the algorithm for Quicksort:
```
void quicksort (index low, index high)
{
  index pivotpoint;

  if (high > low) {
    partition(low, high, pivotpoint);   // pivotpoint is set by partition
    quicksort(low, pivotpoint - 1);     // sort the left subarray
    quicksort(pivotpoint + 1, high);    // sort the right subarray
  }
}
```
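For concreteness, here is a minimal runnable C++ sketch of this recursion. The global array `S`, the simple first-element-pivot `partition`, and the 1-based indexing are assumptions made only to mirror the pseudocode conventions of [Section 2.4](LiB0066.html#752); they are not part of the algorithm above.
```
#include <vector>
#include <algorithm>

// Global array mirroring the pseudocode's S[1..n]; index 0 is unused
// so the calls below can use 1-based indices.
std::vector<int> S;

// A simple first-element-pivot partition in the spirit of Section 2.4:
// on return, S[pivotpoint] is in its final sorted position.
void partition(int low, int high, int& pivotpoint)
{
    int pivotitem = S[low];
    int j = low;
    for (int i = low + 1; i <= high; i++)
        if (S[i] < pivotitem)
            std::swap(S[++j], S[i]);
    pivotpoint = j;
    std::swap(S[low], S[pivotpoint]);
}

void quicksort(int low, int high)
{
    if (high > low) {
        int pivotpoint;
        partition(low, high, pivotpoint);
        quicksort(low, pivotpoint - 1);   // sort the left subarray
        quicksort(pivotpoint + 1, high);  // sort the right subarray
    }
}
```
Calling `quicksort(1, n)` with `S` holding *n* keys in positions 1 through *n* sorts them in place, apart from the stack space discussed below.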
Although its worst-case time complexity is quadratic, we saw in [Section 2.4](LiB0066.html#752) that *the average-case time complexity of Quicksort's number of comparisons of keys* is given by
$$A(n) \approx 1.38(n + 1)\lg n,$$
which is not much worse than Mergesort. Quicksort has the advantage over Mergesort that no extra array is needed. However, it is still not an in-place sort because, while the algorithm is sorting the first subarray, the first and last indices of the other subarray need to be stored on the stack of activation records. Unlike Mergesort, we have no guarantee that the array will always be split in the middle. In the worst case, *partition* may repeatedly split the array into an empty subarray on the right and a subarray with one less item on the left. In this way, *n* − 1 pairs of indices will end up being stacked, which means that the worst-case extra space usage is in Θ(*n*). It is possible to modify Quicksort so that the extra space usage is at most about lg *n*. Before showing this and other improvements of Quicksort, let's discuss the time complexity of the number of assignments of records done by Quicksort.
In the exercises, you are asked to establish that the average number of exchanges performed by Quicksort is about 0.69 (*n* + 1) lg *n*. Assuming that three assignments are done to perform one exchange, *the average-case time complexity of the number of assignments of records done by Quicksort* is given by
$$A(n) \approx 2.07(n + 1)\lg n.$$
#### Improvements of the Basic Quicksort Algorithm
We can improve the basic Quicksort algorithm in five ways. First, we can reduce the extra space usage: in procedure *quicksort*, we determine which subarray is larger and always stack that one while the other is sorted. The following is an analysis of the space used by this version of *quicksort*.
**Analysis of Extra Space Usage (Improved Quicksort)**

In this version, the worst-case space usage occurs when *partition* splits the array exactly in half each time, resulting in a stack depth about equal to lg *n*. Therefore, the worst-case space usage is in Θ(lg *n*) indices.
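One way to realize this first improvement, sketched below under the same illustrative assumptions as before (the global array `S`, the `partition` above, and the name `quicksort2` are placeholders, not the book's code), is to make the recursive call only on the smaller subarray and handle the larger one by iteration; the larger subarray then plays the role of the stacked entry.
```
// Sketch: recurse into the smaller subarray and iterate on the larger one.
// Each recursive call covers at most half of the current range, so the
// recursion depth is at most about lg n.
void quicksort2(int low, int high)
{
    while (high > low) {
        int pivotpoint;
        partition(low, high, pivotpoint);
        if (pivotpoint - low < high - pivotpoint) {
            quicksort2(low, pivotpoint - 1);   // smaller side: recurse
            low = pivotpoint + 1;              // larger side: loop
        } else {
            quicksort2(pivotpoint + 1, high);  // smaller side: recurse
            high = pivotpoint - 1;             // larger side: loop
        }
    }
}
```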
Second, as discussed in the exercises, there is a version of *partition* that cuts the average number of assignments of records significantly. For that version, the *average-case time complexity of the number of assignments of records done by Quicksort* is given by
$$A(n) \approx 0.69(n + 1)\lg n.$$
Third, each of the recursive calls in procedure *quicksort* causes *low*, *high*, and *pivotpoint* to be stacked. A good deal of the pushing and popping is unnecessary.
While the first recursive call to *quicksort* is being processed, only the values of *pivotpoint* and *high* need to be saved on the stack. While the second recursive call is being processed, nothing needs to be saved. We can avoid the unnecessary operations by writing *quicksort* iteratively and manipulating the stack in the procedure. That is, we do explicit stacking instead of stacking by recursion. You are asked to do this in the exercises.
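One possible shape of such an iterative version, continuing the illustrative C++ sketches above and combining the explicit stack with the earlier "stack the larger subarray" idea, is shown below; the name `quicksort3` and the use of `std::stack` are assumptions for illustration, not the exercise's required solution.
```
#include <stack>
#include <utility>

// Sketch: quicksort with an explicit stack of (low, high) pairs.
// The larger subarray's bounds are pushed and the smaller one is sorted
// immediately, so at most about lg n pairs are on the stack at any time,
// and no pivotpoint values ever need to be saved.
void quicksort3(int low, int high)
{
    std::stack<std::pair<int, int>> pending;
    pending.push({low, high});
    while (!pending.empty()) {
        std::pair<int, int> range = pending.top();
        pending.pop();
        int lo = range.first, hi = range.second;
        while (hi > lo) {
            int pivotpoint;
            partition(lo, hi, pivotpoint);
            if (pivotpoint - lo < hi - pivotpoint) {
                if (pivotpoint + 1 < hi)
                    pending.push({pivotpoint + 1, hi});  // stack the larger side
                hi = pivotpoint - 1;                     // sort the smaller side
            } else {
                if (lo < pivotpoint - 1)
                    pending.push({lo, pivotpoint - 1});  // stack the larger side
                lo = pivotpoint + 1;                     // sort the smaller side
            }
        }
    }
}
```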
Fourth, as discussed in [Section 2.7](LiB0021.html#223), recursive algorithms such as Quicksort can be improved by determining a threshold value at which the algorithm calls an iterative algorithm instead of dividing an instance further.
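A minimal sketch of this combination follows; the cutoff value 16 and the helper `insertionsort` are placeholders (the appropriate threshold would be determined as described in Section 2.7).
```
const int threshold = 16;  // placeholder; determine as in Section 2.7

// Straightforward insertion sort of S[low..high].
void insertionsort(int low, int high)
{
    for (int i = low + 1; i <= high; i++) {
        int key = S[i];
        int j = i - 1;
        while (j >= low && S[j] > key) {
            S[j + 1] = S[j];   // shift larger keys one position right
            j--;
        }
        S[j + 1] = key;
    }
}

// Quicksort that hands small subarrays to the iterative algorithm.
void quicksort4(int low, int high)
{
    if (high - low + 1 <= threshold) {
        insertionsort(low, high);
    } else {
        int pivotpoint;
        partition(low, high, pivotpoint);
        quicksort4(low, pivotpoint - 1);
        quicksort4(pivotpoint + 1, high);
    }
}
```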
Finally, as mentioned in the worst-case analysis of [Algorithm 2.6](LiB0018.html#177) (Quicksort), the algorithm is least efficient when the input array is already sorted. The closer the input array is to being sorted, the closer we are to this worst-case performance. Therefore, if there is reason to believe that the array may be close to already being sorted, we can improve the performance by not always choosing the first item as the pivot item. One good strategy to use in this case is to choose the median of *S*[*low*], *S*[⌊(*low* + *high*)/2⌋], and *S*[*high*] for the pivot item. Of course, if we have no reason to believe that there is any particular structure in the input array, choosing any item for the pivot item is, on the average, just as good as choosing any other item. In this case, all we really gain by taking the median is the guarantee that one of the subarrays will not be empty (as long as the three values are distinct).
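Because the `partition` sketched earlier expects the pivot item in `S[low]`, one way to implement the median-of-three choice (again only an illustrative sketch, with hypothetical helper names) is to move the median of the three sampled keys into `S[low]` before partitioning:
```
// Sketch: place the median of S[low], S[mid], and S[high] in S[low]
// so that a first-element-pivot partition uses it as the pivot item.
void medianOfThreePivot(int low, int high)
{
    int mid = (low + high) / 2;
    if (S[mid]  < S[low])  std::swap(S[mid],  S[low]);
    if (S[high] < S[low])  std::swap(S[high], S[low]);
    if (S[high] < S[mid])  std::swap(S[high], S[mid]);
    // Now S[low] <= S[mid] <= S[high]; move the median into S[low].
    std::swap(S[low], S[mid]);
}

void quicksort5(int low, int high)
{
    if (high > low) {
        medianOfThreePivot(low, high);
        int pivotpoint;
        partition(low, high, pivotpoint);
        quicksort5(low, pivotpoint - 1);
        quicksort5(pivotpoint + 1, high);
    }
}
```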