## 7.6.1 Heaps and Basic Heap Routines
Recall that the depth of a node in a tree is the number of edges in the unique path from the root to that node, the depth *d* of a tree is the maximum depth of all nodes in the tree, and a leaf in a tree is any node with no children (see [Section 3.5](LiB0029.html#308)). ***An internal node*** in a tree is any node that has at least one child. That is, it is any node that is not a leaf. A ***complete binary tree*** is a binary tree that satisfies the following conditions:
- All internal nodes have two children.
- All leaves have depth *d.*
An ***essentially complete binary tree*** is a binary tree that satisfies the following conditions:
- It is a complete binary tree down to a depth of *d* – 1.
- The nodes with depth *d* are as far to the left as possible.
Although essentially complete binary trees are difficult to define, it is straightforward to grasp their properties from a picture. [Figure 7.4](#ch07fig04) shows an essentially complete binary tree.
![An essentially complete binary tree](https://box.kancloud.cn/7d8b8da088badfa6031e61f5290667e1_300x219.jpg)
Figure 7.4: An essentially complete binary tree.
We can now define a heap. A *heap* is an essentially complete binary tree such that
- The values stored at the nodes come from an ordered set.
- The value stored at each node is greater than or equal to the values stored at its children. This is called the heap property.
[Figure 7.5](#ch07fig05) shows a heap. Because we are presently interested in sorting, we will refer to the items stored in the heap as *keys.*
![A heap](https://box.kancloud.cn/812e68d89c8cc4b5d98f8d74c1fd4db2_300x223.jpg)
Figure 7.5: A heap.
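The heap property is easy to state in code. The sketch below assumes the array encoding developed in Section 7.6.2 (1-based indexing, so the children of the node in slot *i* occupy slots 2*i* and 2*i* + 1); the `int` keys and `std::vector` storage are illustrative choices, not part of the definition above:

```cpp
#include <cassert>
#include <vector>

// Check the heap property for a heap stored in an array with 1-based
// indexing: the children of the node in slot i sit in slots 2*i and
// 2*i + 1, so the parent of the node in slot i sits in slot i / 2.
bool is_heap(const std::vector<int>& a, int n) {
    for (int i = 2; i <= n; i++)   // every node except the root
        if (a[i] > a[i / 2])       // must not exceed its parent
            return false;
    return true;
}
```

Because an essentially complete tree packs its nodes into consecutive array slots, one pass over slots 2 through *n* suffices.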
Suppose that somehow we have arranged the keys that are to be sorted in a heap. If we repeatedly remove the key stored at the root while maintaining the heap property, the keys will be removed in nonincreasing sequence. If, while removing them, we place them in an array starting with the *n*th slot and going down to the first slot, they will be sorted in nondecreasing sequence in the array. After removing the key at the root, we can restore the heap property by replacing the key at the root with the key stored at the bottom node (by "bottom node" we mean the far-right leaf), deleting the bottom node, and calling a procedure *siftdown* that "sifts" the key now at the root down the heap until the heap property is restored. The sifting is accomplished by initially comparing the key at the root with the larger of the keys at the children of the root. If the key at the root is smaller, the keys are exchanged. This process is repeated down the tree until the key at a node is not smaller than the larger of the keys at its children. [Figure 7.6](#ch07fig06) illustrates this procedure. High-level pseudocode for it is as follows:
```
void siftdown (heap& H)            // H starts out having the
{                                  // heap property for all
    node parent, largerchild;      // nodes except the root.
                                   // H ends up a heap.
    parent = root of H;
    largerchild = parent's child containing larger key;
    while (key at parent is smaller than key at largerchild) {
        exchange key at parent and key at largerchild;
        parent = largerchild;
        largerchild = parent's child containing larger key;
    }
}
```
![Procedure siftdown sifts 6 down the heap](https://box.kancloud.cn/ed4f7eb9f62d41f9b51236ab28a622d8_350x100.jpg)
Figure 7.6: Procedure *siftdown* sifts 6 down until the heap property is restored.
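A concrete, array-based version of *siftdown* can be sketched as follows. It assumes the array encoding of Section 7.6.2 (children of slot *i* at 2*i* and 2*i* + 1) and adds an index parameter `i` that the pseudocode does not need yet, but that *makeheap* will use later to sift down from an arbitrary subtree root; `int` keys and `std::vector` storage are illustrative assumptions:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Sift the key in slot i down (1-based indexing; children of slot i
// are in slots 2*i and 2*i + 1) until it is no smaller than the larger
// of the keys at its children, restoring the heap property below i.
void siftdown(std::vector<int>& a, int n, int i) {
    while (2 * i <= n) {                  // while the node has a child
        int larger = 2 * i;               // start with the left child
        if (larger < n && a[larger + 1] > a[larger])
            larger++;                     // right child holds larger key
        if (a[i] >= a[larger])
            break;                        // heap property restored
        std::swap(a[i], a[larger]);       // exchange with larger child
        i = larger;                       // continue down the tree
    }
}
```

Each iteration descends one level, so the loop runs at most *d* times, where *d* is the depth of the tree.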
High-level pseudocode for a function that removes the key at the root and restores the heap property is as follows:
```
keytype root (heap& H)
{
    keytype keyout;

    keyout = key at the root;
    move the key at the bottom node to the root;   // Bottom node is
    delete the bottom node;                        // far-right leaf.
    siftdown (H);                                  // Restore the
    return keyout;                                 // heap property.
}
```
Given a heap of *n* keys, the following is high-level pseudocode for a procedure that places the keys in sorted sequence into an array *S.*
```
void removekeys (int n,
                 heap H,
                 keytype S[])
{
    index i;

    for (i = n; i >= 1; i--)
        S[i] = root (H);
}
```
The only task remaining is to arrange the keys in a heap in the first place. Let's assume that they have been arranged in an essentially complete binary tree that does not necessarily have the heap property (we will see how to do this in the next subsection). We can transform the tree into a heap by repeatedly calling *siftdown*: first, transform all subtrees whose roots have depth *d* – 1 into heaps; second, transform all subtrees whose roots have depth *d* – 2 into heaps; and so on, until finally we transform the entire tree (the only subtree whose root has depth 0) into a heap.
This process is illustrated in [Figure 7.7](#ch07fig07) and is implemented by the procedure outlined in the following high-level pseudocode.
```
void makeheap (int n, heap& H)    // H ends up a heap.
{
    index i;
    heap Hsub;                    // Hsub ends up a heap.

    for (i = d - 1; i >= 0; i--)                          // Tree has depth d.
        for (all subtrees Hsub whose roots have depth i)
            siftdown (Hsub);
}
```
![Using siftdown to make a heap from an essentially complete binary tree](https://box.kancloud.cn/fc9e4f85e3a60cb5c025fc3083985a3f_350x405.jpg)
Figure 7.7: Using *siftdown* to make a heap from an essentially complete binary tree. After the steps shown, the right subtree, whose root has depth *d* – 2, must be made into a heap, and finally the entire tree must be made into a heap.
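On the array encoding of Section 7.6.2, the internal nodes occupy slots 1 through *n*/2, and visiting them from slot *n*/2 down to slot 1 sifts each node down only after both of its subtrees are already heaps; this has the same effect as the depth-by-depth order in the pseudocode. A sketch of *makeheap* under that assumption, with `int` keys and `std::vector` storage as illustrative choices:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// siftdown on the array encoding: children of slot i at 2*i and 2*i + 1.
void siftdown(std::vector<int>& a, int n, int i) {
    while (2 * i <= n) {
        int larger = 2 * i;
        if (larger < n && a[larger + 1] > a[larger]) larger++;
        if (a[i] >= a[larger]) break;
        std::swap(a[i], a[larger]);
        i = larger;
    }
}

// Transform an essentially complete binary tree stored in a[1..n] into
// a heap by sifting down every internal node, deepest subtrees first.
void makeheap(int n, std::vector<int>& a) {
    for (int i = n / 2; i >= 1; i--)   // internal nodes, bottom up
        siftdown(a, n, i);
}
```

The leaves (slots *n*/2 + 1 through *n*) need no sifting, since a single node is trivially a heap.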
Finally, we present high-level pseudocode for Heapsort (it is assumed that the keys are already arranged in an essentially complete binary tree in *H*):
```
void heapsort (int n,
               heap H,            // H ends up a heap.
               keytype S[])
{
    makeheap (n, H);
    removekeys (n, H, S);
}
```
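The two routines compose into a complete, runnable sketch of Heapsort on the array encoding of Section 7.6.2 (children of slot *i* at 2*i* and 2*i* + 1). The `int` keys, `std::vector` storage, and the separate output array `S` are illustrative assumptions; the next subsection shows how the input array itself can serve as the heap:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// siftdown on the array encoding: children of slot i at 2*i and 2*i + 1.
void siftdown(std::vector<int>& a, int n, int i) {
    while (2 * i <= n) {
        int larger = 2 * i;
        if (larger < n && a[larger + 1] > a[larger]) larger++;
        if (a[i] >= a[larger]) break;
        std::swap(a[i], a[larger]);
        i = larger;
    }
}

// Make a[1..n] into a heap by sifting down the internal nodes, bottom up.
void makeheap(int n, std::vector<int>& a) {
    for (int i = n / 2; i >= 1; i--)
        siftdown(a, n, i);
}

// Repeatedly remove the maximum, filling S[1..n] in nondecreasing order.
void removekeys(int n, std::vector<int>& a, std::vector<int>& S) {
    for (int i = n; i >= 1; i--) {
        S[i] = a[1];           // key at the root is the current maximum
        a[1] = a[n];           // move the bottom key to the root
        n--;                   // delete the bottom node
        siftdown(a, n, 1);     // restore the heap property
    }
}

void heapsort(int n, std::vector<int>& H, std::vector<int>& S) {
    makeheap(n, H);
    removekeys(n, H, S);
}
```

*makeheap* and the *n* calls to *siftdown* in *removekeys* together give the W(n) ∈ O(n lg n) behavior analyzed for Heapsort in this chapter.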
It may seem that we were not telling the truth earlier because this Heapsort algorithm does not appear to be an in-place sort. That is, we need extra space for the heap. However, next we implement a heap using an array. We show that the same array that stores the input (the keys to be sorted) can be used to implement the heap, and that we never simultaneously need the same array slot for more than one purpose.