# Chapter 7: Introduction to Computational Complexity: The Sorting Problem

We presented a quadratic-time sorting algorithm (Exchange Sort) in [Section 1.1](LiB0008.html#19). If computer scientists had been content with this sorting algorithm, many applications would now be running significantly slower and others would not be possible at all. Recall from [Table 1.4](LiB0011.html#82) that it would take years to sort 1 billion keys using a quadratic-time algorithm. More efficient sorting algorithms have been developed. In particular, in [Section 2.2](LiB0009.html#43) we saw Mergesort, which is Θ(*n* lg *n*) in the worst case. Although this algorithm could not sort 1 billion items so quickly that the time would be imperceptible to a human, [Table 1.4](LiB0011.html#82) shows that it could sort the items in an amount of time that would be tolerable in an offline application. Suppose someone wanted 1 billion items to be sorted almost immediately in an online application. That person might labor for many hours or even many years trying to develop a linear-time or better sorting algorithm. Wouldn't that individual be distraught to learn, after devoting a lifetime of work to this effort, that such an algorithm was not possible?

There are two approaches to attacking a problem. One is to try to develop a more efficient algorithm for the problem. The other is to try to prove that a more efficient algorithm is not possible. Once we have such a proof, we know that we should quit trying to obtain a faster algorithm. As we shall see, for a large class of sorting algorithms, we have proven that an algorithm better than Θ(*n* lg *n*) is not possible.

## 7.1 Computational Complexity

The preceding chapters were concerned with developing and analyzing algorithms for problems. We often used different approaches to solve the same problem with the hope of finding increasingly efficient algorithms for the problem. When we analyze a specific algorithm, we determine its time (or memory) complexity or the order of its time (or memory) complexity. We do not analyze the *problem* that the algorithm solves. For example, when we analyzed [Algorithm 1.4](LiB0008.html#34) (Matrix Multiplication), we found that its time complexity was *n*<sup>3</sup>. However, this does not mean that the problem of matrix multiplication *requires* a Θ(*n*<sup>3</sup>) algorithm. The function *n*<sup>3</sup> is a property of that one algorithm; it is not necessarily a property of the problem of matrix multiplication. In [Section 2.5](LiB0019.html#193) we developed Strassen's matrix multiplication algorithm with a time complexity in Θ(*n*<sup>2.81</sup>). Furthermore, we mentioned that a Θ(*n*<sup>2.38</sup>) variation of the algorithm has been developed. An important question is whether it is possible to find an even more efficient algorithm.

***Computational complexity***, which is a field that runs hand-in-hand with algorithm design and analysis, is the study of all possible algorithms that can solve a given problem. A *computational complexity analysis* tries to determine a lower bound on the efficiency of all algorithms for a given problem. At the end of [Section 2.5](LiB0019.html#193), we mentioned that it has been proven that the problem of matrix multiplication requires an algorithm whose time complexity is in Ω(*n*<sup>2</sup>). It was a computational complexity analysis that determined this. We state this result by saying that a ***lower bound*** for the problem of matrix multiplication is Ω(*n*<sup>2</sup>).
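The Θ(*n*<sup>3</sup>) figure for [Algorithm 1.4](LiB0008.html#34) comes from its three nested loops, each of which runs *n* times. As a concrete point of reference for the bounds discussed here, the following is a minimal illustrative sketch of the classical algorithm, written in C++ under the assumption of square *n* × *n* integer matrices; it is our own rendering, not the text's pseudocode.

```cpp
#include <cstddef>
#include <vector>

// Classical matrix multiplication, in the style of Algorithm 1.4.
// The innermost statement executes n * n * n times, which is where
// the n^3 time complexity comes from.
std::vector<std::vector<int>>
multiply(const std::vector<std::vector<int>>& A,
         const std::vector<std::vector<int>>& B) {
    std::size_t n = A.size();
    std::vector<std::vector<int>> C(n, std::vector<int>(n, 0));
    for (std::size_t i = 0; i < n; i++)
        for (std::size_t j = 0; j < n; j++)
            for (std::size_t k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];  // executed n^3 times
    return C;
}
```

Strassen's algorithm and its Θ(*n*<sup>2.38</sup>) successors reduce this count by trading multiplications for additions on submatrices, which is why the gap between Θ(*n*<sup>2.38</sup>) and the Ω(*n*<sup>2</sup>) lower bound remains the open question described next.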
The Ω(*n*<sup>2</sup>) lower bound does not mean that it must be possible to create a Θ(*n*<sup>2</sup>) algorithm for matrix multiplication. It means only that it is impossible to create one that is better than Θ(*n*<sup>2</sup>). Because our best algorithm is Θ(*n*<sup>2.38</sup>) and our lower bound is Ω(*n*<sup>2</sup>), it is worthwhile to continue investigating the problem. The investigation can proceed in two directions. On one hand, we can try to find a more efficient algorithm using algorithm design methodology, while on the other hand we can try to obtain a greater lower bound using computational complexity analysis. Perhaps someday we will develop an algorithm that is better than Θ(*n*<sup>2.38</sup>), or perhaps someday we will prove that there is a lower bound greater than Ω(*n*<sup>2</sup>).

In general, our goal for a given problem is to determine a lower bound of Ω(*f*(*n*)) and develop a Θ(*f*(*n*)) algorithm for the problem. Once we have done this, we know that, except for improving the constant, we cannot improve on the algorithm any further. Some authors use the term "computational complexity analysis" to include both algorithm and problem analysis. Throughout this text, when we refer to computational complexity analysis, we mean just problem analysis.

We introduce computational complexity analysis by studying the Sorting problem. There are two reasons for choosing this problem. First, quite a few algorithms have been devised that solve the problem. By studying and comparing these algorithms, we can gain insight into how to choose among several algorithms for the same problem and how to improve a given algorithm. Second, the problem of sorting is one of the few problems for which we have been successful in developing algorithms whose time complexities are about as good as our lower bound. That is, for a large class of sorting algorithms, we have determined a lower bound of Ω(*n* lg *n*) and we have developed Θ(*n* lg *n*) algorithms. Therefore, we can say that we have solved the Sorting problem as far as this class of algorithms is concerned.

The class of sorting algorithms for which we have obtained algorithms about as good as our lower bound includes all algorithms that sort only by comparison of keys. As discussed at the beginning of [Chapter 1](LiB0008.html#16), the word "key" is used because records contain a unique identifier, called a ***key***, that is an element of an ordered set. Given that records are arranged in some arbitrary sequence, the ***sorting task*** is to rearrange them so that they are in order according to the values of the keys. In our algorithms, the keys are stored in an array, and we do not refer to the nonkey fields. However, it is assumed that these fields are rearranged along with the key. Algorithms that ***sort only by comparisons of keys*** can compare two keys to determine which is larger, and can copy keys, but cannot do other operations on them. The sorting algorithms we have encountered so far ([Algorithms 1.3](LiB0008.html#32), [2.4](LiB0016.html#169), and [2.6](LiB0018.html#177)) fall into this class; the sketch at the end of this section illustrates the restriction for Exchange Sort.

In [Sections 7.2](LiB0053.html#633) through [7.8](LiB0066.html#744), we discuss algorithms that sort only by comparisons of keys. Specifically, [Section 7.2](LiB0053.html#633) discusses Insertion Sort and Selection Sort, two of the most efficient quadratic-time sorting algorithms. In [Section 7.3](LiB0054.html#647) we show that, as long as we limit ourselves to a certain restriction present in Insertion Sort and Selection Sort, we cannot improve on quadratic time.
[Sections 7.4](LiB0055.html#651) and [7.5](LiB0056.html#665) revisit our Θ(*n* lg *n*) sorting algorithms, Mergesort and Quicksort. [Section 7.6](LiB0057.html#670) presents another Θ(*n* lg *n*) sorting algorithm, Heapsort. In [Section 7.7](LiB0060.html#697) we compare our three Θ(*n* lg *n*) sorting algorithms. [Section 7.8](LiB0066.html#744) shows the proof that Ω(*n* lg *n*) is a lower bound for algorithms that sort by comparing keys. In [Section 7.9](LiB0065.html#733) we discuss Radix Sort, which is a sorting algorithm that does not sort by comparing keys.

We analyze the algorithms in terms of both the number of comparisons of keys and the number of assignments of records. For example, in [Algorithm 1.3](LiB0008.html#32) (Exchange Sort), the exchange of *S*\[*i*\] and *S*\[*j*\] can be implemented as follows:

```
temp = S[i];
S[i] = S[j];
S[j] = temp;
```

This means that three assignments of records are done to accomplish one exchange. We analyze the number of assignments of records because, when the records are large, the time taken to assign a record is quite costly. We also analyze how much extra space the algorithms require besides the space needed to store the input. When the extra space is a constant (that is, when it does not increase with *n*, the number of keys to be sorted), the algorithm is called an ***in-place sort***. Finally, we assume that we are always sorting in nondecreasing order.
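To make these conventions concrete, here is a minimal C++ sketch, our own rather than the text's pseudocode, of Exchange Sort ([Algorithm 1.3](LiB0008.html#32)) instrumented to count the two operations just described. The `keytype` alias and the counter parameters are illustrative assumptions. The algorithm inspects keys only by comparing them with `<` and copying them, so it sorts only by comparisons of keys; and because its extra space (one temporary record and two counters) does not grow with *n*, it is an in-place sort.

```cpp
#include <cstddef>
#include <iostream>

using keytype = int;  // stands in for a record; here the key is the whole record

// Exchange Sort, instrumented to count the two operations analyzed in
// this chapter: comparisons of keys and assignments of records.
void exchangesort(std::size_t n, keytype S[],
                  std::size_t& comparisons, std::size_t& assignments) {
    comparisons = assignments = 0;
    for (std::size_t i = 0; i + 1 < n; i++)
        for (std::size_t j = i + 1; j < n; j++) {
            comparisons++;            // one comparison of keys
            if (S[j] < S[i]) {        // the only operation applied to keys
                keytype temp = S[i];  // an exchange costs three assignments
                S[i] = S[j];
                S[j] = temp;
                assignments += 3;
            }
        }
}

int main() {
    keytype S[] = {3, 1, 4, 1, 5, 9, 2, 6};
    std::size_t comparisons, assignments;
    exchangesort(8, S, comparisons, assignments);
    // Always n(n - 1)/2 = 28 comparisons for n = 8; assignments depend on the input.
    std::cout << comparisons << " comparisons, "
              << assignments << " assignments\n";
}
```

For *n* keys the comparison count is always *n*(*n* − 1)/2, the quadratic behavior that [Sections 7.2](LiB0053.html#633) and [7.3](LiB0054.html#647) examine more closely.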