
Algorithms Illuminated (Part 3)
Author: Tim Roughgarden

An accessible, no-nonsense, and programming language-agnostic introduction to algorithms. Part 3 covers greedy algorithms (scheduling, minimum spanning trees, clustering, Huffman codes) and dynamic programming (knapsack, sequence alignment, shortest paths, optimal search trees).

ISBN: 0999282948
Publication Year: 2019
Language: English
Pages: 230
File Format: PDF
File Size: 5.2 MB

Algorithms Illuminated
Part 3: Greedy Algorithms and Dynamic Programming
Tim Roughgarden

© 2019 by Tim Roughgarden
First Edition
ISBN: 978-0-9992829-4-6 (Paperback)
ISBN: 978-0-9992829-5-3 (ebook)
Library of Congress Control Number: 2017914282

Soundlikeyourself Publishing, LLC
New York, NY
soundlikeyourselfpublishing@gmail.com
www.algorithmsilluminated.org
Contents

Preface  vii

13  Introduction to Greedy Algorithms  1
    13.1  The Greedy Algorithm Design Paradigm  1
    13.2  A Scheduling Problem  4
    13.3  Developing a Greedy Algorithm  6
    13.4  Proof of Correctness  12
    Problems  21

14  Huffman Codes  23
    14.1  Codes  23
    14.2  Codes as Trees  28
    14.3  Huffman's Greedy Algorithm  32
    *14.4  Proof of Correctness  41
    Problems  49

15  Minimum Spanning Trees  52
    15.1  Problem Definition  52
    15.2  Prim's Algorithm  57
    *15.3  Speeding Up Prim's Algorithm via Heaps  62
    *15.4  Prim's Algorithm: Proof of Correctness  69
    15.5  Kruskal's Algorithm  76
    *15.6  Speeding Up Kruskal's Algorithm via Union-Find  81
    *15.7  Kruskal's Algorithm: Proof of Correctness  91
    15.8  Application: Single-Link Clustering  94
    Problems  99

16  Introduction to Dynamic Programming  103
    16.1  The Weighted Independent Set Problem  104
    16.2  A Linear-Time Algorithm for WIS in Paths  108
    16.3  A Reconstruction Algorithm  116
    16.4  The Principles of Dynamic Programming  118
    16.5  The Knapsack Problem  123
    Problems  133

17  Advanced Dynamic Programming  137
    17.1  Sequence Alignment  137
    *17.2  Optimal Binary Search Trees  148
    Problems  163

18  Shortest Paths Revisited  167
    18.1  Shortest Paths with Negative Edge Lengths  167
    18.2  The Bellman-Ford Algorithm  172
    18.3  The All-Pairs Shortest Path Problem  185
    18.4  The Floyd-Warshall Algorithm  187
    Problems  198

Epilogue: A Field Guide to Algorithm Design  201
Hints and Solutions to Selected Problems  203
Index  211
Preface

This book is the third of a four-part series based on my online algorithms courses that have been running regularly since 2012, which in turn are based on an undergraduate course that I taught many times at Stanford University. The first two parts of the series are not strict prerequisites for this one, though portions of this book do assume at least a vague recollection of big-O notation (covered in Chapter 2 of Part 1 or Appendix C of Part 2), divide-and-conquer algorithms (Chapter 3 of Part 1), and graphs (Chapter 7 of Part 2).

What We'll Cover

Algorithms Illuminated, Part 3 provides an introduction to and numerous case studies of two fundamental algorithm design paradigms.

Greedy algorithms and applications. Greedy algorithms solve problems by making a sequence of myopic and irrevocable decisions. For many problems, they are easy to devise and often blazingly fast. Most greedy algorithms are not guaranteed to be correct, but we'll cover several killer applications that are exceptions to this rule. Examples include scheduling problems, optimal compression, and minimum spanning trees of graphs.

Dynamic programming and applications. Few benefits of a serious study of algorithms rival the empowerment that comes from mastering dynamic programming. This design paradigm takes a lot of practice to perfect, but it has countless applications to problems that appear unsolvable using any simpler method. Our dynamic programming boot camp will double as a tour of some of the paradigm's killer applications, including the knapsack problem, the Needleman-Wunsch genome sequence alignment algorithm, Knuth's algorithm for optimal binary search trees, and the Bellman-Ford and Floyd-Warshall shortest-path algorithms.

For a more detailed look into the book's contents, check out the "Upshot" sections that conclude each chapter and highlight the most important points. The "Field Guide to Algorithm Design" on page 201 provides a bird's-eye view of how greedy algorithms and dynamic programming fit into the bigger algorithmic picture.

The starred sections of the book are the most advanced ones. The time-constrained reader can skip these sections on a first reading without any loss of continuity.

Topics covered in the other three parts. Algorithms Illuminated, Part 1 covers asymptotic notation (big-O notation and its close cousins), divide-and-conquer algorithms and the master method, randomized QuickSort and its analysis, and linear-time selection algorithms. Part 2 covers data structures (heaps, balanced search trees, hash tables, bloom filters), graph primitives (breadth- and depth-first search, connectivity, shortest paths), and their applications (ranging from deduplication to social network analysis). Part 4 is all about NP-completeness, what it means for the algorithm designer, and strategies for coping with computationally intractable problems, including the analysis of heuristics and local search.

Skills You'll Learn

Mastering algorithms takes time and effort. Why bother?

Become a better programmer. You'll learn several blazingly fast subroutines for processing data as well as several useful data structures for organizing data that you can deploy directly in your own programs. Implementing and using these algorithms will stretch and improve your programming skills. You'll also learn general algorithm design paradigms that are relevant to many different problems across different domains, as well as tools for predicting the performance of such algorithms. These "algorithmic design patterns" can help you come up with new algorithms for problems that arise in your own work.
Sharpen your analytical skills. You'll get lots of practice describing and reasoning about algorithms. Through mathematical analysis, you'll gain a deep understanding of the specific algorithms and data structures that these books cover. You'll acquire facility with several mathematical techniques that are broadly useful for analyzing algorithms.

Think algorithmically. After you learn about algorithms, you'll start seeing them everywhere, whether you're riding an elevator, watching a flock of birds, managing your investment portfolio, or even watching an infant learn. Algorithmic thinking is increasingly useful and prevalent in disciplines outside of computer science, including biology, statistics, and economics.

Literacy with computer science's greatest hits. Studying algorithms can feel like watching a highlight reel of many of the greatest hits from the last sixty years of computer science. No longer will you feel excluded at that computer science cocktail party when someone cracks a joke about Dijkstra's algorithm. After reading these books, you'll know exactly what they mean.

Ace your technical interviews. Over the years, countless students have regaled me with stories about how mastering the concepts in these books enabled them to ace every technical interview question they were ever asked.

How These Books Are Different

This series of books has only one goal: to teach the basics of algorithms in the most accessible way possible. Think of them as a transcript of what an expert algorithms tutor would say to you over a series of one-on-one lessons.

There are a number of excellent more traditional and encyclopedic textbooks about algorithms, any of which usefully complement this book series with additional details, problems, and topics. I encourage you to explore and find your own favorites. There are also several books that, unlike these books, cater to programmers looking for ready-made algorithm implementations in a specific programming language. Many such implementations are freely available on the Web as well.
Who Are You?

The whole point of these books and the online courses upon which they are based is to be as widely and easily accessible as possible. People of all ages, backgrounds, and walks of life are well represented in my online courses, and there are large numbers of students (high school, college, etc.), software engineers (both current and aspiring), scientists, and professionals hailing from all corners of the world.

This book is not an introduction to programming, and ideally you've acquired basic programming skills in a standard language (like Java, Python, C, Scala, Haskell, etc.). If you need to beef up your programming skills, there are several outstanding free online courses that teach basic programming.

We also use mathematical analysis as needed to understand how and why algorithms really work. The freely available book Mathematics for Computer Science, by Eric Lehman, F. Thomson Leighton, and Albert R. Meyer, is an excellent and entertaining refresher on mathematical notation (like ∑ and ∀), the basics of proofs (induction, contradiction, etc.), discrete probability, and much more.

Additional Resources

These books are based on online courses that are currently running on the Coursera and Stanford Lagunita platforms. I've made several resources available to help you replicate as much of the online course experience as you like.

Videos. If you're more in the mood to watch and listen than to read, check out the YouTube video playlists available from www.algorithmsilluminated.org. These videos cover all the topics in this book series, as well as additional advanced topics. I hope they exude a contagious enthusiasm for algorithms that, alas, is impossible to replicate fully on the printed page.

Quizzes. How can you know if you're truly absorbing the concepts in this book? Quizzes with solutions and explanations are scattered throughout the text; when you encounter one, I encourage you to pause and think about the answer before reading on.
End-of-chapter problems. At the end of each chapter you'll find several relatively straightforward questions for testing your understanding, followed by harder and more open-ended challenge problems. Hints or solutions to all of these problems (as indicated by an "(H)" or "(S)," respectively) are included at the end of the book. Readers can interact with me and each other about the end-of-chapter problems through the book's discussion forum (see below).

Programming problems. Each of the chapters concludes with a suggested programming project whose goal is to help you develop a detailed understanding of an algorithm by creating your own working implementation of it. Data sets, along with test cases and their solutions, can be found at www.algorithmsilluminated.org.

Discussion forums. A big reason for the success of online courses is the opportunities they provide for participants to help each other understand the course material and debug programs through discussion forums. Readers of these books have the same opportunity, via the forums available at www.algorithmsilluminated.org.

Acknowledgments

These books would not exist without the passion and hunger supplied by the hundreds of thousands of participants in my algorithms courses over the years. I am particularly grateful to those who supplied detailed feedback on an earlier draft of this book: Tonya Blust, Yuan Cao, Carlos Guia, Jim Humelsine, Vladimir Kokshenev, Bayram Kuliyev, and Daniel Zingaro.

I always appreciate suggestions and corrections from readers. These are best communicated through the discussion forums mentioned above.

Tim Roughgarden
New York, NY
April 2019
Chapter 13

Introduction to Greedy Algorithms

Much of the beauty in the design and analysis of algorithms stems from the interplay between general algorithm design principles and the instantiation of these principles to solve concrete computational problems. There's no silver bullet in algorithm design—no universal technique that can solve every computational problem you'll encounter. But there are several general design paradigms that can help you solve problems from many different application domains. Teaching you these paradigms and their most famous instantiations is one of the major goals of this book series.

13.1 The Greedy Algorithm Design Paradigm

13.1.1 Algorithm Paradigms

What's an "algorithm design paradigm?" Readers of Part 1 have already seen a canonical example, the divide-and-conquer paradigm. That paradigm went like this:

The Divide-and-Conquer Paradigm

1. Divide the input into smaller subproblems.

2. Conquer the subproblems recursively.

3. Combine the solutions for the subproblems into a solution for the original problem.

In Part 1 we saw numerous instantiations of this paradigm: the MergeSort and QuickSort algorithms, Karatsuba's O(n^1.59)-time algorithm for multiplying two n-digit integers, Strassen's O(n^2.81)-time algorithm for multiplying two n × n matrices, and more.
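To make the three steps concrete, here is a sketch of MergeSort in Python. (The book itself is deliberately language-agnostic; this particular rendering and its variable names are mine, not taken from the text.)

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, recursively sort, then merge."""
    if len(a) <= 1:
        return a[:]  # base case: already sorted
    mid = len(a) // 2
    # Divide and conquer: sort each half recursively.
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Combine: merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1]))  # → [1, 3, 5, 8]
```

The divide/conquer/combine structure is visible directly in the code: the recursive calls do the conquering, and the merge loop is the combine step.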
The first half of this book is about the greedy algorithm design paradigm. What is a greedy algorithm, exactly? Much blood and ink have been spilled over this question, so we'll content ourselves with an informal definition.[1]

The Greedy Paradigm

Construct a solution iteratively, via a sequence of myopic decisions, and hope that everything works out in the end.

The best way to get a feel for greedy algorithms is through examples. We'll see several over the next few chapters.[2]

13.1.2 Themes of the Greedy Paradigm

Here are a few themes to watch for in our examples. (You might want to re-read this section after going through one or more examples, so that it's less abstract.) First, for many problems, it's surprisingly easy to come up with one or even multiple greedy algorithms that might plausibly work. This is both a bug and a feature—greedy algorithms can be a great cure for writer's block when you're stuck on a problem, but it can be hard to assess which greedy approach is the most promising. Second, the running time analysis is often a one-liner. For example, many greedy algorithms boil down to sorting plus a linear amount of extra processing, in which case the running time of a good implementation would be O(n log n), where n is the number of objects to be sorted.[3] (Big-O notation suppresses constant factors, and different logarithmic functions differ by a constant factor, so there is no need to specify the base of the logarithm.)

Finally, it's often difficult to figure out whether a proposed greedy algorithm actually returns the correct output for every possible input. The fear is that one of the algorithm's irrevocable myopic decisions will come back to haunt you and, with full hindsight, be revealed as a terrible idea. And even when a greedy algorithm is correct, proving it can be difficult.[4]

Features and Bugs of the Greedy Paradigm

1. Easy to come up with one or more greedy algorithms.

2. Easy to analyze the running time.

3. Hard to establish correctness.

One of the reasons why it can be hard to prove the correctness of greedy algorithms is that most such algorithms are not correct, meaning there exist inputs for which the algorithm fails to produce the desired output. If you remember only one thing about greedy algorithms, it should be this.

Warning

Most greedy algorithms are not always correct.

This point is especially difficult to accept for clever greedy algorithms that you invented yourself. You might believe, in your heart of hearts, that your natural greedy algorithm must always solve the problem correctly. More often than not, this belief is unfounded.[5]

[1] To investigate formal definitions of greedy algorithms, start with the paper "(Incremental) Priority Algorithms," by Allan Borodin, Morten N. Nielsen, and Charles Rackoff (Algorithmica, 2003).

[2] Readers of Part 2 have already seen a greedy algorithm, namely Dijkstra's shortest-path algorithm. That algorithm iteratively computes the shortest-path distances from a starting vertex s to every other vertex of a graph. In each iteration, the algorithm irrevocably and myopically commits to an estimate of the shortest-path distance to one additional vertex, never revisiting the decision. In graphs with only nonnegative edge lengths, everything works out in the end and all the shortest-path distance estimates are correct.

[3] For example, two O(n log n)-time sorting algorithms are MergeSort (see Chapter 1 in Part 1) and HeapSort (see Chapter 10 in Part 2). Alternatively, randomized QuickSort (see Chapter 5 of Part 1) has an average running time of O(n log n).

[4] Veterans of Part 1 know that all three themes are a big contrast to the divide-and-conquer paradigm. It's often tricky to come up with a good divide-and-conquer algorithm for a problem, and when you do, there's usually a "Eureka!" moment when you know that you've cracked the problem. Analyzing the running times of divide-and-conquer algorithms can be difficult, due to the tug-of-war between the forces of proliferating subproblems and shrinking work-per-subproblem. (All of Chapter 4 of Part 1 is devoted to this topic.) Finally, proofs of correctness for divide-and-conquer algorithms are usually straightforward inductions.
[5] A not-always-correct greedy algorithm can still serve as a super-fast heuristic for a problem, a point we'll return to in Part 4.
Now that my conscience is clear, let's look at some cherry-picked examples of problems that can be solved correctly with a judiciously designed greedy algorithm.

13.2 A Scheduling Problem

Our first case study concerns scheduling, in which the goal is to schedule tasks on one or more shared resources to optimize some objective. For example, a resource could represent a computer processor (with tasks corresponding to jobs), a classroom (with tasks corresponding to lectures), or your calendar for the day (with tasks corresponding to meetings).

13.2.1 The Setup

In scheduling, the tasks to be completed are usually called jobs, and jobs can have different characteristics. Suppose that each job j has a known length ℓ_j, which is the amount of time required to process the job (for example, the length of a lecture or meeting). Also, each job has a weight w_j, with higher weights corresponding to higher-priority jobs.

13.2.2 Completion Times

A schedule specifies an order in which to process the jobs. In a problem instance with n jobs, there are n! = n · (n−1) · (n−2) · · · 2 · 1 different schedules. That's a lot of schedules! Which one should we prefer?

Next, we need to define an objective function that assigns a numerical score to every schedule and quantifies what we want. First, a preliminary definition:

Completion Times

The completion time C_j(σ) of a job j in a schedule σ is the sum of the lengths of the jobs preceding j in σ, plus the length of j itself.

In other words, a job's completion time in a schedule is the total time that elapses before the job has been fully processed.
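The definition amounts to a running sum of job lengths. As a quick sketch (the function name and the use of 0-indexed Python lists are mine, not the book's), completion times for a given processing order can be computed in a single pass:

```python
from itertools import accumulate

def completion_times(lengths):
    """Completion time of each job = lengths of all earlier jobs in the
    schedule, plus its own length; i.e., a prefix sum of the lengths."""
    return list(accumulate(lengths))

# Jobs of lengths 2, 1, and 4, processed in that order, finish at 2, 3, and 7.
print(completion_times([2, 1, 4]))  # → [2, 3, 7]
```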
Quiz 13.1

Consider a problem instance that has three jobs with ℓ_1 = 1, ℓ_2 = 2, and ℓ_3 = 3, and suppose they are scheduled in this order (with job 1 first). What are the completion times of the three jobs in this schedule? (The job weights are irrelevant for this question, so we have not specified them.)

a) 1, 2, and 3

b) 3, 5, and 6

c) 1, 3, and 6

d) 1, 4, and 6

(See Section 13.2.4 for the solution and discussion.)

13.2.3 Objective Function

What makes for a good schedule? We'd like jobs' completion times to be small, but trade-offs between jobs are inevitable—in any schedule, jobs scheduled early will have short completion times while those scheduled toward the end will have long completion times. One way to make trade-offs between the jobs is to minimize the sum of weighted completion times. In math, this objective function translates to

    min_σ  Σ_{j=1}^{n} w_j · C_j(σ),        (13.1)

where the minimization is over all n! possible schedules σ, and C_j(σ) denotes job j's completion time in the schedule σ. This is equivalent to minimizing the weighted average of the jobs' completion times, with the averaging weights proportional to the w_j's.

For example, consider the three jobs in Quiz 13.1 and suppose their weights are w_1 = 3, w_2 = 2, and w_3 = 1. If we schedule the first job first, the second job second, and the third job third, the sum of the weighted completion times is

    3 · 1  (job #1)  +  2 · 3  (job #2)  +  1 · 6  (job #3)  =  15.
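For tiny instances, objective (13.1) can be checked by brute force. A minimal sketch (the helper names are mine) that evaluates the objective for any schedule and tries all n! orders:

```python
from itertools import accumulate, permutations

def weighted_completion_sum(lengths, weights, order):
    """Objective (13.1) for one schedule: sum of w_j * C_j(sigma)."""
    finish = accumulate(lengths[j] for j in order)  # completion times, in order
    return sum(weights[j] * c for j, c in zip(order, finish))

def best_schedule(lengths, weights):
    """Exhaustive search over all n! schedules; feasible only for tiny n."""
    return min(permutations(range(len(lengths))),
               key=lambda order: weighted_completion_sum(lengths, weights, order))

# The three jobs above (lengths 1, 2, 3 and weights 3, 2, 1): the given
# order is optimal, with objective value 15.
lengths, weights = [1, 2, 3], [3, 2, 1]
order = best_schedule(lengths, weights)
print(order, weighted_completion_sum(lengths, weights, order))  # → (0, 1, 2) 15
```

Exhaustive search like this is useful only as a sanity check; the point of the chapter is to do much better.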
By checking all 3! = 6 possible schedules, you can verify that this is the schedule that minimizes the sum of weighted completion times. How can we solve this problem in general, given as input an arbitrary set of job lengths and weights?

Problem: Minimizing the Sum of Weighted Completion Times

Input: A set of n jobs with positive lengths ℓ_1, ℓ_2, . . . , ℓ_n and positive weights w_1, w_2, . . . , w_n.

Output: A job sequence that minimizes the sum of weighted completion times (13.1).

With n! different schedules, computing the best one by exhaustive search is out of the question for all but the tiniest instances. We need a smarter algorithm.[6]

[6] For example, n! is bigger than 3.6 million when n = 10, bigger than 2.4 quintillion when n = 20, and bigger than the estimated number of atoms in the known universe when n ≥ 60. Thus no conceivable improvement in computer technology would transmute exhaustive search into a useful algorithm.

13.2.4 Solution to Quiz 13.1

Correct answer: (c). We can visualize a schedule by stacking the jobs on top of one another, with time increasing from bottom to top (Figure 13.1). The completion time of a job is the time corresponding to its topmost edge. For the first job, its completion time is just its length, which is 1. The second job must wait for the first job to complete, so its completion time is the sum of the lengths of the first two jobs, which is 3. The third job doesn't even start until time 3, and then it takes 3 more time units to complete, so its completion time is 6.

[Figure 13.1: The completion times of the three jobs are 1, 3, and 6.]

13.3 Developing a Greedy Algorithm

Greedy algorithms seem like a good fit for the problem of scheduling jobs to minimize the weighted sum of completion times. The output has an iterative structure, with jobs processed one by one. Why not use a greedy algorithm that iteratively decides which job should go next?

The first step of our plan is to solve two special cases of the general problem. Our solutions to these will suggest what a greedy algorithm might look like in the general case. We'll then narrow the field to a single candidate algorithm and prove that this candidate correctly solves the problem. The process by which we arrive at this algorithm is more important to remember than the algorithm itself; it's a repeatable process that you can use in your own applications.

13.3.1 Two Special Cases

Let's think positive and posit that there actually is a correct greedy algorithm for the problem of minimizing the weighted sum of completion times. What would it look like? For starters, what if you knew that all the jobs had the same length (but possibly different weights)? What if they all had the same weight (but possibly different lengths)?

Quiz 13.2

(1) If all job lengths are identical, should we schedule smaller- or larger-weight jobs earlier?

(2) If all job weights are identical, should we schedule shorter or longer jobs earlier?
a) larger/shorter

b) smaller/shorter

c) larger/longer

d) smaller/longer

(See Section 13.3.3 for the solution and discussion.)

13.3.2 Dueling Greedy Algorithms

In the general case, jobs can have different weights and different lengths. Whenever our two rules of thumb—to prefer shorter jobs and higher-weight jobs—luckily coincide for a pair of jobs, we know which one to schedule first (the shorter, higher-weight one). But what if the two rules give conflicting advice? What should we do with one short low-weight job and one long high-weight job?

What's the simplest greedy algorithm that might work? Each job has two parameters, and the algorithm must look at both. The best-case scenario would be to come up with a formula that compiles each job's length and weight into a single score, so that scheduling jobs from highest to lowest score is guaranteed to minimize the sum of weighted completion times. If such a formula exists, our two special cases imply that it must have two properties: (i) holding the length fixed, it should be increasing in the job's weight; and (ii) holding the weight fixed, it should be decreasing in the job's length. (Remember, higher scores are better.) Take a minute to brainstorm some formulas that have both of these properties.

* * * * * * * * * * *

Perhaps the simplest function that is increasing in weight and decreasing in length is the difference between the two:

    proposal #1 for score of job j:  w_j − ℓ_j.

This score might be negative, but that poses no obstacle to sequencing the jobs from highest to lowest score.
There are plenty of other options. For example, the ratio of the two parameters is another candidate:

    proposal #2 for score of job j:  w_j / ℓ_j.

These two scoring functions lead to two different greedy algorithms.

GreedyDiff

Schedule the jobs in decreasing order of w_j − ℓ_j (breaking ties arbitrarily).

GreedyRatio

Schedule the jobs in decreasing order of w_j / ℓ_j (breaking ties arbitrarily).

Thus, already, our first case study illustrates the first theme of the greedy paradigm (Section 13.1.2): It is often easy to propose multiple competing greedy algorithms for a problem.

Which of the two algorithms, if any, is correct? A quick way to rule out one of them is to find an instance in which the two algorithms output different schedules, with different objective function values. For whichever algorithm fares worse in this example, we can conclude that it is not always optimal.

Both algorithms do the right thing in our two special cases, with equal-weight or equal-length jobs. The simplest possible example for ruling out one of them would be a problem instance with two jobs, having different weights and lengths, such that the two algorithms schedule the jobs in opposite orders. That is, we seek two jobs whose ordering by difference is the opposite of their ordering by ratio. One simple example is:

              Job #1     Job #2
    Length    ℓ_1 = 5    ℓ_2 = 2
    Weight    w_1 = 3    w_2 = 1

The first job has the larger ratio (3/5 vs. 1/2) but the smaller (more negative) difference (−2 vs. −1). Thus the GreedyDiff algorithm schedules the second job first, while GreedyRatio does the opposite.
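Both proposals are one-line sorts. A sketch (the function names and 0-indexed job labels are mine), applied to the two-job instance above:

```python
def greedy_diff(lengths, weights):
    """Schedule jobs in decreasing order of weight - length."""
    jobs = range(len(lengths))
    return sorted(jobs, key=lambda j: weights[j] - lengths[j], reverse=True)

def greedy_ratio(lengths, weights):
    """Schedule jobs in decreasing order of weight / length."""
    jobs = range(len(lengths))
    return sorted(jobs, key=lambda j: weights[j] / lengths[j], reverse=True)

# Lengths 5, 2 and weights 3, 1 (jobs indexed 0 and 1).
lengths, weights = [5, 2], [3, 1]
print(greedy_diff(lengths, weights))   # → [1, 0]: job #2 first (scores -2 vs. -1)
print(greedy_ratio(lengths, weights))  # → [0, 1]: job #1 first (scores 3/5 vs. 1/2)
```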
Quiz 13.3

What is the sum of weighted completion times in the schedules output by the GreedyDiff and GreedyRatio algorithms, respectively?

a) 22 and 23

b) 23 and 22

c) 17 and 17

d) 17 and 11

(See Section 13.3.3 for the solution and discussion.)

We've made progress by ruling out the GreedyDiff algorithm from further consideration. However, the outcome of Quiz 13.3 does not immediately imply that the GreedyRatio algorithm is always optimal. For all we know, there are other cases in which the algorithm outputs a suboptimal schedule. You should always be skeptical about an algorithm that does not come with a proof of correctness, even if the algorithm does the right thing in some toy examples, and extra-skeptical of greedy algorithms. In our case, the GreedyRatio algorithm is, in fact, guaranteed to minimize the sum of weighted completion times.

Theorem 13.1 (Correctness of GreedyRatio) For every set of positive job weights w_1, w_2, . . . , w_n and positive job lengths ℓ_1, ℓ_2, . . . , ℓ_n, the GreedyRatio algorithm outputs a schedule with the minimum-possible sum of weighted completion times.

This assertion is not obvious and you should not trust it until I supply you with a proof. Consistent with the third theme of the greedy paradigm (Section 13.1.2), this proof occupies the entire next section.

On Lemmas, Theorems, and the Like

In mathematical writing, the most important technical statements are labeled theorems. A lemma is a technical statement that assists with the proof of a theorem (much as a subroutine assists with the implementation of a larger program). A corollary is a statement that follows immediately from an already-proven result, such as a special case of a theorem. We use the term proposition for stand-alone technical statements that are not particularly important in their own right.

The remaining theme of the greedy paradigm is the ease of running time analyses (Section 13.1.2). That's certainly the case here. All the GreedyRatio algorithm does is sort the jobs by ratio, which requires O(n log n) time, where n is the number of jobs in the input (see footnote 3).

13.3.3 Solutions to Quizzes 13.2–13.3

Solution to Quiz 13.2

Correct answer: (a). First suppose that all n jobs have the same length, say length 1. Then, every schedule has exactly the same set of completion times—{1, 2, 3, . . . , n}—and the only question is which job gets which completion time. Our semantics for job weights certainly suggests that the higher-weight jobs should receive the smaller completion times, and this is in fact the case. For example, you wouldn't want to schedule a job with weight 10 third (with completion time 3) and one with weight 20 fifth (with completion time 5); you'd be better off exchanging the positions of these two jobs, which would decrease the sum of weighted completion times by 20 (as you should check).

The second case, in which all jobs have equal weights, is a little more subtle. Here, you want to favor shorter jobs. For example, consider two unit-weight jobs with lengths 1 and 2. If you schedule the shorter job first, the completion times are 1 and 3, for a total of 4. In the opposite order, the completion times are 2 and 3, for an inferior total of 5. In general, the job scheduled first contributes to the completion times of all the jobs, as all jobs must wait for the first one to finish. All else being equal, scheduling the shortest job first minimizes this negative impact.
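The unit-weight calculation in this solution is easy to replay in code. A quick sketch (the helper name is mine), with weights listed in the same order as the lengths:

```python
from itertools import accumulate

def objective(lengths, weights):
    """Sum of weighted completion times for jobs processed in list order.
    lengths[i] and weights[i] describe the job in schedule position i."""
    return sum(w * c for w, c in zip(weights, accumulate(lengths)))

# Two unit-weight jobs of lengths 1 and 2:
# shorter-first yields 1*1 + 1*3 = 4; longer-first yields 1*2 + 1*3 = 5.
print(objective([1, 2], [1, 1]), objective([2, 1], [1, 1]))  # → 4 5
```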
The second job contributes