Algorithms: A Functional Programming Approach (Fethi A. Rabhi, Guy Lapalme)

Author: Fethi A. Rabhi, Guy Lapalme

Non-fiction

The design of algorithms for problem-solving lies at the heart of computer science. Concise yet authoritative, Algorithms - A Functional Programming Approach teaches the skills needed to master this essential subject. The authors challenge more traditional methods of teaching algorithms by using a functional programming context, with Haskell as the implementation language. This leads to smaller, clearer and more elegant programs which enable the programmer to understand the algorithm itself more quickly and to use that understanding to explore alternative solutions. Placing the emphasis on program development rather than the mathematical properties of algorithms, the book uses a succession of practical programming examples to develop in the reader problem-solving skills which can be easily transferred to other language paradigms.

📄 File Format: PDF
💾 File Size: 3.7 MB

📄 Text Preview (First 20 pages)


📄 Page 1
(This page has no text content)
📄 Page 2
Algorithms: a functional programming approach
Second edition

Fethi Rabhi
Guy Lapalme

Addison-Wesley

Harlow, England • Reading, Massachusetts • Menlo Park, California • New York
Don Mills, Ontario • Amsterdam • Bonn • Sydney • Singapore • Tokyo • Madrid
San Juan • Milan • Mexico City • Seoul • Taipei
📄 Page 3
© Pearson Education Limited 1999

Pearson Education Limited
Edinburgh Gate
Harlow
Essex CM20 2JE
England

and Associated Companies throughout the world.

The rights of Fethi Rabhi and Guy Lapalme to be identified as authors of this Work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1P 9HE.

The programs in this book have been included for their instructional value. They have been tested with care but are not guaranteed for any particular purpose. The publisher does not offer any warranties or representations nor does it accept any liabilities with respect to the programs.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Pearson Education Limited has made every attempt to supply trademark information about manufacturers and their products mentioned in this book. A list of the trademark designations and their owners appears on this page.

Cover designed by Designers & Partners, Oxford
Transferred to digital print on demand, 2006
First published 1999

ISBN 0201-59604-0

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloguing-in-Publication Data
Rabhi, Fethi
Algorithms : a functional approach / Fethi Rabhi, Guy Lapalme.
p. cm.
Includes bibliographical references and index.
ISBN 0-201-59604-0
1. Computer algorithms. 2. Functional programming languages. I. Lapalme, Guy. II. Title.
QA76.9.A43R34 1999
005.1-dc21
99-10106 CIP

Trademark notice
The following are trademarks or registered trademarks of their respective companies. Miranda is a trademark of Research Software Limited. UNIX is licensed through X/Open Company Ltd. Windows is a trademark of Microsoft Corporation.

Printed and bound by Antony Rowe Ltd, Eastbourne
📄 Page 4
Contents

Preface ix

1 Introduction 1
1.1 Algorithms 1
1.2 Functional languages 5
1.3 Bibliographical notes 10

2 Functional programming in Haskell 11
2.1 About the language 11
2.2 Equations and functions 12
2.3 Basic types and constructed types 15
2.4 Lists 18
2.5 Higher-order functional programming techniques 27
2.6 Algebraic types and polymorphism 33
2.7 Arrays 40
2.8 Type classes and class methods 43
Exercises 48
2.9 Bibliographical notes 51

3 The efficiency of functional programs 52
3.1 Reduction order 52
3.2 Analyzing the efficiency of programs 56
3.3 Program transformation 65
3.4 Conclusion 69
Exercises 69
3.5 Bibliographical notes 70
📄 Page 5
4 Concrete data types 71
4.1 Lists 71
4.2 Trees 77
4.3 Arrays 82
Exercises 84
4.4 Bibliographical notes 85

5 Abstract data types 86
5.1 Introduction 86
5.2 Stacks 87
5.3 Queues 90
5.4 Priority queues 92
5.5 Sets 93
5.6 Tables 97
5.7 Binary search trees 99
5.8 Heaps 103
5.9 AVL trees† 107
Exercises 112
5.10 Bibliographical notes 113

6 Sorting 114
6.1 Introduction 114
6.2 Comparison-based sorting 115
6.3 Basic sorting algorithms 118
6.4 Tree-based sorting 122
6.5 Efficiency of comparison-based algorithms† 123
6.6 Representation-based sorting 129
Exercises 132
6.7 Bibliographical notes 133

7 Graph algorithms 135
7.1 Definitions and terminology 135
7.2 The graph ADT 136
7.3 Depth-first and breadth-first search 140
7.4 Topological sort 144
7.5 Minimum spanning tree 146
7.6 Depth-first search trees and forests† 149
7.7 Conclusion 152
Exercises 153
7.8 Bibliographical notes 154

8 Top-down design techniques 155
8.1 Divide-and-conquer 155
8.2 Backtracking search 159
8.3 Priority-first search 167
8.4 Greedy algorithms 170
📄 Page 6
Exercises 174
8.5 Bibliographical notes 176

9 Dynamic programming 178
9.1 Introduction 178
9.2 The dynamic programming higher-order function 180
9.3 Chained matrix multiplications 181
9.4 Optimal binary search trees 184
9.5 All-pairs shortest path 188
9.6 The traveling salesperson 191
9.7 Conclusion 194
Exercises 194
9.8 Bibliographical notes 196

10 Advanced topics 197
10.1 Process networks 197
10.2 Monads 202
10.3 Parallel algorithms 207
10.4 Bibliographical notes 216

Bibliography 218

A Haskell implementations 227

B Mathematical background 228
B.1 Notation 228
B.2 Logarithms 228
B.3 Summation formulas 229
B.4 Solving recurrence equations 229

Index 230
📄 Page 7
(This page has no text content)
📄 Page 8
Preface

Purpose/goals

This book is primarily an introduction to the design of algorithms for problem solving. Its prominent feature is to use a functional language as an implementation language. Because of the high level of abstraction provided, functional programs tend to be shorter, clearer and faster to develop than their imperative counterparts. This contributes to a better understanding of the algorithm being implemented and makes it possible to explore alternative solutions more rapidly. Although we believe that this is true for a wide range of algorithms, the book also discusses the limitations of functional style in dealing with certain problems.

This book is not about a particular functional language. The material is arranged in the same way as in a classic algorithms book: it is a succession of chapters on 'traditional' topics such as sorting and searching algorithms. In addition, the choice of a functional language permits the introduction of algorithm design strategies such as divide-and-conquer and dynamic programming by means of higher-order functions. New concepts such as process networks and parallel algorithms can also be introduced in a non-conventional way. Due to this extra material, topics such as lower bound proofs and NP-completeness theory are not covered.

The emphasis of the book is on intuitive and pragmatic program development techniques. This is only a small step forward, as we believe that functional programming provides a link between the study of algorithms and the study of correctness proofs and systematic derivation from formal specifications. We are hoping that more publications covering this area will emerge in the near future.

Another aim of this book is to provide a useful reference of functional programs related to a variety of problems. Programmers will be able to choose (or adapt) a functional program that is relevant to their problem as they already do with other languages such as Fortran, Pascal or C. We are also hoping that this book will contribute towards making functional languages more viable as a programming tool.
📄 Page 9
Approach

Each chapter addresses a specific problem area by describing the associated algorithms. Each algorithm is specified using a trivial (but sometimes inefficient) functional program. In some cases, a more efficient (and often more obscure) version can be derived. Sections containing non-essential material are marked with a (†) symbol and can be safely skipped. Exercises are included at the end of each chapter. For an excellent companion book which provides further details about most of the algorithms presented in this book, we recommend Brassard and Bratley's Fundamentals of Algorithmics [19].

Another feature of the book is the use of Haskell, a newly proposed functional language standard, named after the logician Haskell B. Curry. Haskell is purely functional, uses lazy evaluation and incorporates most of the features of modern functional languages. It is free to use, has numerous implementations on almost all platforms, and is available from sites worldwide. Appendix A provides some information about these implementations. The Haskell source code of the programs given in this book and the answers to selected exercises are available from the following WWW site:

http://www.iro.umontreal.ca/~lapalme/Algorithms-functional.html

Target audience

This book is primarily targeted at undergraduate students in computer science studying algorithms. Within a traditional course that uses imperative languages, only a fraction of the known algorithms can be completely implemented. By adopting a functional language, more prototypes can be developed; this contributes to a better understanding of the techniques involved, since implementing a particular algorithm requires a complete understanding of all the details related to this algorithm. This approach also allows experimentation with certain types of algorithms (such as process networks) which would be too difficult to program in an imperative language.

The prerequisites are:

• Students must have had some programming experience in any language, preferably a functional language. Some notion of how functional languages are implemented is also desirable.
• A minimum mathematical background (mostly summations) is required for the analysis of algorithms, although these sections can safely be omitted by those who are not interested in complexity or efficiency analysis.

The book can also be used to support an advanced functional programming course in a variety of ways: (1) by presenting examples of implementing and using data structures such as queues, trees and graphs, (2) by introducing program design techniques, (3) by increasing the awareness concerning time and space efficiency, and (4) by presenting some advanced topics such as process networks and parallel algorithms. As mentioned previously, the book is also a useful source of functional programs which can be regarded as executable specifications of the algorithms concerned.
📄 Page 10
Overview

The book is roughly divided into three parts. The first part contains all the background information needed before embarking on the main body of the book. Chapter 1 presents the basic concept of an algorithm and highlights key features of functional languages with a brief history of their evolution and role in recent years. Chapter 2 contains an introduction to the functional language Haskell. Chapter 3 is a review of evaluation mechanisms such as strict and lazy evaluation, with an overview of the graph reduction model. It defines what is meant by 'efficiency' and presents some simple transformations that increase the efficiency of programs. Chapter 4 examines simple concrete data structures (lists, trees, and arrays) and the efficiency of the operations used to manipulate these data structures.

The second part of the book considers some basic algorithms for a variety of purposes. Chapter 5 examines abstract data types (ADTs), using them to represent for example priority queues and binary search trees. Chapter 6 presents sorting techniques such as quicksort and heapsort, and Chapter 7 discusses graphs and associated algorithms such as computing the minimum spanning tree and depth-first search.

The last part addresses some high-level methodologies used in the design of algorithms. Chapter 8 describes top-down design techniques such as divide-and-conquer and backtracking search. Chapter 9 explains some inefficiencies in top-down algorithms and how they can be solved by adopting a bottom-up approach, namely dynamic programming. Finally, Chapter 10 examines some advanced topics such as process networks, monads, and parallel algorithms through some extensions to the Haskell language.

Acknowledgements

The idea of the book came while the first author was a visiting lecturer at Allegheny College, so our special thanks are due to Mike Holcombe, Bob Cupper and Ben Haytock for making it possible. We would like to thank Aster Thien, Marina Mahmood, Chris Meen, Len Bottaci and Rob Turner for their contribution to some of the programs presented in this book. We also thank Guy St-Denis for his many constructive suggestions on a preliminary version of this book, our editor Michael Strang for his continued support, and the anonymous reviewers who found many inaccuracies and gave constructive ideas on presentation. Finally, we are grateful to the (enthusiastic!) Haskell community for their efforts in designing a standard functional language and for providing the scientific community with free implementations.

Fethi Rabhi (F.A.Rabhi@dcs.hull.ac.uk)
Guy Lapalme (lapalme@iro.umontreal.ca)
📄 Page 11
(This page has no text content)
📄 Page 12
Introduction

1.1 Algorithms 1
1.2 Functional languages 5
1.3 Bibliographical notes 10

This chapter provides some background information for reading and assimilating the material presented in the rest of the book. It introduces some basic mathematical and algorithmic concepts and the essential features of functional programming languages.

1.1 Algorithms

First, we give a brief introduction to the concept of an algorithm, how algorithms are implemented, and what is meant by analyzing an algorithm.

1.1.1 What is an algorithm?

The word algorithm comes from the name of the famous Persian mathematician Mohammed al-Khowarizmi, who is credited with inventing the rules for adding, subtracting, multiplying and dividing decimal numbers. An informal definition of an algorithm would state that it is 'a step by step procedure for solving a problem or accomplishing some end'. Harel [49] defines it as 'an abstract recipe, prescribing a process which may be carried out by a human, a computer or by other means'. Although this can mean that algorithms may be applied in any context, our main interest is in how algorithms are implemented on a computer.

Euclid's algorithm for finding the greatest common divisor (gcd) of two numbers is probably one of the oldest and most frequently used examples of an algorithm. Given two positive integers m and n, the gcd, which is defined as the largest positive integer that evenly divides both m and n, is obtained as follows:
📄 Page 13
1. Use two counters x and y. Initialize the value of x with m and the value of y with n.
2. If x > y, then the new value of x becomes x - y.
3. If x < y, then the new value of y becomes y - x.
4. Repeat the operations 2 and 3 until the value of x equals the value of y.

The final value of x (or y, since it is the same) is the gcd of the two original numbers m and n. This algorithm, which can be traced back to well before the time of the ancient Greeks, can also be expressed using division and modulus operations to avoid repeated subtractions.

Since this book is dedicated to the study of algorithms, it contains several examples of algorithms classified either according to the problem they are solving or according to the design strategy that produced them in the first place.

1.1.2 Implementing algorithms

An algorithm is only useful if it can be translated into instructions that are executable. In other words, algorithms need to be implemented using a computer language. Several language paradigms, such as imperative, functional, and logic paradigms, are available to the algorithm implementor. The best-known one is the imperative (or procedural) paradigm which forms the basis for most common languages such as Pascal and C. Translating an algorithm into an imperative language involves building the solution using data structures such as variables and arrays and joining them with control structures such as conditional and loop statements. The main advantage of using imperative languages is that they have been designed with the computer architecture in mind, so the way the algorithm is executed by the computer is fairly straightforward.

If another language paradigm is used, then the algorithm must be formulated using the constructs offered by this new paradigm. These constructs are then turned into machine code by the compiler for this language. For example, the functional language paradigm offers recursion as the main control structure, and non-updatable lists as one of the main data structures. The use of these higher level constructs eases the implementation of an algorithm but can make it harder to reason about the efficiency of the program.

1.1.3 The analysis of algorithms

The main motivation for using computers to implement algorithms is that of speed. As millions of instructions can be executed in only a few seconds, some problems that are too large to be solved by hand can be easily tackled. Then, an important question arises: given several algorithms for solving the same problem, which one should be selected for implementation? The answer is that the most efficient one should be used, that is, the one that takes less time to execute. Given that algorithms would have to work on large problems, the payoff for designing a 'good' algorithm can be significant at execution time.

At this stage, we must decide on the criteria that should be used to measure the efficiency of an algorithm. The exact execution time could constitute one possible
📄 Page 14
measure but this approach suffers from several serious defects. First, this time varies according to the computer being used. Secondly, it depends on how various language constructs are translated into machine code by the compiler. In addition, it is often desirable to compare different algorithms before implementing them so as to avoid developing code unnecessarily.

The usual technique used in analyzing an algorithm is to isolate an operation fundamental to the problem under study. This is called the complexity measure. Then, a walk through the algorithm reveals how many times the operation that corresponds to our measure is invoked. The result of the analysis is often expressed as a function of the size of the input. The size of the input is often related to the size of the data structure used in the algorithm.

For example, consider computing the maximum value in a list of n numbers. We assume that the complexity measure is a comparison operation between two numbers. A simple algorithm for this problem initializes the maximum value with the value of the first number in the list, and compares this maximum with every remaining number, replacing the current maximum value with any value found to be greater. The number of comparisons made by this algorithm is equal to the length of the list minus one, since the first number is not compared. Therefore, the time complexity of this algorithm can be described by a function that takes the size n of the input list as an argument. This function is defined as:

    Tmaximum(n) = n - 1

Chapter 3 is devoted to explaining in more detail the mechanics of analyzing algorithms expressed in a functional language.

Worst-case, best-case, and average-case analysis

Even with the same input size, an algorithm might perform a different amount of work, particularly if the work performed is dependent on data values. For example, consider the problem of searching for an item sequentially in a list of n items. We assume that the complexity measure is a comparison operation between two items. The solution might take from one step (if the item is found at the beginning of the list) to n steps (if the item is found at the end of the list).

The worst-case complexity is defined as the maximum work performed by an algorithm. Similarly, the best-case complexity represents the minimum amount of work and the average-case complexity represents the work done on average. For the sequential search example, the worst-case and the best-case complexities are respectively:

    TsearchWC(n) = n
    TsearchBC(n) = 1

In the average case, the complexity is measured in terms of probabilities. It is defined as the sum of the work performed times the probability for all possible situations. In the previous example, if we assume that the searched item can be in any position in the list with an equal probability of 1/n, the average-case complexity is:

    TsearchAC(n) = Σ(i=1..n) (1/n) × i = (1/n) Σ(i=1..n) i = (1/n) × n(n+1)/2 = (n+1)/2
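To make the counting concrete, here is a small Haskell sketch of the two algorithms just described (the names maxList and searchSeq are ours, not the book's). On a list of length n, maxList performs exactly n - 1 comparisons, and searchSeq performs between 1 and n equality tests:

    -- Maximum of a non-empty list: start with the first number and compare
    -- it against each remaining one, keeping the larger (n-1 comparisons).
    maxList :: Ord a => [a] -> a
    maxList (x:xs) = go x xs
      where
        go m []     = m
        go m (y:ys) = go (if y > m then y else m) ys

    -- Sequential search: 1 comparison in the best case, n in the worst case.
    searchSeq :: Eq a => a -> [a] -> Bool
    searchSeq _ []     = False
    searchSeq e (x:xs) = e == x || searchSeq e xs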
📄 Page 15
The average-case analysis is harder to carry out since we need to know the probability of success or failure of all the tests in the algorithm. The worst-case analysis is usually easier to perform, so it is the one carried out for most of the algorithms in the book.

Besides choosing measures that provide an idea about time complexity, we could also choose measures that give other information concerning the behavior of the program, such as the space complexity. Given a suitable measure, such as the number of list cells used during the execution, the space analysis can proceed in exactly the same way as the time analysis.

Asymptotic growth of complexity functions

We have just seen that the analysis of algorithms only provides an approximation of the run-time behavior of a program after it has been implemented on a computer. This approximation can be used to determine which algorithm is more efficient. Supposing that two algorithms which solve a problem of size n are found to have the following complexities:

    T1(n) = 2n
    T2(n) = 4n

It is tempting to say that the first algorithm is more efficient than the second one. However, remember that we are only counting the occurrences of a specific operation while ignoring the rest of the program, so a closer approximation would be T1(n) = 2cn where c is a constant that represents the remaining operations. Similarly, the second algorithm's equation is better approximated by T2(n) = 4c'n where c' is another constant. In this case, it is no longer possible to take it for granted that the first algorithm is better than the second, since it all depends on the constants c and c'.

Consider now the complexity of a third algorithm:

    T3(n) = 0.001 n²

We can see that for small values of n, the third algorithm is more efficient than the previous two and that for large values of n, it is less efficient. If we add a constant as in the previous two cases, we can see that this algorithm will still be less efficient than the previous two for large values of n.

Since the main motivation for comparing algorithms is to test their behavior on large problems, the comparison between algorithms is going to be on the asymptotic rate of growth of the complexity function. The growth of functions is therefore determined by ignoring constant factors and small input sizes. What matters most in algorithms 1 and 2 is the factor n, so it is said that they belong to the same class, namely O(n). The notation g(n) ∈ O(f(n)), read 'g is in big Oh of f', provides an upper limit on the growth of g(n), which is no more than the growth of f(n). Formally, g(n) ∈ O(f(n)) means that there exists a positive integer n0 and some positive real constant c such that g(n) ≤ c f(n) for all n > n0. The three algorithms described previously can now be classified according to their asymptotic growth rate:

    T1(n) ∈ O(n)
📄 Page 16
    T2(n) ∈ O(n)
    T3(n) ∈ O(n²)

According to our definition, we also have T1(n) ∈ O(n²) and T2(n) ∈ O(n²). Other more precise notations (such as Θ) exist but the 'big Oh' notation is sufficient for our purposes. Table 1.1 shows the most common asymptotic functions in increasing growth rate, with an example and how much time it would take to run on a hypothetical computer. It can be seen that the exponential and factorial growth rates for a large input size are well beyond the capacity of any computer and even beyond the age of the universe!

Table 1.1:

    O(log n)   logarithmic   2 log n + 3      23 microseconds
    O(n)       linear        … + 50           0.55 milliseconds
    O(n²)      quadratic     2n² - 3n         2 seconds
    O(n³)      cubic         n³ - n² + 2n     16 minutes
    O(2ⁿ)      exponential   2ⁿ               10⁸⁶ centuries
    O(n!)      factorial     n!               10²⁵⁵² centuries

Various mathematical concepts and formulas for analyzing the complexity of algorithms are listed in Appendix B.

1.2 Functional languages

This section introduces the basic concepts behind functional languages, most of which will be considered in more detail in the next chapter. It also presents a brief overview of the theoretical foundations and evolution history of functional languages.

1.2.1 Functions, λ-calculus and induction

A function can be loosely described as a way of computing an output value from an input value. Functions have already been used earlier, for example to define complexity growth rates, but it is important to look closer at the mathematical foundations of the notion of a function.

A function can be represented by a graph depicting its input-output relation. For example, a function for computing the double of a natural number would be described 'extensionally' by pairs where the first element is the input value and the second element the corresponding result:

    (0, 0), (1, 2), (2, 4), (3, 6) ...

But from the algorithmic point of view, this representation of a function is not very useful because we are often more interested in determining how to compute the output
📄 Page 17
value from a given input than in looking up an output value in a set of precomputed pairs. Some ways of computing an output are more efficient than others. For example, we could (at least conceptually) sort a list of numbers in increasing order by permuting these numbers and then checking that every number in the new list is less than or equal to the preceding one. Unfortunately, this algorithm is not very efficient and is not of any practical value. Programming seeks to define efficient means for finding the output value, and functional programming is no different in this respect. We are more interested in an 'intensional' definition of a function.

Mathematicians have defined the λ-calculus to express rules of correspondence between input and output values. The λ-calculus gives a notation for defining a function by λx.M, where M is an expression involving x. The value computed by such a function using an input value is given by the 'application' of that function on that value. For example, (λx.M)Y will compute the expression M after having replaced all occurrences of x in M by Y. In the following equations we can see the results of some applications:

    (λx.x * x)(4 + 2)  →  (4 + 2) * (4 + 2)
    (λx.x x)(λy.y)     →  (λy.y)(λy.y)  →  (λy.y)

More details about this formalism can be found in [7, 48]. This seemingly simple idea is very powerful and it has been shown that every computable function can be defined using the λ-calculus. This model has long been used by computer scientists as a theoretical basis for computer languages. Unfortunately, as the λ-calculus models mathematical functions, it does not take into account the modifications of variables. Allowing such modifications was one of the fundamental ideas used by the von Neumann model for computers. It was not until the mid-1970s that efficient implementations of purely functional languages were developed so that they could then show their usefulness on real problems.

Mathematical induction is another powerful tool that is at the root of functional programming. It is used in mathematics for proving properties by building upon a simple case, the base case, for which the property is easily proved. The next step is proving that if the property holds for some n, then it is also true for n + 1. For example, to prove the first summation formula of Appendix B.3 (the sum of the first n integers is n(n+1)/2), we can check that the property is true for n = 1 because 1×2/2 = 1. We then assume that the property is true for some n ≥ 1, that is

    1 + 2 + ... + n = n(n+1)/2        (*)

We now try to prove that the property is true for n + 1, which corresponds to the following sum:

    1 + 2 + ... + n + (n + 1)

Since the sum of the first n terms is given by our assumption (*), the expression becomes:

    1 + 2 + ... + n + (n + 1) = n(n+1)/2 + (n + 1)
                              = (n² + n + 2n + 2) / 2
📄 Page 18
                              = (n² + 3n + 2) / 2
                              = (n + 1)(n + 2) / 2

which is the formula to be proved for n + 1. As this property is true for 1 and for any n + 1 given that it is true for n, it is true for any n ≥ 1. This is the simplest and least powerful form of mathematical induction. In computer science, algorithms are often defined recursively: starting with a problem dealing with n values, we solve the problem using the solution for n - 1 and so on until we arrive at a simple case for which it is not necessary to subdivide further. We can see that this process is quite similar to induction, which is often used for proving properties of algorithms such as the memory space used and the time it takes to execute.

1.2.2 Why functional languages?

Functional languages draw their name from the fact that their basic operations are defined using functions which are invoked using function applications and 'glued' together using function composition. Another interesting characteristic is that functions are 'first-class' objects, that is, they can be used as any other object: given as parameters, returned as results of other functions and stored in data structures. Using functions in this way gives a lot of flexibility to the programmer because of the fundamental properties of functions that make it possible to abstract or easily modularize the processing of data. Functions can also be composed to give rise to new functions. For example, the three following functions define respectively the factorial of n, the sum of the integers up to n and the sum of the squares of the integers up to n:

    fact n
      | n == 0 = 1
      | n > 0  = n * fact (n-1)

    sumInt n
      | n == 0 = 0
      | n > 0  = n + sumInt (n-1)

    sumSqr n
      | n == 0 = 0
      | n > 0  = n*n + sumSqr (n-1)

These functions are written in Haskell, the functional language we will use in this book, using recurrence equations (see Appendix B.4). As we will see later, these functions can also be defined in a simpler way, but here we choose to stay close to the standard mathematical definitions. If we look closely at these definitions, we can see they all rely on the same recursion principle: that is, define the base case and then define the value of the function for n in terms of a combination of n and the value of the function for n - 1. It would be interesting to abstract the recursion principle in terms of the base case and a function for combining values. It can be defined as follows:
📄 Page 19
    induction base comb n
      | n == 0 = base
      | n > 0  = comb n (induction base comb (n-1))

The first equation indicates that the base value is returned if n = 0, and the second equation states that the combination of values is done using a function having two parameters, the first one being the current value of n and the second being the value of the induction for n - 1. In our notation, function application is indicated by juxtaposition. With this framework we can unify our previous examples: fact uses 1 as a base case and multiplication for combining values, while sumInt uses 0 as base case and addition for combination. In Haskell, (*) denotes the standard multiplication function and (+) the addition function; we can thus define these functions using our induction framework like this:

    fact n   = induction 1 (*) n
    sumInt n = induction 0 (+) n

For the third function sumSqr, we need a function to add the square of the first argument to the second one; as this function is not predefined, we can define one, using the usual infix form for the addition and multiplication:

    f x y = x*x + y

and define sumSqr with

    sumSqr n = induction 0 f n

But as f would probably be needed only in this definition, we can use an anonymous function notation resembling the lambda calculus notation (λx y. x*x + y), which is written in Haskell as (\x y -> x*x + y). So sumSqr can be defined as

    sumSqr n = induction 0 (\x y -> x*x + y) n

This simple example has shown the power of expression of a functional language whose kernel language only consists of the definition and application of functions. Control structures like loops are defined using function application. In the rest of the book, we will show many examples of the power of abstraction of small functions which are combined and composed in order to build flexible modules.
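As an informal check (this interpreter session is ours, not from the book), loading the definitions above into GHCi and evaluating a few calls gives the expected values; note that induction base comb n combines the values n, n-1, ..., 1 with comb, so it behaves like foldr comb base [n, n-1 .. 1]:

    ghci> induction 1 (*) 5                   -- fact 5
    120
    ghci> induction 0 (+) 5                   -- sumInt 5
    15
    ghci> induction 0 (\x y -> x*x + y) 3     -- sumSqr 3 = 9 + 4 + 1
    14
    ghci> foldr (\x y -> x*x + y) 0 [3, 2 .. 1]
    14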
📄 Page 20
1.2.3 History and evolution

The first functional languages date from the 1960s, for example, Lisp [80, 81] and Iswim [78], but these languages allowed features such as assignments that made them less 'pure'. The interest in purely functional languages was renewed in 1978 by the Turing lecture of Backus [6], the father of Fortran. Functional languages are most often characterized by the features they lack, that is, no assignment and no explicit sequencing, rather than by the features they possess: functions as first-class objects and referential transparency. We will come back to these points later.

Much of the important work in purely functional languages originated in the United Kingdom: in 1976 David Turner introduced SASL (St-Andrews Static Language) [118], followed in 1982 by KRC (Kent Recursive Calculator) [117], by which programs were defined by recursive equations. As expressions were evaluated only when necessary (dubbed lazy evaluation), it was possible to specify infinite lists. He also introduced a new implementation method based on combinatory logic.

Another 'thread' was developed in Edinburgh by Milner and his group, who defined a new typed functional language, ML (Meta Language) [45, 85]. The typing was polymorphic in the sense that it was possible to define statically typed functions for many different types of arguments provided that they used the same algorithm; for example, it was possible to define a typed function for counting the number of elements in a list of any type of elements. Moreover, the types of the functions were inferred by the system. The user did not have to explicitly specify the types of functions; in fact, in the first versions, it was not even possible to do so. ML used an applicative style of evaluation as in Lisp, that is, parameters are fully evaluated before the function call is executed; this is in contrast with the normal order of evaluation, which evaluates a parameter only when needed in a function call. There were other similar languages which used the same kind of ideas but with a slightly different point of view: Hope [23] used polymorphic typing but the types were not inferred by the system; they had to be specified by the user.

In 1986, David Turner defined Miranda [105, 119], which merged these two trends into a 'polymorphically typed lazy evaluated language with recursive equations'. Following this development, other languages appeared in the same 'vein' but with slightly different features (such as Orwell [125]). In 1990, an international committee was formed to define a new language that was finally called Haskell in honour of Haskell B. Curry, a logician whose work forms the logical basis of much of the theory behind functional programming. Haskell mainly differs from Miranda in the following points (to which we will return later):

• it permits the overloading of function names, which is often called 'ad-hoc polymorphism';
• it presents a 'monadic' view of the input-output mechanism;
• it defines a notion of functional arrays that can be implemented efficiently while keeping the notion of referential transparency.

In February 1999, Haskell 98 was defined, which is the dialect we use in this book. In Appendix A, we give some indications on how to get hold of a Haskell implementation as well as the programs used in this book.

1.2.4 Advantages and disadvantages of functional languages

The main advantages of functional languages are summarized as follows:

• declarativeness of the expressions, which have unique values. The order of execution is not specified by the programmer; rather, it is left up to the system how to compute the values given the relations to be satisfied between the values. Functional languages can be seen as 'executable mathematics'; the notation was designed to be as close as possible to the mathematical way of writing.