Algorithms Notes for Professionals

GoalKicker.com – Free Programming Books

200+ pages of professional hints and tricks

Disclaimer: This is an unofficial free book created for educational purposes and is not affiliated with official Algorithms group(s) or company(s). All trademarks and registered trademarks are the property of their respective owners.
Contents

About
Chapter 1: Getting started with algorithms
  Section 1.1: A sample algorithmic problem
  Section 1.2: Getting Started with Simple Fizz Buzz Algorithm in Swift
Chapter 2: Algorithm Complexity
  Section 2.1: Big-Theta notation
  Section 2.2: Comparison of the asymptotic notations
  Section 2.3: Big-Omega Notation
Chapter 3: Graph
  Section 3.1: Storing Graphs (Adjacency Matrix)
  Section 3.2: Introduction To Graph Theory
  Section 3.3: Storing Graphs (Adjacency List)
  Section 3.4: Topological Sort
  Section 3.5: Detecting a cycle in a directed graph using Depth First Traversal
  Section 3.6: Thorup's algorithm
Chapter 4: Graph Traversals
  Section 4.1: Depth First Search traversal function
Chapter 5: Dijkstra's Algorithm
  Section 5.1: Dijkstra's Shortest Path Algorithm
Chapter 6: A* Pathfinding
  Section 6.1: Introduction to A*
  Section 6.2: A* Pathfinding through a maze with no obstacles
  Section 6.3: Solving 8-puzzle problem using A* algorithm
Chapter 7: A* Pathfinding Algorithm
  Section 7.1: Simple Example of A* Pathfinding: A maze with no obstacles
Chapter 8: Dynamic Programming
  Section 8.1: Edit Distance
  Section 8.2: Weighted Job Scheduling Algorithm
  Section 8.3: Longest Common Subsequence
  Section 8.4: Fibonacci Number
  Section 8.5: Longest Common Substring
Chapter 9: Kruskal's Algorithm
  Section 9.1: Optimal, disjoint-set based implementation
  Section 9.2: Simple, more detailed implementation
  Section 9.3: Simple, disjoint-set based implementation
  Section 9.4: Simple, high level implementation
Chapter 10: Greedy Algorithms
  Section 10.1: Huffman Coding
  Section 10.2: Activity Selection Problem
  Section 10.3: Change-making problem
Chapter 11: Applications of Greedy technique
  Section 11.1: Offline Caching
  Section 11.2: Ticket automat
  Section 11.3: Interval Scheduling
  Section 11.4: Minimizing Lateness
Chapter 12: Prim's Algorithm
  Section 12.1: Introduction To Prim's Algorithm
Chapter 13: Bellman–Ford Algorithm
  Section 13.1: Single Source Shortest Path Algorithm (Given there is a negative cycle in a graph)
  Section 13.2: Detecting Negative Cycle in a Graph
  Section 13.3: Why do we need to relax all the edges at most (V-1) times
Chapter 14: Line Algorithm
  Section 14.1: Bresenham Line Drawing Algorithm
Chapter 15: Floyd-Warshall Algorithm
  Section 15.1: All Pair Shortest Path Algorithm
Chapter 16: Catalan Number Algorithm
  Section 16.1: Catalan Number Algorithm Basic Information
Chapter 17: Multithreaded Algorithms
  Section 17.1: Square matrix multiplication multithread
  Section 17.2: Multiplication matrix vector multithread
  Section 17.3: merge-sort multithread
Chapter 18: Knuth Morris Pratt (KMP) Algorithm
  Section 18.1: KMP-Example
Chapter 19: Edit Distance Dynamic Algorithm
  Section 19.1: Minimum Edits required to convert string 1 to string 2
Chapter 20: Online algorithms
  Section 20.1: Paging (Online Caching)
Chapter 21: Big-O Notation
  Section 21.1: A Simple Loop
  Section 21.2: A Nested Loop
  Section 21.3: O(log n) types of Algorithms
  Section 21.4: An O(log n) example
Chapter 22: Sorting
  Section 22.1: Stability in Sorting
Chapter 23: Bubble Sort
  Section 23.1: Bubble Sort
  Section 23.2: Implementation in C & C++
  Section 23.3: Implementation in C#
  Section 23.4: Python Implementation
  Section 23.5: Implementation in Java
  Section 23.6: Implementation in Javascript
Chapter 24: Merge Sort
  Section 24.1: Merge Sort Basics
  Section 24.2: Merge Sort Implementation in Go
  Section 24.3: Merge Sort Implementation in C & C#
  Section 24.4: Merge Sort Implementation in Java
  Section 24.5: Merge Sort Implementation in Python
  Section 24.6: Bottom-up Java Implementation
Chapter 25: Insertion Sort
  Section 25.1: Haskell Implementation
Chapter 26: Bucket Sort
  Section 26.1: C# Implementation
Chapter 27: Quicksort
  Section 27.1: Quicksort Basics
  Section 27.2: Quicksort in Python
  Section 27.3: Lomuto partition java implementation
Chapter 28: Counting Sort
  Section 28.1: Counting Sort Basic Information
  Section 28.2: Pseudocode Implementation
Chapter 29: Heap Sort
  Section 29.1: C# Implementation
  Section 29.2: Heap Sort Basic Information
Chapter 30: Cycle Sort
  Section 30.1: Pseudocode Implementation
Chapter 31: Odd-Even Sort
  Section 31.1: Odd-Even Sort Basic Information
Chapter 32: Selection Sort
  Section 32.1: Elixir Implementation
  Section 32.2: Selection Sort Basic Information
  Section 32.3: Implementation of Selection sort in C#
Chapter 33: Trees
  Section 33.1: Typical n-ary tree representation
  Section 33.2: Introduction
  Section 33.3: To check if two Binary trees are same or not
Chapter 34: Binary Search Trees
  Section 34.1: Binary Search Tree - Insertion (Python)
  Section 34.2: Binary Search Tree - Deletion (C++)
  Section 34.3: Lowest common ancestor in a BST
  Section 34.4: Binary Search Tree - Python
Chapter 35: Check if a tree is BST or not
  Section 35.1: Algorithm to check if a given binary tree is BST
  Section 35.2: If a given input tree follows Binary search tree property or not
Chapter 36: Binary Tree traversals
  Section 36.1: Level Order traversal - Implementation
  Section 36.2: Pre-order, Inorder and Post Order traversal of a Binary Tree
Chapter 37: Lowest common ancestor of a Binary Tree
  Section 37.1: Finding lowest common ancestor
Chapter 38: Searching
  Section 38.1: Binary Search
  Section 38.2: Rabin Karp
  Section 38.3: Analysis of Linear search (Worst, Average and Best Cases)
  Section 38.4: Binary Search: On Sorted Numbers
  Section 38.5: Linear search
Chapter 39: Substring Search
  Section 39.1: Introduction To Knuth-Morris-Pratt (KMP) Algorithm
  Section 39.2: Introduction to Rabin-Karp Algorithm
  Section 39.3: Python Implementation of KMP algorithm
  Section 39.4: KMP Algorithm in C
Chapter 40: Breadth-First Search
  Section 40.1: Finding the Shortest Path from Source to other Nodes
  Section 40.2: Finding Shortest Path from Source in a 2D graph
  Section 40.3: Connected Components Of Undirected Graph Using BFS
Chapter 41: Depth First Search
  Section 41.1: Introduction To Depth-First Search
Chapter 42: Hash Functions
  Section 42.1: Hash codes for common types in C#
  Section 42.2: Introduction to hash functions
Chapter 43: Travelling Salesman
  Section 43.1: Brute Force Algorithm
  Section 43.2: Dynamic Programming Algorithm
Chapter 44: Knapsack Problem
  Section 44.1: Knapsack Problem Basics
  Section 44.2: Solution Implemented in C#
Chapter 45: Equation Solving
  Section 45.1: Linear Equation
  Section 45.2: Non-Linear Equation
Chapter 46: Longest Common Subsequence
  Section 46.1: Longest Common Subsequence Explanation
Chapter 47: Longest Increasing Subsequence
  Section 47.1: Longest Increasing Subsequence Basic Information
Chapter 48: Check two strings are anagrams
  Section 48.1: Sample input and output
  Section 48.2: Generic Code for Anagrams
Chapter 49: Pascal's Triangle
  Section 49.1: Pascal triangle in C
Chapter 50: Algo: Print an m*n matrix square-wise
  Section 50.1: Sample Example
  Section 50.2: Write the generic code
Chapter 51: Matrix Exponentiation
  Section 51.1: Matrix Exponentiation to Solve Example Problems
Chapter 52: Applications of Dynamic Programming
  Section 52.1: Fibonacci Numbers
Chapter 53: Polynomial-time bounded algorithm for Minimum Vertex Cover
  Section 53.1: Algorithm Pseudo Code
Chapter 54: Dynamic Time Warping
  Section 54.1: Introduction To Dynamic Time Warping
Chapter 55: Fast Fourier Transform
  Section 55.1: Radix 2 FFT
  Section 55.2: Radix 2 Inverse FFT
Appendix A: Pseudocode
  Section A.1: Variable affectations
  Section A.2: Functions
Credits
You may also like
GoalKicker.com – Algorithms Notes for Professionals

About

Please feel free to share this PDF with anyone for free; the latest version of this book can be downloaded from: https://goalkicker.com/AlgorithmsBook

This Algorithms Notes for Professionals book is compiled from Stack Overflow Documentation; the content is written by the beautiful people at Stack Overflow. Text content is released under Creative Commons BY-SA; see the credits at the end of this book for those who contributed to the various chapters. Images may be copyright of their respective owners unless otherwise specified.

This is an unofficial free book created for educational purposes and is not affiliated with official Algorithms group(s) or company(s), nor with Stack Overflow. All trademarks and registered trademarks are the property of their respective company owners.

The information presented in this book is not guaranteed to be correct or accurate; use at your own risk.

Please send feedback and corrections to web@petercv.com
Chapter 1: Getting started with algorithms

Section 1.1: A sample algorithmic problem

An algorithmic problem is specified by describing the complete set of instances it must work on and the required output after running on one of these instances. This distinction, between a problem and an instance of a problem, is fundamental. The algorithmic problem known as sorting is defined as follows: [Skiena:2008:ADM:1410219]

Problem: Sorting
Input: A sequence of n keys, a_1, a_2, ..., a_n.
Output: The reordering of the input sequence such that a'_1 <= a'_2 <= ... <= a'_{n-1} <= a'_n

An instance of sorting might be an array of strings, such as { Haskell, Emacs }, or a sequence of numbers, such as { 154, 245, 1337 }.

Section 1.2: Getting Started with Simple Fizz Buzz Algorithm in Swift

For those of you who are new to programming in Swift, and those of you coming from other programming backgrounds such as Python or Java, this article should be quite helpful. In this post, we will discuss a simple solution for implementing algorithms in Swift.

Fizz Buzz

You may have seen Fizz Buzz written as "Fizz Buzz", "FizzBuzz", or "Fizz-Buzz"; they all refer to the same thing. That "thing" is the main topic of discussion today. First, what is FizzBuzz? It is a common question that comes up in job interviews.

Imagine a series of numbers from 1 to 10:

1 2 3 4 5 6 7 8 9 10

Fizz and Buzz refer to any number that is a multiple of 3 or 5, respectively. In other words, if a number is divisible by 3, it is substituted with "fizz"; if a number is divisible by 5, it is substituted with "buzz". If a number is a multiple of both 3 and 5, it is replaced with "fizz buzz". In essence, this emulates the famous children's game "fizz buzz".
To work on this problem, open up Xcode, create a new playground, and initialize an array like below:

// for example
let number = [1,2,3,4,5]
// here 3 is fizz and 5 is buzz

To find all the fizzes and buzzes, we must iterate through the array and check which numbers are fizz and which are buzz. To do this, create a for loop to iterate through the array we have initialized:

for num in number {
  // Body and calculation goes here
}

After this, we can simply use an if/else condition together with Swift's modulo operator, %, to locate the fizzes and buzzes:
for num in number {
    if num % 3 == 0 {
        print("\(num) fizz")
    } else {
        print(num)
    }
}

Great! You can go to the debug console in the Xcode playground to see the output. You will find that the "fizzes" have been picked out of your array.

For the buzz part, we will use the same technique. Let's give it a try before scrolling further; you can check your results against this article once you've finished:

for num in number {
    if num % 3 == 0 {
        print("\(num) fizz")
    } else if num % 5 == 0 {
        print("\(num) buzz")
    } else {
        print(num)
    }
}

Check the output! It's rather straightforward: numbers divisible by 3 print "fizz" and numbers divisible by 5 print "buzz". Now, increase the numbers in the array:

let number = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]

We increased the range of numbers from 1-10 to 1-15 in order to demonstrate the concept of a "fizz buzz". Since 15 is a multiple of both 3 and 5, it should be replaced with "fizz buzz". Try it for yourself and check the answer!

Here is the solution:

for num in number {
    if num % 3 == 0 && num % 5 == 0 {
        print("\(num) fizz buzz")
    } else if num % 3 == 0 {
        print("\(num) fizz")
    } else if num % 5 == 0 {
        print("\(num) buzz")
    } else {
        print(num)
    }
}

Wait... it's not over yet! The whole purpose of an algorithm is to tune the runtime. Imagine the range increases from 1-15 to 1-100. For every "fizz buzz" number, the code above has to evaluate two conditions: first divisibility by 3, then divisibility by 5. To shorten the check, we can simply test divisibility by 15 directly. Here is the final code:

for num in number {
    if num % 15 == 0 {
        print("\(num) fizz buzz")
    } else if num % 3 == 0 {
        print("\(num) fizz")
    } else if num % 5 == 0 {
        print("\(num) buzz")
    } else {
        print(num)
    }
}

As simple as that! You can use any language of your choice and get started. Enjoy coding!
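Since any language of your choice will do, here is a sketch of the same final logic ported to Java; the class name and the `label` helper are ours, not part of the original Swift walkthrough:

```java
public class FizzBuzz {
    // Returns the fizz buzz label for a single number.
    static String label(int num) {
        if (num % 15 == 0) return num + " fizz buzz"; // multiple of both 3 and 5
        if (num % 3 == 0)  return num + " fizz";
        if (num % 5 == 0)  return num + " buzz";
        return String.valueOf(num);
    }

    public static void main(String[] args) {
        // Same range as the Swift example: 1 through 15.
        for (int num = 1; num <= 15; num++) {
            System.out.println(label(num));
        }
    }
}
```

Splitting the labeling into a helper keeps the divisibility logic testable independently of the printing loop.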
Chapter 2: Algorithm Complexity

Section 2.1: Big-Theta notation

Unlike Big-O notation, which represents only an upper bound on the running time of an algorithm, Big-Theta is a tight bound: both an upper and a lower bound. A tight bound is more precise, but also more difficult to compute. The Big-Theta notation is symmetric: f(x) = Ө(g(x)) <=> g(x) = Ө(f(x))

An intuitive way to grasp it is that f(x) = Ө(g(x)) means that the graphs of f(x) and g(x) grow at the same rate, or that the graphs 'behave' similarly for big enough values of x.

The full mathematical expression of the Big-Theta notation is as follows:

Ө(f(x)) = { g: N -> R such that there exist constants c1, c2, n0 > 0 with c1 < abs(g(n) / f(n)) < c2 for every n > n0, where abs is the absolute value }

An example

If an algorithm takes 42n^2 + 25n + 4 operations to finish for input n, we say that it is O(n^2), but it is also O(n^3) and O(n^100). However, it is Ө(n^2), and it is not Ө(n^3), Ө(n^4), etc. An algorithm that is Ө(f(n)) is also O(f(n)), but not vice versa!

Formal mathematical definition

Ө(g(x)) is a set of functions.

Ө(g(x)) = { f(x) such that there exist positive constants c1, c2, N such that 0 <= c1*g(x) <= f(x) <= c2*g(x) for all x > N }

Because Ө(g(x)) is a set, we could write f(x) ∈ Ө(g(x)) to indicate that f(x) is a member of Ө(g(x)). Instead, we will usually write f(x) = Ө(g(x)) to express the same notion; that's the common way.

Whenever Ө(g(x)) appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the equation T(n) = T(n/2) + Ө(n) means T(n) = T(n/2) + f(n), where f(n) is some function in the set Ө(n).

Let f and g be two functions defined on some subset of the real numbers. We write f(x) = Ө(g(x)) as x -> infinity if and only if there are positive constants K and L and a real number x0 such that:

K|g(x)| <= f(x) <= L|g(x)| for all x >= x0.
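As a quick numerical sanity check of the example above (our own illustration, not from the original text, and with constants c1 = 42, c2 = 50, n0 = 4 chosen by us), we can verify that f(n) = 42n^2 + 25n + 4 stays between c1*n^2 and c2*n^2 once n is large enough:

```java
public class ThetaCheck {
    // The running-time function from the example.
    static long f(long n) { return 42 * n * n + 25 * n + 4; }

    public static void main(String[] args) {
        long c1 = 42, c2 = 50, n0 = 4;
        for (long n = n0; n <= 1_000_000; n *= 10) {
            long g = n * n;
            // Both inequalities of the Theta definition hold from n0 onwards.
            System.out.println(n + ": " + (c1 * g <= f(n) && f(n) <= c2 * g));
        }
    }
}
```

The lower constant c1 = 42 works because 25n + 4 is never negative; the upper constant c2 = 50 works as soon as 8n^2 >= 25n + 4, which holds from n = 4 onward.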
The definition is equivalent to: f(x) = O(g(x)) and f(x) = Ω(g(x)).

A method that uses limits

If limit(x->infinity) f(x)/g(x) = c ∈ (0,∞), i.e. the limit exists and is positive, then f(x) = Ө(g(x)).

Common Complexity Classes

Name          Notation      n = 10       n = 100
Constant      Ө(1)          1            1
Logarithmic   Ө(log(n))     3            7
Linear        Ө(n)          10           100
Linearithmic  Ө(n*log(n))   30           700
Quadratic     Ө(n^2)        100          10 000
Exponential   Ө(2^n)        1 024        1.267650e+30
Factorial     Ө(n!)         3 628 800    9.332622e+157

Section 2.2: Comparison of the asymptotic notations

Let f(n) and g(n) be two functions defined on the set of the positive real numbers, and let c, c1, c2, n0 be positive real constants.

Notation         Formal definition                                                  Analogy   Example
f(n) = O(g(n))   ∃ c > 0, ∃ n0 > 0 : ∀ n ≥ n0, 0 ≤ f(n) ≤ c g(n)                   a ≤ b     7n + 10 = O(n^2 + n - 9)
f(n) = Ω(g(n))   ∃ c > 0, ∃ n0 > 0 : ∀ n ≥ n0, 0 ≤ c g(n) ≤ f(n)                   a ≥ b     n^3 - 34 = Ω(10n^2 - 7n + 1)
f(n) = Θ(g(n))   ∃ c1, c2 > 0, ∃ n0 > 0 : ∀ n ≥ n0, 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n)  a = b     1/2 n^2 - 7n = Θ(n^2)
f(n) = o(g(n))   ∀ c > 0, ∃ n0 > 0 : ∀ n ≥ n0, 0 ≤ f(n) < c g(n)                   a < b     5n^2 = o(n^3)
f(n) = ω(g(n))   ∀ c > 0, ∃ n0 > 0 : ∀ n ≥ n0, 0 ≤ c g(n) < f(n)                   a > b     7n^2 = ω(n)

The third column gives the analogy between the asymptotic comparison of f and g and the comparison of real numbers a and b.

Graphic interpretation

The asymptotic notations can be represented on a Venn diagram.

Links

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms.

Section 2.3: Big-Omega Notation

Ω-notation is used for asymptotic lower bounds.
Formal definition

Let f(n) and g(n) be two functions defined on the set of the positive real numbers. We write f(n) = Ω(g(n)) if there are positive constants c and n0 such that:

0 ≤ c g(n) ≤ f(n) for all n ≥ n0.

Notes

f(n) = Ω(g(n)) means that f(n) grows asymptotically no slower than g(n). A statement about Ω(g(n)) can also be useful when the analysis of an algorithm is not precise enough to establish Θ(g(n)) or O(g(n)).

From the definitions of the notations follows the theorem: for any two functions f(n) and g(n), we have f(n) = Ө(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

Graphically, Ω-notation describes a function f(n) that eventually stays above c g(n).

For example, let f(n) = 3n^2 + 5n - 4. Then f(n) = Ω(n^2). It is also correct that f(n) = Ω(n), or even f(n) = Ω(1).

Another example: consider the perfect matching algorithm "if the number of vertices is odd, output 'No Perfect Matching'; otherwise, try all possible matchings". We would like to say the algorithm requires exponential time, but in fact we cannot even prove an Ω(n^2) lower bound using the usual definition of Ω, since the algorithm runs in linear time when n is odd. We should instead define f(n) = Ω(g(n)) by saying that for some constant c > 0, f(n) ≥ c g(n) for infinitely many n. This gives a nice correspondence between upper and lower bounds: f(n) = Ω(g(n)) iff f(n) != o(g(n)).

References

The formal definition and the theorem are taken from the book "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Introduction to Algorithms".
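A small numerical check of the first example above (a sketch of ours, with the witnesses c = 3 and n0 = 1 chosen by us): the inequality 0 ≤ c*n^2 ≤ 3n^2 + 5n - 4 holds for all n ≥ 1, because 5n - 4 ≥ 0 there:

```java
public class OmegaCheck {
    // The example function f(n) = 3n^2 + 5n - 4.
    static long f(long n) { return 3 * n * n + 5 * n - 4; }

    public static void main(String[] args) {
        long c = 3, n0 = 1;
        // f(n) >= c * n^2 for every n >= n0, witnessing f(n) = Ω(n^2).
        for (long n = n0; n <= 100_000; n *= 10) {
            System.out.println(n + ": " + (f(n) >= c * n * n));
        }
    }
}
```

The same witnesses c = 3, n0 = 1 also show f(n) = Ω(n) and f(n) = Ω(1), since n^2 ≥ n ≥ 1 on that range.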
Chapter 3: Graph

A graph is a collection of points and lines connecting some (possibly empty) subset of them. The points of a graph are called graph vertices, "nodes", or simply "points". Similarly, the lines connecting the vertices of a graph are called graph edges, "arcs", or "lines". A graph G can be defined as a pair (V, E), where V is a set of vertices, and E is a set of edges between the vertices, E ⊆ {(u,v) | u, v ∈ V}.

Section 3.1: Storing Graphs (Adjacency Matrix)

To store a graph, two methods are common:

Adjacency Matrix
Adjacency List

An adjacency matrix is a square matrix used to represent a finite graph. The elements of the matrix indicate whether pairs of vertices are adjacent or not in the graph.

Adjacent means 'next to or adjoining something else', or being beside something. For example, your neighbors are adjacent to you. In graph theory, if we can go to node B from node A, we say that node B is adjacent to node A. Now we will learn how to store which nodes are adjacent to which via an adjacency matrix; that is, we will represent which nodes share an edge between them. Here, matrix means a 2D array.

In the adjacency matrix for an example graph, Matrix[i][j] = 1 represents that there is an edge between i and j. If there's no edge, we simply put Matrix[i][j] = 0. The edges can be weighted, e.g. they can represent the distance between two cities; then we put the weight in Matrix[i][j] instead of 1.

The graph described above is Bidirectional or Undirected; that means, if we can go to node 1 from node 2, we can also go to node 2 from node 1. If the graph were Directed, then there would be an arrow sign on one side of each edge. Even then, we could represent it using an adjacency matrix.
For weighted graphs, we can represent the nodes that don't share an edge by infinity. One thing to notice is that, if the graph is undirected, the matrix becomes symmetric.

The pseudo-code to create the matrix:

Procedure AdjacencyMatrix(N):    // N represents the number of nodes
Matrix[N][N]
for i from 1 to N
    for j from 1 to N
        Take input -> Matrix[i][j]
    endfor
endfor

We can also populate the matrix in this common way:

Procedure AdjacencyMatrix(N, E):    // N -> number of nodes
Matrix[N][N]                        // E -> number of edges
for i from 1 to E
    input -> n1, n2, cost
    Matrix[n1][n2] = cost
    Matrix[n2][n1] = cost
endfor

For directed graphs, we can remove the Matrix[n2][n1] = cost line.

The drawbacks of using an adjacency matrix:

Memory is a huge problem. No matter how many edges there are, we always need an N * N sized matrix where N is the number of nodes. If there are 10000 nodes and each entry takes 4 bytes, the matrix size will be 4 * 10000 * 10000 bytes, around 381 megabytes. This is a huge waste of memory if we consider graphs that have few edges.

Suppose we want to find out to which nodes we can go from a node u. We'll need to check the whole row of u, which costs a lot of time.

The only benefit is that we can easily find the connection between nodes u and v, and its cost, using an adjacency matrix.

Java code implementing the above pseudo-code:

import java.util.Scanner;

public class Represent_Graph_Adjacency_Matrix
{
    private final int vertices;
    private int[][] adjacency_matrix;

    public Represent_Graph_Adjacency_Matrix(int v)
    {
        vertices = v;
        adjacency_matrix = new int[vertices + 1][vertices + 1];
    }

    public void makeEdge(int to, int from, int edge)
    {
        try
        {
            adjacency_matrix[to][from] = edge;
        }
        catch (ArrayIndexOutOfBoundsException index)
        {
            System.out.println("The vertex does not exist");
        }
    }

    public int getEdge(int to, int from)
    {
        try
        {
            return adjacency_matrix[to][from];
        }
        catch (ArrayIndexOutOfBoundsException index)
        {
            System.out.println("The vertex does not exist");
        }
        return -1;
    }

    public static void main(String args[])
    {
        int v, e, count = 1, to = 0, from = 0;
        Scanner sc = new Scanner(System.in);
        Represent_Graph_Adjacency_Matrix graph;
        try
        {
            System.out.println("Enter the number of vertices: ");
            v = sc.nextInt();
            System.out.println("Enter the number of edges: ");
            e = sc.nextInt();

            graph = new Represent_Graph_Adjacency_Matrix(v);

            System.out.println("Enter the edges: <to> <from>");
            while (count <= e)
            {
                to = sc.nextInt();
                from = sc.nextInt();
                graph.makeEdge(to, from, 1);
                count++;
            }

            System.out.println("The adjacency matrix for the given graph is: ");
            System.out.print("  ");
            for (int i = 1; i <= v; i++)
                System.out.print(i + " ");
            System.out.println();
            for (int i = 1; i <= v; i++)
            {
                System.out.print(i + " ");
                for (int j = 1; j <= v; j++)
                    System.out.print(graph.getEdge(i, j) + " ");
                System.out.println();
            }
        }
        catch (Exception E)
        {
            System.out.println("Something went wrong");
        }
        sc.close();
    }
}

Running the code: save the file and compile using:

javac Represent_Graph_Adjacency_Matrix.java

Example:

$ java Represent_Graph_Adjacency_Matrix
Enter the number of vertices:
4
Enter the number of edges:
6
Enter the edges: <to> <from>
1 1
3 4
2 3
1 4
2 4
1 2
The adjacency matrix for the given graph is:
  1 2 3 4
1 1 1 0 1
2 0 0 1 1
3 0 0 0 1
4 0 0 0 0

Section 3.2: Introduction To Graph Theory

Graph Theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects.

Did you know that almost all the problems of planet Earth can be converted into problems of roads and cities, and solved? Graph Theory was invented many years ago, even before the invention of the computer. Leonhard Euler wrote a paper on the Seven Bridges of Königsberg, which is regarded as the first paper in Graph Theory. Since then, people have come to realize that if we can convert any problem into this city-road problem, we can solve it easily by Graph Theory.

Graph Theory has many applications. One of the most common applications is to find the shortest distance between one city and another. We all know that to reach your PC, this web page had to travel through many routers from the server. Graph Theory helps to find out the routers that need to be crossed. During war, which street needs to be bombarded to disconnect the capital city from the others can also be found out using Graph Theory.
Let us first learn some basic definitions of Graph Theory.

Graph:

Let's say we have 6 cities. We mark them as 1, 2, 3, 4, 5, 6. Now we connect the cities that have roads between each other. This gives a simple graph where some cities are shown with the roads that connect them. In Graph Theory, we call each of these cities a Node or Vertex, and the roads are called Edges. A graph is simply a collection of these nodes and edges.

A node can represent a lot of things. In some graphs, nodes represent cities, some represent airports, and some represent a square in a chessboard. An edge represents the relation between nodes. That relation can be the time to go from one airport to another, the moves of a knight from one square to all the other squares, etc.
Path of Knight in a Chessboard

In simple words, a Node represents any object and an Edge represents the relation between two objects.

Adjacent Node:

If a node A shares an edge with node B, then B is considered to be adjacent to A. In other words, if two nodes are directly connected, they are called adjacent nodes. One node can have multiple adjacent nodes.

Directed and Undirected Graph:

In directed graphs, the edges carry a direction sign (an arrowhead) on one side; that means the edges are Unidirectional. In undirected graphs, the edges are Bidirectional and can be traversed in either direction. Usually, undirected graphs are drawn with no signs on either side of the edges.

Let's assume there is a party going on. The people at the party are represented by nodes, and there is an edge between two people if they shake hands. Then this graph is undirected, because any person A shakes hands with person B if and only if B also shakes hands with A. In contrast, if an edge from a person A to another person B means that A admires B, then this graph is directed, because admiration is not necessarily reciprocated. The former type of graph is called an undirected graph and its edges are called undirected edges, while the latter type is called a directed graph and its edges are called directed edges.

Weighted and Unweighted Graph:

A weighted graph is a graph in which a number (the weight) is assigned to each edge. Such weights might represent, for example, costs, lengths, or capacities, depending on the problem at hand.
An unweighted graph is simply the opposite: we assume that the weight of every edge is the same (presumably 1).

Path:

A path represents a way of going from one node to another. It consists of a sequence of edges. There can be multiple paths between two nodes. For example, suppose there are two paths from A to D: one path A->B, B->C, C->D with edge weights 3, 4, and 2, whose cost is 3 + 4 + 2 = 9, and a direct path A->D whose cost is 10. The path that costs the least is called the shortest path.

Degree:

The degree of a vertex is the number of edges that are connected to it. An edge that connects to the vertex at both ends (a loop) is counted twice.
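The degree rule, including the double-counted loop, can be sketched for an adjacency-matrix representation as follows (our own illustration; the 3-vertex graph below is hypothetical): sum the row of the vertex, then add the diagonal entry once more so that a loop contributes twice.

```java
public class DegreeExample {
    // Degree of vertex v in an undirected graph stored as an adjacency matrix,
    // where matrix[i][j] holds the number of edges between i and j.
    static int degree(int[][] matrix, int v) {
        int d = 0;
        for (int j = 0; j < matrix.length; j++) {
            d += matrix[v][j];
        }
        d += matrix[v][v]; // a loop contributes to the degree twice
        return d;
    }

    public static void main(String[] args) {
        // Hypothetical 3-vertex graph: edge 0-1, edge 1-2, and a loop at 0.
        int[][] matrix = {
            {1, 1, 0},
            {1, 0, 1},
            {0, 1, 0}
        };
        System.out.println(degree(matrix, 0)); // loop twice + edge to 1 = 3
        System.out.println(degree(matrix, 1));
    }
}
```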
In directed graphs, the nodes have two types of degrees:

In-degree: the number of edges that point to the node.
Out-degree: the number of edges that point from the node to other nodes.

For undirected graphs, this is simply called the degree.

Some Algorithms Related to Graph Theory

Bellman–Ford algorithm
Dijkstra's algorithm
Ford–Fulkerson algorithm
Kruskal's algorithm
Nearest neighbour algorithm
Prim's algorithm
Depth-first search
Breadth-first search

Section 3.3: Storing Graphs (Adjacency List)

An adjacency list is a collection of unordered lists used to represent a finite graph. Each list describes the set of neighbors of a vertex in the graph. For sparse graphs, it takes less memory than an adjacency matrix. Let's take a graph and its adjacency matrix, and create an adjacency list using these values.
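A minimal sketch of the adjacency-list idea in Java (our own illustration, written to match the 1-based vertex numbering of the earlier adjacency-matrix example): each vertex keeps a list of its neighbors, so memory grows with the number of edges rather than with N * N.

```java
import java.util.ArrayList;
import java.util.List;

public class AdjacencyListExample {
    private final List<List<Integer>> adj;

    public AdjacencyListExample(int vertices) {
        adj = new ArrayList<>();
        for (int i = 0; i <= vertices; i++) {   // 1-based vertices, slot 0 unused
            adj.add(new ArrayList<>());
        }
    }

    // Undirected edge: store each endpoint in the other's neighbor list.
    public void addEdge(int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u);
    }

    public List<Integer> neighbors(int u) {
        return adj.get(u);
    }

    public static void main(String[] args) {
        AdjacencyListExample g = new AdjacencyListExample(4);
        g.addEdge(1, 2);
        g.addEdge(1, 4);
        g.addEdge(2, 3);
        for (int u = 1; u <= 4; u++) {
            System.out.println(u + " -> " + g.neighbors(u));
        }
    }
}
```

For a directed graph, drop the `adj.get(v).add(u)` line, just as the matrix version drops `Matrix[n2][n1] = cost`.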