Algorithms Data



    [{ "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Bubble sort is a simple and well-known sorting algorithm. It is used in practice once in a blue moon and its main application is to make anintroduction to the sorting algorithms. Bubble sort belongs to O(n2) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Bubble sort is stable and adaptive. \n\nAlgorithm:\n\n1. Compare each pair of adjacent elements from the beginning of an array and, if they are in reversed order, swap them.\n2. If at least one swap has been done, repeat step 1.\n\nYou can [imagine] that on every step big bubbles float to he surface and stay there. At the step, when no bubble moves, sorting stops", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "static void bubble_sort_generic( T[] array ) where T :IComparable\n{\n long right_border = array.Length - 1;\n do\n {\nlong last_exchange = 0;\n for ( long i = 0; i < right_border; i++ )\n

    {\n if ( array[i].CompareTo(array[i + 1]) > 0 )\n {\n T temp = array[i];\n array[i] = array[i + 1];\n

    array[i + 1] = temp;\n last_exchange = i;\n}\n }\n right_border = last_exchange;\n }\n while ( right_border > 0 );\n}",

    "template \nvoid bubble_sort(IT begin, IT end)\n{\n bool done(false);\n while (begin != end && !done) {\n done = true;\n IT it(begin);\n IT next(begin);\n ++next;\n while(next != end) {\n if (*next < *it) {\n using ::std::swap;\n swap(*next, *it);\n done = false;\n

    }\n ++it;\n ++next;\n }\n --end;\n }\n}",

    "public void sort(E[] array) {\n boolean sorted = false;\nfor (int i = 0; (i < array.length) && !sorted; i++) {\n sorted = true;\n for (int j = array.length - 1; j > i; j--) {\n if (array[j].c

    ompareTo(array[j - 1]) < 0) {\n sorted = false;\nswap(array, j, j - 1);\n }\n }\n }\n}\n\nprivate static void swap(E[] array, int current_pos, int new_position) {\n E temp = array[current_pos];\n array[current_pos] = array[new_position];\n array[new_position] = temp;\n}" ], "key" : 1, "order" : "O(n2)", "shortTitle" : "Bubble Sort", "tileImage" : "images/sorting/bubblepass.png", "title" : "Bubble Sort"},{ "backgroundImage" : "images/sorting/insertion1.gif",
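
    Before the insertion sort entry, here is a compact, self-contained bubble sort in Java that mirrors the "stop when no swap happened" idea described above. It is a minimal sketch rather than one of the embedded snippets; the class and method names are illustrative.

    // A minimal bubble sort sketch in Java (illustrative, not the document's own code).
    // Assumes the array holds Comparable elements; sorts in ascending order.
    import java.util.Arrays;

    public class BubbleSortSketch {
        static <T extends Comparable<T>> void bubbleSort(T[] a) {
            int right = a.length - 1;          // index of the last unsorted element
            boolean swapped = true;
            while (right > 0 && swapped) {
                swapped = false;
                for (int i = 0; i < right; i++) {
                    if (a[i].compareTo(a[i + 1]) > 0) {   // adjacent pair out of order
                        T tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                        swapped = true;
                    }
                }
                right--;                       // largest element has bubbled to the end
            }
        }

        public static void main(String[] args) {
            Integer[] data = {5, 1, 4, 2, 8};
            bubbleSort(data);
            System.out.println(Arrays.toString(data)); // [1, 2, 4, 5, 8]
        }
    }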

    "description" : "Insertion sort belongs to the O(n2) sorting algorithms. Unlike many sorting algorithms with quadratic complexity, it is actually applied inpractice for sorting small arrays of data. For instance, it is used to improvequicksort routine. Some sources notice, that people use same algorithm orderingitems, for example, hand of cards.\n\nAlgorithm:\nInsertion sort algorithm somewhat resembles selection sort. Array is imaginary divided into two parts - sortedone and unsorted one. At the beginning, sorted part contains first element of the array and unsorted one contains the rest. At every step, algorithm takes first element in the unsorted part and inserts it to the right place of the sorted one. When unsorted part becomes empty, algorithm stops\n\nThe ideas of insertion:


    \nThe main operation of the algorithm is insertion. The task is to insert a value into the sorted part of the array. Let us see the variants of how we can do it.\n\n\"Sifting down\" using swaps:\nThe simplest way to insert next element intothe sorted part is to sift it down, until it occupies correct position. Initially the element stays right after the sorted part. At each step algorithm compares the element with one before it and, if they stay in reversed order, swap them.\n\nInsertion sort properties:\nadaptive (performance adapts to the initial order of elements)\nstable (insertion sort retains relative order of the same elements)\nin-place (requires constant amount of additional space)\nonline (new elements can be added during the sort).", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "static void InsertSort(IList list) where T : IComparable\n{\n int i, j;\n\n for (i = 1; i < list.Count; i++)\n {\nT value = list[i];\n j = i - 1;\n while ((j >= 0) && (list[j].Co

    mpareTo(value) > 0))\n {\n list[j + 1] = list[j];\nj--;\n }\n list[j + 1] = value;\n }\n}",

    "void insertionSort(int arr[], int length) {\n int i, j, tm

    p;\n for (i = 1; i < length; i++) {\n j = i;\n while(j > 0 && arr[j - 1] > arr[j]) {\n tmp = arr[j];\narr[j] = arr[j - 1];\n arr[j - 1] = tmp;\n

    j--;\n }\n }\n}","public void sort(E[] array) {\n for (int i = 1; i < array.le

    ngth; i++) {\n for (int j = i; (j > 0) && (array[j].compareTo(array[j - 1]) < 0); j--) {\n swap(array, j, j - 1);\n }\n }\n}\n\nprivate static void swap(E[] array, int current_pos, int new_position) {\n Etemp = array[current_pos];\n array[current_pos] = array[new_position];\n array[new_position] = temp;\n}" ], "key" : 2, "order" : "O(n2)",
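
    The following is a minimal Java sketch of the shift-and-insert variant of insertion sort described above (shift larger sorted elements right, then drop the saved value into the hole). It is an illustrative stand-alone version, not one of the embedded snippets.

    // A minimal insertion sort sketch (int version), illustrative names.
    import java.util.Arrays;

    public class InsertionSortSketch {
        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int value = a[i];                 // first element of the unsorted part
                int j = i - 1;
                while (j >= 0 && a[j] > value) {  // shift larger sorted elements right
                    a[j + 1] = a[j];
                    j--;
                }
                a[j + 1] = value;                 // insert into its correct position
            }
        }

        public static void main(String[] args) {
            int[] data = {9, 3, 7, 1, 3};
            insertionSort(data);
            System.out.println(Arrays.toString(data)); // [1, 3, 3, 7, 9]
        }
    }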

    "shortTitle" : "Insertion Sort", "tileImage" : "images/sorting/insertion1.gif", "title" : "Insertion Sort"},{ "backgroundImage" : "images/sorting/selection_2.jpg", "description" : "Selection sort is one of the O(n2) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Selection sort is notable for its programming simplicity and it can over perform other sorts in certain situations (see complexity analysis for more details).\n\nAlgorithm:\nThe idea of algorithm is quite simple. Array is imaginary divided into two parts - sorted one and unsorted one. At the beginning, sorted part is empty, while unsorted one contains whole array. At every step, algorithm finds minimal element in the unsorted part and adds it to the end of the sorted one. When unsorted part bec

    omes empty, algorithm stops.\nWhen the algorithm sorts an array, it swaps the first element of the unsorted part with the minimal element and then includes it in the sorted part. This implementation of selection sort is not stable. If a linked list is sorted instead and, rather than swapping, the minimal element is linked to the end of the sorted part, selection sort is stable.\n\nSelection sort stops when the unsorted part becomes empty. As we know, on every step the number of unsorted elements decreases by one. Therefore, selection sort makes n steps of the outer loop (n is the number of elements in the array) before it stops. Every step of the outer loop requires finding the minimum in the unsorted part. Summing up, n + (n - 1) + (n - 2) + ... + 1 results in O(n2) comparisons. The number of swaps may vary from zero (in case of a sorted array) to


    n - 1 (in case array was sorted in reversed order), which results in O(n) number of swaps. Overall algorithm complexity is O(n2).\n\nFact, that selection sortrequires n - 1 number of swaps at most, makes it very efficient in situations, when write operation is significantly more expensive, than read operation.", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Times", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "static void selectionSortGeneric(T[] a) where T : IComparable\n{\n for(int sortedSize=0; sortedSize < a.Length; sortedSize++)\n {\n int minElementIndex = sortedSize;\n T minElementValue = a[sortedSize];\n for(int i=sortedSize+1; i i; j--) {\n if (array[j].compareTo(array[min_index]) < 0) {\n min_index = j;\n }\n }\n swap(array, i, min_index);\n }\n}\n\npublic static void swap(E[] array, int current_pos, int new_position) {\n E temp = array[current_pos];\n array[current_pos] = array[new_position];\n array[new_position] = temp;\n}" ], "key" : 3, "order" : "O(n2)", "shortTitle" : "Selection Sort",
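
    Since the embedded selection sort snippets above are partly truncated, here is a minimal stand-alone Java sketch of the same idea: find the minimum of the unsorted part and swap it to the front, using at most n - 1 swaps. Names are illustrative.

    // A minimal selection sort sketch (illustrative, not the document's own code).
    import java.util.Arrays;

    public class SelectionSortSketch {
        static void selectionSort(int[] a) {
            for (int sortedSize = 0; sortedSize < a.length - 1; sortedSize++) {
                int minIndex = sortedSize;
                for (int i = sortedSize + 1; i < a.length; i++) {
                    if (a[i] < a[minIndex]) minIndex = i;   // remember smallest unsorted element
                }
                if (minIndex != sortedSize) {               // swap it to the end of the sorted part
                    int tmp = a[sortedSize]; a[sortedSize] = a[minIndex]; a[minIndex] = tmp;
                }
            }
        }

        public static void main(String[] args) {
            int[] data = {29, 10, 14, 37, 13};
            selectionSort(data);
            System.out.println(Arrays.toString(data)); // [10, 13, 14, 29, 37]
        }
    }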

    "tileImage" : "images/sorting/selection_2.jpg", "title" : "Selection Sort"},{ "backgroundImage" : "images/sorting/merge1.jpg", "description" : "Merge sort is based on the divide-and-conquer paradigm. Itsworst-case running time has a lower order of growth than insertion sort. Sincewe are dealing with subproblems, we state each subproblem as sorting a subarrayA[p .. r]. Initially, p = 1 and r = n, but these values change as we recurse through subproblems.\nTo sort A[p .. r]:\n\n1. Divide Step:\nIf a given array A haszero or one element, simply return; it is already sorted. Otherwise, split A[p.. r] into two subarrays A[p .. q] and A[q + 1 .. r], each containing about halfof the elements of A[p .. r]. That is, q is the halfway point of A[p .. r].\n\n2. Conquer Step:\nConquer by recursively sorting the two subarrays A[p .. q] and

    A[q + 1 .. r].\n\n3. Combine Step:\nCombine the elements back in A[p .. r] by merging the two sorted subarrays A[p .. q] and A[q + 1 .. r] into a sorted sequence. To accomplish this step, we will define a procedure MERGE (A, p, q, r).\n\nNote that the recursion bottoms out when the subarray has just one element, so that it is trivially sorted.", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting",


    "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "public IList MergeSort(IList list)\n{\n if (list.Count 0 && right.Count > 0)\n if (((IComparable)left[0]).CompareTo(right[0]) > 0)\n {\n rv.Add(right[0]);\n right.RemoveAt(0);\n }\n else\n {\n rv.Add(left[0]);\n left.RemoveAt(0);\n }\n\nfor (int i = 0; i < left.Count; i++)\n rv.Add(left[i]);\n\n for (int i= 0; i < right.Count; i++)\n rv.Add(right[i]);\n\n return rv;\n}",

    "vector merge_sort(vector& vec)\n{\n // Terminationcondition: List is completely sorted if it\n // only contains a single element.\n if(vec.size() == 1)\n {\n return vec;\n }\n\n // Determine the location of the middle element in the vector\n std::vector::iterator middle = vec.begin() + (vec.size() / 2);\n\n vector left(vec.begin(), middle);\n vector right(middle, vec.end());\n\n // Perform a mergesort on the two smaller vectors\n left = merge_sort(left);\n right = merge_sort(right);\n\n return merge(left, right);\n}",

    "public void sort(E[] array) {\n int left = 0;\n int right

    = array.length - 1;\n mergeSort(array, this.temp, left, right);\n}\n\nprivate void mergeSort(E[] array, E[] temp, int left, int right) {\n int mid = (left + right) / 2;\n if (left == right) {\n return;\n }\n mergeSort(array, temp, left, mid);\n mergeSort(array, temp, mid + 1, right);\n for(int i = left; i
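
    The merge sort snippets above are cut off, so here is a minimal top-down merge sort sketch in Java following the divide / conquer / combine steps described for A[p .. r]. The auxiliary-array approach and the names are illustrative choices, not the document's own code.

    // A minimal top-down merge sort sketch using one auxiliary array.
    import java.util.Arrays;

    public class MergeSortSketch {
        static void mergeSort(int[] a) {
            mergeSort(a, new int[a.length], 0, a.length - 1);
        }

        private static void mergeSort(int[] a, int[] temp, int left, int right) {
            if (left >= right) return;                 // zero or one element: already sorted
            int mid = (left + right) / 2;
            mergeSort(a, temp, left, mid);             // conquer left half  A[p .. q]
            mergeSort(a, temp, mid + 1, right);        // conquer right half A[q+1 .. r]
            merge(a, temp, left, mid, right);          // combine the two sorted runs
        }

        private static void merge(int[] a, int[] temp, int left, int mid, int right) {
            System.arraycopy(a, left, temp, left, right - left + 1);
            int i = left, j = mid + 1, k = left;
            while (i <= mid && j <= right)
                a[k++] = (temp[i] <= temp[j]) ? temp[i++] : temp[j++];
            while (i <= mid) a[k++] = temp[i++];       // copy leftovers from the left run
            // leftovers from the right run are already in place
        }

        public static void main(String[] args) {
            int[] data = {38, 27, 43, 3, 9, 82, 10};
            mergeSort(data);
            System.out.println(Arrays.toString(data)); // [3, 9, 10, 27, 38, 43, 82]
        }
    }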


    Index i is moved forward, until an element with value greater than or equal to the pivot is found. Index j is moved backward, until an element with value less than or equal to the pivot is found. If i <= j then the two elements are swapped and i steps to the next position (i + 1), j steps to the previous one (j - 1). The algorithm stops when i becomes greater than j.\n\nAfter partition, all values before the i-th element are less than or equal to the pivot and all values after the j-th element are greater than or equal to the pivot.\n\nWhy does it work?\nOn the partition step the algorithm divides the array into two parts and every element a from the left part is less than or equal to every element b from the right part. Also a and b satisfy the a <= pivot <= b inequality. After completion of the recursive calls both of the parts become sorted and, taking into account the arguments stated above, the whole array is sorted.\n\nComplexity analysis:\nOn average quicksort has O(n log n) complexity, but a rigorous proof of this fact is not trivial and not presented here. Still, you can find the proof in [1]. In the worst case, quicksort runs in O(n2) time, but on most \"practical\" data it works just fine and outperforms other O(n log n) sorting algorithms.", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Describing Sorting Processes, with Different Purposes and Different Running Times", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "class Quicksort {\n private void swap(int[] Array, int

    Left, int Right) {\n int temp = Array[Right];\n Array[Right] = Array[Left];\n Array[Left] = temp;\n }\n\npublic void sort(int[] Array, int Left, int Right) {\n int LHold = Left;\n int RHold = Right;\n Random ObjRan = new Random();\n int Pivot = ObjRan.Next(Left,Right);\n swap(Array,Pivot,Left);\nPivot = Left;\n Left++;\n\n while (Right >= Left) {\n if (Array[Left] >= Array[Pivot]\n && Array[Right] < Array[Pivot])\n swap(Array, Left, Right);\n else if (Array[Left] >=Array[Pivot])\n Right--;\n else if (Array[Right] < Array[Pivot])\n Left++;\n else {\n Right--;\n Left++;\n } } \n swap(Array, Pivot, Right);\n Pivot = Right; \n if (Pivot > LHold)\n sort(Array, LHold, Pivot);\n if (RHold > Pivot+1)\n sort(Array, Pivo

    t+1, RHold);\n } \n}","#include \n#include \n#include \n\ntemplate< typename BidirectionalIterator, typename Compare >\nvoid quick_sort( BidirectionalIterator first, BidirectionalIterator last, Compare cmp ) {\nif( first != last ) {\n BidirectionalIterator left = first;\n BidirectionalIterator right = last;\n BidirectionalIterator pivot = left++;\n\n while( left != right ) {\n if( cmp( *left, *pivot ) ) {\n ++left;\n} else {\n while( (left != right) && cmp( *pivot, *right ) )\n--right;\n std::iter_swap( left, right );\n }\n }\n\n --lef

    t;\n std::iter_swap( first, left );\n\n quick_sort( first, left, cmp );\nquick_sort( right, last, cmp );\n }\n}\n\ntemplate< typename BidirectionalIt

    erator >\ninline void quick_sort( BidirectionalIterator first, BidirectionalIterator last ) {\n quick_sort( first, last,\n std::less_equal< typename std::it

    erator_traits< BidirectionalIterator >::value_type >()\n );\n}","public void sort(E[] array) {\n comparisons = 0;\n swaps

    = 0;\n int i = 0;\n int j = array.length - 1;\n quickSort(array, i, j);\n}\n\nprivate void quickSort(E[] array, int i, int j) {\n int pivotIndex = findPivot(i, j);\n NativeMethods.swap(array, pivotIndex, j);\n swaps++;\nint k = partition(array, i - 1, j, array[j]);\n NativeMethods.swap(array, k

    , j);\n swaps++;\n if ((k - i) > 1) {\n quickSort(array, i, k - 1);\n }\n if ((j - k) > 1) {\n quickSort(array, k + 1, j);\n }\n}" ], "key" : 5,
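
    As a compact reference alongside the snippets above, here is a minimal quicksort sketch in Java. It uses a Lomuto-style partition with the last element as pivot (the descriptions above use other partition schemes, e.g. a random pivot), so treat it as an illustrative variant rather than the same implementation.

    // A minimal quicksort sketch with a Lomuto partition (illustrative).
    import java.util.Arrays;

    public class QuickSortSketch {
        static void quickSort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            int p = partition(a, lo, hi);   // pivot ends up at index p
            quickSort(a, lo, p - 1);
            quickSort(a, p + 1, hi);
        }

        // Everything <= pivot moves to the left of the final pivot position.
        private static int partition(int[] a, int lo, int hi) {
            int pivot = a[hi];
            int i = lo;
            for (int j = lo; j < hi; j++) {
                if (a[j] <= pivot) { swap(a, i, j); i++; }
            }
            swap(a, i, hi);
            return i;
        }

        private static void swap(int[] a, int x, int y) { int t = a[x]; a[x] = a[y]; a[y] = t; }

        public static void main(String[] args) {
            int[] data = {24, 8, 42, 75, 29, 77, 38, 57};
            quickSort(data, 0, data.length - 1);
            System.out.println(Arrays.toString(data)); // [8, 24, 29, 38, 42, 57, 75, 77]
        }
    }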


    "order" : "O(n log n)", "shortTitle" : "Quick Sort", "tileImage" : "images/sorting/quick1.gif", "title" : "Quick Sort"},{ "backgroundImage" : "images/sorting/heap_1.png", "description" : "Heap Sort is another optimal comparison sort as its time complexity is O(nlgn) in both average and worst cases. Although somewhat slower inpractice on most machines than a good implementation of quicksort, it has the advantages of worst-case O(n log n) runtime. Heapsort is an in-place algorithm and is not a stable sort.\nThe main idea behind heapsort is to construct a heap out of the list of elements to be sorted. From this heap the maximum element (in case of max-heap) or the minimum element (in case of min-heap) can be repeatedlytaken out with ease to give the sorted list of elements.\n\nDescription of the Implementation Of Heap Sort:\nThe code uses max-heap for sorting an array. It starts with the macro definitions of left and right children of a node i. The restof the code consists of 3 functions :\n\n1) max_heapify() : It takes as input the index of node i in an array a whose size is also passed. the node i has the property that both its left and right subtrees are max-heaps. But the tree rootedat node i may not be so. This function makes this tree a max-heap. It traversesthe tree starting from node i to the leaves. Every time it compares the node with its left and right children and the maximum of all of them takes the positionof the node.\n\n2) build_max-heap() : It is the function for the initial formation of the heap when the sorting starts. It calls max-heapify() function for all

    non-leaf nodes. This is so because the leaf nodes are trivially max-heaps of size 1. When this function returns, the array becomes a max-heap.\n\n3) heap_sort(): This function is the main heapsort function. It starts by calling the build_max_heap() function on the array. After this function, the max element of the heap is replaced by the last element in the heap and its size is reduced by 1. Thenmax_heapify() is called from the root of heap. This is continued till the heapcontains no more elements.\n\nFor example, take the max-heap as\n87 34 67 20 1856 40\n\nNow 87 will be replaced by 40\n40 34 67 20 18 56 | 87\nafter max_heapify()\n67 34 56 20 18 40 | 87\n\nNow 67 will be replaced by 40\n40 34 56 20 18 | 67 87\nafter max_heapify()\n56 34 40 20 18 | 67 87\n\nNow 56 will be replaced by18\n18 34 40 20 | 56 67 87\nafter max_heapify()\n40 34 18 20 | 56 67 87\n\nNow 40 will be replaced by 20\n20 34 18 | 40 56 67 87\nafter max_heapify()\n34 20 18| 40 56 67 87\n\nNow 34 will be replaced by 18\n18 20 | 34 40 56 67 87\nafter ma

    x_heapify()\n20 18 | 34 40 56 67 87\n\nNow 20 will be replaced by 18\n18 | 20 3440 56 67 87\nafter max_heapify()\n18 | 20 34 40 56 67 87\n\nThus the sorted list is 18 20 34 40 56 67 87\n\nComplexity:\nIt is a binary tree, so it takes O(logn) sifting elements in worst case. And we do this action for n elements, thus the final running time is O(n log n)", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" },

    "algorithms" : [ "public static void HeapSort(int[] input)\n{\n //Build-Max-Heap\n int heapSize = input.Length;\n for (int p = (heapSize - 1) / 2;p >= 0; p--)\n MaxHeapify(input, heapSize, p);\n\n for (int i = input.Length - 1; i > 0; i--)\n {\n //Swap\n int temp = input[i];\n

    input[i] = input[0];\n input[0] = temp;\n\n heapSize--;\nMaxHeapify(input, heapSize, 0);\n }\n}\n\nprivate static void MaxHeapify

    (int[] input, int heapSize, int index)\n{\n int left = (index + 1) * 2 - 1;\n int right = (index + 1) * 2;\n int largest = 0;\n\n if (left < heapSize && input[left] > input[index])\n largest = left;\n else\n largest = index;\n\n if (right < heapSize && input[right] > input[largest])\n


    largest = right;\n\n if (largest != index)\n {\n int temp = input[index];\n input[index] = input[largest];\n input[largest] = temp;\n\n MaxHeapify(input, heapSize, largest);\n }\n}",

    "void heapSort(int numbers[], int array_size) {\n int i, temp;\n \n for (i = (array_size / 2); i >= 0; i--) {\n siftDown(numbers, i, array_size - 1);\n }\n \n for (i = array_size-1; i >= 1; i--) {\n // Swap\n temp = numbers[0];\n numbers[0] = numbers[i];\n numbers[i] = temp;\n \n siftDown(numbers, 0, i-1);\n }\n}\n \nvoid siftDown(int numbers[], int root, intbottom) {\n int maxChild = root * 2 + 1;\n \n // Find the biggest child\n if(maxChild < bottom) {\n int otherChild = maxChild + 1;\n // Reversed for stability\n maxChild = (numbers[otherChild] > numbers[maxChild])?otherChild:maxChild;\n } else {\n // Don't overflow\n if(maxChild > bottom) return;\n}\n \n // If we have the correct ordering, we are done.\n if(numbers[root] >=numbers[maxChild]) return;\n \n // Swap\n int temp = numbers[root];\n numbers[root] = numbers[maxChild];\n numbers[maxChild] = temp;\n \n // Tail queue recursion. Will be compiled as a loop with correct compiler switches.\n siftDown(numbers, maxChild, bottom);\n}",

    "public void sort(E[] array) {\n MyHeap max_heap = new MaxHeap(array, array.length, array.length);\n for (int i = 0; i < array.length; i++) {\n max_heap.removeMax();\n }\n}\n\n public void buildHeap() {\n for (int i = num / 2 - 1; i >= 0; i--) {\n siftdown(i);\n }\n}\n \n private void siftdown(int pos) {\n assert (pos >= 0) && (pos < 0) : \"Illegal Heap Position\";\n while (!isLeaf(pos)) {\n int j = leftChild(pos);\n if ((j < (num - 1)) && (heap[j].compareTo(heap[j + 1]) < 0)

    ) {\n j++;\n }\n if (heap[pos].compareTo(heap[j]) >= 0){\n return;\n }\n wap(heap, pos, j);\n pos = j;\n }\n}\n \npublic E removeMax() {\n assert num > 0 : \"Removing From Empty Heap\";\n swap(heap, 0, --num);\n if (num != 0) {\n siftdown(0);\n }\n return heap[num];\n}" ], "key" : 6, "order" : "O(n log n)", "shortTitle" : "Heap Sort", "tileImage" : "images/sorting/heap_1.png", "title" : "Heap Sort"},{ "backgroundImage" : "images/sorting/counting.gif",
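
    A minimal stand-alone heap sort sketch in Java, following the build-max-heap / extract-max procedure described above; helper names are illustrative, and the sample data is the array used in the walkthrough.

    // Build a max-heap in place, then repeatedly move the root behind the heap.
    import java.util.Arrays;

    public class HeapSortSketch {
        static void heapSort(int[] a) {
            int n = a.length;
            for (int i = n / 2 - 1; i >= 0; i--)  // build-max-heap: heapify all non-leaf nodes
                siftDown(a, i, n);
            for (int end = n - 1; end > 0; end--) {
                swap(a, 0, end);                  // move current maximum behind the heap
                siftDown(a, 0, end);              // restore the heap property on the shrunken heap
            }
        }

        private static void siftDown(int[] a, int root, int size) {
            while (2 * root + 1 < size) {
                int child = 2 * root + 1;
                if (child + 1 < size && a[child + 1] > a[child]) child++;  // pick the larger child
                if (a[root] >= a[child]) return;
                swap(a, root, child);
                root = child;
            }
        }

        private static void swap(int[] a, int x, int y) { int t = a[x]; a[x] = a[y]; a[y] = t; }

        public static void main(String[] args) {
            int[] data = {87, 34, 67, 20, 18, 56, 40};   // the example from the description
            heapSort(data);
            System.out.println(Arrays.toString(data));   // [18, 20, 34, 40, 56, 67, 87]
        }
    }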

    "description" : "Counting sort is a linear time sorting algorithm used to sort items when they belong to a fixed and finite set. Integers which lie in a fixed interval, say k1 to k2, are examples of such items.\n\nThe algorithm proceedsby defining an ordering relation between the items from which the set to be sorted is derived (for a set of integers, this relation is trivial).Let the set tobe sorted be called A. Then, an auxiliary array with size equal to the number ofitems in the superset is defined, say B. For each element in A, say e, the algorithm stores the number of items in A smaller than or equal to e in B(e). If thesorted set is to be stored in an array C, then for each e in A, taken in reverse order, C[B[e]] = e. After each such step, the value of B(e) is decremented.\n\nThe algorithm makes two passes over A and one pass over B. If size of the rangek is smaller than size of input n, then time complexity=O(n). Also, note that it is a stable algorithm, meaning that ties are resolved by reporting those eleme

    nts first which occur first.\n\nThis visual demonstration takes 8 randomly generated single digit numbers as input and sorts them. The range of the inputs are from 0 to 9.\n\nAnalysis:\nBecause the algorithm uses only simple for loops, without recursion or subroutine calls, it is straightforward to analyze. The initialization of the Count array, and the second for loop which performs a prefix sumon the count array, each iterate at most k + 1 times and therefore take O(k) time. The other two for loops, and the initialization of the output array, each take O(n) time. Therefore the time for the whole algorithm is the sum of the timesfor these steps, O(n + k).\n\nBecause it uses arrays of length k + 1 and n, thetotal space usage of the algorithm is also O(n + k).For problem instances in whi


    ch the maximum key value is significantly smaller than the number of items, counting sort can be highly space-efficient, as the only storage it uses other thanits input and output arrays is the Count array which uses space O(k)", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time and Stability", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "static void Sort(int[] a, int numValues) {\n int[] counts = new int[numValues];\n for(int i=0; i < a.Length; i++) {\n counts[a[i]]++;\n }\n int outputPos = 0;\n for (int i=0; i < numValues; i++) {\n for (int j=0; j < counts[i]; j++) {\n a[outputPos] = i;\n outputPos++;\n }\n }\n}",

    "#include \n#include \n\nvoid counting_sort(int a[], unsigned int length, int num_values) {\n unsigned int i;\n unsigned int j, output_pos;\n int* counts = malloc(sizeof(int) * num_values);\n if (counts == NULL) {\n printf(\"Out of memory\n\");\n exit(-1);\n

    }\n for(i=0; i max)\n max = a[i];\n else if (a[i] < min)\n min = a[i];\n }\n

    int numValues = max - min + 1;\n int[] counts = new int[numValues];\n for (int i = 0; i < a.length; i++) {\n counts[a[i]-min]++;\n }\n int outputPos = 0;\n for (int i = 0; i < numValues; i++) {\n

    for (int j = 0; j < counts[i]; j++) {\n a[outputPos] = i+min;\n outputPos++;\n }\n }\n}" ], "key" : 7, "order" : "O(n)", "shortTitle" : "Counting Sort",
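
    A minimal stable counting sort sketch in Java, using the prefix-sum (cumulative count) formulation from the description above. The key range [0, k) is assumed to be known in advance; names are illustrative.

    // Stable counting sort for keys in [0, k).
    import java.util.Arrays;

    public class CountingSortSketch {
        static int[] countingSort(int[] a, int k) {
            int[] count = new int[k + 1];
            for (int key : a) count[key + 1]++;                    // frequency counts, shifted by one
            for (int r = 0; r < k; r++) count[r + 1] += count[r];  // prefix sums = start positions
            int[] out = new int[a.length];
            for (int key : a) out[count[key]++] = key;             // stable: equal keys keep their order
            return out;
        }

        public static void main(String[] args) {
            int[] data = {4, 2, 2, 8, 3, 3, 1};
            System.out.println(Arrays.toString(countingSort(data, 10)));  // [1, 2, 2, 3, 3, 4, 8]
        }
    }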

    "tileImage" : "images/sorting/counting.gif", "title" : "Couting Sort"},{ "backgroundImage" : "images/sorting/bucket.jpg", "description" : "Bucket sort runs in linear time on the average. It assumesthat the input is generated by a random process that distributes elements uniformly over the interval [0, 1).\n\nThe idea of Bucket sort is to divide the interval [0, 1) into n equal-sized subintervals, or buckets, and then distribute the ninput numbers into the buckets. Since the inputs are uniformly distributed over(0, 1), we don't expect many numbers to fall into each bucket. To produce the output, simply sort the numbers in each bucket and then go through the bucket inorder, listing the elements in each.\n\nThe code assumes that input is in n-element array A and each element in A satisfies 0


    stinct values; in exchange, it gives up counting sort's O(n + M) worst-case behavior.\n\nThe time complexity of bucket sort is: O(n + m) where: m is the range input values, n is the total number of values in the array. Bucket sort beats allother sorting routines in time complexity. It should only be used when the range of input values is small compared with the number of values. In other words, occasions when there are a lot of repeated values in the input. Bucket sort worksby counting the number of instances of each input value throughout the array. It then reconstructs the array from this auxiliary data. This implementation hasa configurable input range, and will use the least amount of memory possible.\n\nAds\n\nAdvantages:\n\n the user knows the range of the elements;\n time complexity is good compared to other algorithms.\n\nDisadvantages:\n\n you arelimited to having to know the greatest element;\n extra memory is required;\n\nConclusion:\n\nIf you know the range of the elements, the bucket sort is quite good, with a time complexity of only O(n+k). At the same time, this is a majordrawback, since it is a must to know the greatest elements. It is a distribution sort and a cousin of radix sort.", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Descripes Sorting Process. With Different Purpose Sorting and Different Running Time", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" },

    "algorithms" : [ "public IList BucketSort(IList iList)\n{\nIList _iList = iList;\nif (iList == null || iList.Count 0) { maxValue = item;}\n if ((_item).CompareTo(minValue) < 0) { minValue = item; }\n}\nArrayList[]bucket = new ArrayList[(int)maxValue - (int)minValue + 1];\nforeach (var item in iList)\n{\n bucket[(int)item] = new ArrayList();\n}\nforeach (var item in iList)\n{\n bucket[(int)item - (int)minValue].Add(item);\n}\niList.Clear();\nforeach (var node in bucket)\n{\n if (node.Count > 0)\n {\n foreach(var element in node) { iList.Add(element); }\n }\n}\nreturn iList;\n}",

    "void bucketSort(int array[], int n) {\n int i, j;\n int count[n];\n for(i=0; i < n; i++) {\n count[i] = 0;\n }\n for(i=0; i < n; i++) {\n (count[array[i]])++;\n }\n for(i=0,j=0; i < n; i++) {\n for(; count[i

    ]>0; (count[i])--) {\n array[j++] = i;\n }\n }\n}","public static int[] bucketSort(int[] arr) {\n int i, j;\nint[] count = new int[arr.length];\n for (i = 0; i < arr.length; i++) {\n

    count[arr[i]]++;\n }\n for (i = 0, j = 0; i < count.length; i++) {\nfor (; count[i] > 0; (count[i])--) {\n arr[j++] = i;\n

    }\n }\n return arr;\n}" ], "key" : 8, "order" : "O(n)", "shortTitle" : "Bucket Sort", "tileImage" : "images/sorting/bucket.jpg", "title" : "Bucket Sort"},
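
    The snippets above implement a counting-style variant; as a complement, here is a minimal sketch of the classical bucket sort from the description: inputs assumed uniform in [0, 1), one bucket per input element, each bucket sorted and then concatenated. Names are illustrative.

    // Bucket sort for doubles assumed to lie in [0, 1).
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class BucketSortSketch {
        static void bucketSort(double[] a) {
            int n = a.length;
            List<List<Double>> buckets = new ArrayList<>();
            for (int i = 0; i < n; i++) buckets.add(new ArrayList<>());
            for (double x : a) buckets.get((int) (x * n)).add(x);  // bucket index = floor(n * x)
            int pos = 0;
            for (List<Double> b : buckets) {
                Collections.sort(b);                               // each bucket is expected to be tiny
                for (double x : b) a[pos++] = x;
            }
        }

        public static void main(String[] args) {
            double[] data = {0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12};
            bucketSort(data);
            System.out.println(Arrays.toString(data));
        }
    }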

    { "backgroundImage" : "images/sorting/RadixSort1.jpg", "description" : "Radix sort is one of the linear sorting algorithms for integers. It functions by sorting the input numbers on each digit, for each of the digits in the numbers. However, the process adopted by this sort method is somewhat counterintuitive, in the sense that the numbers are sorted on the least-significant digit first, followed by the second-least significant digit and so on till the most significant digit.\n\nTo appreciate Radix Sort, consider the following analogy: Suppose that we wish to sort a deck of 52 playing cards (the different suits can be given suitable values, for example 1 for Diamonds, 2 for Clubs, 3for Hearts and 4 for Spades). The 'natural' thing to do would be to first sort


    the cards according to suits, then sort each of the four seperate piles, and finally combine the four in order. This approach, however, has an inherent disadvantage. When each of the piles is being sorted, the other piles have to be kept aside and kept track of. If, instead, we follow the 'counterintuitive' aproach offirst sorting the cards by value, this problem is eliminated. After the first step, the four seperate piles are combined in order and then sorted by suit. If astable sorting algorithm (i.e. one which resolves a tie by keeping the number obtained first in the input as the first in the output) it can be easily seen thatcorrect final results are obtained.\n\nAs has been mentioned, the sorting of numbers proceeds by sorting the least significant to most significant digit. For sorting each of these digit groups, a stable sorting algorithm is needed. Also, the elements in this group to be sorted are in the fixed range of 0 to 9. Both ofthese characteristics point towards the use of Counting Sort as the sorting algorithm of choice for sorting on each digit (If you haven't read the descriptionon Counting Sort already, please do so now).\n\nThe time complexity of the algorithm is as follows: Suppose that the n input numbers have maximum k digits. Thenthe Counting Sort procedure is called a total of k times. Counting Sort is a linear, or O(n) algorithm. So the entire Radix Sort procedure takes O(kn) time. Ifthe numbers are of finite size, the algorithm runs in O(n) asymptotic time.\n\nA graphical representation of the entire procedure has been provided in the attached applet. This program takes 8 randomly generated 3 digit integers as input and sorts them using Radix Sort.\n\nAnalysis\n\nThe running time depends on the stable used as an intermediate sorting algorithm. When each digits is in the range 1 to k, and k is not too large, COUNTING_SORT is the obvious choice. In case

    of counting sort, each pass over n d-digit numbers takes O(n + k) time. There are d passes, so the total time for Radix sort is O(d(n + k)) = O(dn + dk). When d is constant and k = O(n), Radix sort runs in linear time.\n", "group" : { "backgroundImage" : "images/sorting/bubblepass.png", "description" : "Set of Algorithms Describing Sorting Processes, with Different Purposes and Different Running Times", "groupImage" : "images/sorting/sort_header.png", "key" : "Sorting", "shortTitle" : "Sorting", "title" : "Sorting" }, "algorithms" : [ "public void RadixSort(int[] a)\n{ \n // helper array \

    n int[] t=new int[a.Length]; \n // number of bits our group will be long \n int r=4; // try to set this also to 2, 8 or 16 to see if it is quicker or not\n // number of bits of a C# int \n int b=32; \n // counting and prefix arrays\n // (note dimensions 2^r which is the number of all possible values of a r-bit number) \n int[] count=new int[1


    }\n else\n { // The queue for digit d is empty\nhead[d] = temp; // Point to the new head\n tail[d]

    = temp; // Point to the new tail\n }\n }\n d=0;\nwhile (head[d]==NULL)\n d++;\n L = head[d];\n temp =

    tail[d];\n d++;\n while (d < 10)\n {\n if (head[d]!=NULL)\n {\n temp->next = head[d];\ntemp = tail[d];\n }\n d++;\n }\n m = m * 1

    0;\n return L;\n}","// LSD radix sort\npublic static void sort(String[] a, int W) {

    \n int N = a.length;\n int R = 256; // extend ASCII alphabet size\n String[] aux = new String[N];\n\n for (int d = W-1; d >= 0; d--) {\n //sort by key-indexed counting on dth character\n\n // compute frequency counts\n int[] count = new int[R+1];\n for (int i = 0; i < N; i++)\n count[a[i].charAt(d) + 1]++;\n\n // compute cumulates\n

    for (int r = 0; r < R; r++)\n count[r+1] += count[r];\n\n // move data\n for (int i = 0; i < N; i++)\n aux[count[a[i].charAt(d)]++] = a[i];\n\n // copy back\n for (int i = 0; i < N; i++)\n a[i] = aux[i];\n }\n}" ], "key" : 9, "order" : "O(n)", "shortTitle" : "Radix Sort", "tileImage" : "images/sorting/RadixSort1.jpg", "title" : "Radix Sort"

    },
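
    A minimal LSD radix sort sketch in Java for non-negative integers, using a stable counting sort per decimal digit as the description above recommends. The choice of base 10 and the names are illustrative, not the document's own code.

    // LSD radix sort on non-negative ints: one stable counting pass per decimal digit.
    import java.util.Arrays;

    public class RadixSortSketch {
        static void radixSort(int[] a) {
            int max = 0;
            for (int x : a) max = Math.max(max, x);
            for (int exp = 1; max / exp > 0; exp *= 10)
                countingSortByDigit(a, exp);
        }

        private static void countingSortByDigit(int[] a, int exp) {
            int[] count = new int[10];
            int[] out = new int[a.length];
            for (int x : a) count[(x / exp) % 10]++;
            for (int d = 1; d < 10; d++) count[d] += count[d - 1];  // cumulative end positions
            for (int i = a.length - 1; i >= 0; i--)                 // reverse pass keeps stability
                out[--count[(a[i] / exp) % 10]] = a[i];
            System.arraycopy(out, 0, a, 0, a.length);
        }

        public static void main(String[] args) {
            int[] data = {170, 45, 75, 90, 802, 24, 2, 66};
            radixSort(data);
            System.out.println(Arrays.toString(data)); // [2, 24, 45, 66, 75, 90, 170, 802]
        }
    }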

    { "backgroundImage" : "images/math/sieve.png", "description" : "Sieve of Eratosthenes is used to find prime numbers up to some predefined integer n. For sure, we can just test all the numbers in range from 2 to n for primality using some approach, but it is quite inefficient. Sieveof Eratosthenes is a simple algorithm to find prime numbers. Though, there are better algorithms exist today, sieve of Eratosthenes is a great example of the sieve approach.\nAlgorithm\n\nFirst of all algorithm requires a bit array isComposite to store n - 1 numbers: isComposite[2 .. n]. Initially the array contains zeros in all cells. During algorithm's work, numbers, which can be represented ask * p, where k >= 2, p is prime, are marked as composite numbers by writing 1 ina corresponding cell. Algorithm consists of two nested loops: outer, seeking for unmarked numbers (primes), and inner, marking primes' multiples as composites.

    \n\n For all numbers m: 2 .. n, if m is unmarked:\n add m to the primes list;\n mark all its multiples that are less than or equal to n (k * m <= n, k >= 2);\n otherwise, if m is marked, then it is a composite number.\n\nModification\n\nNotice that the algorithm can start marking multiples beginning from the square of a found unmarked number. Indeed,\n\nFor m = 2, the algorithm marks\n\n 2 * 2 3 * 2 4 * 2 5 * 2 6 * 2 7 * 2 ...\n\nFor m = 3,\n\n 2 * 3\n\nwas already marked on the m = 2 step.\n\nFor m = 5,\n\n 2 * 5 3 * 5 4 * 5 (= 10 * 2)\n\nwere already marked on the m = 2 and m = 3 steps.\n\nA rigorous proof is omitted, but you can easily prove it on your own. The modified algorithm is as follows:\n\n For all numbers m: 2 .. sqrt(n), if m is unmarked:\n add m to p


    rimes list;\n mark all its multiples, starting from its square, that are less than or equal to n (k * m <= n, k >= m);\n otherwise, if m is marked, then it is a composite number;\n check all numbers in the range sqrt(n) .. n. All unmarked numbers found are primes; add them to the list.\n\nExample\n\nApply the sieve of Eratosthenes to find all primes in range 2..100.\nNB: Shown in The Algorithm Image", "group" : { "backgroundImage" : "images/math/math_1.gif", "description" : "Set of Mathematical based Algorithms", "groupImage" : "images/math/math_header.png", "key" : "Mathematics", "shortTitle" : "Math", "title" : "Mathematics" }, "algorithms" : [ "// Normal Code C#\nbool[] notPrime = new bool[total];\nnotPrime[0] = true;\nnotPrime[1] = true;\nfor (int i = 2; i cur);\n pc.RemoveAll(i => i != cur && i % cur == 0);\n}",

    "#include \n#include \n#define SIZE 10000000\n#define MAX (int)(SIZE-3)/2\nusing namespace std;\n \nint primes[MAX + 1];\nbitset bset;\n \nvoid setPrimes()\n{\n for(int i = 0; i * i


    (the third line is \"1\", \"2\", \"1\")\n etc!\nBut what happens with 11 ^ 5? Simple! The digits just overlap [1 5 10 10 5 1] = [1 (5+1) (0+1) 0 5 1] = 161051 = 11 ^ 5\n\nFibonacci Sequence:\nTry this: make a pattern by going up and then along, then add up the values (as illustrated) ... you will get the FibonacciSequence.\n(The Fibonacci Sequence starts \"1, 1\" and then continues by addingthe two previous numbers, for example 3+5=8, then 5+8=13, etc)\n\nNB: Check thealgorithm image and try to make sense with the colored pixels\n\nSymmetrical:\nAnd the triangle is also symmetrical. The numbers on the left side have identical matching numbers on the right side, like a mirror image.", "group" : { "backgroundImage" : "images/math/math_1.gif", "description" : "Set of Mathematical based Algorithms", "groupImage" : "images/math/math_header.png", "key" : "Mathematics", "shortTitle" : "Math", "title" : "Mathematics" }, "algorithms" : [ "using System;\nusing System.Collections.Generic;\nusing System.Text;\n\nnamespace PascalTriangle\n{\n class PascalTriangle\n {\nstatic void Main(string[] args)\n {\n System.Console.WriteLi

    ne(\"Pascal Triangle Program\");\n System.Console.Write(\"Enter the number of rows: \");\n string input = System.Console.ReadLine();\n

    int n = Convert.ToInt32(input);\n for (int y = 0; y < n; y++)\n {\n int c = 1;\n for (int q = 0; q = 0\n max [vi +c[i-1, w-wi], c[i-1, w]} if i>0 and w >= wi\n\nThis says that the value ofthe solution to i items either include ith item, in which case it is vi plus asubproblem solution for (i - 1) items and the weight excluding wi, or does not include ith item, in which case it is a subproblem

    s solution for (i - 1) items a

    nd the same weight. That is, if the thief picks item i, thief takes vi value, and thief can choose from items w - wi, and get c[i - 1, w - wi] additional value.On other hand, if thief decides not to take item i, thief can choose from item1,2, . . . , i- 1 upto the weight limit w, and get c[i - 1, w] value. The betterof these two choices should be made.\n\nAlthough the 0-1 knapsack problem, theabove formula for c is similar to LCS formula: boundary values are 0, and othervalues are computed from the input and \"earlier\" values of c. So the 0-1 knapsack algorithm is like the LCS-length algorithm given in CLR for finding a longest common subsequence of two sequences.\n\nThe algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = and w = . It stores the c[i, j] values in the table, thatis, a two dimensional array, c[0 . . n, 0 . . w] whose entries are computed ina row-major order. That is, the first row of c is filled in from left to right,

    then the second row, and so on. At the end of the computation, c[n, w] containsthe maximum value that can be picked into the knapsack.\n\nAnalysis\n\nThis dynamic-0-1-kanpsack algorithm takes O(nw) times, broken up as follows: q(nw) times tofill the c-table, which has (n +1).(w +1) entries, each re

    uiring q(1) time to compute. O(n) time to trace the solution, because the tracing process starts in row n of the table and moves up 1 row at each step.", "group" : { "backgroundImage" : "images/dynamic/dp_group.jpg", "description" : "Set of Basic Algorithms related to Dynamic ProgrammingConcept. Illustrates this concept and give valuable examples", "groupImage" : "images/dynamic/dynamic_header.png", "key" : "Dynamic Programming", "shortTitle" : "Dynamic Programming", "title" : "Dynamic Programming"

    }, "algorithms" : [ "public static void Run()\n{\n var n = I.Length;

    \n for (var i = 1; i


    \n }\n MaxValue = M[n][W];\n}","#include\n#include\n\nvoid knapsack(int n,in

    t W);\nint n,i,w,W;\nint weight[50],v[50];\nint C[50][50];\n\nvoid knapsack(intn,int W)\n{\n for(i = 1; i
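
    The knapsack snippets above are truncated, so here is a minimal stand-alone Java sketch of the 0-1 knapsack DP: c[i][w] is the best value using the first i items under weight limit w, filled row by row as described. Names and the example data are illustrative.

    // 0-1 knapsack dynamic programming table, O(nW) time and space.
    public class KnapsackSketch {
        static int knapsack(int[] weight, int[] value, int W) {
            int n = weight.length;
            int[][] c = new int[n + 1][W + 1];               // row 0 and column 0 stay 0
            for (int i = 1; i <= n; i++) {
                for (int w = 0; w <= W; w++) {
                    c[i][w] = c[i - 1][w];                   // option 1: skip item i
                    if (weight[i - 1] <= w)                  // option 2: take item i, if it fits
                        c[i][w] = Math.max(c[i][w], value[i - 1] + c[i - 1][w - weight[i - 1]]);
                }
            }
            return c[n][W];                                  // maximum achievable value
        }

        public static void main(String[] args) {
            int[] w = {2, 3, 4, 5};
            int[] v = {3, 4, 5, 6};
            System.out.println(knapsack(w, v, 5));           // 7 (take the items of weight 2 and 3)
        }
    }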


    , R] would have been the interval for that substring rather than its current value. Thus we \"reset\" and compute a new [L, R] by comparing S[0...] to S[i...] and get Z[i] at the same time (Z[i] = R - L + 1).\n Otherwise, i = min(Z[k], R - i + 1) because S[i...] matches S[k...] for at least R - i + 1 characters(they are in the [L, R] interval which we know to be a prefix-substring). Now we have a few more cases to consider.\n If Z[k] < R - i + 1, then there is nolonger prefix-substring starting at S[i] (or else Z[k] would be larger), meaningZ[i] = Z[k] and [L, R] stays the same. The latter is true because [L, R] only changes if there is a prefix-substring starting at S[i] that extends beyond R, which we know is not the case here.\n If Z[k] >= R - i + 1, then it is possiblefor S[i...] to match S[0...] for more than R - i + 1 characters (i.e. past position R). Thus we need to update [L, R] by setting L = i and matching from S[R +1] forward to obtain the new R. Again, we get Z[i] during this.\n\nThe process computes all of the Z values in a single pass over the string, so we're done. Correctness is inherent in the algorithm and is pretty intuitively clear.\n\nAnalysis\nWe claim that the algorithm runs in O(n) time, and the argument is straightforward. We never compare characters at positions less than R, and every time wematch a character R increases by one, so there are at most n comparisons there.Lastly, we can only mismatch once for each i (it causes R to stop increasing), so that's another at most n comparisons, giving O(n) total.", "group" : { "backgroundImage" : "images/strings/string_group.gif", "description" : "Set of Basic Algorithms related to string processing and matching",

    "groupImage" : "images/strings/strings_header.png", "key" : "String Processing", "shortTitle" : "String Processing", "title" : "String Processing" }, "algorithms" : [ "NOT FOUND",

    "int L = 0, R = 0;\nfor (int i = 1; i < n; i++) {\n if (i > R){\n L = R = i;\n while (R < n && s[R-L] == s[R]) R++;\n z[i] = R-L; R--;\n } else {\n int k = i-L;\n if (z[k] < R-i+1) z[i] = z[k];\n else {\n L = i;\n while (R < n && s[R-L] == s[R]) R++;\n z[i] = R-L; R--;\n }\n }\n}\n\nint maxz = 0, res = 0;\nfor (int i = 1; i < n; i++) {\n if(z[i] == n-i && maxz >= n-i) { res = n-i; break; }\n maxz = max(maxz, z[i]);\n}",

    "public int findPrefixLength (char [] text, int a, int b)\n{\nint len = 0;\n for(int i = 0; i + a < text.length && i + b < text.length; i++)\n {\n if (text[i + a] == text[i + b]) len++;\n else break;\n }\n return len;\n}\n \npublic void calcZbox (String word)\n{\n text= word.toCharArray ();\n z = new int [text.length];\n int l=0;\n int r=0;\n \n if (z.length > 1) z[1] = findPrefixLength (text, 0, 1);\n elsereturn;\n \n if (z[1] > 0)\n {\n l = 1;\n r = z[1];\n }\n \n for (int j = 2; j < text.length; j++)\n {\n if (j > r) //case 1\n {\n z[j] = findPrefixLength (text, 0, j);\nif (z[j] > 0)\n {\n l = j;\n r = j + z[j] - 1;\n }\n } \n else //case 2\n {\n

    int k = j - l;\n if (z[k] < r - j + 1) //case 2a\n {\nz[j] = z[k];\n }\n else //case 2b\n

    {\n int p = findPrefixLength (text, r - j + 1, r + 1);\nz[j] = r - j + 1 + p;\n l = j;\n r = j

    + z[j] - 1;\n }\n }\n }\n}" ], "key" : 501, "order" : "O(|S|)", "shortTitle" : "Z Algorithm", "tileImage" : "images/strings/z.png", "title" : "Z Algorithm"}
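
    To complement the snippets above, here is a minimal Java sketch of the Z algorithm using the [L, R] window maintenance described in the text. Setting z[0] = n is a convention chosen here, and the names are illustrative.

    // Z array: z[i] = length of the longest substring starting at i that is also a prefix of s.
    import java.util.Arrays;

    public class ZAlgorithmSketch {
        static int[] zArray(String s) {
            int n = s.length();
            int[] z = new int[n];
            z[0] = n;                                        // convention used in this sketch
            int L = 0, R = 0;                                // rightmost prefix-match window [L, R)
            for (int i = 1; i < n; i++) {
                if (i < R) z[i] = Math.min(R - i, z[i - L]); // reuse information inside the window
                while (i + z[i] < n && s.charAt(z[i]) == s.charAt(i + z[i]))
                    z[i]++;                                  // extend the match explicitly
                if (i + z[i] > R) { L = i; R = i + z[i]; }   // window moved right: update it
            }
            return z;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(zArray("aabxaab")));  // [7, 1, 0, 0, 3, 1, 0]
        }
    }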


    ]