diff --git a/content/notes/01-introduction.md b/content/notes/01-introduction.md
new file mode 100644
index 000000000..0410d1c4a
--- /dev/null
+++ b/content/notes/01-introduction.md
@@ -0,0 +1,8 @@
+---
+title: "01-introduction"
+tags:
+---
+
+# 01-introduction
+
+
diff --git a/content/notes/02-union-find-1.md b/content/notes/02-union-find-1.md
new file mode 100644
index 000000000..65d4f6900
--- /dev/null
+++ b/content/notes/02-union-find-1.md
@@ -0,0 +1,8 @@
+---
+title: "02-union-find-1"
+tags:
+---
+
+# 02-union-find-1
+
+
diff --git a/content/notes/03-union-find-2.md b/content/notes/03-union-find-2.md
new file mode 100644
index 000000000..01290b778
--- /dev/null
+++ b/content/notes/03-union-find-2.md
@@ -0,0 +1,8 @@
+---
+title: "03-union-find-2"
+tags:
+---
+
+# 03-union-find-2
+
+
diff --git a/content/notes/04-union-find-3.md b/content/notes/04-union-find-3.md
new file mode 100644
index 000000000..97a7b9a2e
--- /dev/null
+++ b/content/notes/04-union-find-3.md
@@ -0,0 +1,8 @@
+---
+title: "04-union-find-3"
+tags:
+---
+
+# 04-union-find-3
+
+
diff --git a/content/notes/06-analysing-recurive-algorithms.md b/content/notes/06-analysing-recurive-algorithms.md
new file mode 100644
index 000000000..bc6243694
--- /dev/null
+++ b/content/notes/06-analysing-recurive-algorithms.md
@@ -0,0 +1,8 @@
+---
+title: "06-analysing-recurive-algorithms"
+tags:
+---
+
+# 06-analysing-recurive-algorithms
+
+
diff --git a/content/notes/07-mergesort-1.md b/content/notes/07-mergesort-1.md
new file mode 100644
index 000000000..59f09b7a0
--- /dev/null
+++ b/content/notes/07-mergesort-1.md
@@ -0,0 +1,106 @@
+---
+title: "07-mergesort-1"
+tags:
+- cosc201
+- lecture
+---
+
+# 07-mergesort-1
+
+# Divide and conquer
+
+1. pre ⇒ break the problem apart into two or more smaller problems whose sizes add up to at most n
+2. rec ⇒ solve those problems recursively
+3. post ⇒ combine their solutions into a solution of the original problem
+
+## 1 Quicksort
+
+pre ⇒ select a pivot and split the array around it
+
+rec ⇒ apply quicksort to the partitions
+
+post ⇒ not much
+
+designed when sorting in place was important
+
+works best on primitive types, as they can be stored in the fastest memory locations
+
+- memory access can be localised and the comparisons are direct
+- those advantages are limited when sorting objects of reference type
+- in that case each element of the array is just a reference to where the object really is
+- so there are no local access advantages
+
+# Mergesort
+
+another divide and conquer sorting algorithm
+
+pre ⇒ split the array into two pieces of nearly equal size
+
+rec ⇒ sort the pieces
+
+post ⇒ merge the pieces
+
+## 2 Merge
+
+look at the lowest remaining value of each piece
+
+place the lower of the two in the next position of the sorted array
+
+## 3 Implementation
+
+- given: a and b are sorted arrays; m is an array whose size is the sum of their sizes
+- desired outcome: the elements of a and b have been copied into m in sorted order
+
+- maintain indices ai, bi, and mi of the active location in a, b, and m
+- while both ai and bi are valid indices of a and b, find the one which points to the lesser value (breaking ties in favour of a), copy that value into m at mi, and increment mi and whichever of ai or bi was used for the copy
+- once one of ai and bi is out of range, copy the rest of the other array into the remainder of m
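+
+As a quick illustration of the intended behaviour (sample arrays chosen here purely for illustration, and assuming this driver sits in the same class as the `merge` method defined below), a minimal sketch:
+
+```java
+public static void main(String[] args) {
+    int[] a = {2, 5, 9};       // already sorted
+    int[] b = {1, 5, 7, 10};   // already sorted
+    int[] m = merge(a, b);     // merge as defined below
+    // prints [1, 2, 5, 5, 7, 9, 10]; the tie between the two 5s is broken in favour of a
+    System.out.println(java.util.Arrays.toString(m));
+}
+```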
+
+```java
+public static int[] merge(int[] a, int[] b) {
+    int[] m = new int[a.length + b.length];
+    int ai = 0, bi = 0, mi = 0;
+
+    // while both arrays have unused elements, copy across the smaller front element
+    while (ai < a.length && bi < b.length) {
+        if (a[ai] <= b[bi]) m[mi++] = a[ai++];   // ties favour a
+        else m[mi++] = b[bi++];
+    }
+
+    // one array is exhausted; copy whatever remains of the other
+    while (ai < a.length) m[mi++] = a[ai++];
+    while (bi < b.length) m[mi++] = b[bi++];
+
+    return m;
+}
+```
+
+```java
+public static void mergeSort(int[] a) {
+    mergeSort(a, 0, a.length);
+}
+
+// sorts the half-open range a[lo..hi)
+public static void mergeSort(int[] a, int lo, int hi) {
+    if (hi - lo <= 1) return;   // 0 or 1 elements are already sorted
+    int mid = (hi + lo) / 2;
+    mergeSort(a, lo, mid);
+    mergeSort(a, mid, hi);
+    merge(a, lo, mid, hi);
+}
+
+// merges the sorted ranges a[lo..mid) and a[mid..hi) back into a[lo..hi)
+public static void merge(int[] a, int lo, int mid, int hi) {
+    int[] t = new int[hi - lo];
+    // the code from 'merge' above, adjusted so that a[lo..mid) and a[mid..hi) are merged into t
+    int ai = lo, bi = mid, ti = 0;
+    while (ai < mid && bi < hi) {
+        if (a[ai] <= a[bi]) t[ti++] = a[ai++];
+        else t[ti++] = a[bi++];
+    }
+    while (ai < mid) t[ti++] = a[ai++];
+    while (bi < hi) t[ti++] = a[bi++];
+    System.arraycopy(t, 0, a, lo, hi - lo);   // copy back into a
+}
+```
+
+## 4 Complexity
+
+- n is the length of a plus the length of b
+- there is no obvious counter-controlled loop
+- key ⇒ in each of the three loops, mi is incremented by one
+
+- ∴ the total number of loop bodies executed is always n
+- each loop body does a constant amount of work
+- ∴ the total cost is **ϴ(n)**
diff --git a/content/notes/08-mergesort-2.md b/content/notes/08-mergesort-2.md
new file mode 100644
index 000000000..6a3b091f7
--- /dev/null
+++ b/content/notes/08-mergesort-2.md
@@ -0,0 +1,112 @@
+---
+title: "08-mergesort-2"
+tags:
+- cosc201
+- lecture
+---
+
+# 08-mergesort-2
+
+recall the definition of mergesort
+- pre ⇒ split
+- rec ⇒ sort the pieces
+- post ⇒ merge
+
+## 1 Complexity
+
+there are no counter-controlled loops to analyse directly
+
+the pre phase is constant time and the post phase (the merge) is ϴ(n)
+
+so M(n) = ϴ(n) + 2 × M(n/2)
+
+does this even help? what if n is odd?
+
+pretend ϴ(n) is $C \times n$
+
+$$
+\begin{align*}
+M(n) &= C \times n + 2 \times M(n/2) \\
+&= C \times n + 2 \times (C \times (n/2) + 2 \times M(n/4)) \\
+&= C \times (2n) + 4 \times M(n/4) \\
+&= C \times (2n) + 4 \times (C \times (n/4) + 2 \times M(n/8)) \\
+&= C \times (3n) + 8 \times M(n/8) \\
+&\;\;\vdots \\
+&= C \times (kn) + 2^k \times M(n/2^k)
+\end{align*}
+$$
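+
+As a quick sanity check of the pattern, here is the same unrolling with the concrete value $n = 8$ (so $k = 3$):
+
+$$
+\begin{align*}
+M(8) &= 8C + 2 \times M(4) \\
+&= 8C + 2 \times (4C + 2 \times M(2)) = 16C + 4 \times M(2) \\
+&= 16C + 4 \times (2C + 2 \times M(1)) = 24C + 8 \times M(1) \\
+&= C \times (3 \times 8) + 2^3 \times M(1)
+\end{align*}
+$$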
+
+the unrolling ends when we reach a base case, when we get to $n/2^k = 1$ (we could also stop splitting earlier)
+the work done on a base case is bounded by some constant D
+so if $k$ is large enough that $n/2^k$ is a base case, we get
+
+$$
+M(n) = C \times (kn) + 2^k \times D
+$$
+
+how big is $k$?
+
+$k \le \lg(n)$
+
+so:
+$$
+M(n) \le C \times (n \lg(n)) + D \times n = \Theta(n \lg(n))
+$$
+
+which is indeed the complexity of mergesort
+
+> In a divide and conquer algorithm where the pre- and post-processing work is Ο(n) and the division is into parts of size at least cn for some constant c > 0, the total time complexity is Ο(n lg n), and generally ϴ(n lg n).
+
+## 2 Variations of mergesort
+
+unite and conquer: build the sorted array from the bottom up
+
+5 | 8 | 2 | 3 | 4 | 1 | 7 | 6
+
+5 8 | 2 3 | 1 4 | 6 7
+
+2 3 5 8 | 1 4 6 7
+
+1 2 3 4 5 6 7 8
+
+```java
+public static void mergeSort(int[] a) {
+    int blockSize = 1;
+    while (blockSize < a.length) {
+        int lo = 0;
+        // merge adjacent pairs of blocks of length blockSize
+        while (lo + blockSize < a.length) {
+            int hi = lo + 2 * blockSize;
+            if (hi > a.length) hi = a.length;
+            merge(a, lo, lo + blockSize, hi);
+            lo = hi;
+        }
+        blockSize *= 2;
+    }
+}
+```
+
+the outer loop is executed about lg n times, where n is the length of a
+
+the inner loop proceeds until we find a block that "runs out of elements"
+
+lo advances by 2 × blockSize each time, so the inner loop runs about n / (2 × blockSize) times
+
+inside the inner loop is a call to merge, which is ϴ(blockSize)
+
+### 2.1 complexity from bottom up
+
+- $n$ is the number of elements in a
+- the outer loop is executed about lg n times
+- each pass of the outer loop therefore does ϴ(n) work in total, so the overall cost is ϴ(n lg n)
+
+![[Pasted image 20220329114859.png#invert]]
+
+### 2.2 improvements
+some arrays have sections (runs) that are already sorted
+
+you can take advantage of those runs rather than always starting from blocks of size 1
+
+### 2.3 timsort
+used by Python, Java, Rust, etc.
diff --git a/content/notes/cosc-201-lectures.md b/content/notes/cosc-201-lectures.md
index 0354bacc7..efc304b14 100644
--- a/content/notes/cosc-201-lectures.md
+++ b/content/notes/cosc-201-lectures.md
@@ -7,12 +7,6 @@ tags:
 
 # Cosc 201 Lectures
 
-- [[01-introduction]]
-- [[02-union-find-1]]
-- [[03-union-find-2]]
-- [[04-union-find-3]]
-- [[05-induction]]
-- [[06-analysing-recurive-algorithms]]
 - [[07-mergesort-1]]
 - [[08-mergesort-2]]
 - [[09-stacks-and-queues]]