Why is analysis of efficiency so important?
This kind of analysis estimates the number of "steps" (roughly, how many lines of code are executed) an algorithm takes to complete a task. Usually we focus on the worst case. This is because when a program is finished and ready to enter a rather competitive market, we want it to satisfy as many consumers as possible. The question is: what would be a major concern for users choosing a program to help with their task? Undoubtedly, the answer is efficiency. So we are responsible for making the worst case of our product clear, so that users have a basic idea of which program they should choose (a good worst-case runtime also makes the product more competitive).
A useful analysis of different algorithms: O(x)
Up to now, we have learnt how to analyze sorting algorithms from the lecture and the reading material. Any sorting algorithm can be mathematically proven to be O(x) (where O means Big-Oh notation and x is some function or a constant). Thankfully, Big-Oh notation was an overlapping concept in both 148 and 165, which helped me understand the intuition behind it as well as strengthening my grasp of the subject from different angles. In general, Big-Oh notation is essentially used to describe the performance of an algorithm with regard to its execution time. For example, searching a reasonably balanced binary search tree takes approximately O(log(n)) time (instead of the O(n) of a linear scan, which is much slower on large inputs). The reason is that the BST keeps its data ordered by the rule: every key in the left subtree is less than the node's key, and every key in the right subtree is greater. So the major advantage of binary search trees over other data structures is that each comparison eliminates half of the remaining data, which turns out to be very efficient.
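The halving idea is easiest to see in a plain binary search over a sorted list; this is a minimal sketch (not code from the course material) of how each comparison discards half of the remaining range:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.

    Each comparison discards half of the remaining range, so the
    search takes O(log n) steps instead of the O(n) of a linear scan.
    """
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1   # target can only be in the right half
        else:
            high = mid - 1  # target can only be in the left half
    return -1
```

Searching a BST follows the same logic, except the "halving" is done by walking left or right from the root instead of adjusting indices.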
How to express the time complexity of some common sorting algorithms
As you know, the complexity of an algorithm is the amount of time it needs to execute; this is called time complexity. To summarize the reading material, the design of sorting algorithms falls into two types:

1. Incremental rule
In this approach we advance the index one step at a time, inserting each element into its proper position as we go.
2. Divide and Conquer rule
In this approach we divide the array into smaller segments, sort the elements in these small segments, then combine them all together.
We can start by classifying all the sorting algorithms we have learnt into these two types.
1. Incremental rule:
selection sort
The idea of selection sort is very simple: repeatedly find the smallest (or greatest) element in the unsorted part of the array and place it in its final position, building the sorted sequence one element at a time.
A commonly cited worst case for selection sort is a totally reversed array, like [32][13][11][10][5][2][1], although selection sort actually performs the same number of comparisons on every input of a given size.
*Note that there is one loop nested inside another in the code, which is where the quadratic runtime comes from
Runtime: O(n²)
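A minimal sketch of selection sort (this version uses two nested for loops rather than a while inside a for, but the step count is the same):

```python
def selection_sort(lst):
    """Sort lst in place by repeatedly selecting the smallest remaining element."""
    n = len(lst)
    for i in range(n):
        # Find the index of the smallest element in lst[i:].
        min_index = i
        for j in range(i + 1, n):
            if lst[j] < lst[min_index]:
                min_index = j
        # Swap it into its final position.
        lst[i], lst[min_index] = lst[min_index], lst[i]
    return lst
```

The outer loop runs n times and the inner loop scans the remaining unsorted portion, giving roughly n²/2 comparisons, hence O(n²).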
insertion sort
Insertion sort is a simple comparison-based sorting algorithm in which the sorted array (or list) is built one entry at a time.
The worst case for insertion sort is a totally reversed array, like [32][13][11][10][5][2][1]
*Note that there is a while loop inside a for loop in the code
Runtime: O(n²)
It assumes that a prefix of the array is already sorted and inserts the next element into that sorted segment, and so on. This makes it quite fast for arrays that are already almost sorted, but relatively inefficient in general.
2. Divide and Conquer rule:
merge sort
This algorithm first divides the unsorted list into n single-element sub-lists (each trivially sorted), then repeatedly merges sub-lists together until the whole list is sorted.
There is no distinct worst case for merge sort, which means the worst case is equal to the best case and the average case.
Runtime: O(n log(n))
The repeated halving of the input is what produces the logarithmic factor in the runtime; a similar pattern of splitting a problem into subproblems can be seen when calculating the Fibonacci numbers recursively.
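A minimal sketch of merge sort (returning a new list rather than sorting in place, which is a common way to write it):

```python
def merge_sort(lst):
    """Return a new sorted copy of lst using divide and conquer."""
    if len(lst) <= 1:
        return lst  # a list of 0 or 1 elements is already sorted
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])    # sort each half recursively
    right = merge_sort(lst[mid:])
    # Merge the two sorted halves in linear time.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is non-empty leftovers
    merged.extend(right[j:])
    return merged
```

There are O(log n) levels of halving and each level does O(n) merging work, which is where the O(n log(n)) bound comes from regardless of the input order.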
quick sort
The quick sort algorithm works by choosing a pivot (here, the last element of the array), comparing the other elements against it, and partitioning the array so that the pivot lands in its proper final position; the two sides are then sorted the same way.
The worst cases for quick sort would be:
- All elements of array are the same
- Array is already sorted
- And the pivot selected is the leftmost or rightmost element
Runtime: O(n²) in the worst case (the average case is O(n log(n)))
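A minimal sketch of quick sort pivoting on the last element, as described above (this list-comprehension version trades the in-place partition for clarity):

```python
def quick_sort(lst):
    """Return a new sorted copy of lst, pivoting on the last element."""
    if len(lst) <= 1:
        return lst
    pivot = lst[-1]
    # Partition the remaining elements around the pivot.
    smaller = [x for x in lst[:-1] if x <= pivot]
    larger = [x for x in lst[:-1] if x > pivot]
    # The pivot sits in its final position between the two parts.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)
```

On an already sorted array (or one of all-equal elements) every partition puts all remaining elements on one side, so the recursion shrinks by only one element per call, which is exactly the O(n²) worst case listed above.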