Saturday, March 29, 2014

#4 SLOG Sorting and Efficiency

In the past few weeks we mainly focused on a variety of sorting algorithms and runtime efficiency analysis. Now I will write about my understanding of sorting and efficiency in the following aspects:


Why is analysis of efficiency so important?
This kind of analysis estimates the number of "steps" (roughly, how many basic operations are executed) that an algorithm takes to complete a task. Usually we focus on the "Worst Case". This is because when a program is completed and ready to enter a rather competitive market, we want it to satisfy as many consumers as possible. The question is: what would be a major concern for users choosing a program to help with their task? Undoubtedly the answer is efficiency. So we are responsible for making it clear to users what the worst case of our product is, so they have a basic idea of which program they should choose (and so the product is more competitive if the runtime is good).

A useful analysis of different algorithms: O(x)
Up to now, we have learnt how to analyze sorting algorithms from the lectures and the reading material. Any sorting algorithm can be mathematically shown to be O(x) (where O means Big-oh notation and x is some function or a constant). Thankfully, Big-oh notation was an overlapping concept in both 148 and 165, which helped me understand the intuition behind it as well as complementing my grasp of the subject from different angles. In general, Big-oh notation is essentially used to describe the performance of an algorithm with regard to its execution time. For example, searching a binary search tree takes approximately O(log(n)) time (instead of the O(n) of a linear search, which is much slower). The reason is that a BST is already organized by the rule: everything in the left subtree is less than the node's value and everything in the right subtree is greater. So the major advantage of binary search trees over other data structures is that they eliminate half of the remaining data at every step of the search, which turns out to be very efficient.
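
To make the halving concrete, here is a minimal sketch of the same idea on a sorted Python list (binary search; the function name is my own):

def binary_search(lst, target):
    """Return the index of target in the sorted list lst, or -1 if absent."""
    low, high = 0, len(lst) - 1
    while low <= high:
        mid = (low + high) // 2
        if lst[mid] == target:
            return mid
        elif lst[mid] < target:
            low = mid + 1                 # discard the left half
        else:
            high = mid - 1                # discard the right half
    return -1

print(binary_search([1, 2, 5, 10, 11, 13, 32], 11))  # 4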

How to express the time complexity of some common sorting algorithms
As you know, the complexity of an algorithm is the amount of time it needs to execute; this is called its time complexity. To summarize the reading material, the design of a sorting algorithm is based on one of two approaches:
1. Incremental rule
In this approach we repeatedly advance an index and insert the next element into its proper position among the elements already processed.
2. Divide and Conquer rule
In this approach we divide the array into smaller segments, sort the elements within each small segment, then combine them back together.

We can start by classifying all the sorting algorithms we have learnt into these two types.

1. Incremental rule:
selection sort
The idea of selection sort is very simple: repeatedly find the smallest (or greatest) element in the unsorted part of the array and place it in its final position, building up the sorted sequence.
Unlike insertion sort, selection sort does the same amount of work on every input: even a totally reversed array like [32, 13, 11, 10, 5, 2, 1] takes no more comparisons than a sorted one.
*Note that there is a while loop inside a for loop in the code
Runtime: O(n²)
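
Here is a minimal sketch of the idea (written with two nested for loops rather than the while-inside-for version from lecture; the function name is my own):

def selection_sort(lst):
    """Sort lst in place by repeatedly selecting the smallest remaining element."""
    for i in range(len(lst)):
        smallest = i
        for j in range(i + 1, len(lst)):      # scan the unsorted portion
            if lst[j] < lst[smallest]:
                smallest = j
        lst[i], lst[smallest] = lst[smallest], lst[i]  # move it into final position

lst = [32, 13, 11, 10, 5, 2, 1]
selection_sort(lst)
print(lst)  # [1, 2, 5, 10, 11, 13, 32]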

insertion sort
Insertion sort is a simple sorting algorithm: a comparison sort in which the sorted array (or list) is built one entry at a time. It assumes that a portion of the array is already sorted and inserts the next element into that sorted segment, and so on. This makes it especially useful for arrays that are almost sorted; on arbitrary input it is relatively inefficient.
The worst case for insertion sort is a totally reversed array, like [32, 13, 11, 10, 5, 2, 1], since every new element has to be shifted all the way to the front.
*Note that there is a while loop inside a for loop in the code
Runtime: O(n²)
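
To make this concrete, here is one possible version matching the while-inside-for shape mentioned above (a sketch, not the lecture code):

def insertion_sort(lst):
    """Sort lst in place by inserting each element into the sorted prefix."""
    for i in range(1, len(lst)):
        value = lst[i]
        j = i
        while j > 0 and lst[j - 1] > value:   # shift larger elements right
            lst[j] = lst[j - 1]
            j -= 1
        lst[j] = value                        # drop value into its slot

On the reversed array above, every value gets shifted all the way to the front, which is where the O(n²) worst case comes from.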



2. Divide and Conquer rule:
merge sort
This algorithm first divides the unsorted list into n sub-lists of one element each, then repeatedly merges sorted sub-lists together until only one sorted list remains.
There is no special worst case for merge sort, which means the worst case is of the same order as the best and average cases.
Runtime: O(n log(n))

The log(n) factor comes from repeatedly halving the list, the same kind of halving we saw with binary search.
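
Here is a minimal sketch of merge sort (the helper name merge is my own):

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]      # one side may have leftovers

def merge_sort(lst):
    """Return a sorted copy of lst."""
    if len(lst) <= 1:                         # base case: already sorted
        return lst
    mid = len(lst) // 2                       # divide...
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))  # ...and conquer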

quick sort
Quick sort works by choosing one element as the pivot (for example the last element), comparing the remaining elements against it to partition them into a smaller group and a larger group, placing the pivot into its proper position between them, and then sorting each group the same way.
The worst case for quick sort would be:
  1. All elements of array are the same
  2. Array is already sorted  
  3. And the pivot selected is the leftmost or rightmost element
In the cases above, the total number of comparisons is n(n−1)/2
Worst-case runtime: O(n²) (on average, quick sort is O(n log(n)))
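
Here is a minimal sketch of the idea (this version uses the first element as the pivot instead of the last, which shows the same worst-case behaviour on sorted input):

def quick_sort(lst):
    """Return a sorted copy of lst."""
    if len(lst) <= 1:                         # base case
        return lst
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

On an already sorted array, smaller is always empty, so each call only shrinks the problem by one element and the n(n−1)/2 comparisons add up to O(n²).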

Saturday, March 1, 2014

#3 SLOG Recursion

Hi guys, today's post is going to be about one of programming's amazing techniques, recursion.

Basically, recursive means that the function calls itself. Recursion works because each function call on the stack has its own formal parameters and local variables, so each call can run on its own without interfering with the others.

In the first few weeks, I was quite confused about the topic of recursion. As a beginner in Python, the concept and the essence of recursion gave me a hard time. After working on A1, I finally grasped the essential point: it really helps if you start by thinking about the base case first. This simplest case is sometimes the key to the solution. After you have implemented the base case, think of the recursive call as a solved, simpler version of the same problem and use it like you would a variable. However complex the input is, the calls will always bottom out at the simple case. It reminds me of simple induction from CSC165, where we first prove the case n = 1 and then pass from n to n + 1 to prove the whole claim. Apart from the induction step itself, the idea is almost the same as recursion: deal with the base case first.
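
A tiny example of this base-case-first thinking (factorial is my own choice here, not from the course notes):

def factorial(n):
    """Return n! for a non-negative integer n."""
    if n == 0:                      # base case, like n = 1 in simple induction
        return 1
    return n * factorial(n - 1)     # trust the simpler case, like the induction step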

Recursion gives a function the ability to call itself. At first I was like, "how can something have itself inside it?" Now I know the "itself" inside the function is a smaller version of the same problem, which eventually shrinks down to the base case. Just like a tree: a branch can have another branch on it, which can have another branch on it, and the smallest twigs are the base case. That is recursion.

For example, suppose you want to find the sum of all numbers in a list that may also contain nested lists. Back in CSC108 we might have thought of writing a for loop, or ten, or thirty. But what if there are a million lists nested inside the variable? What do you think the solution to this problem would be?


Undoubtedly the answer is recursion. What would take a million lines of looping code can be written in just a few. Magic, isn't it? That's why recursion is so important for computer scientists: it enables us to solve problems concisely and elegantly. It might not be easy to think recursively at first, but in the end it's worth it. Try to break down the procedure and come up with the base case first, and everything will work out soon. Learning to use recursion not only makes your programs more concise, it also sharpens your own thinking skills.
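
Here is a minimal sketch of that nested-list sum (the function name is my own):

def nested_sum(lst):
    """Return the sum of all numbers in a possibly nested list."""
    total = 0
    for item in lst:
        if isinstance(item, list):
            total += nested_sum(item)   # recursive case: go one level deeper
        else:
            total += item               # base case: a plain number
    return total

print(nested_sum([1, [2, [3, 4]], 5]))  # 15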

Sunday, February 2, 2014

#2 SLOG Assignment 1

In the past week, I have been working on A1 with my partner. We've heard from senior students that it's quite a challenging assignment, so we started three weeks before the due date in order not to cram.
While the first part is very easy (simply write all the required headers), we got stuck figuring out how to raise Exceptions under specific circumstances. After a series of attempts, we found out that the order of the if statements is extremely crucial to which Exception Python ends up raising. Then we continued with the code in ConsoleController; it was difficult for us to design the input format for the user as well as to return the proper error messages when the input doesn't meet the specifications given in the instructions or denotes an invalid move. Also, resuming the game whenever the user gives a wrong move/input took us some time to figure out. Though it was a bit rough here and there, we completed all the tasks before step 5 and are going to write the recursive functions next week. Before that, we should review the class notes from the last three weeks and get more familiar with the concept of recursion.
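
To illustrate why the order of the checks matters, here is a hypothetical sketch (the function, messages, and checks are made up for illustration, not our actual A1 code):

def parse_move(line):
    """Return (origin, destination) parsed from a line like '1 2'."""
    parts = line.split()
    if len(parts) != 2:                   # this check has to come first
        raise ValueError('enter exactly two stool numbers')
    if not (parts[0].isdigit() and parts[1].isdigit()):
        # indexing parts[0] and parts[1] is only safe because the
        # length was already checked above
        raise ValueError('stool numbers must be integers')
    return int(parts[0]), int(parts[1])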

Saturday, January 25, 2014

#1 SLOG Object Oriented Programming

Hello to all and welcome to my weekly blog dedicated to the course CSC148. ;D

In the last three weeks, we have been given a preview of the whole course and have mainly focused on classes, stacks, raising Exceptions, and recursion. I had no programming experience before entering university, so all of my knowledge is based on what I learned in last term's CSC108 course. Despite the fact that I'm not as proficient at programming as some of my peers who had already been exposed to Java or C, I found it not hard to catch up after the winter hiatus.

So this week, I will talk about one of the most crucial parts of Python: Object Oriented Programming (aka OOP). We can in a way say that the whole design philosophy of Python is a clean programming style. I learnt that lesson the hard way last term, when my bad formatting resulted in deducted marks. The PEP8 documents on the course website really helped me a lot with the style standards for programming. Attending tutorials also let me learn more about the procedural style of programming.

Back to the topic: OOP is really useful because it eliminates unnecessary code. It greatly enhances a program's flexibility and compatibility, and makes it much simpler to maintain. For example, suppose we want to write a set of if statements that call other functions:


if value == 'one':
    # do something
    function1()
elif value == 'two':
    # do something else
    function2()
elif value == 'three':
    # do something else
    function3()

However, we can achieve the same goal in an OOP style (in Python, functions themselves are objects) and end up with shorter, cleaner code:

def function1():
    print('You chose one.')

def function2():
    print('You chose two.')

def function3():
    print('You chose three.')

# switch is our dictionary of functions
switch = {
    'one': function1,
    'two': function2,
    'three': function3,
}

# choice can either be 'one', 'two', or 'three'
choice = input('Enter one, two, or three: ')

# call one of the functions
try:
    result = switch[choice]
except KeyError:
    print("I didn't understand your choice.")
else:
    result()

The magic happens in the line result = switch[choice] which returns one of our function objects (or raises a KeyError). 

This is just one very crude example from the site (http://www.voidspace.org.uk/python/articles/OOP.shtml) to show the advantages of OOP. And as we learn more in the following weeks, I'm sure the importance of knowing OOP will become more apparent.