Sunday, March 30, 2014

Sorting and Efficiency

Selection sort, while easy to implement, does not scale well to large lists. With a time complexity of O(n^2), it is only a good choice when a trivial algorithm is needed to sort a small list, so it should be avoided when possible.
def select(L):
    """Produce a copy of L in sorted order
    >>> select([3, 1, 2])
    [1, 2, 3]
    """
    L1 = L[:]
    for i in range(len(L1)):
        # find the smallest remaining item and swap it into position i
        m = min(L1[i:])
        j = L1.index(m, i)
        L1[i], L1[j] = L1[j], L1[i]
    return L1
Quick sort is generally faster than selection sort, with an average time complexity of O(n log n), though in rare cases this degrades to O(n^2). It is slightly more difficult to implement than selection sort. In particular, when the first element is used as the pivot, quick sort hits its O(n^2) worst case on a list that is already sorted. Because of this, I think this algorithm is a good choice for medium-sized lists.
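As a rough sketch (my own illustration, not code from the course), a first-element-pivot quick sort can be written like this:

```python
def quick(L):
    """Return a new list with the items of L in sorted order.

    Quick sort sketch with the first element as the pivot; with this
    pivot choice an already-sorted list triggers the O(n^2) worst
    case, since every partition is maximally lopsided.
    """
    if len(L) <= 1:
        return L[:]
    pivot = L[0]
    smaller = [x for x in L[1:] if x < pivot]   # items before the pivot
    larger = [x for x in L[1:] if x >= pivot]   # items after the pivot
    return quick(smaller) + [pivot] + quick(larger)
```

On average the two partitions are roughly even, which is where the O(n log n) behaviour comes from.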
Merge sort is the fastest of the three, with a time complexity of O(n log n) even in the worst case. That being said, it is also the most difficult to implement. This makes merge sort ideal for sorting large lists.
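Merge sort repeatedly splits the list in half, sorts each half, and merges the sorted halves back together. A minimal sketch (my own version):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # one of these slices is empty; the other holds the leftovers
    return result + left[i:] + right[j:]

def merge_sort(L):
    """Return a new list with the items of L in sorted order."""
    if len(L) <= 1:
        return L[:]
    mid = len(L) // 2
    return merge(merge_sort(L[:mid]), merge_sort(L[mid:]))
```

Because the list is always split exactly in half, the O(n log n) bound holds even in the worst case.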
Efficiency is a key factor to consider when developing software. However, premature optimization can be counter-productive: it adds unnecessary complexity to code and diverts human effort from other areas of the software, slowing down development overall. Similarly, memoization, while effective in some situations, can also be detrimental by taking up too much space to cache results; it is a trade-off between space and time.
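As a hypothetical illustration of that trade-off, a memoized Fibonacci function answers repeated calls quickly, but its cache holds every result ever computed:

```python
_fib_cache = {0: 0, 1: 1}  # grows forever: the space side of the trade-off

def fib(n):
    """Return the n-th Fibonacci number, reusing cached results.

    Without the cache this takes exponential time; with it, each
    value is computed once at the cost of keeping them all in memory.
    """
    if n not in _fib_cache:
        _fib_cache[n] = fib(n - 1) + fib(n - 2)
    return _fib_cache[n]
```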

Sunday, March 23, 2014

Week10 SLOG

This week we mostly talked about the assignment and big-O notation. For big-O, we mostly talked about the performance of quick sort.
In the first trial of the quick sort code, we chose the first element of the list to be the pivot. This version has a runtime of n^2 in the worst case and n log n on average. However, when the list is already sorted, quick sort performs very badly, and a sorted list of 1000 elements raises a RuntimeError. The problem occurs because Python has to create a stack frame for every element in the list. The fix is easy: just use a random pivot to break the list into two roughly even parts.
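A sketch of the fix, assuming a simple list-based quick sort (my reconstruction, not the exact code from class):

```python
import random

def quick(L):
    """Return a new sorted copy of L.

    Choosing the pivot at random means an already-sorted list no
    longer produces maximally uneven splits, so the recursion stays
    around O(log n) deep on average instead of n deep.
    """
    if len(L) <= 1:
        return L[:]
    pivot = L[random.randrange(len(L))]
    smaller = [x for x in L if x < pivot]
    equal = [x for x in L if x == pivot]   # three-way split avoids
    larger = [x for x in L if x > pivot]   # recursing on duplicates
    return quick(smaller) + equal + quick(larger)
```

With this version, sorting an already-sorted list of 1000 elements runs without hitting the recursion limit.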
Another interesting part we talked about in class is the default value for a function. For example,
def f(n: int, L: list=[]) -> list:
    L.append(n)
    return L
>>> f(7)
[7]
>>> f(17)
[7, 17]
Python creates the default object once, when the function is defined, and the same object is reused on every call; since lists are mutable, the appended values accumulate from call to call. If L were a tuple instead, an error would occur, because tuples are immutable and have no append method. The default value is part of the function object f itself.
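The usual way around this, as far as I know, is to use None as the default and create a fresh list inside the function:

```python
def f(n: int, L: list = None) -> list:
    """Append n to L, creating a new list when none is given."""
    if L is None:
        L = []   # a fresh list on every call, not a shared default
    L.append(n)
    return L
```

Now f(7) returns [7] and a later call f(17) returns [17], because each call gets its own list.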

Sunday, March 16, 2014

Week 9 SLOG

This week the profs posted their design for regex, which was a bit different from mine. To be honest, I can't understand why he used a nested class inside a nested class, if that is the right way to describe it. I spent a lot of time on the exercise and found it a bit harder than the previous two. I had to go back and review how to approach in-order and pre-order traversals. For the second part of the exercise, I think the ideas people discussed on Piazza helped a lot. I used a helper function and the rest of the code was not too hard.

During lectures, we talked about binary search trees and big-O. A binary search tree is very similar to the binary search we talked about in CSC108, but in the form of a tree with children. We also talked about big-O, which relates to efficiency. I am also taking CSC165, and have found big-O not that bad to follow. Overall, this week was not too bad.

Monday, March 10, 2014

Week 8 SLOG

Last week, I focused more on the assignment than on the lectures. I feel like the most important thing we talked about in lecture is the difference between __repr__ and __str__. __repr__ should return an unambiguous representation of the object, ideally the exact form you would type in the shell to recreate it, while __str__ returns a readable form. For example, for the tree TreeList(1, TreeList(1, None, None), None), __repr__ would return TreeList(1, TreeList(1, None, None), None), while __str__ would return a readable picture of the tree.
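For illustration, a hypothetical tree class with both methods (my own sketch, not the class from lecture):

```python
class TreeList:
    """A binary tree node: a value and two optional children."""

    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

    def __repr__(self):
        # unambiguous: the code you would type to recreate the object
        return 'TreeList({!r}, {!r}, {!r})'.format(
            self.value, self.left, self.right)

    def __str__(self, indent=0):
        # readable: one value per line, children indented underneath
        lines = [' ' * indent + str(self.value)]
        for child in (self.left, self.right):
            if child is not None:
                lines.append(child.__str__(indent + 2))
        return '\n'.join(lines)
```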
For the assignment, property was what I learned the most about. A property restricts the user from reassigning values to certain attributes. I was confused at the beginning because I did not realize that I needed to write the getter and setter functions myself, instead of just writing property(getf, setf, delf, and whatever it is). Overall, I think the assignment worked out pretty well, and I look forward to working on part 2.

Saturday, March 1, 2014

Recursion

We have been studying recursion for a couple of weeks. Recursion means defining something in terms of itself, perhaps multiple times, to achieve an objective. In Python, recursion means solving a problem by having a function call itself on smaller versions of the problem. Recursion can simplify code a lot when working with nested lists. During lectures, we saw an example of recursion, tree_burst. At first I found tree_burst really hard to understand and follow, as it was probably the first recursive function I had seen. However, I found it much easier once I traced it a couple of times.

Sometime during a lab we were asked to write recursive functions. I found it really confusing to write in the beginning, but my partner was very helpful, and we were able to finish the lab. Overall, recursion was not too bad for me as soon as I understood how to trace it, I think.

def rec_len(L):
    """Return the number of non-list items in L, including items
    in any lists nested inside L.

    >>> rec_len([1, [2, 3], [4, [5]]])
    5
    """
    acc = 0
    for i in L:
        if isinstance(i, list):
            # i is a nested list: count its items recursively
            acc += rec_len(i)
        else:
            acc += 1
    return acc
The code above is a simple example of recursion: it counts the items in a list, including the items of any lists nested inside it. It calls rec_len recursively so that it eventually goes through all of the nested lists in the given list L.

Sunday, February 23, 2014

Week 6 - Last Week Before Reading Week

I have focused mainly on assignment 1 this week and I am very glad to see things pulling together. The weather is also a lot better, and I am very excited about it.
This week we talked about trees in class, the names of the different parts of trees, and simple definitions of those parts. For example, each non-root node has exactly one parent. We also went over the code for pre-order, in-order, and post-order traversal, and compared the differences between the three ways of visiting the root relative to its subtrees.
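The three traversals differ only in where the root is visited relative to its subtrees. A sketch using a hypothetical minimal node class (my own names, not the lecture code):

```python
class BTNode:
    """A binary tree node for illustration."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(t):
    """Root first, then left subtree, then right subtree."""
    return [] if t is None else [t.value] + preorder(t.left) + preorder(t.right)

def inorder(t):
    """Left subtree, then root, then right subtree."""
    return [] if t is None else inorder(t.left) + [t.value] + inorder(t.right)

def postorder(t):
    """Left subtree, then right subtree, then root last."""
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.value]
```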
I think I will definitely need some time to memorize the parts and names of trees, especially considering that they are included in the mid-term test.

Sunday, February 9, 2014

Week 5

    The snowstorm was terrible this week, yet I managed to go to both of the lectures and I am very proud of myself for doing so. In both of the lectures, Professor Heap focused on leading us through the first assignment.
    On Monday, he led us through the process of moving cheese and explained the basic rules for moving cheeses around. During the lecture, he gave us an example, move_cheeses, which prints the intermediate steps of moving a pile of cheeses from one place to another. I think the function is very helpful and gave me basic information about the general rules of moving cheeses. If there is more than one cheese, the function calls itself recursively to move the cheeses around. I don't think understanding recursive functions is too hard for me. It is not easy, of course, but I can still figure it out after some work.
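From memory, the function looked roughly like this (my reconstruction, so the exact names and details may differ from what was shown in lecture):

```python
def move_cheeses(n, source, intermediate, destination):
    """Print the steps for moving n cheeses from source to
    destination, using intermediate as the spare stool."""
    if n > 1:
        # move all but the bottom cheese out of the way,
        # move the bottom cheese, then move the pile back on top
        move_cheeses(n - 1, source, destination, intermediate)
        move_cheeses(1, source, intermediate, destination)
        move_cheeses(n - 1, intermediate, source, destination)
    else:
        print('Move top cheese from {} to {}'.format(source, destination))
```

Moving n cheeses this way takes 2**n - 1 single-cheese moves.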
    However, writing recursive functions in the lab was a lot more difficult than I expected. In the first two problems, we were asked to trace code for the greatest common divisor and binary representation. Tracing the code was fairly simple, and the lab handout helped to clarify things a lot as well. However, I found it a lot harder when it came to writing the code myself. At first, I was kind of lost when my TA told me that I was not allowed to use a for loop. But later it got clearer, and I was very happy that I could figure things out myself.
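For reference, my own recursive versions of the two lab problems (the lab's actual code may have differed):

```python
def gcd(a, b):
    """Return the greatest common divisor of a and b (Euclid's algorithm)."""
    return a if b == 0 else gcd(b, a % b)

def to_binary(n):
    """Return the binary representation of non-negative n as a string."""
    return str(n) if n < 2 else to_binary(n // 2) + str(n % 2)
```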