Overview of This Blog

Over the past few weeks of tech talks, we have learned what time complexity is and why it is important as we construct algorithms. Some time complexities are fast (e.g., O(1), O(n)), while others add much more time for each extra element of a list (e.g., O(n^2), O(2^n)). There are also various kinds of sorting algorithms, some being more efficient than others. This blog serves to review all of these concepts so that anyone can get a quick recap of everything that we have learned so far in Data Structures 1.

Time Complexities

Constant: O(1)

This kind of time complexity is probably the easiest one to understand. Constant time complexity means that an algorithm takes the same amount of time to execute regardless of the number of inputs/elements in the list. For example, if we wanted to find the fourth element of a list of numbers, we could do so as shown in the code below:

num_list = [17,29,42,69,100,206] # list of numbers
print(num_list[3])
69

We could make the list smaller or larger, but it will not change the amount of time that it takes to execute the program.
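
To see this for ourselves, we could time the same lookup on lists of very different sizes (a rough sketch using Python's timeit module; the exact numbers will vary from machine to machine):

import timeit

small_list = list(range(10))
large_list = list(range(1_000_000))

# Indexing takes roughly the same time no matter how long the list is
print(timeit.timeit(lambda: small_list[3], number=1_000_000))
print(timeit.timeit(lambda: large_list[3], number=1_000_000))

Both timings should come out nearly identical, which is exactly what O(1) predicts.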

Linear: O(n)

Linear time complexity means that the time it takes for an algorithm to execute increases linearly with the number of inputs/elements in the list. For this time complexity and all of the ones below, the number of inputs used does matter, unlike with the constant time algorithm. For example, if we wanted to iterate through a list of numbers to check whether each one falls within a certain range and to print only those numbers, the more elements we use, the longer the program will take to execute.

num_list = [10,37,82,99,150,111,69420,525]


def display():
    for num in num_list:
        if 100 < num < 1000:  # keep only numbers strictly between 100 and 1000
            print(num)

display()
150
111
525

While the difference in time is not visible here (since the list has only 8 elements), the larger and larger we make this list, the more time it will take the computer to iterate through each number of the list to see which ones are greater than 100 but less than 1000.
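
As a rough illustration of this growth (the list sizes here are arbitrary, and the actual times depend on the machine), we could time the same kind of filtering loop on increasingly large lists and watch the time grow roughly in proportion to the size:

import time

for size in (10_000, 100_000, 1_000_000):
    data = list(range(size))
    start = time.perf_counter()
    count = 0
    for num in data:
        if 100 < num < 1000:
            count += 1  # count matches instead of printing, to keep the timing honest
    print(size, time.perf_counter() - start)

Each tenfold increase in the list size should make the loop take roughly ten times longer.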

Quadratic: O(n^2)

O(n^2) time complexity describes an algorithm whose execution time grows with the square of the number of inputs. For example, if the input array/list has 2 elements, the algorithm will do 4 operations; if it has 10 elements, it will do 100 operations; and so on. Clearly, this kind of time complexity is not the most efficient, as each extra element adds many more operations and therefore much more execution time.

num_list = [16,102,301]

n = len(num_list)

for i in range(n):      # outer loop: one pass per element
    for j in range(n):  # inner loop: n more steps for every pass
        print(j)
        
0
1
2
0
1
2
0
1
2

As you can see, the outer loop (i) runs 3 times since there are 3 elements in the list. However, since we also have a nested for loop (j) within that loop, the inner loop runs 3 more times for each pass of the outer loop, printing the indices 0 through 2 three times (9 operations in total). Even though it may not look like a big difference now, if we continue to increase the size of the list, it will get to the point where we lose all patience because of how long it takes.
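
One way to see the squared growth directly is to count the inner-loop operations instead of printing them (a small sketch along the same lines as the loop above):

def count_operations(n):
    operations = 0
    for i in range(n):
        for j in range(n):
            operations += 1  # one unit of work per inner iteration
    return operations

for n in (3, 10, 100):
    print(n, count_operations(n))  # 9, 100, and 10000 operations

The operation count is always n squared, which is exactly what the O(n^2) label describes.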

Logarithmic: O(log n)

Logarithmic time complexity is one of the faster time complexities of the ones mentioned so far (behind only constant). With logarithmic time complexity, as the input/list size becomes bigger, the number of operations (i.e., time) grows very slowly, with n representing the number of elements. For example, a binary search algorithm demonstrates logarithmic time complexity, as shown below:

def binary_search(num_list, x, low, high):
    # Repeat until the pointers low and high meet each other
    while low <= high:

        mid = low + (high - low)//2

        if num_list[mid] == x:
            return mid

        elif num_list[mid] < x:
            low = mid + 1

        else:
            high = mid - 1

    return -1

num_list = [12,100,203,532,69420,102710]

x = int(input("Enter any number from the number list: 12,100,203,532,69420,102710"))
print(f'You inputted {x}')


result = binary_search(num_list, x, 0, len(num_list)-1)

if result != -1:
    print(f'{x} is present at index {result}')
else:
    print(f'{x} is not present')
You inputted 532
532 is present at index 3

It is also worth noting that the number of iterations the loop needs to find x is at most the log base 2 of the length of the list, rounded up to the next whole number. In this case, since the length of the list was 6 and log base 2 of 6 rounds up to 3, the program had to make 3 guesses before arriving at x, which was 532 here.
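
To check this for ourselves, we could add a step counter to the loop (a hypothetical variation on the binary_search function above, with the low and high pointers handled internally):

import math

def binary_search_counting(num_list, x):
    low, high = 0, len(num_list) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = low + (high - low)//2
        if num_list[mid] == x:
            return mid, steps
        elif num_list[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

index, steps = binary_search_counting([12,100,203,532,69420,102710], 532)
print(index, steps)             # 3 3
print(math.ceil(math.log2(6)))  # 3, matching the prediction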

Exponential: O(2^n)

Exponential time complexity is probably something you can imagine: as the number of elements n increases, the number of operations performed (i.e., time) increases exponentially. Obviously, this kind of time complexity is highly inefficient, as every single new element increases the time dramatically compared to linear or quadratic. One example of this kind of runaway growth is a program that finds all possible permutations of a list of numbers, as demonstrated below:

import itertools

num_list = [1, 2, 3, 4]

permutations = itertools.permutations(num_list)

for i in permutations:
    print(i)
(1, 2, 3, 4)
(1, 2, 4, 3)
(1, 3, 2, 4)
(1, 3, 4, 2)
(1, 4, 2, 3)
(1, 4, 3, 2)
(2, 1, 3, 4)
(2, 1, 4, 3)
(2, 3, 1, 4)
(2, 3, 4, 1)
(2, 4, 1, 3)
(2, 4, 3, 1)
(3, 1, 2, 4)
(3, 1, 4, 2)
(3, 2, 1, 4)
(3, 2, 4, 1)
(3, 4, 1, 2)
(3, 4, 2, 1)
(4, 1, 2, 3)
(4, 1, 3, 2)
(4, 2, 1, 3)
(4, 2, 3, 1)
(4, 3, 1, 2)
(4, 3, 2, 1)

As we increase the number of inputs in the number list, the program will take more and more time to generate all possible permutations of the original list. In fact, an n-element list has n! permutations, so this particular program technically grows even faster than O(2^n).
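
For a textbook example of true O(2^n) growth, we could look at the naive recursive Fibonacci function instead (a quick sketch for comparison, not part of the permutation program above):

def fib(n):
    # Each call spawns two more calls, so the call tree roughly doubles
    # every time n goes up by one, giving about O(2^n) calls in total
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55, returns instantly
print(fib(30))  # 832040, noticeably slower even though n only tripled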

Sorting Algorithms

Sorting algorithms are algorithms that take a list of items and put them into a specific order. For example, with a list of numbers, a sorting algorithm may sort the numbers from highest to lowest or lowest to highest. The next code segments all fulfill the same purpose, but by different means and with different time complexities.
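
For instance, Python's built-in sorted() function already does this in either direction:

num_list = [3, 11, 6, 1, 5]

print(sorted(num_list))                # [1, 3, 5, 6, 11], lowest to highest
print(sorted(num_list, reverse=True))  # [11, 6, 5, 3, 1], highest to lowest

The algorithms below show what has to happen under the hood to produce that kind of result.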

Bubble Sort Algorithm: O(n^2)

The bubble sort algorithm is an algorithm that compares two consecutive elements of a list. If the two elements (for example, two numbers) are in the right order, nothing needs to be changed. If the numbers are not in the right order, however, they will be swapped, so that the larger numbers gradually "bubble" toward the end of the list.

num_list = [3,11,6,1,5,10,8,4,0,2,7]

def bubble_sort(num_list):
    n = len(num_list)

    for i in range(n):
        # Assume the list is sorted until a swap proves otherwise
        already_sorted = True

        # The last i elements are already in place after i passes
        for j in range(n - i - 1):
            if num_list[j] > num_list[j + 1]:
                # Swap the out-of-order pair
                num_list[j], num_list[j + 1] = num_list[j + 1], num_list[j]
                already_sorted = False

        # If no swaps happened on this pass, the list is sorted; stop early
        if already_sorted:
            break

    return num_list

bubble_sort(num_list)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11]

Bubble sort gets the job of sorting the list done; however, it has an average time complexity of O(n^2), meaning that the number of operations performed will get larger and larger as the list in question gets larger and larger. Thus, even though bubble sort is one of the simpler algorithms to understand, it may not be the best option when it comes to sorting.

Merge Sort Algorithm: O(n log n)

With the merge sort algorithm, the algorithm takes one array and splits it in half, then recursively sorts each half so that its elements are in the desired order. Once that has been done, the algorithm merges the two sorted halves back together so that the desired order is achieved for the entire array/list.

def merge_sort(arr):
    # A list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr

    # Split the list in half and sort each half recursively
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])
    right_half = merge_sort(arr[mid:])

    return merge(left_half, right_half)

def merge(left_half, right_half):
    # Walk both sorted halves with two pointers,
    # always taking the smaller front element
    result = []
    i, j = 0, 0
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:
            result.append(left_half[i])
            i += 1
        else:
            result.append(right_half[j])
            j += 1

    # One half may still have elements left over; append them all
    result += left_half[i:]
    result += right_half[j:]

    return result

num_list = [3,11,6,1,5,10,8,4,0,2,7]
merge_sort(num_list)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11]

Merge sort, just like bubble sort, is another way of sorting a list into the desired order. However, merge sort is better than bubble sort in terms of time complexity, with an average time complexity of O(n log n). This means that as the input size increases, the number of operations performed grows far more slowly for a merge sort algorithm than it would for a bubble sort algorithm.
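
As a rough sanity check (the exact times will vary from machine to machine), we could time the bubble_sort and merge_sort functions defined above on copies of the same random list and watch the gap appear:

import random
import time

data = [random.randint(0, 1_000_000) for _ in range(5_000)]

start = time.perf_counter()
bubble_sort(data.copy())  # sorts its copy in place
print("bubble sort:", time.perf_counter() - start)

start = time.perf_counter()
merge_sort(data.copy())   # returns a new sorted list
print("merge sort:", time.perf_counter() - start)

On a list of a few thousand elements, merge sort should already finish many times faster, and the gap keeps widening as the list grows.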

Reflection

I created this blog to help remind myself of the different time complexities and where they fall in terms of least efficient to most efficient. However, after completing the blog, I hope that it will serve not only as a study tool for myself, but also for other students taking the APCSP exam in May, as I would want anyone who is in my position to have something like this to quickly recap everything that we have learned so far in this class. Furthermore, understanding the different time complexities and sorting algorithms is probably going to be an important concept for College Board, which is why I added explanations to all of the code segments that I included, so that I or anyone else viewing this can understand what is going on.