
Comparing sorts

SadCeliac

done hsc yay
Joined
Sep 23, 2021
Messages
2,490
Location
Sydney <3
Gender
Male
HSC
2023
Hi all,

I always seem to gloss over the key areas of each algorithm when answering questions. e.g. "Compare and contrast the workings of the bubble sort, insertion sort, and selection sort in terms of efficiency of algorithms"

Could someone help me answer this question?
 

constexpr

Active Member
Joined
Oct 11, 2022
Messages
66
Gender
Male
HSC
2023
The key word here is efficiency, so you need to talk about time complexities: Big O notation and Big Ω notation. In simpler words, you need to talk about how fast the sorts are.

For each sort there exists a best-case and a worst-case time complexity. Big O notation refers to the worst-case scenario, Big Ω notation refers to the best-case scenario, and Big Θ notation refers to the average time complexity.

This makes sense because, let's say you had a perfectly sorted array [1, 2, 3, 4, 5] and you did a bubble sort on it, no changes would be made.
However, if it's jumbled up, it's obviously going to take longer.
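To make that concrete, here's a rough Python sketch (illustrative, not syllabus pseudocode) of a bubble sort with the common early-exit check; it's that check which lets it finish in one pass over a pre-sorted array:

```python
def bubble_sort(items):
    items = list(items)  # work on a copy of the input
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # each pass "bubbles" the largest unsorted value to the end
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # no swaps means already sorted: stop early
    return items

print(bubble_sort([1, 2, 3, 4, 5]))  # sorted input: one pass, no swaps
```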



With this in mind, you can say something along the lines of:

On average, selection, bubble, and insertion sort have a similar time complexity of Θ(n^2). In their best-case scenarios, however, bubble sort and insertion sort both operate at Ω(n), whereas selection sort remains Ω(n^2) across all cases.

I don't know how many marks this question is; if it's more than 2-3, you'd probably have to explain why this is the case and mention some specifics of the algorithms that lead to this conclusion.
 

constexpr

Active Member
Joined
Oct 11, 2022
Messages
66
Gender
Male
HSC
2023
I think the syllabus only talks about Big O notation, so maybe that's all you have to talk about
 

SadCeliac

done hsc yay
Joined
Sep 23, 2021
Messages
2,490
Location
Sydney <3
Gender
Male
HSC
2023
I don't know how many marks this question is, if it's more than 2-3, you'd probably have to talk about why this is the case and mention some specifics in their algorithms that lead to this conclusion.
5 marks**

But that makes much more sense - thanks

What about a quick comprehensive summary of each one? I always get them confused
 

dav53521

Well-Known Member
Joined
Mar 23, 2022
Messages
317
Gender
Male
HSC
2022
I think the syllabus only talks about Big O notation, so maybe that's all you have to talk about
I don't think the syllabus mentions Big O at all; instead, Davis decided to talk about it, which is why some trial papers have Big O questions.

I would assume that the question is testing your knowledge of how each sort works, and how something like a bubble sort will be slower on large arrays due to how it 'bubbles' elements.
 

SadCeliac

done hsc yay
Joined
Sep 23, 2021
Messages
2,490
Location
Sydney <3
Gender
Male
HSC
2023
What about a quick comprehensive summary of each one? I always get them confused
I've always just understood it as:
Bubble: the largest value in the unsorted part of the array "bubbles" up, with the algorithm comparing each item with its neighbour and swapping if current > neighbour
Selection: "select" the smallest item in the unsorted part of the array and swap it with the first item in the unsorted part of the array
Insertion: "insert" each item into its correct position in the sorted part of the array by shifting the larger sorted items up one position to make room
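The insertion step is probably the easiest to see in code. A rough Python sketch (illustrative only, names are my own): larger sorted elements get shifted one slot right until the right gap opens up.

```python
def insertion_sort(items):
    items = list(items)  # work on a copy of the input
    for i in range(1, len(items)):
        key = items[i]  # next unsorted element to place
        j = i - 1
        # shift larger sorted elements one slot right to make room
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key  # "insert" it into its correct position
    return items

print(insertion_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```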
 

brent012

Webmaster
Webmaster
Joined
Feb 26, 2008
Messages
5,291
Gender
Male
HSC
2011
The main ways of comparing sorts would be run-time (worst case, best case + average, constant factors + real world performance), memory usage, stability, parallelisation and if they are online or offline. For SDD, I would probably focus on best case run time, the behaviour for pre-sorted arrays and the constant factors.

The sort algorithms in the SDD syllabus are quite basic, so worst case run time performance (often the focus in an undergraduate class) is not too useful, as they can all regress to needing to compare each pair (O(n^2)). The best case run time for arrays that are already sorted (or mostly sorted) is interesting for comparison however, as this is a common input.

Constant factors and real world performance are where certain algorithms appear to be academically better or equivalent when using Big-O analysis, but require a lot of work at each step (high constant factors) or cause frequent CPU cache misses. CPU caching, I believe, would be beyond the scope of SDD.

For simple sorting algorithms, memory usage will only differ if an algorithm works in place (uses the input array) or requires creating a new array. Parallelisation would be out of scope of the SDD syllabus, and none of these algorithms use divide and conquer.

Stability is an important concept for sorting. In SDD, you are generally sorting numbers, but in practice we usually sort objects by one or more fields on the object. When sorting objects, if there is a tie between two objects, often we want to retain the original order. This may be out of scope for SDD.
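Stability is easy to demonstrate in Python, whose built-in sorted() is stable (it's Timsort under the hood). The records here are invented for illustration; the point is that the two 85s keep their original relative order:

```python
# records of (name, score); alice and carol tie on 85
records = [("alice", 85), ("bob", 92), ("carol", 85)]

# sort by score only; a stable sort preserves input order for ties
by_score = sorted(records, key=lambda r: r[1])

print(by_score)  # [('alice', 85), ('carol', 85), ('bob', 92)]
```

An unstable sort would be free to emit carol before alice, since it only compares scores.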

Online vs offline compares if an algorithm is able to operate on data as it is received, or if we require the entire input upfront. This is useful if the data is being streamed from a database or some kind of live data feed.

What about a quick comprehensive summary of each one? I always get them confused
Selection Sort is probably the most naive algorithm. It builds up a sorted portion of the array by trying to pick the smallest element from the unsorted part of the array and extending the sorted part of the array by swapping this selected element with the unsorted element currently in that position.

As it has to find the minimum at each iteration, even for a pre-sorted array, it still has an Ω(n^2) best case.

Since it swaps elements across the array (e.g. first element may move to the end), it is not a stable sort as it won't retain the original order for ties. As it requires finding the global minimum (or maximum) of the array, it is also an offline algorithm as it is not robust to a new minimum arriving.
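A minimal Python sketch of selection sort (names are illustrative). The long-distance swap at the end of each pass is what can break stability, and the inner minimum-finding loop runs in full even on sorted input:

```python
def selection_sort(items):
    items = list(items)  # work on a copy of the input
    n = len(items)
    for i in range(n - 1):
        # find the index of the smallest element in the unsorted part
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # swap it to the front of the unsorted part,
        # possibly moving items[i] a long way from its neighbours
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([3, 1, 2]))  # [1, 2, 3]
```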

Insertion sort processes elements one by one and builds up a sorted part of the array. The element being processed is put in place in the sorted portion of the array, and some sorted elements are shifted up a position to fill the gap created by removing the first unsorted element.

If the array is already in order, the process of "building" the sorted array will just involve a single Ω(n) scan over the whole array with no shifting. It is an online algorithm, as receiving new elements while we are sorting does not affect the operation of the algorithm.

A particularly naive implementation of either Selection or Insertion sort might make a new array entirely for the sorted portion (i.e. not in place), but they can both be implemented in-place and therefore are fairly memory efficient.

Bubble sort scans through the input comparing adjacent pairs and swapping any that are out of order. Each pass results in one extra element at the end being in the correct position. Repeat until no swapping occurs (upper bounded by the input count), which means it can terminate after a single pass for a pre-sorted array.

Bubble sort on paper has the same runtime characteristics as an insertion sort, but in practice bubble sort has higher constant factors as it performs more comparisons. This video demonstrates it well:

Since bubble sort uses the input array, it's an in-place sort and memory efficient. Since it compares and swaps only adjacent pairs, it is a stable sort. Since bubble sort generally has an optimisation that assumes part of the array is sorted and fixed in place, this makes it an offline algorithm.
 

SadCeliac

done hsc yay
Joined
Sep 23, 2021
Messages
2,490
Location
Sydney <3
Gender
Male
HSC
2023
woah thanks so much <3333

and just out of curiosity - in the real world, merge sort is the most commonly used / quickest???
 

brent012

Webmaster
Webmaster
Joined
Feb 26, 2008
Messages
5,291
Gender
Male
HSC
2011
and just out of curiosity - in the real world, merge sort is the most commonly used / quickest???
Merge sort is commonly introduced in undergraduate algorithms classes as it's a good example of divide and conquer, and of how sorting can have a worst case of O(n log n) rather than O(n^2). But quick sort is faster than merge sort in practice even though it's academically worse (its worst case is O(n^2)), since it has lower constants and better cache performance.

But in the real world, people usually just use the sorts that are built into the language/framework. These usually use hybrid/adaptive sorts that mix multiple algorithms (e.g. depending on input size, or as a fallback if certain heuristics are met) - algorithms like this are of more interest to engineers than academics.

The most popular hybrid sorts are Timsort (used in Python), which is built off merge sort and insertion sort, and Introsort (used in some C++ libraries and .NET), which starts with a quick sort, falls back to a heap sort when recursion gets too deep, and uses an insertion sort if there is a small number of elements.

So insertion sort actually sees real world use due to the good characteristics mentioned before.
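As a toy illustration of that hybrid idea (the cutoff value and names here are invented, not taken from any real library): a merge sort that hands small slices to insertion sort, since insertion sort's low constant factors win on small inputs.

```python
CUTOFF = 16  # illustrative threshold, not a tuned value

def hybrid_sort(items):
    items = list(items)
    if len(items) <= CUTOFF:
        # small input: insertion sort's low overhead wins
        for i in range(1, len(items)):
            key, j = items[i], i - 1
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key
        return items
    # larger input: divide and conquer with a merge sort
    mid = len(items) // 2
    left, right = hybrid_sort(items[:mid]), hybrid_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps the merge stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Real libraries add far more (run detection in Timsort, recursion-depth tracking in Introsort), but the small-array fallback is the common thread.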
 
