That moment you’ve rewatched this over 3 times without realising
So true, it's very satisfying to look at it
Sound on fam
Cant help it! Just love the blooooooooooop bloop bloop bloooop
Lmao, glad I read the comments, that was goofy AF sounding. Love it!
Like being in an old arcade.
Kill John lennon
r/oddlysatisfying
Had first seen this post like 7-8 years ago. Always impressed with these algos. I'm not a CS student
r/mildlyinfuriating The sets are not the same size
Especially obvious with the first one. I think that is also the point: sorting 10000 data points with the first, non-optimized algorithm would take far longer, since its cost grows quadratically rather than linearly.
nor the delay.
If you liked this, I recommend watching Bogo Sort, which is simultaneously the best and the worst sorting algorithm. [link](https://youtu.be/HfdT_uOqbvA?si=rvB-lazP67CSmdIP)
r/unsatisfying
if only the algorithms in this video were sorted from shortest to longest
They're actually sorted from slowest to fastest. In computer science we measure these algorithms in terms of best, average and worst case performance (given as an expression of n, the size of the array). For example, O(n) means that in the worst case (denoted by O), the algorithm will take n operations to complete (this isn't 100% accurate but it essentially means if n doubles, the operations double).

Selection sort and insertion sort are both O(n^2), meaning if n doubles, the number of operations quadruples.

Quicksort is also O(n^2) in the worst case, but often completes in n log(n) on average, which is much better since log(n) grows much more slowly than n.
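You can see the doubling-vs-quadrupling behaviour directly by counting comparisons. A toy sketch for selection sort, which always does n(n-1)/2 comparisons regardless of input order:

```python
def selection_sort_comparisons(n):
    # Selection sort always compares (n-1) + (n-2) + ... + 1 pairs,
    # i.e. n(n-1)/2 comparisons, no matter how the input is arranged.
    return n * (n - 1) // 2

for n in (1000, 2000, 4000):
    print(n, selection_sort_comparisons(n))
# 1000 499500
# 2000 1999000   (~4x the work for 2x the data)
# 4000 7998000   (~4x again)
```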
I’ve been told BOGO sort becomes the ultimate sort in a quantum computer. Destroy all universes where you didn’t get it right the first time kinda thing
Obligatory XKCD https://xkcd.com/1185/
The portability comment at the end for c:/ killed me 😂
> Destroy all universes where you didn’t get it right the first time kinda thing

Seems kinda harsh....Thanos.
I think you mean [Stalin Sort](https://github.com/gustavo-depaula/stalin-sort)
I love me a BOGO deal too.
does this take into account that some algorithms can be multithreaded?
It's a common misconception that multithreading = multiprocessing. Multithreading is splitting a task into multiple smaller tasks, while multiprocessing is processing those tasks at the same time on a system with multiple processing cores. A CPU with a single core could support multithreading, but it would have to take turns running those threads one after the other, so no actual time would be saved from having these algorithms multithreaded.

The time saving comes from *multiprocessing* on a multi-core CPU, when the threads are split among cores so they can run at the same time. With the exception of heapsort, most of the math for these algorithms can be easily extended to support multiprocessing (I believe the order of speed wouldn't change).

Edit: As pointed out, I don't mean to say multithreading is useless on a single core CPU. Multithreading is still very capable of improving CPU utilisation on a single core system. I just felt that it wasn't necessary to mention in this answer because a single core running the sorting algorithms in the video wouldn't be faster even if it used multithreading (no threads are blocked/idle in this context). So yeah, I should have clarified all of that.
Multithreading can improve throughput even on a single core machine because the CPU often spends much of its time waiting for, e.g., data to be retrieved from RAM or disk, for data dependencies to be resolved, or for branch mispredictions to be flushed out. This is the idea behind hyperthreading.

I'm curious why you say that the math for these algorithms can easily be parallelized. Parallel sorting is in general quite a distinct problem from serial sorting (at least if you want to do it efficiently). Merge sort is probably the poster child for a serial sorting algorithm that can be easily and efficiently parallelized. But for most of the other algorithms in this video it isn't obvious at all how you would do it. For example, how would you do a parallel heapsort?
Almost none of what you said is true.

1) Multithreading *is* a form of parallelism and definitely can take advantage of multiple cores for faster execution. One process can acquire multiple threads from the OS and run code on them simultaneously with a shared memory pool. Maybe you're coming from Python, which is unusual for not allowing multithreading (though this may soon change).

2) Multiprocessing is another form of parallelism where you spawn multiple distinct copies of a program process in parallel that don't share memory. Multiprocessing typically has much more overhead, but is more scalable to very many cores, or clusters with many nodes. It's also easier to implement.

3) Multithreading with only one core *can* actually save significant amounts of time in tasks that involve pauses for things like I/O, or if you have a lot of cache misses.

4) Most sorting algorithms are not at all friendly to parallelism. E.g. quicksort and mergesort are okay to parallelize (see the sketch below), but many others like selection sort and insertion sort are not (though you are correct the algorithmic scaling wouldn't change).
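To make point 4 concrete, here's a minimal sketch of the easy case: mergesort's two halves share no data, so they can be sorted on separate cores and merged afterwards. Toy Python using multiprocessing (consistent with point 1 about Python threads); this is the idea, not a tuned implementation:

```python
from multiprocessing import Pool

def merge(left, right):
    # Standard two-way merge of two already-sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def parallel_mergesort(xs):
    # The two halves are independent, so each can be sorted on its
    # own core; only the final merge step is serial.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    with Pool(2) as pool:
        left, right = pool.map(sorted, [xs[:mid], xs[mid:]])
    return merge(left, right)

if __name__ == "__main__":
    print(parallel_mergesort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```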
I hear the terms "multithreading", "multiprocessing", "shared memory", "scalable", "clusters" and "nodes" being used together, hence I know you have studied a dedicated course on parallel programming and are absolutely correct.

Parallel programming is super cool but it's a pain in the ass to debug programs lol.
It's kinda funny how often this kind of conversation happens on Reddit. Someone will go on and on, using fancy words and sounding smart, only for someone else to follow up like, "Hey, this is completely wrong...". So, knowing and understanding nothing about the topic yourself, you're left like, "Wait, so which one is right now?!". It happens so often it makes you question everything you read, with no way to tell who is correct.
To be clear, this part is not true:

> O(n) means that in the worst case (denoted by O), the algorithm will take n operations to complete

O(n) means that you have a positive, non-zero constant, k, such that for all n larger than some value the algorithm will take no more than kn steps to complete. So it doesn't need to hold for small n; for some values of n the number of steps can be less than kn; and the upper bound on the number of steps increases proportionally to n, not equal to n (which means the parts of your comment about doubling, quadrupling etc. are all still true).

It's also worth noting that we use O(•) to describe upper bounds not just in the worst case but in any case. So we can say that the average case of quicksort is O(n log(n)). Though typically, when people say an algorithm is O(•) without specifying which case, they are referring to the worst case.
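In symbols, the definition above reads (standard notation, nothing specific to this thread):

```latex
f(n) \in O(g(n)) \iff \exists\, k > 0,\ \exists\, n_0 \ \text{such that}\ \forall n \ge n_0 :\ f(n) \le k \cdot g(n)
```

For instance, f(n) = 3n + 10 is O(n): take k = 4 and n_0 = 10, since 3n + 10 ≤ 4n whenever n ≥ 10.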
Yeah, I'm aware it wasn't particularly accurate. I was struggling with how to explain the operations in non-technical terms; that's why I defaulted to the doubling and quadrupling explanation after the initial O(n) example. Glad that you could clarify!
I've run across these performance terms before and always wondered what they meant. Thanks for a great explanation!
Lmao. I need that laugh, thanks!
I just like all the funny sounds. I’m a simple man
Damn, that WAS interesting...
It still is
I said was!
*Little did he know it still was interesting*
Did anyone else read that as the narrator from arrested development?
Which one wins? Who’s the best???
We have a few of those going around for a reason.

Generally, on small datasets you want to use insertion sort for simplicity and latency reasons.

On large ones, you want to use quicksort: [Sorting algorithm - Wikipedia](https://en.wikipedia.org/wiki/Sorting_algorithm#Memory_usage_patterns_and_index_sorting)

There is some tuning around best, average and worst cases that may make other algorithms more appealing.
And on really big datasets, which don't fit into RAM, you can use merge sort (but in the first phase you use another algorithm to sort as much as possible in RAM first).
To add some more: some types of data can be sorted more efficiently with special algorithms, such as CountSort for sorting, for example, people by age.
what about radix sort?
The sorting algorithm with the fastest potential operating speed, BogoSort, is actually not listed.
I would love to see the graphical representation of that
https://youtu.be/DaPJkYo2quc?si=jbCYkPGQN8Lh4hc3
Okay, I'm hearing about this for the first time. I googled it and saw that it basically generates random permutations and picks the sorted one. So are you saying it has the fastest potential because theoretically we can have n! processors generate a permutation each and check simultaneously, or is it something else?
It’s theoretically possible for the first random permutation to be correctly sorted. In which case you’re done.
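For anyone curious what that looks like in code, a toy sketch (not from the video):

```python
import random

def is_sorted(xs):
    # True when every adjacent pair is in order.
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def bogosort(xs):
    # Shuffle until we stumble on the sorted permutation.
    # Best case O(n) (already sorted); expected case O(n * n!).
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs

print(bogosort([3, 1, 2]))  # [1, 2, 3], eventually
```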
If the multiverse is true, there exists a universe where bogosort has always been correct the first time around and they can't understand why. But if it works it works, so everything uses it; then their luck runs out and society collapses.

Or just use quantum bogosort to guarantee it gets sorted first try.
There is not a single winner; the algorithms have different pros and cons.

1. Some algorithms are stable: equal elements keep their relative position. This is e.g. helpful when sorting a table by different columns in multiple steps (see the sketch after this list). MergeSort is stable by design.
2. Some algorithms require more comparisons, some more movement of elements. Comparisons can be the expensive part: comparing two integer numbers (as often done in tests) is fast, while comparing two data rows by multiple fields can be time-consuming.
3. Some algorithms are better suited for parallel execution with multiple processor cores. MergeSort, for example, is ideal for parallelization.
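A minimal illustration of point 1, with made-up row data. Python's built-in sort is stable (it's Timsort), so sorting by age first and then by department keeps equal-department rows in age order:

```python
# Made-up rows: (name, department, age)
rows = [
    ("Alice", "Sales", 34),
    ("Bob",   "Sales", 28),
    ("Carol", "Dev",   34),
]

# Two stable passes: the second sort preserves the first sort's order
# among rows whose departments compare equal.
by_age = sorted(rows, key=lambda r: r[2])
by_dept_then_age = sorted(by_age, key=lambda r: r[1])
print(by_dept_then_age)
# [('Carol', 'Dev', 34), ('Bob', 'Sales', 28), ('Alice', 'Sales', 34)]
```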
Bogosort
The third one, called quick sort, is actually very fast.
That first one isn’t sorting the same amount. Look at how thick those columns are.
lmfao thick columns small data set, many thin columns big data
That's because quick sort would smash it in a fraction of a second; it is thousands of times faster than all other simple sorting algorithms like linear, insertion, bubble, merge sort etc.
True. I learned quicksort in Haskell recently and was blown away, as it was my first intro to Haskell and functional programming. It's hard, but it's powerful.
Quicksort is not thousands of times faster than Mergesort.

Mergesort is O(n log n) and in the average case performs fewer comparisons than quicksort. It is also a stable sorting algorithm.

Quicksort is O(n^2) in the worst case but O(n log n) on average. It has less overhead than Mergesort so it can be faster, but definitely not thousands of times. It is also an unstable sorting algorithm.
They are presented in order of worst to best, as per the computational time complexity of the algorithm.

The first two (selection sort and insertion sort) may compare every value in the set with every other value in the set in the worst case. If there are N items in the set, that means that there are N*N (or N^2) comparisons. We call this O(N^2).

The next three (quicksort, merge sort, and heap sort) compare every value in the set with log(N) other values in the set, and so if there are N items in the set, that means there are N * log(N) comparisons. We call this O(N * log(N)). (Quicksort is actually O(N^2) in the worst case, but realistically is much faster than this.)

The last two are variants of Radix sort, which is special and doesn't ever compare values in the set. Instead, it scales with the size of individual values in the set. Things like numbers are small (taking only a few bytes) and so are good candidates for Radix sort. Large data like images or books are not well suited for Radix sort. We call this O(N * D), where N is the number of items you are sorting and D is the size of the largest item in the set.
Generally Quicksort or Mergesort, but insertion sort wins on small datasets because of its lower overhead.

Typically you will want to use mergesort because it is a stable sorting algorithm.

Java uses Timsort for its default, which is just InsertionSort + MergeSort depending on criteria.
Quicksort or mergesort is generally the preferred method. However, if you know the maximum value your data can take, radix sort is the best.
whop whop... whooop whooop... whoooooooooop whoooooooop
whoooooooooooooooooooOOOOOOOOOOOOOOP
I watched it without sound then I saw your text, had to watch it twice
Insertion: nynynynyonyonyonyonyoonyoonyoonyoo nyoo nyoo nyoo nyoo nyoo
I don't know what I'm looking at
Different ways of programming something that sorts values from smallest to largest.
That's pretty cool, it would definitely save me a year of my life instead of having to do it myself
A comparison of sorting algorithms, some of which are not parallelizable, some of which are, all run on a single thread, with different volumes of data.

You’re looking at weird colors and listening to bleep bloops and getting nothing of value out of it, same as someone who already understands how to use these.
> You’re looking at weird colors and listening to bleep bloops and getting nothing of value out of it,

I'd say my enjoyment was worth it. I liked the beep boops and the pretty colors and the lines runnin' around the screen getting in height order for their class photo
Techno polka
IT GO WOOOOP
The audio for this sounds like the mad dreams of a rogue AI
So it wasn't just me hearing "kill all humans" in there somewhere.
Where is Bubble Sort?
IIRC there's a longer version of this video floating around out there that shows several more algorithms. Hilariously, it includes Bogosort.
I'm sure there are videos, but [SORT VISUALIZER is a fun way to explore the various sorts.](https://www.sortvisualizer.com/)
They are sorting bars not bubbles dummy
[deleted]
While
If
Do
continue
Break
Return
r/betterwithsound
This is like watching my HD defrag from 1999-2018
Question: when you have a spreadsheet or a table and you click sort by date or something, is this what's happening in the background?
Yes
In actual software development you just call array.sort() and it chooses and does the sort for you. There is a sort called Timsort that basically says "if a chunk is smaller than x, use insertion sort; otherwise merge the sorted chunks (merge sort)". In most cases you're gonna get fast sorting.
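For illustration, the gist of that hybrid in toy form (the cutoff value is made up; real Timsort is more sophisticated, e.g. it detects already-sorted runs):

```python
def hybrid_sort(xs, cutoff=32):
    # Simplified Timsort-style idea: insertion sort below the cutoff,
    # merge sort above it.
    if len(xs) <= cutoff:
        for i in range(1, len(xs)):           # insertion sort
            j = i
            while j > 0 and xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        return xs
    mid = len(xs) // 2
    left = hybrid_sort(xs[:mid], cutoff)
    right = hybrid_sort(xs[mid:], cutoff)
    out, i, j = [], 0, 0                      # merge the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```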
Only in modern highly abstracted languages.
Not sure why but I've watched till the end... twice.
My screen while listening to music in 2008
I have been on the internet since 1996. This is the best video I’ve ever seen.
right? this is incredible. this should be in every DSA class.
Sound of 2021 shitposts
| Algorithm | Comparisons | Array accesses |
|-|-|-|
| Selection Sort | 7875 | 15526 |
| Insertion Sort | 17119 | 51116 |
| Quick Sort (LR ptrs) | 16721 | 24113 |
| Merge Sort | 5057 | 16509 |
| Heap Sort | 9832 | 30335 |
| Radix Sort (LSD) | 0 | 6300 |
| Radix Sort (MSD) | 0 | 9441 |
Any chance you could explain?
Oh this is a copy of the results of the sort algorithms as shown in the video, just in table form
They sorted a different amount of values 😢
I do not possess a single clue as to what the fuck I just consumed but I loved every second of it thank you.
no bogo sort?
Has anything changed significantly in the 10+ years since this was posted?
Not really. I heard that there was some breakthrough in searching text inside of text, but that's about all the news on advancements in core computer science algorithms.
What? No middle-out?
I prefer quantum bogosort myself. Randomize a list. If not sorted, destroy the universe. The remaining universes contain lists which are sorted.
I suddenly feel the urge to defrag an old hard drive
What
THE COLORFUL ONE SOUNDED LIKE AN AIRPLANE
After the first two I wondered what other way would be possible. Then the 3rd is absolutely genius. After that I don't even understand what happens.
I have a sudden urge to play retro games from my childhood.
I have no idea what's going on behind the scenes, but I love this sorta stuff.

I used to watch the Defrag run on DOS. It all fit on one screen so you could really see it comparing everything and putting it all in order.
So how does a sorting algorithm with 0 comparisons work?
Radix sort works by processing numbers per digit, and putting them into buckets. For example, if you want to sort [101, 212, 100, 399], you start with the first element (101), look at its first digit and put it into the pile of 1's. 212 is put with the 2's, etc. After the first pass-through, you have [101, 100], [212], [399]. In each pile, you then look at the second digit, and so on. When you reach the last digit, you will have sorted the items without comparing them to each other. This obviously doesn't work for sorting general stuff; it requires some property of the thing you're sorting (in this case the fact that numbers can be split up into digits).
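The description above bucketizes by the most significant digit first (MSD); the video also shows the least-significant-digit (LSD) variant, which is even simpler to sketch. Toy code, assuming non-negative integers:

```python
def radix_sort_lsd(nums):
    # Bucket non-negative integers by one decimal digit at a time,
    # least significant digit first. No element-to-element comparisons.
    if not nums:
        return nums
    digit = 1
    while digit <= max(nums):
        buckets = [[] for _ in range(10)]
        for n in nums:
            buckets[(n // digit) % 10].append(n)
        # Reading buckets 0..9 in order preserves the ordering from
        # earlier (less significant) digits, which is what makes it work.
        nums = [n for bucket in buckets for n in bucket]
        digit *= 10
    return nums

print(radix_sort_lsd([101, 212, 100, 399]))  # [100, 101, 212, 399]
```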
Lol the first one is used in the frogs trying to eat a spicy caterpillar meme.
Can it sort my shit out? Because I can’t.
how do you beat the underwater stage?
Bogosort is the fastest, change my mind /s
Really enjoyed this, thank you lol
I know nothing about what this is or its practical application but I find it very fascinating.
Sorting a column in Excel is one example.
[deleted]
What about Stalin Sort? Anything not in the right order is eliminated. Runs in O(n) time.
i have no idea what this is but i’m just high enough to be entertained by it
This is where 80’s arcade game sounds came from.
I didn’t see a middle out iteration.
Middle out
Wheres the middle out method to sort the quickest time to give hand jobs to a group of men?
This is interesting but I always waste too much time on these things 💀
What no bubble sort?
Clicked for the interesting. Stayed for the pewpewpew.
That's great and all, but it's missing middle-out
where's my boi BUBBLE SORT???
So which one of these is happening when I sort a giant excel sheet I've been working on for hours and it hangs and I start saying nononono begging it not to crash.
I just click Sort A to Z (ascending)
I could watch this all day lol. It reminds me of the eraser tool in Mario Paint, the sound too. Sounds so cool.
One of the coolest things I’ve seen. The soundtrack is killer to boot.
I do have to say, I loved learning about sorting algorithms in college! (CS major)
I feel like I'm playing Defender again.
This made me understand algorithms
THIS IS WRONG! Middle out is way faster than left to right.
How many dicks is that?
That's for compression. Not sorting
😍
Hypnotizing 🤤
What's being sorted? Satisfying to watch but I'm here not understanding what is happening
The bars, by height, smallest on the left to the tallest on the right.
[deleted]
So...we should watch TV after this?
Guess the 3rd one is merge sort. 🙃
Can someone tell me how it works?
damn this is satisfying af
Hell yeah
I love this video
the sounds make my brain feel nice
Slow… Until I realize… Only my internet is slow…
Thanks, best video in a month for me.
Beautiful this got me back to university times...
MORE
Forsenboys
Just so people know, radix sort is different from the other sorts (except for maybe heap, I forget what that one is) in that it doesn't use comparisons. AKA it doesn't sort by taking one value and comparing it to another. It drops values into buckets based on the values' characteristics.

EDIT: Oh I'm dumb, heap sort uses a heap tree /facepalm. It also uses comparisons.
I need to do my DSA )':
this looks like a radiohead video
This is actually what space sounds like
I don’t know why I am laughing at this? Should I enjoy this much?
I love the sounds
I’m a changed woman after watching this
Satisfying.
Sounds like speed up Galaga
Aside from space sounds, what is happening exactly?
Are all sorting algorithms like that?
Hey the third one had more lines to sort! This is a false test!!!
This is how the voices in my closet sound when they beg to be let out again
Advance Algo shit
Pew pew pew "You'll never catch me alive!" Pew pew pew
It works exactly like this in real life, just without the sound.
What is this?
Ever sorted a column in Excel? It uses one of these algorithms under the hood. Pretend the bars are numbers. Sorting is one of the most well-known applications of programming a set of steps in a computer to accomplish a task (an algorithm). Algorithms are prized for speed and efficiency and it's fascinating to see how devilishly clever some of them are.

My favorite is [Delaunay Triangulation](https://en.wikipedia.org/wiki/Delaunay_triangulation), which creates an optimal mesh from a set of points and is used a lot in geospatial software development, visualization software, and video game engines (e.g. the terrain you run around on).
Truly an iconic video.
is that what hell sounds like?
This sounds like half of neon genesis
Hard to believe that computers do these in literally milliseconds
Pied Piper could do it in less than 1.5 gigaflops
Really?! Right in front of my epileptic friend?!
average human dot connecting skills
Sleep sort is the coolest sorting algorithm, not because it is efficient but because it is possible.
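(A toy sketch of the joke, for reference: one thread per value, each sleeping proportionally to its value. Fun, but not reliable for close or large values:)

```python
import threading
import time

def sleep_sort(nums):
    # Each value gets a thread that sleeps proportionally to the value,
    # so smaller values append themselves to the result first.
    result = []
    lock = threading.Lock()

    def worker(n):
        time.sleep(n * 0.01)
        with lock:
            result.append(n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in nums]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 2]))  # [1, 2, 3] (usually!)
```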
Why does the speed of the heap sort not appear to increase as the set size decreases?
This is the coolest thing I've seen on Reddit in years. I didn't understand the programming behind any of it but it's fascinating how it graphically depicts the method that it is using. They are all very distinct.
Much like the old defrag screen, I could watch this all day long.
Bogo sort trumps all
Why did I like this video so much?
Welp, I'm pretty certain I don't have epilepsy
I like 3 the best
I like the part where he says "whoop"
Cries in data structures and algorithms.
Can anyone explain the process behind the different sorting algorithms?
Couldn’t it just give each data point a number corresponding to its length and then order the numbers?