Quick sort picks a random element as a "pivot" and then swaps values in the array so that elements less than the pivot appear before elements greater than the pivot. This gives a "partial sort." Then it recursively sorts the left and right sides using the same process.
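The description above can be sketched as follows. This is a minimal in-place quicksort with a random pivot; the helper names (`_sort`, `_partition`) are my own, not from the notes.

```python
import random

def quicksort(arr):
    """Sort arr in place using quicksort with a random pivot."""
    _sort(arr, 0, len(arr) - 1)

def _sort(arr, lo, hi):
    if lo >= hi:
        return
    p = _partition(arr, lo, hi)
    _sort(arr, lo, p - 1)   # recursively sort the left side
    _sort(arr, p + 1, hi)   # recursively sort the right side

def _partition(arr, lo, hi):
    # Pick a random pivot and move it to the end of the range.
    i = random.randint(lo, hi)
    arr[i], arr[hi] = arr[hi], arr[i]
    pivot = arr[hi]
    store = lo
    # Swap values so everything less than the pivot ends up on the left.
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[j], arr[store] = arr[store], arr[j]
            store += 1
    arr[store], arr[hi] = arr[hi], arr[store]
    return store
```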
How do you describe the runtime of insertion? This is a tricky question. The array could be full. If the array contains N elements, then inserting a new element will take O(N) time: you have to create a new array of size 2N and copy the N existing elements over. However, we also know that this doesn't happen very often. The vast majority of insertions take O(1) time, so the amortized insertion time is O(1).
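A minimal sketch of the doubling behavior described above. The `DynamicArray` class is purely illustrative (Python's built-in list already does this internally):

```python
class DynamicArray:
    """Toy dynamically-resizing array that doubles its capacity when full."""

    def __init__(self):
        self._data = [None]   # start with capacity 1
        self._size = 0

    def append(self, value):
        if self._size == len(self._data):
            # Array is full: allocate a new array of size 2N
            # and copy the N existing elements over -- O(N).
            bigger = [None] * (2 * len(self._data))
            for i in range(self._size):
                bigger[i] = self._data[i]
            self._data = bigger
        # The common case: write into a free slot -- O(1).
        self._data[self._size] = value
        self._size += 1
```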
In binary search, we are looking for an element x in an N-element sorted array. We first compare x to the midpoint of the array. If x == middle, then we return. If x < middle, then we search the left side of the array. If x > middle, then we search the right side of the array.
This is a good takeaway for you to have. When you see a problem where the number of elements in the problem space gets halved each time, that will likely be an O(log N) runtime.
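The binary search described above, as a short iterative sketch (returning -1 on a miss is my own convention here):

```python
def binary_search(arr, x):
    """Return the index of x in sorted arr, or -1 if absent. O(log N)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == x:
            return mid          # x == middle: found it
        elif x < arr[mid]:
            hi = mid - 1        # x < middle: search the left side
        else:
            lo = mid + 1        # x > middle: search the right side
    return -1
```

Each comparison halves the remaining search space, which is exactly the O(log N) pattern noted above.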
Here's a tricky one. What's the runtime of this code? Very interesting example (example 3)!
The following simple code sums the values of all the nodes in a balanced binary search tree. What is its runtime? Very interesting example (example 9)!
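The notes don't include the code itself, so here is a minimal sketch of summing a binary tree's nodes (the `Node` class is an assumption). The runtime is O(N): the function touches each of the N nodes exactly once, doing constant work per node.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def tree_sum(node):
    # Visits every node exactly once, so the runtime is O(N),
    # regardless of the tree's shape.
    if node is None:
        return 0
    return node.value + tree_sum(node.left) + tree_sum(node.right)
```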
Very interesting example (example 11)! Calculating the factorial takes O(n) time, not O(n!).
Don't get fooled by naming!
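A sketch of the point above: the function is named after n!, but the recursion makes only one call per value from n down to 0, so the runtime is O(n).

```python
def factorial(n):
    # One recursive call per value of n: O(n) time, despite the name.
    if n == 0:
        return 1
    return n * factorial(n - 1)
```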
Is there anything else that is unnecessary? Yes. If there's only one valid d value for each (a, b, c), then we can just compute it. This is just simple math: d = a + b - c. This will reduce our runtime from O(N^4) to O(N^3).
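The notes don't state the surrounding problem, so this sketch assumes it is: find all (a, b, c, d) in [1, n] with a + b == c + d. Instead of brute-forcing d with a fourth nested loop (O(N^4)), we compute d = a + b - c directly and only check that it is in range, giving O(N^3):

```python
def find_quadruples(n):
    """All (a, b, c, d) in [1, n] with a + b == c + d, in O(n^3).

    Hypothetical problem statement; the fourth loop over d is replaced
    by the direct computation d = a + b - c.
    """
    results = []
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            for c in range(1, n + 1):
                d = a + b - c          # the only valid d for this (a, b, c)
                if 1 <= d <= n:
                    results.append((a, b, c, d))
    return results
```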
This is another place where BCR can be useful. Any work you do that's less than or equal to the BCR is "free," in the sense that it won't impact your runtime. You might want to eliminate it eventually, but it's not a top priority just yet.
A hash table is a data structure that maps keys to values for highly efficient lookup.
If the number of collisions is very high, the worst case runtime is O(N), where N is the number of keys. However, we generally assume a good implementation that keeps collisions to a minimum, in which case the lookup time is O(1).
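A toy hash table using chaining to resolve collisions, to make the two cases above concrete: lookup walks only one bucket, which is O(1) on average but degrades to O(N) if everything collides into the same bucket. The class and method names are my own.

```python
class HashTable:
    """Minimal hash table with separate chaining for collisions."""

    def __init__(self, num_buckets=16):
        self._buckets = [[] for _ in range(num_buckets)]

    def _bucket_for(self, key):
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket_for(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # overwrite an existing key
                return
        bucket.append((key, value))        # chain a new entry

    def get(self, key):
        # Scans only one bucket: O(1) on average,
        # O(N) if all keys collide into the same chain.
        for k, v in self._bucket_for(key):
            if k == key:
                return v
        raise KeyError(key)
```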
On each concatenation, a new copy of the string is created, and the two strings are copied over, character by character. The first iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x, and so on. The total time is therefore O(x + 2x + ... + nx), which reduces to O(x n^2), because 1 + 2 + ... + n = n(n + 1) / 2. This happens because strings are non-resizable arrays. Use a string builder, which maintains a resizable array, to bring the complexity down to O(xn).
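The two approaches side by side. In Python, `str.join` plays the role of Java's StringBuilder: it gathers the pieces and copies each character only once.

```python
def concat_naive(words):
    """Repeated concatenation: each += copies the whole sentence so far,
    giving O(x * n^2) total work for n words of length x."""
    sentence = ""
    for w in words:
        sentence = sentence + w
    return sentence

def concat_builder(words):
    """Collect the pieces and join once at the end: O(x * n) total work."""
    return "".join(words)
```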