<aside> 💬 why analyse algorithms? we want a way to describe how algorithms perform at scale (i.e., when we are receiving millions or billions of inputs)
</aside>
the runtime $T$ of an algorithm is a function of its input size $n$, so we can write it as $T(n)$
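for example, here's a minimal C sketch (a hypothetical `sumArray` function, not from the course): the loop body runs once per element, so the number of steps, and therefore the runtime $T(n)$, grows with the array size $n$

```c
#include <stdio.h>

// hypothetical example: the loop body executes n times,
// so the number of steps (and hence the runtime T(n))
// grows directly with the input size n
int sumArray(int arr[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {   // runs n times
        sum += arr[i];
    }
    return sum;
}

int main(void) {
    int nums[] = {3, 1, 4, 1, 5};
    printf("%d\n", sumArray(nums, 5));
    return 0;
}
```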
big-O notation answers the question❓
What is the relationship between the runtime of our algorithm and the size of its input?
or in other words…
How does the runtime of our algorithm grow as the input size becomes very large?
a more formal definition (outside scope of COMP2521)
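for reference, it's usually stated along these lines (you won't be tested on this):

```latex
% big-O, stated formally: T(n) is O(f(n)) if some constant
% multiple of f(n) eventually bounds T(n) from above
T(n) \in O(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0
  \text{ such that } T(n) \le c \cdot f(n) \text{ for all } n \ge n_0
```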
big-O allows us to disregard most other factors that could affect speed (hardware, compiler, implementation details) and just focus on how the algorithm itself scales
note: we can also use big-O notation to describe memory usage of an algorithm (called space complexity), but in COMP2521 we care more about time
when describing the performance of an algorithm using big-O notation, we disregard the coefficients and lower-order terms and just focus on the highest-order (dominant) term
e.g. $O(5n^2 + 3n + 600)$ is just expressed as $O(n^2)$
e.g. $O(n^3 + 4n^2 \log n + 8000n)$ is just expressed as $O(n^3)$
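as a sketch of why dropping lower-order terms is reasonable: in the hypothetical C function below, the rough step count is $n^2$ (nested loops) plus $n$ (single loop) plus constants, and for large $n$ the $n^2$ term dominates, so the whole function is $O(n^2)$

```c
// hypothetical example: rough step count is n^2 + n + a constant,
// so the dominant n^2 term gives O(n^2)
int pairsPlusSum(int arr[], int n) {
    int result = 0;                      // constant-time setup
    for (int i = 0; i < n; i++) {        // n iterations
        for (int j = 0; j < n; j++) {    // n iterations each, n^2 total
            if (arr[i] == arr[j]) result++;
        }
    }
    for (int i = 0; i < n; i++) {        // n iterations
        result += arr[i];
    }
    return result;
}
```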
| Time Complexity | Big-O Notation |
| --- | --- |
| constant | $O(1)$ |
| logarithmic | $O(\log n)$ |
| linear | $O(n)$ |
| quasilinear | $O(n \log n)$ |
| quadratic | $O(n^2)$ |
| exponential | $O(2^n)$ |
| factorial | $O(n!)$, $O(n^n)$ |
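hedged C sketches of a few of these classes (hypothetical functions; the logarithmic one assumes a sorted array):

```c
#include <stdbool.h>
#include <stdio.h>

// O(1): constant, the same amount of work no matter how large n is
int getFirst(int arr[]) {
    return arr[0];
}

// O(n): linear, one pass over the input
bool contains(int arr[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == target) return true;
    }
    return false;
}

// O(log n): logarithmic, binary search halves the search range each step
// (assumes arr is sorted in ascending order)
int binarySearch(int arr[], int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}

// O(n^2): quadratic, nested loops over the input
int countEqualPairs(int arr[], int n) {
    int count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (arr[i] == arr[j]) count++;
        }
    }
    return count;
}

int main(void) {
    int sorted[] = {1, 2, 3, 5, 8, 13};
    printf("first: %d\n", getFirst(sorted));
    printf("contains 5: %d\n", contains(sorted, 6, 5));
    printf("index of 8: %d\n", binarySearch(sorted, 6, 8));
    printf("equal pairs: %d\n", countEqualPairs(sorted, 6));
    return 0;
}
```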