The absolute running time of an algorithm cannot be predicted, since it depends on many factors:
- The programming language used to implement the algorithm
- The hardware the program runs on
- Other programs running at the same time
Keeping these points in mind, we evaluate the performance of an algorithm in terms of its input size. The evaluation is of two types:
- Time Complexity – The amount of time an algorithm takes to run, as a function of input size.
- Space Complexity – The amount of memory an algorithm needs to run, as a function of input size.
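To make "as a function of input size" concrete, here is a minimal sketch (the function names are my own) comparing two ways to sum the first n integers: one whose running time grows linearly with n, and one that takes a constant number of steps no matter how large n is.

```python
def sum_loop(n):
    # Linear time: one addition per number, so the work grows with n
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Constant time: the same few arithmetic operations for any n
    return n * (n + 1) // 2
```

Both return the same answer, but for n = 1,000,000 the loop performs a million additions while the formula performs just three operations.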
By evaluating against the input size, the analysis is not only machine-independent but the comparison is also more meaningful. Imagine if one algorithm were faster than another for a small input size but slower for a larger one; we would never be able to judge accurately which is more efficient. Keep in mind that there is no single solution that works every time. It is always good to know multiple ways to solve a problem and to pick the best solution given your constraints.
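The "faster for small inputs, slower for large ones" situation really happens. A common example is insertion sort, sketched below (the variable and function names are my own): on tiny lists its low overhead often makes it competitive with far more sophisticated sorts, yet its running time grows quadratically, so it falls far behind as the list grows.

```python
import timeit

def insertion_sort(items):
    # Quadratic time overall, but very low overhead on tiny lists
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right to make room for `key`
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

small = list(range(8, 0, -1))
large = list(range(5000, 0, -1))

# Timings are machine-dependent; on many machines insertion_sort holds
# its own against the built-in sorted() on `small`, but is dramatically
# slower on `large`.
print(timeit.timeit(lambda: insertion_sort(small), number=1000))
print(timeit.timeit(lambda: insertion_sort(large), number=1))
```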
If your app needs to be very quick and has plenty of memory to work with, you don’t have to worry about space complexity. On the other hand, if you have little memory to work with, you should pick a solution that is slower but needs less space.
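A classic example of this time-versus-space trade-off is caching: spending extra memory to avoid repeating work. The sketch below (function names are my own) computes Fibonacci numbers two ways: the first uses almost no extra memory but takes exponential time, while the second stores every intermediate result and runs in linear time.

```python
def fib_recursive(n):
    # Little extra memory, but exponential time: the same subproblems
    # are recomputed over and over
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_memoized(n, cache=None):
    # Linear time, but stores every intermediate result in `cache`:
    # extra space bought in exchange for speed
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memoized(n - 1, cache) + fib_memoized(n - 2, cache)
    return cache[n]
```

With plenty of memory, `fib_memoized` is the obvious choice; under tight memory constraints, the slower `fib_recursive` may be the only option.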
How to represent complexity?
We do that using asymptotic notation. Asymptotic notation is a mathematical tool to represent time and space complexity. There are mainly three asymptotic notations.
- Big-O Notation (O-notation) – Worst-case complexity.
- Omega Notation (Ω-notation) – Best-case complexity.
- Theta Notation (Θ-notation) – Average-case complexity.
At this point, explaining their theoretical definitions would only confuse you unnecessarily. Instead, I want to make this more practical and easier to understand as a beginner.