# Time Complexity
**The complexity of an algorithm as measured by its running time as a [[function]] of input size.**
*Time complexity* measures the complexity of an algorithm in terms of the time it takes to execute as a [[function]] of the size of its input.
Time complexity is typically estimated by assuming that each *elementary operation* in an algorithm takes a *constant time* to run and counting the total number of them. In general, the *worst-case* time complexity is considered.
The time complexity of an algorithm describes the *asymptotic behaviour* of its running time, and as such, time complexities are usually expressed using [[big O notation]].
## Common time complexities
![[CommonTimeComplexities.svg|600]]
*Number of operations $N$ as a function of input size $n$*
The following table contains common time complexities in increasing order of rate of growth:
| Name | Complexity class | Big O Notation |
|:-------------------------------------:|:----------------:|:----------------------:|
| Constant time | | $O(1)$ |
| [[Logarithm\|Logarithmic]] time | DLOGTIME | $O(\log n)$ |
| Linear time | | $O(n)$ |
| Log-linear time, linearithmic | | $O(n\log n)$ |
| Quadratic time | | $O(n^{2})$ |
| [[Polynomial time]] | P | $\text{poly}(n)$[^1] |
| [[Exponentiation\|Exponential]] time with<br>linear exponent | E | $2^{O(n)}$ |
| Exponential time | EXPTIME | $2^{\text{poly}(n)}$[^1] |
| Factorial time | | $O(n!)$ |
[^1]: $\text{poly}(n)$ means a *polynomial* in $n$.
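To get a feel for how these classes diverge, the operation counts can be compared for a single input size. The snippet below is an illustrative sketch; the choice of $n = 64$ and the dictionary of classes are arbitrary:

```python
import math

n = 64  # hypothetical input size

# Approximate operation counts for each complexity class at this n
growth = {
    "O(1)":       1,
    "O(log n)":   math.log2(n),
    "O(n)":       n,
    "O(n log n)": n * math.log2(n),
    "O(n^2)":     n ** 2,
    "O(2^n)":     2 ** n,
}

for name, count in growth.items():
    print(f"{name:>10}: {count:,.0f}")
```

Even at this modest input size, the exponential class dwarfs the polynomial ones.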
## Examples
> [!Example]+ Worst-case logarithmic time
> $\begin{align*}
> &\textbf{Algorithm } \text{ Binary search}\\
> \hline
> &\textbf{Input: } \text{sorted array of size \texttt{n}, value}\\
> &\textbf{Output: } \text{index of value}\\
> \\
> &\texttt{left = 0}\\
> &\texttt{right = n - 1}\\
> &\textbf{While } \texttt{left <= right} \textbf{ do:}\\
> &\quad\texttt{mid = floor((left + right) / 2)}\\
> &\quad\textbf{If } \texttt{array[mid] < value} \textbf{ then:}\\
> &\quad\quad\texttt{left = mid + 1}\\
> &\quad\textbf{Else if } \texttt{array[mid] > value} \textbf{ then:}\\
> &\quad\quad\texttt{right = mid - 1}\\
> &\quad\textbf{Else:} \\
> &\quad\quad\textbf{return } \texttt{mid}\\
> &\textbf{end while} \\
> &\\
> &\textbf{return } \texttt{error}
> \end{align*}$
>
> - $\texttt{left = 0}$ and $\texttt{right = n - 1}$ are each $O(1)$
> - In the worst case, the loop has $\lfloor\log_{2}n\rfloor+1$ iterations[^2] - multiply all inside by $\log_{2}n$[^3]
> - $\texttt{mid = floor((left + right) / 2)}$ is $O(1)$
> - the comparisons $\texttt{array[mid] < value}$ and $\texttt{array[mid] > value}$ are each $O(1)$
> - $\texttt{left = mid + 1}$, $\texttt{right = mid - 1}$, and $\textbf{return }\texttt{mid}$ are each $O(1)$
>
> The exact run time of the loop varies, but never exceeds a constant multiple of $\lfloor\log_{2}n\rfloor+1$.
>
> Thus, the worst-case time complexity is $O(\log n)$[^3].
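
The pseudocode above translates directly into Python. This is a sketch, not a reference implementation; the function name and the choice to raise an error when the value is absent are assumptions:

```python
def binary_search(array, value):
    """Return the index of value in a sorted array; O(log n) in the worst case."""
    left = 0
    right = len(array) - 1
    while left <= right:              # at most floor(log2 n) + 1 iterations
        mid = (left + right) // 2     # O(1)
        if array[mid] < value:
            left = mid + 1            # discard the lower half
        elif array[mid] > value:
            right = mid - 1           # discard the upper half
        else:
            return mid                # O(1)
    raise ValueError("value not in array")
```

Each iteration halves the remaining search space, which is what bounds the loop at roughly $\log_{2}n$ iterations.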

> [!Example]+ Linear time
> $\begin{align*}
> &\textbf{Algorithm }\text{ Sum}\\
> \hline
> &\textbf{Input: } \text{array of size \texttt{n}}\\
> &\textbf{Output: } \text{sum}\\
> \\
> &\texttt{sum = 0}\\
> &\textbf{For } \text{\texttt{i = 0} to \texttt{n - 1}} \textbf{ do:} \\
> &\quad\texttt{sum += array[i]}\\
> &\textbf{end for} \\
> \\
> &\textbf{return } \texttt{sum}
> \end{align*}
> $
>
> - $\texttt{sum = 0}$ is $O(1)$
> - The loop has $n$ iterations - multiply all inside by $n$
> - $\texttt{sum += array[i]}$ is $O(1)$
> - $\texttt{i++}$ is $O(1)$
> - $\texttt{return sum}$ is $O(1)$
>
> Thus, the total number of elementary operations is $2n+2$ and therefore the time complexity is $O(n)$[^3].

> [!Example]+ Worst-case quadratic time
> $\begin{align*}
> &\textbf{Algorithm } \text{ Bubble sort}\\
> \hline
> &\textbf{Input: } \text{array of size \texttt{n}}\\
> &\textbf{Output: } \text{sorted array}\\
> &\\
> &\texttt{swapped = true}\\
> &\textbf{While } \texttt{swapped} \textbf{ do:}\\
> &\quad\texttt{swapped = false}\\
> &\quad\textbf{For } \text{\texttt{i = 0} to \texttt{n - 2}} \textbf{ do:}\\
> &\quad\quad\textbf{If } \texttt{array[i] > array[i + 1]} \textbf{ then:}\\
> &\quad\quad\quad\texttt{temp = array[i]}\\
> &\quad\quad\quad\texttt{array[i] = array[i + 1]}\\
> &\quad\quad\quad\texttt{array[i + 1] = temp}\\
> &\quad\quad\quad\texttt{swapped = true}\\
> &\quad\textbf{end for}\\
> &\textbf{end while}\\
> &\\
> &\textbf{return } \texttt{array}
> \end{align*}$
>
> - $\texttt{swapped = true}$ is $O(1)$
> - In the worst case, the while loop has $n$ iterations - multiply all inside by $n$
> - $\texttt{swapped = false}$ is $O(1)$
> - The for loop has $n - 1$ iterations - multiply all inside by $n - 1$
> - the comparison $\texttt{array[i] > array[i + 1]}$ is $O(1)$
> - $\texttt{temp = array[i]}$, $\texttt{array[i] = array[i + 1]}$, $\texttt{array[i + 1] = temp}$, and $\texttt{swapped = true}$ are each $O(1)$
> - $\texttt{return array}$ is $O(1)$
>
> The exact run time of each pass of the for loop varies, but never exceeds a constant multiple of $n - 1$.
>
> Thus, the worst-case time complexity is $O(n^{2})$.
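
A Python sketch of the same algorithm (the function name is an assumption; Python's tuple assignment replaces the three-step swap with `temp`):

```python
def bubble_sort(array):
    """Sort an array in place; O(n^2) comparisons in the worst case."""
    swapped = True
    while swapped:                        # at most n passes in the worst case
        swapped = False
        for i in range(len(array) - 1):   # n - 1 comparisons per pass
            if array[i] > array[i + 1]:
                # swap adjacent out-of-order elements
                array[i], array[i + 1] = array[i + 1], array[i]
                swapped = True
    return array
```

A reverse-sorted input triggers the worst case: every pass performs swaps, so all $n$ passes of $n - 1$ comparisons are needed.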
[^2]: Every iteration of the loop divides the search space in half, so as the search size increases exponentially, the number of operations only increases linearly.
[^3]: These simplifications are made because time complexity is intended to provide an [[Big O Notation#^a4d18e|asymptotic upper bound]] run time instead of an exact run time function.