Running time as an absolute measure is usually less important than *how that time increases when you add more data*. For example, an algorithm that always takes 5 seconds to process 100 items, 10 seconds to process 200 items, and so on, is said to be O(N), since the running time increases linearly with the dataset size. If a second algorithm instead took 4*5 = 20 seconds to process those 200 items (four times as long for twice the data), it might be classed as O(N^2). There's no "peak running time" here, since the running time keeps increasing as you throw more data at it.
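A minimal sketch of how you could observe this yourself (the function names here are just illustrative, not from any particular library): time a single-pass function and a nested-loop function at increasing input sizes and watch how the measured times grow.

```python
import time

def linear_work(items):          # touches each item once -> O(N)
    return sum(items)

def quadratic_work(items):       # compares every pair of items -> O(N^2)
    count = 0
    for a in items:
        for b in items:
            if a < b:
                count += 1
    return count

for n in (1_000, 2_000, 4_000):
    data = list(range(n))

    start = time.perf_counter()
    linear_work(data)
    t_lin = time.perf_counter() - start

    start = time.perf_counter()
    quadratic_work(data)
    t_quad = time.perf_counter() - start

    # Doubling N roughly doubles t_lin but roughly quadruples t_quad.
    print(f"N={n}: linear {t_lin:.4f}s, quadratic {t_quad:.4f}s")
```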

In fact, big O is only an upper bound, so you could equally say the first algorithm was O(N^2): if N is an upper bound, then N^2 is larger and hence also an upper bound, albeit a looser one. Other common notation includes Ω (omega, a lower bound) and Θ (theta, a simultaneous lower and upper bound, i.e. a tight bound).
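As a made-up worked example of the difference: suppose an algorithm takes exactly 3N + 7 steps. For N ≥ 7 that is at most 4N steps, so it is O(N), and trivially also O(N^2) or O(N^3), since those are even looser upper bounds. It always takes at least 3N steps, so it is Ω(N); being both O(N) and Ω(N) makes it Θ(N), which is the tight way to describe it.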

Some algorithms (Quicksort, for instance) exhibit different behaviour depending on the data fed to them - hence Quicksort's worst case is O(N^2) even though it usually behaves as if it were O(N log N).
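A rough sketch of where that worst case comes from, assuming a naive first-element pivot (real library implementations pick pivots more carefully):

```python
def quicksort(items):
    # Base case: zero or one element is already sorted.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger  = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# On typical data the two partitions are roughly balanced, giving O(N log N).
# On already-sorted input every "smaller" list is empty, so the recursion
# depth grows to N and the total work degrades to O(N^2).
print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))
```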