
I think I'm overthinking this, sigh.

But it's bothering me a lot, so pls help: the two paragraphs I've marked, are they both saying the same thing in different ways?

I don't understand why, when the runtime is O(n²), they say there is an f(n) upper bounding the runtime, but for a runtime of Ω(g(n)) they say g(n) directly lower bounds the runtime.

(The book is CLRS btw)
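For context, here are the formal set definitions from chapter 3, which both of the quoted paragraphs paraphrase informally (I'm writing them out from memory so the comparison is easier; double-check against the book):

O(g(n)) = \{\, f(n) : \exists\, c, n_0 > 0 \text{ such that } 0 \le f(n) \le c \cdot g(n) \text{ for all } n \ge n_0 \,\}

\Omega(g(n)) = \{\, f(n) : \exists\, c, n_0 > 0 \text{ such that } 0 \le c \cdot g(n) \le f(n) \text{ for all } n \ge n_0 \,\}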

Comments
  • 2
    The first paragraph says that O(n²) gives information about an upper bound.
    The second one, that Ω(n²) gives information about a lower bound.
    I don't know if that answers your question.
  • 2
    @turbod it doesn't, sigh. My doubt is: why are they involving an f(n) with O(n²) for upper bounding, but straight away lower bounding with n² when it comes to Ω(n²)?
  • 1
    @iineo That's because the worst-case running time of insertion sort is proportional to n², while the best-case running time is proportional to n.

    Read carefully: they are not lower bounding it with Ω(n²). Quite the opposite, they say it is NOT lower bounded by Ω(n²); see the last paragraph, i.e. the one after the second red arrow.

    In the best case, relevant for Ω(), the array is already sorted, so insertion sort just skims through the array once, which takes time proportional to the length of the array. This is Ω(n).

    In the worst case, relevant for O(), insertion sort has to do a lot of work, and since insertion sort isn't that clever, it needs time proportional to n² in this case. This is O(n²).
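
    If you want to see this concretely, here is a quick Python sketch (mine, not from the book) that counts key comparisons in both cases:

        def insertion_sort(a):
            """Sort list a in place; return the number of key comparisons made."""
            comparisons = 0
            for j in range(1, len(a)):
                key = a[j]
                i = j - 1
                # Shift elements larger than key one slot to the right.
                while i >= 0:
                    comparisons += 1
                    if a[i] <= key:
                        break  # key is already in place relative to a[0..i]
                    a[i + 1] = a[i]
                    i -= 1
                a[i + 1] = key
            return comparisons

        n = 1000
        print(insertion_sort(list(range(n))))         # already sorted: 999 comparisons, ~n
        print(insertion_sort(list(range(n, 0, -1))))  # reverse sorted: 499500 comparisons, ~n²/2

    On an already-sorted array the inner loop bails out immediately every time, so the count grows like n; on a reverse-sorted array every element is shifted all the way back, so it grows like n²/2.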
  • 0
    I understand that they're lower bounding insertion sort's runtime with Ω(n) and upper bounding it with O(n²).

    I said Ω of n² because, since they're talking about Ω(g(n)), I put in something that makes the two definitions similar so the difference is apparent. Sorry for the confusion, I'll try to rephrase it.

    The O definition is: “When we say ‘the running time is O(n²),’ we mean that there is a function f(n) that is O(n²) such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f(n).”

    The Ω definition is: “When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.”

    My doubt is: for O, they're introducing an extra function f(n) to upper bound the runtime, but for Ω they're not doing so. Why?
  • 1
    @iineo That's because the definitions are the same, only with different wording. Actually, they introduce a function in both cases: f for O and g for Ω.

    It's just that the constant factor thing is explicitly spelled out for g, while it's implicit for f.

    The only important difference is that f is an upper bound for all inputs, while g is a lower bound even for the best case.
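
    Spelled out with quantifiers (my own paraphrase of the two quoted sentences, not the book's wording), they become:

    O claim: \exists f \in O(n^2) \text{ such that } \forall n,\ \forall \text{ inputs } x \text{ of size } n:\ T(x) \le f(n)

    \Omega claim: \exists c > 0,\ n_0 \text{ such that } \forall n \ge n_0,\ \forall \text{ inputs } x \text{ of size } n:\ T(x) \ge c \cdot g(n)

    Both quantify over every input of size n; one just names a bounding function while the other writes the constant out.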

    Actually, the O definition is bad because it's circular; it's not a valid definition. Defining the running time O(n²) by a function f(n) that is O(n²) does not define what O actually is, because it appears on both sides of the definition. That's bad math.
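
    A non-circular way to say what they presumably mean (my rendering, not CLRS's):

    \text{“the running time is } O(n^2)\text{”} \iff \exists c > 0,\ n_0 \text{ such that } \forall n \ge n_0 \text{ and all inputs } x \text{ of size } n:\ T(x) \le c \cdot n^2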
  • 1
    @Fast-Nop thank you amigo