Question:
Big-O Notation Computer Science?
Dhaos190
2010-03-17 21:09:42 UTC
I'm confused about the whole thing; I had to do a presentation on it, and my professor's explanation of it made things worse. So would anyone be willing to answer these questions for me and help me fill in some blanks?

Concept: What is big-O notation?
Rationale: Why do we need big-O notation? (To calculate the efficiency of an algorithm, right?)
Constraints: What can't be done with big-O notation, and when do we use it?
Process: How does one go about using big-O notation?
Three answers:
?
2010-03-17 22:51:40 UTC
Concept: Big-O notation gives an upper bound on the run time of a program in terms of input size. You don't care about exact results, only the functional form (i.e. x², NOT 2x² <-- don't care about multiplying by constants or adding constants, OR adding smaller terms: x² + x = O(x²) since x² is "bigger" than x).
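
A quick sanity check of that last claim in Python (the constant 2 below is just a witness I picked; any constant that eventually dominates would do): x² + x <= 2x² for every x >= 1, which is exactly what x² + x = O(x²) asks for.

    # Check that x^2 + x <= 2*x^2 once x >= 1, i.e. x^2 + x = O(x^2).
    for x in range(1, 10000):
        assert x * x + x <= 2 * x * x
    print("x^2 + x is bounded by 2*x^2 for all x >= 1, so x^2 + x = O(x^2)")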



Rationale: Big-O notation is helpful for quickly comparing two different algorithms, and yes, for making a judgement as to which is more efficient... well, faster; "efficient" can take on different meanings.



Constraints: Big-O notation can't predict the actual run-time, it only gives you an idea of the run-time. For example, take two different algorithms: a1 is O(x²) and a2 is O(x). Although a2 seems better, perhaps for small input a1 is actually faster. So if you know that your input won't exceed a certain amount, then the O(x²) algorithm may actually end up being faster than the O(x) algorithm. This is because Big-O is an asymptotic notation, meaning that it is only really valid for exceedingly large x. And for different problems, "large x" will be different.
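
To make that concrete, here's a little Python sketch with made-up operation counts (the constants 2 and 100 are invented for illustration; these constants are exactly what Big-O hides). The O(x²) algorithm a1 wins until the crossover point at x = 50:

    # Hypothetical cost models: constants are hidden by Big-O but matter for small inputs.
    def cost_a1(x):       # an O(x^2) algorithm with a small constant factor
        return 2 * x * x

    def cost_a2(x):       # an O(x) algorithm with a large constant factor
        return 100 * x

    for x in [1, 10, 49, 50, 51, 100]:
        winner = "a1" if cost_a1(x) < cost_a2(x) else "a2"
        print(f"x={x:3d}: a1 does {cost_a1(x):6d} ops, a2 does {cost_a2(x):6d} ops -> {winner} wins")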



Process: I'm not sure; I think I explained above how Big-O notation is used: it's used to compare two algorithms. If you're asking how you find the Big-O function, then that's a difficult question, since it totally depends on the algorithm you are analyzing. But in general you just count up the number of operations necessary. You should basically look for loops, because this is where you will get most of the size dependence.
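
For example (toy snippets of my own, not from the question): a single loop over the input does work proportional to n, while two nested loops do work proportional to n², and you can confirm that by literally counting iterations:

    def single_loop_ops(n):
        ops = 0
        for i in range(n):        # runs n times -> O(n)
            ops += 1
        return ops

    def nested_loop_ops(n):
        ops = 0
        for i in range(n):        # outer loop: n passes
            for j in range(n):    # inner loop: n iterations per pass -> O(n^2)
                ops += 1
        return ops

    print(single_loop_ops(100))   # 100
    print(nested_loop_ops(100))   # 10000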



For instance, in analyzing Selection sort, we know that you have to find the minimum in the remaining list. Well, to find the minimum, you have to search that entire remaining list.



So initially you will have n items, then n - 1, then n - 2, etc. until you have 0 items:



1 + 2 + 3 + ... + (n - 1) + n = n(n + 1)/2



If you multiply this out you get (1/2)n² + (1/2)n, so clearly this is O(n²), since the n² term is the "largest" one.
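
Here's a Python sketch of Selection sort instrumented with a counter for items looked at (the same quantity being summed above); on a list of n items it comes out to exactly n(n + 1)/2, no matter what order the input is in:

    def selection_sort_ops(a):
        looked_at = 0
        n = len(a)
        for i in range(n):
            min_idx = i
            for j in range(i, n):        # scan the remaining n - i items for the minimum
                looked_at += 1
                if a[j] < a[min_idx]:
                    min_idx = j
            a[i], a[min_idx] = a[min_idx], a[i]   # move the minimum into place
        return looked_at

    n = 100
    print(selection_sort_ops(list(range(n, 0, -1))))  # 5050
    print(n * (n + 1) // 2)                           # 5050 -- matches the formula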



So I don't really know what you mean by that.
Benny
2010-03-17 22:53:53 UTC
Wikipedia has a pretty good write up about Big O notation:



http://en.wikipedia.org/wiki/Big_O_notation



Basically Big O notation is just a general way of saying how efficient an algorithm is. O(log n) is pretty darn good while O(n!) is just awful.
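
To see just how far apart those are, here's a quick Python table of the common growth rates at a few input sizes (raw function values only, not measured run-times):

    import math

    for n in [10, 20, 30]:
        print(f"n={n}: log n = {math.log2(n):4.1f}, n log n = {n * math.log2(n):6.0f}, "
              f"n^2 = {n * n:4d}, 2^n = {2 ** n:>10d}, n! = {math.factorial(n):.2e}")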



The good news is you probably won't be talking about Big O notation outside of academia :-)
crismond
2016-10-05 11:45:19 UTC
Most decently useful sorting algorithms are O(n log(n)). The whole point of good sorting algorithms is to do better than the good old bubble sort, which is O(n^2). The QuickSort algorithm reportedly has the best average performance, which is O(n log(n)). Unfortunately, if you feed it an already-sorted list it degenerates to O(n^2). A heap sort is O(n log(n)) and has no worst-case degeneration. It consists of 2 stages: (1) partly sort the data into a heap; then (2) use the heap's partly-sorted property to turn it into a sorted list.
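
Here is what those two stages look like as a Python sketch using the standard library's heapq (the textbook idea, not any specific implementation from the answer above): building the heap is O(n), and each of the n pops is O(log n), so the whole thing is O(n log n) regardless of input order:

    import heapq

    def heap_sort(items):
        heap = list(items)       # copy so the caller's list isn't mutated
        heapq.heapify(heap)      # stage 1: partly sort the data into a heap, O(n)
        out = []
        while heap:              # stage 2: pop the minimum n times, O(log n) each
            out.append(heapq.heappop(heap))
        return out

    print(heap_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]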


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.