Concept: Find an upper bound on the run time of a program in terms of input size. You don't care about the exact count, only the functional form (i.e. x², NOT 2x²): multiplicative constants, additive constants, and smaller terms all get dropped, so x² + x = O(x²), since x² is "bigger" than x.
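To make the "smaller terms stop mattering" point concrete, here's a quick numeric check in Python (just an illustration): the ratio of x² + x to x² heads toward 1 as x grows, so the extra x contributes essentially nothing.

```python
# For large x, the lower-order x term becomes negligible next to x².
for x in (10, 100, 1000, 10000):
    print(x, (x * x + x) / (x * x))
# prints ratios 1.1, 1.01, 1.001, 1.0001 -> heading to 1
```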
Rationale: Big-O notation is helpful for quickly comparing two different algorithms and making a judgement as to which is more efficient...well, faster; "efficient" can take on different meanings (time, memory, etc.).
Constraints: Big-O notation can't predict the actual run time; it only tells you how the run time grows. Take two different algorithms: a1 is O(x²) and a2 is O(x). Although a2 seems better, for small inputs a1 may actually be faster. So if you know your input won't exceed a certain size, the O(x²) algorithm may end up being the faster choice. This is because Big-O is an asymptotic notation, meaning it's only really valid for exceedingly large x, and "large x" will mean different things for different problems.
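Here's a sketch of that crossover with made-up constants (the 2 and 100 are invented for illustration; real constants depend entirely on the implementation): suppose a1 costs about 2x² steps and a2 about 100x steps.

```python
# Hypothetical step counts: a1 is O(x²) with a small constant,
# a2 is O(x) with a big one. Both constants are invented for illustration.
def steps_a1(x):
    return 2 * x * x

def steps_a2(x):
    return 100 * x

for x in (10, 50, 1000):
    print(x, steps_a1(x), steps_a2(x))
# x=10:   a1 = 200     vs a2 = 1000    -> the O(x²) algorithm wins
# x=50:   both 5000                    -> break-even point
# x=1000: a1 = 2000000 vs a2 = 100000  -> a2 wins, as Big-O predicts
```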
Process: I mean I'm not sure; I think I explained above how Big-O notation is used: it's used to compare two algorithms. If you're asking how to find the Big-O function, that's a harder question, since it totally depends on the algorithm you're analyzing. But in general you just count up the number of operations necessary. You should basically look for loops, because this is where you will get most of the size dependence (see the sketch below).
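As a rough sketch of the "count the loops" idea (the function names and the op counter are just mine): one loop over the input gives you n operations, while a loop nested inside another gives you n·n.

```python
def single_loop(n):
    ops = 0
    for i in range(n):        # body runs n times -> O(n)
        ops += 1
    return ops

def nested_loop(n):
    ops = 0
    for i in range(n):        # outer loop: n passes
        for j in range(n):    # inner loop: n iterations per pass
            ops += 1
    return ops                # n * n total -> O(n²)

print(single_loop(10), nested_loop(10))  # 10 100
```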
For instance, in analyzing selection sort, we know that on every pass you have to find the minimum of the remaining list, and to find the minimum you have to search that entire remaining list.
So initially you look through n items, then n - 1, then n - 2, and so on, down to 1:
1 + 2 + 3 + ... + (n - 1) + n = n(n + 1)/2
If you multiply this out you get (1/2)n² + (1/2)n, so this is O(n²), since n² is the "largest" term.
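Here's a sketch of selection sort in Python with a counter for how many items each pass looks at, so you can check the n(n + 1)/2 count above. The bookkeeping (one tick per item examined, including the first item of each pass) is my own choice, matching the "n, then n - 1, ..." counting:

```python
import random

def selection_sort(items):
    """Sort in place; return how many items were examined overall."""
    examined = 0
    n = len(items)
    for i in range(n):
        examined += 1           # pass i starts by looking at items[i]
        min_index = i
        for j in range(i + 1, n):
            examined += 1       # ...then scans the rest of the remaining list
            if items[j] < items[min_index]:
                min_index = j
        items[i], items[min_index] = items[min_index], items[i]
    return examined

n = 100
data = [random.random() for _ in range(n)]
print(selection_sort(data), n * (n + 1) // 2)  # both print 5050
```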
So beyond that, I don't really know what you mean.