Convergence in probability
Below is a brief discussion of the weak law of large numbers, a very standard result in probability. I like the proof for its brevity. The statement of the theorem is as follows.
Let $X_1, X_2, \ldots$ be a sequence of iid random variables with common mean $\mu$. Define the sample mean $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$. The theorem asserts that $\bar{X}_n \xrightarrow{P} \mu$, where $\xrightarrow{P}$ denotes convergence in probability.
$A_n = \{ |X_n - X| \le \epsilon \}$ is the event that $X_n$ deviates from the random variable $X$ in magnitude by not more than $\epsilon$. $P(A_n)$ is the probability of such an event.

Let $\epsilon > 0$ and let $\delta > 0$ be given. Convergence in probability means that there exists an $N$ such that

$$P(|X_n - X| \le \epsilon) \ge 1 - \delta \quad \text{for all } n \ge N.$$

$X_n$ is then said to converge to $X$ *in probability*. This is denoted as $X_n \xrightarrow{P} X$.
In words, if one wants to permit $X_n$ to deviate from $X$ by no more than an $\epsilon$ margin with at least $1 - \delta$ certainty, there will always exist an $N$ which achieves this for all $n \ge N$ (in the weak law, the iid assumption on the $X_i$'s is what guarantees such an $N$ exists).
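The $\epsilon$–$\delta$ statement above can be illustrated numerically. Below is a minimal sketch, assuming fair coin flips (so the limit is $\mu = 0.5$); the tolerance `eps`, the trial count, and the function name `deviation_prob` are illustrative choices of mine, not from the text:

```python
import random

random.seed(0)

def deviation_prob(n, eps=0.1, trials=2000):
    # Empirical estimate of P(|sample mean of n fair coin flips - 0.5| > eps).
    bad = 0
    for _ in range(trials):
        mean = sum(random.random() < 0.5 for _ in range(n)) / n
        if abs(mean - 0.5) > eps:
            bad += 1
    return bad / trials

# As n grows, the deviation probability shrinks toward 0 for any fixed eps.
for n in (10, 100, 1000):
    print(n, deviation_prob(n))
```

For any fixed `eps`, the printed probabilities drop toward zero as `n` grows, which is exactly the convergence the definition formalizes.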
Let $X$ be a random variable and let $X_1, X_2, \ldots$ be an infinite sequence of i.i.d. copies of $X$. Define

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i.$$
The proof hinges on the well-known tail bound (Chebyshev's inequality),

$$P(|Y - E[Y]| \ge \epsilon) \le \frac{\mathrm{Var}(Y)}{\epsilon^2}.$$

Let $\mu = E[X]$ and $\sigma^2 = \mathrm{Var}(X)$. Applying the bound to $Y = \bar{X}_n$,

$$P(|\bar{X}_n - \mu| \ge \epsilon) \le \frac{\mathrm{Var}(\bar{X}_n)}{\epsilon^2} = \frac{\sigma^2}{n\epsilon^2}.$$

Note the use of the i.i.d. assumption in the penultimate step, where $\mathrm{Var}(\bar{X}_n) = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}(X_i) = \frac{\sigma^2}{n}$, since the variance of a sum of independent random variables is the sum of their variances.
To obtain the desired inequality $P(|\bar{X}_n - \mu| \ge \epsilon) \le \delta$, the bound $\frac{\sigma^2}{n\epsilon^2}$ must itself be at most $\delta$, so $n$ must be bounded below by $\frac{\sigma^2}{\epsilon^2 \delta}$. The only factor free to be altered is $n$, and

$$\frac{\sigma^2}{n\epsilon^2} \le \delta \iff n \ge \frac{\sigma^2}{\epsilon^2 \delta}.$$
Operationally then, given $\epsilon$, $\delta$, and $\sigma^2$, the weak law of large numbers tells you how large $n$ needs to be in order for $\bar{X}_n$ to fall within $\epsilon$ of $\mu$ with probability at least $1 - \delta$.
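This operational reading can be sketched in a few lines. The example below assumes $X \sim \mathrm{Uniform}(0,1)$, so $\mu = 0.5$ and $\sigma^2 = 1/12$; the helper name `chebyshev_n` and the particular $\epsilon, \delta$ values are mine, not from the text:

```python
import math
import random

random.seed(1)

def chebyshev_n(eps, delta, sigma2):
    # Smallest n with sigma^2 / (n * eps^2) <= delta, i.e. n >= sigma^2 / (eps^2 * delta).
    return math.ceil(sigma2 / (eps ** 2 * delta))

# Assumed example distribution: Uniform(0, 1), with mu = 0.5 and sigma^2 = 1/12.
eps, delta, sigma2 = 0.05, 0.05, 1 / 12
n = chebyshev_n(eps, delta, sigma2)

# Empirical check: the fraction of runs with |mean - mu| > eps should be <= delta.
trials = 1000
bad = sum(
    abs(sum(random.random() for _ in range(n)) / n - 0.5) > eps
    for _ in range(trials)
)
print(n, bad / trials)
```

Note that Chebyshev's bound is conservative: the empirical failure rate typically comes out far below $\delta$.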