The core course I took last year was in measure theory. I've written about it a little before, but today I'm going to (briefly) explain its relationship to probability. Then I'll end the post with my real reason for posting, which is a quote that excited me this morning.
Measure is just (more or less) a generalization of length. Let's say you're on the real line and you want to know how big a set is. If the set is just an interval, like (0,2), it's easy enough to say its length (or measure) is 2. But what if your set is much weirder? Like, what if you want to be able to say what "size" any subset of the reals is, in a sensible way that accords with our notion of length? That is, you'd like the interval from 0 to 2 to still have length 2, but you also want to be able to measure any set whatsoever.
Well, unfortunately that is impossible, but what we can do is come up with a measure that works for pretty much any set that anyone cares about. In fact, coming up with a set that you can't measure requires using the axiom of choice to construct some bizarro thing that just doesn't arise in normal life.
So, let me tie this to probability. Let's say you want to pick a random number between 0 and 1, with all numbers equally likely. I'm just talking about a uniform distribution on the interval [0,1]. It's pretty obvious (I think) that the odds of getting a number between 0 and 1/2 are 50%. This corresponds to the length of the interval that you're talking about.
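If you'd like to see that numerically, here's a quick Python sketch (the million samples is just an arbitrary number I picked): the fraction of uniform draws landing below 1/2 should hover around 0.5, the length of that interval.

```python
import random

# Draw a million uniform samples from [0, 1) and see how often they land
# below 1/2; the fraction should be close to 0.5, the length of [0, 1/2].
samples = [random.random() for _ in range(1_000_000)]
below_half = sum(1 for x in samples if x < 0.5) / len(samples)
print(f"Fraction below 1/2: {below_half:.4f}")  # roughly 0.5
```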
But what about the odds of getting a rational number? Or the odds of getting a number without a 2 in it anywhere? Or the odds of getting a number whose first three digits (after the decimal point) are repeated 9 times?
These are questions which can be answered with measure theory. In particular, there is exactly one measure (it's called Lebesgue measure) that both assigns to each interval its length and is defined on every Borel set (the sets I'm characterizing as "all the sets anyone cares about").
You might ask, "Who cares about the odds of getting a number whose first three digits are repeated 9 times?" But let's unpack that a little bit.
Let's say we roll a 10-sided die an infinite number of times. If the sides of the die are labeled 0 through 9, then if you wrote out the results, after a decimal point, you'd get a number in the interval [0,1]. For instance, it might look like .98362819501... and so on.
If we assume all combinations of die rolls are equally likely (which is true if the die is fair), then the probability that the sequence of rolls meets whatever criterion we give is exactly the measure of the corresponding set of numbers in [0,1].
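We obviously can't roll a die infinitely many times, but here's a rough Python sketch of the correspondence using finitely many rolls (the 15 rolls is just a cutoff I picked so the digits fit comfortably in a floating-point number):

```python
import random

def rolls_to_number(rolls):
    """Interpret a sequence of digits 0-9 as the decimal 0.d1d2d3..."""
    return sum(d / 10 ** (i + 1) for i, d in enumerate(rolls))

# Roll a fair 10-sided die 15 times and write the results after a decimal point.
rolls = [random.randrange(10) for _ in range(15)]
print("rolls: ", "".join(str(d) for d in rolls))
print("number:", rolls_to_number(rolls))  # a truncated uniform random number in [0, 1]
```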
For instance, let's look at my question about rational numbers. I should note that we're not going to let anything terminate, so, for example, we wouldn't write 0.4 (because that wouldn't correspond in any obvious way to an infinite sequence of die rolls) - we would write 0.399999999..., which is the same number.
So what is a rational number? It's one whose expansion, written out like that, eventually repeats forever: from some point on, the same block of digits cycles over and over. For instance, 1/3 is 0.33333333.... You could also have a rational number like .123123123123..., where a block of more than one digit repeats.
So, if we randomly choose a number between 0 and 1, the odds of it being rational are identical to the odds that, if we roll a die infinitely many times, the rolls eventually settle into some digit or block of digits that then repeats forever (for instance, we start rolling 1, 2, 3 over and over again for all eternity). Common notions of probability suggest that the probability of that happening is basically nil, which turns out to be right - the measure of the rational numbers is 0. (This is proved in an entirely different way in a measure theory course.)
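In case you're curious, here's the usual covering argument sketched out (my paraphrase, not how Billingsley does it): the rationals in [0,1] are countable, so you can list them as q_1, q_2, q_3, ... and cover the n-th one with a tiny interval of length ε/2^n, for any ε you like.

```latex
% Cover q_n with an interval I_n of length \varepsilon / 2^n; then
\lambda\bigl(\mathbb{Q}\cap[0,1]\bigr)
  \;\le\; \sum_{n=1}^{\infty} \lambda(I_n)
  \;=\; \sum_{n=1}^{\infty} \frac{\varepsilon}{2^{n}}
  \;=\; \varepsilon .
% Since \varepsilon > 0 was arbitrary, the measure is 0.
```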
Or, to use my other example, the odds of having your first three rolled numbers happen again like that 8 more times in a row are the same as the measure of the set I mentioned earlier - numbers whose first 3 digits are repeated nine times. (So in other words, you can use this to measure probabilities of events that depend on only finitely many die rolls too.)
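And that particular measure is easy to pin down: the event only depends on the first 27 digits, the first 3 can be anything, and the next 24 are forced to copy them, so 10^3 of the 10^27 equally likely length-27 digit strings qualify, giving 10^-24. Here's a little Python sketch that computes it and sanity-checks the same counting by brute force on a toy version (the toy parameters are just ones I picked):

```python
from fractions import Fraction
from itertools import product

# First 3 digits free (10^3 choices); the next 24 must copy them,
# out of 10^27 equally likely digit strings of length 27.
prob = Fraction(10 ** 3, 10 ** 27)
print(prob)          # 1/10^24
print(float(prob))   # 1e-24

# Toy sanity check: first 2 digits repeated 3 times, with a 3-sided die,
# i.e. strings of length 6 over {0, 1, 2}.
hits = sum(
    1
    for s in product(range(3), repeat=6)
    if s[0:2] == s[2:4] == s[4:6]
)
print(Fraction(hits, 3 ** 6))    # 1/81, by brute force
print(Fraction(3 ** 2, 3 ** 6))  # 1/81, by the counting argument
```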
This leads me to my quote, which is from Section 4 of my textbook, Billingsley:
"Complex probability ideas can be made clear by the systematic use of measure theory, and probabilistic ideas of extramathematical origin, such as independence, can illuminate problems of purely mathematical interest. It is to this reciprocal exchange that measure-theoretic probability owes much of its interest."

So there you have it. Nifty, no?