On Saturday, I said I'd skip the derivation of the t-distribution, but I'm going to come back to it because I've been thinking more about it and it kind of freaks me out. Recall that the t-distribution is the ratio of a standard normal random variable to the square root of an independent chi-squared random variable divided by its degrees of freedom; we'll denote the normal and the chi-squared U and V, respectively. To get the pdf, we consider the transformation of (U, V) to (T, W):
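With $U \sim N(0,1)$ and $V \sim \chi^2_p$ independent ($p$ being the degrees of freedom), the standard choice is

$$T = \frac{U}{\sqrt{V/p}}, \qquad W = V.$$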
We don't really care about W, but we can only use the Jacobian transform trick if we have an equal number of variables on both sides of the transform, so we just pick the easiest possible transformation for the second variable. Inverting those transformations and taking the Jacobian gives
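$$u = t\sqrt{w/p}, \qquad v = w, \qquad |J| = \sqrt{\frac{w}{p}},$$

so the joint density is just the product of the normal and chi-squared densities times the Jacobian:

$$f_{T,W}(t,w) = \frac{1}{\sqrt{2\pi}}\, e^{-t^2 w/(2p)} \cdot \frac{1}{2^{p/2}\,\Gamma(p/2)}\, w^{p/2-1} e^{-w/2} \cdot \sqrt{\frac{w}{p}}.$$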
Now we compute the marginal:
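$$f_T(t) = \int_0^\infty f_{T,W}(t,w)\,dw = \frac{1}{\sqrt{2\pi p}\; 2^{p/2}\,\Gamma(p/2)} \int_0^\infty w^{\frac{p+1}{2}-1} \exp\!\left[-\frac{w}{2}\left(1+\frac{t^2}{p}\right)\right] dw.$$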
Here I will skip some steps, since they're messy and not particularly enlightening. Suffice it to say, you collect terms and notice that you can factor out the kernel of a Gamma distribution to get:
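$$f_T(t) = \frac{\Gamma\!\left(\frac{p+1}{2}\right)}{\sqrt{p\pi}\,\Gamma\!\left(\frac{p}{2}\right)} \left(1+\frac{t^2}{p}\right)^{-\frac{p+1}{2}}.$$

(The kernel in question is a Gamma with shape $\frac{p+1}{2}$ and rate $\frac{1}{2}\left(1+\frac{t^2}{p}\right)$; integrating it out supplies the $\Gamma\!\left(\frac{p+1}{2}\right)$ and the power of $1+t^2/p$.)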
And you were wondering why people just look the numbers up in a table. Seriously, though, forget about all the crazy norming constants and just look at the part that involves t. What happens when p = 1?
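Setting $p = 1$ and using $\Gamma(1/2) = \sqrt{\pi}$, the density becomes

$$f_T(t) = \frac{\Gamma(1)}{\sqrt{\pi}\,\Gamma\!\left(\frac{1}{2}\right)}\left(1+t^2\right)^{-1} = \frac{1}{\pi\,(1+t^2)}.$$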
It's the freaking Cauchy distribution! Why does this seem crazy? Consider what it represents: the distribution of the t-statistic when all we have is a sample of two from a normal population of unknown mean and variance. Basically, we're saying that, if we don't know the mean or variance, the sample mean still comes from a nice normal distribution (we just don't know the parameters), but the studentized mean (our standardized estimator of the mean, if you want to use frequentist terms) is so crazily unbounded that its distribution has no moments at all.
Maybe it's just me, but that seems nutty. One thing's for sure: don't ever believe a sample of 2.
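If you'd rather simulate than trust the algebra, here's a minimal sketch in Python (numpy/scipy; the population mean and variance, the seed, and the simulation size are arbitrary choices of mine): draw samples of size two, form the usual one-sample t-statistic, and compare its tail quantiles to a standard Cauchy and a standard normal.

```python
import numpy as np
from scipy import stats

# Sanity check: the t-statistic from samples of size 2 should follow a
# t-distribution with 1 degree of freedom, i.e. a standard Cauchy.
# The true mean/variance, seed, and simulation size are arbitrary.
rng = np.random.default_rng(0)
n_sim = 100_000
mu, sigma = 3.0, 2.0

# Draw n_sim samples of size 2 from the normal population.
x = rng.normal(loc=mu, scale=sigma, size=(n_sim, 2))

# One-sample t-statistic: (xbar - mu) / (s / sqrt(n)) with n = 2.
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)
t_stat = (xbar - mu) / (s / np.sqrt(2))

# Empirical tail quantiles line up with the Cauchy, not the normal.
for q in [0.90, 0.99, 0.999]:
    print(f"q={q}: empirical {np.quantile(t_stat, q):10.2f}   "
          f"Cauchy {stats.cauchy.ppf(q):10.2f}   "
          f"Normal {stats.norm.ppf(q):6.2f}")

# And the "mean" of |t| keeps growing with n_sim: no moments exist.
print("mean of |t|:", np.abs(t_stat).mean())
```

The tails tell the story: the 99.9th percentile of a standard normal is about 3.1, while the Cauchy's is over 300, and the simulated t-statistics track the Cauchy.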