assuming three iid U(0,1) observations and x < 1/3. (Obviously, that gets scaled to reflect the upper bound being the maximum possible block sum for a stratum, but it's easier to compute the critical value first and then scale it). Thus, the cdf is:

F(x) = (3x)³/6 = 9x³/2, for 0 ≤ x < 1/3.
OK, nothing interesting so far, but here's the weird part: set that equal to 0.05 and solve for a. You get 2/9. Really!
There's no significance to the 2/9, the p-value is arbitrary, and it's an approximation (to three decimal places), not a real equality. It just turns out that 1/20 is roughly 2/9 squared. Still, it kinda leaps off the page at you.
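Just to convince myself, here's a quick numerical check (my own sketch; it assumes the statistic on the unit scale is the mean of three U(0,1) draws, which is where the 9x³/2 above comes from):

```python
import numpy as np

def cdf(x):
    # cdf of the mean of three iid U(0,1) draws, valid only for 0 <= x < 1/3
    return 9 * x ** 3 / 2

# solve cdf(a) = 0.05 exactly: a = (2 * 0.05 / 9) ** (1/3)
a = (2 * 0.05 / 9) ** (1 / 3)
print(a, 2 / 9)            # ~0.2231 vs ~0.2222 -- close, but not equal
print(cdf(2 / 9))          # (2/9)^2 ~ 0.0494, i.e. roughly 1/20

# Monte Carlo sanity check of the 5% critical value
rng = np.random.default_rng(0)
means = rng.uniform(size=(1_000_000, 3)).mean(axis=1)
print((means <= a).mean())  # should land near 0.05
```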
OK, enough of that nonsense. Let's continue looking at the delta method. In the first-order case, the problem was that when g'(θ) = 0 there's no usable limiting distribution: the approximating line is flat, so the first-order approximation has zero variance. The obvious step is to go to the second-order polynomial and hope for some variation there. So,

g(Yn) = g(θ) + g'(θ)(Yn − θ) + (g''(θ)/2)(Yn − θ)² + remainder,

which implies that

g(Yn) − g(θ) ≈ (g''(θ)/2)(Yn − θ)²

since g'(θ) is zero. Since the square of a standard normal is chi-squared with 1 degree of freedom, we see that

n[g(Yn) − g(θ)] ≈ σ²·(g''(θ)/2)·[√n(Yn − θ)/σ]² → σ²·(g''(θ)/2)·χ²₁ in distribution.
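To see that in action, here's a little simulation (my own toy example, not anything from the text): take Yn to be the mean of n iid U(−1, 1) draws, so θ = 0 and σ² = 1/3, and let g(y) = y². Then g'(0) = 0 and g''(0) = 2, and the claim is that n[g(Yn) − g(θ)] should behave like σ²·(g''(θ)/2)·χ²₁ = (1/3)·χ²₁:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 10_000
sigma2 = 1 / 3                       # variance of U(-1, 1)

# Yn = sample mean; g(y) = y^2, so g'(0) = 0 and g''(0) = 2
ybar = rng.uniform(-1, 1, size=(reps, n)).mean(axis=1)
stat = n * ybar ** 2                 # n * [g(Yn) - g(theta)], since g(theta) = 0

# limit is sigma^2 * (g''(0)/2) * chi2_1 = (1/3) * chi2_1
print(stat.mean(), sigma2 * 1)                  # E[chi2_1] = 1, so expect ~1/3
print(np.quantile(stat, 0.95), sigma2 * 3.841)  # 3.841 = 95th percentile of chi2_1
```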
Of course, one does well to confirm the second derivative exists and is also not zero. Finally, since Taylor series work just fine in multidimensional space, it should be no surprise that there's a multivariate form of this method as well:
Let X1, ..., Xn be a random sample of p-dimensional vectors with E(Xij) = μi and Cov(Xik, Xjk) = σij. For a given function g with continuous first partial derivatives and a specific value of μ = (μ1, ..., μp) for which

τ² = Σi Σj σij [∂g(μ)/∂μi][∂g(μ)/∂μj] > 0,

we get

√n [g(X̄1, ..., X̄p) − g(μ1, ..., μp)] → N(0, τ²) in distribution,

where X̄i is the sample mean of the i-th coordinate.
You can pull the second-order trick on this one, too, if you need to, but that rarely happens since all the partials would have to be zero to make the method fail.
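And a quick sketch of the multivariate version (again a toy example with made-up numbers): bivariate observations with mean μ = (1, 2) and covariance matrix Σ, with g(x, y) = x/y, so τ² is the gradient-weighted sum Σi Σj σij (∂g/∂μi)(∂g/∂μj) evaluated at μ:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 10_000

mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

# g(x, y) = x / y; gradient evaluated at mu
grad = np.array([1 / mu[1], -mu[0] / mu[1] ** 2])
tau2 = grad @ Sigma @ grad            # asymptotic variance from the delta method

# simulate reps independent samples of size n and evaluate g at the sample means
x = rng.multivariate_normal(mu, Sigma, size=(reps, n))
xbar = x.mean(axis=1)                 # shape (reps, 2): per-sample coordinate means
stat = np.sqrt(n) * (xbar[:, 0] / xbar[:, 1] - mu[0] / mu[1])

print(stat.var(), tau2)               # empirical variance vs delta-method tau^2
print(stat.mean())                    # should be near 0
```

The empirical variance of the scaled statistic lines up with τ², which is really all the multivariate form is promising.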