I've been trying to sort out the behavior of the correlated block variance for a couple of weeks now and haven't had much success. I get that the correlation prevents it from being Chi-squared (if that weren't true, this whole line of research would be rather pointless). The question is, what is it? Nothing I recognize.
I was hoping that the kernel distribution would either fix it or suggest an easy workaround via sensitivity analysis. No such luck on either. Here's the sample and kernel distribution of the block variance when five blocks are sampled:
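The post's plots aren't reproduced here, but the setup can be sketched. This is a minimal simulation under assumptions of my own (an AR(1) correlation structure and "block variance" read as the variance across block means; the author's actual process and estimator may differ):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def block_variances(n_blocks=5, block_size=50, rho=0.6, n_reps=2000):
    """Simulate correlated data (AR(1), an illustrative assumption)
    and return one block-variance estimate per replication."""
    out = np.empty(n_reps)
    n = n_blocks * block_size
    for r in range(n_reps):
        # Stationary AR(1) series with unit marginal variance
        eps = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = eps[0]
        for t in range(1, n):
            x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * eps[t]
        # Chop into contiguous blocks; "block variance" here is the
        # sample variance across the block means
        block_means = x.reshape(n_blocks, block_size).mean(axis=1)
        out[r] = block_means.var(ddof=1)
    return out

bv = block_variances()          # sampling distribution of the block variance
kde = gaussian_kde(bv)          # kernel density estimate of the same
```

With independent blocks this statistic would be (scaled) Chi-squared; the point of the correlated setup is that the histogram of `bv` and its KDE need not look like that at all.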
They're both bi-modal, and neither yields a reliable confidence interval with either a t- or Normal bound.
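For concreteness, here is the kind of naive interval being tested. This is a generic sketch, not the author's code: both bounds assume an approximately symmetric, unimodal sampling distribution, which a bi-modal block variance violates:

```python
import numpy as np
from scipy import stats

def var_ci(sample, level=0.95, use_t=True):
    """Naive CI for the mean of a sample of block-variance estimates.

    Uses a t bound (default) or a Normal bound. With a bi-modal
    sampling distribution neither is reliable, which is the point.
    """
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    if use_t:
        q = stats.t.ppf((1 + level) / 2, df=n - 1)
    else:
        q = stats.norm.ppf((1 + level) / 2)
    return m - q * se, m + q * se
```

The t interval is always a bit wider than the Normal one at the same level, but width isn't the failure mode here: both are centered and shaped wrong for a bi-modal statistic.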
Well, maybe 5 is just too small. Nobody would stop sampling that early, anyway. What's it look like when we get to 20?
Now we have a different problem. The sample variance is basically unimodal, but the skew is in the wrong direction. The kernel distribution has the same problem, and it also appears to be converging to a biased mean (this was after pulling out my "heavy" prior that slowed convergence, so the bias isn't coming from that).
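"Wrong direction" can be checked against the independent-blocks reference: a Chi-squared variance statistic always skews right. A small diagnostic sketch, again under my own AR(1) assumption rather than the author's model:

```python
import numpy as np
from scipy.stats import skew, chi2

rng = np.random.default_rng(1)

def ar1(n, rho=0.6):
    """Stationary AR(1) series with unit marginal variance."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

n_blocks, block_size = 20, 50
bv = np.array([
    ar1(n_blocks * block_size).reshape(n_blocks, block_size)
        .mean(axis=1).var(ddof=1)
    for _ in range(500)
])

emp_skew = skew(bv)  # empirical skew of the block variance at 20 blocks
# Reference: a Chi-squared with df = n_blocks - 1 has positive skew
ref_skew = chi2.stats(n_blocks - 1, moments='s')
```

If `emp_skew` comes out with the opposite sign from `ref_skew`, that's the symptom described above; the sign depends on the actual correlation structure, so this sketch only shows where to look.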
By 100 blocks sampled, we have a pretty good sample variance distribution but the kernel is still way off.
The problem here is that I'm using the homogeneous bound on the block variance which is always greater than the partitioned variance. I did that because, well, because the distribution was really messy and it looked like the bound would be tight enough. It's obviously not, so I need the real answer and not an estimate.
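The post doesn't define "homogeneous bound" or "partitioned variance," so the following is one plausible reading via the law of total variance, labeled as an assumption: the variance computed as if the data were homogeneous equals the average within-block variance plus the variance of the block means, and therefore always exceeds the partitioned (within-block) piece alone:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical blocked data: five equal-size blocks with shifted means
blocks = [rng.normal(loc=mu, scale=1.0, size=100) for mu in (0, 1, 2, 3, 4)]
x = np.concatenate(blocks)

total_var = x.var()                          # "homogeneous" variance
within = np.mean([b.var() for b in blocks])  # E[Var(X | block)]
between = np.var([b.mean() for b in blocks]) # Var(E[X | block])

# Law of total variance (exact for equal-size blocks, population variances):
# total = within + between, hence total >= within
assert np.isclose(total_var, within + between)
```

If the bound in play is of this form, its slack is exactly the between-block term, which grows with the heterogeneity of the block means: consistent with the bound turning out not to be tight.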
I really don't know what we're dealing with here, but I think I'm going to have to roll up my sleeves and grind out the real distribution of the variance and test statistic to make any further progress.