Recall from the univariate case that if Y = g(X) and g is monotonic, then the pdf of Y is:
$$f_Y(y) = f_X\big(g^{-1}(y)\big)\left|\frac{d}{dy}\,g^{-1}(y)\right|$$
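Here's a quick numerical sanity check of that formula (my own sketch; the choice of $Y = e^X$ with $X \sim N(0,1)$ is just a convenient example, since the result is the standard lognormal):

```python
import numpy as np
from scipy import stats

# Y = exp(X) with X ~ N(0, 1): g^{-1}(y) = log y and |d/dy g^{-1}(y)| = 1/y
grid = np.linspace(0.1, 5.0, 50)
f_y = stats.norm.pdf(np.log(grid)) / grid  # f_X(g^{-1}(y)) |d/dy g^{-1}(y)|

# the result should match scipy's built-in lognormal density
assert np.allclose(f_y, stats.lognorm.pdf(grid, s=1))
```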
Extending this to the multivariate case is pretty straightforward when g is a 1:1 function. The kicker is what to do with the derivative. You now have to adjust for the transforms along each dimension of g. This obviously suggests some sort of partial derivative. Indeed it does, but not exactly an obvious one. The extension of the derivative of a scalar function of one variable to one of many variables is the gradient. Extending that to vector-valued functions of vectors produces a matrix known as the Jacobian:

$$J = \begin{pmatrix} \dfrac{\partial g_1}{\partial x_1} & \cdots & \dfrac{\partial g_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial g_n}{\partial x_1} & \cdots & \dfrac{\partial g_n}{\partial x_n} \end{pmatrix}$$
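If you'd rather not grind out the partials by hand, sympy will do it for you. A small sketch using the usual polar-to-Cartesian map (my example, chosen because the determinant collapses to something recognizable):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
# g maps polar coordinates (r, theta) to Cartesian (x, y)
g = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

J = g.jacobian([r, theta])   # the 2x2 matrix of partial derivatives
print(sp.simplify(J.det()))  # prints r, the familiar polar Jacobian
```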
The matrix doesn't have to be square, but if it is, then the determinant is also called the Jacobian, and it's what you stick into the transform of the density function:

$$f_Y(y) = f_X\big(g^{-1}(y)\big)\,\big|\det J_{g^{-1}}(y)\big|$$

You can see how this reduces to the univariate form when n = 1. Of course, what we need for that formula is the Jacobian of the inverse of g, because we're starting with the value for Y and mapping it back to the density of X. This is why g needs to be a 1:1 function.
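A linear map makes for an easy check, since both sides of the formula are available in closed form. Here's a sketch assuming $Y = AX$ with X standard bivariate normal (my choice of A is arbitrary; any invertible matrix works):

```python
import numpy as np
from scipy import stats

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # Y = A X with X ~ N(0, I_2)
A_inv = np.linalg.inv(A)     # g^{-1}(y) = A^{-1} y

y = np.array([1.5, -0.5])
# f_Y(y) = f_X(A^{-1} y) |det A^{-1}|
f_y = stats.multivariate_normal.pdf(A_inv @ y, mean=np.zeros(2)) \
      * abs(np.linalg.det(A_inv))

# Y is itself Gaussian with covariance A A^T, so compare directly
assert np.isclose(f_y, stats.multivariate_normal.pdf(y, cov=A @ A.T))
```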
In the cases where it isn't, we use the same trick employed in the univariate case: break the sample space up into areas where the function is 1:1. If $A = \{A_i\}$ is the partitioned sample space and $h_i(y) = g^{-1}(y)$ for all $y$ in $g(A_i)$, then:

$$J_i = \det\left(\frac{\partial h_i(y)}{\partial y}\right)$$

and

$$f_Y(y) = \sum_i f_X\big(h_i(y)\big)\,|J_i|$$
Note that $|J_i|$ is the absolute value of the Jacobian determinant, not simply the Jacobian determinant (hey, I'm not the one who decided to call them the same thing and then use ambiguous notation on top of it).
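The classic example of the partition trick is $Y = X^2$, which fails to be 1:1 on the whole real line but is fine on each half. A sketch, assuming X is standard normal so the answer comes out to a known density:

```python
import numpy as np
from scipy import stats

# Y = X^2 with X ~ N(0, 1) is not 1:1; partition into A_1 = (-inf, 0), A_2 = (0, inf)
# h_1(y) = -sqrt(y), h_2(y) = +sqrt(y), and |J_i| = 1 / (2 sqrt(y)) on each piece
grid = np.linspace(0.1, 5.0, 50)
f_y = (stats.norm.pdf(-np.sqrt(grid)) + stats.norm.pdf(np.sqrt(grid))) \
      / (2 * np.sqrt(grid))

# summing over both pieces recovers the chi-square density with 1 degree of freedom
assert np.allclose(f_y, stats.chi2.pdf(grid, df=1))
```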