My linear algebra text doesn't name the theorems relating eigenvectors to diagonalization. That surprises me a bit. Not because they are super deep theorems, but because the results are so incredibly important. I guess it's an example of stuff that was proved long before it was needed. Either that, or the author (writing in 1980) had no clue how important diagonalization was going to become to data analysis.
Anyway, as a practitioner in 2016, not to mention someone about to take the Q, I'd say these results are pretty much the most important takeaway of the whole course.
First, distinct eigenvalues yield independent eigenvectors:
If λ₁, ..., λₖ are distinct eigenvalues of A with corresponding eigenvectors x₁, ..., xₖ, then x₁, ..., xₖ are linearly independent.
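A quick sanity check in NumPy (my own toy matrix, not anything from the text): grab the eigenvectors of a matrix with distinct eigenvalues and confirm that the eigenvector matrix has full rank, i.e. its columns are linearly independent.

```python
import numpy as np

# A made-up 3x3 matrix; it's upper triangular, so its eigenvalues are the
# diagonal entries 2, 3, 5 -- three distinct values.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvectors come back as columns

print(eigenvalues)                          # [2. 3. 5.] -- all distinct
print(np.linalg.matrix_rank(eigenvectors))  # 3 -- the columns are independent
```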
From this immediately follows the result that makes Principal Component Analysis possible:
An n×n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. Furthermore, if A = SDS⁻¹ where D is diagonal, then the columns of S are eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues.
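Here's the theorem in action as a minimal NumPy sketch (again, my own example matrix, not the text's): pull S and D out of np.linalg.eig and rebuild A from them.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 5 and 2, so A is diagonalizable

# np.linalg.eig returns the eigenvalues and a matrix whose columns are the
# corresponding eigenvectors -- exactly the S of the theorem.
eigenvalues, S = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Reassemble A from its eigendecomposition: A = S D S^(-1).
A_rebuilt = S @ D @ np.linalg.inv(S)
print(np.allclose(A, A_rebuilt))  # True
```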
As mentioned above, the proof is a fairly easy exercise in construction:
Suppose A has n linearly independent eigenvectors. Let S be the matrix formed by taking them as column vectors. For each column xⱼ and its associated eigenvalue λⱼ, we know that Axⱼ = λⱼxⱼ. Stacking those n equations side by side gives the single matrix equation AS = SD, where D is the diagonal matrix whose jth diagonal entry is λⱼ.
Since the eigenvectors are independent, S is non-singular. So, multiplying both sides on the right by its inverse gives A = SDS⁻¹ (equivalently, S⁻¹AS = D). The converse is basically walking the same thread back the other way: from A = SDS⁻¹ you get AS = SD, and reading that off column by column says each of the n independent columns of S is an eigenvector of A with the matching diagonal entry of D as its eigenvalue.
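The construction is easy to check numerically, too. A short NumPy sketch (same made-up 2×2 matrix as above) verifying both the intermediate step AS = SD and the final identity:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, S = np.linalg.eig(A)  # columns of S are the eigenvectors
D = np.diag(eigenvalues)

# The n equations A x_j = lambda_j x_j, taken together, are the single
# matrix equation AS = SD.
print(np.allclose(A @ S, S @ D))  # True

# S is invertible (independent columns), so A = S D S^(-1),
# or equivalently S^(-1) A S = D.
print(np.allclose(np.linalg.inv(S) @ A @ S, D))  # True
```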