Prove that the eigenvalues of AB are all ≥ 0, where A and B are positive semidefinite matrices



First, if A is positive definite and B is positive semidefinite, we can use a similarity transformation: AB is similar to A^(-1/2)·(AB)·A^(1/2) = A^(1/2)·B·A^(1/2), which is positive semidefinite, so the eigenvalues of AB are ≥ 0.
Then, for A merely positive semidefinite, use the continuity of eigenvalues: the eigenvalues of AB are the limits, as t → 0+, of the eigenvalues of (A + tI)B, which are ≥ 0 by the first step, so they are still ≥ 0.
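As a quick numerical sanity check of this argument, here is a short NumPy sketch (the matrix sizes, the random seed, and the way A and B are constructed are purely illustrative choices): it builds a positive definite A and a positive semidefinite B, looks at the eigenvalues of AB, and also forms A^(1/2)·B·A^(1/2) to show the two have the same nonnegative spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random positive definite A and positive semidefinite B (sizes and seed are illustrative)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # positive definite
N = rng.standard_normal((n, n - 1))
B = N @ N.T                            # positive semidefinite (rank-deficient)

# Eigenvalues of AB: real and >= 0 up to rounding error
eig_AB = np.linalg.eigvals(A @ B)
print(np.sort(eig_AB.real))
print(np.max(np.abs(eig_AB.imag)))     # ~0

# Similarity used in the proof: A^{-1/2}(AB)A^{1/2} = A^{1/2} B A^{1/2}
w, Q = np.linalg.eigh(A)
A_half = Q @ np.diag(np.sqrt(w)) @ Q.T
print(np.sort(np.linalg.eigvalsh(A_half @ B @ A_half)))   # same spectrum, clearly >= 0
```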



Prove that if A is a positive definite matrix of order n, then all eigenvalues of A are positive


Proof: since A is positive definite, x^T·A·x > 0 for every nonzero n-dimensional vector x, and A is symmetric, so A can be diagonalized: there is an orthogonal (hence invertible) matrix Q satisfying Q^T·A·Q = Λ, the diagonal matrix of the n eigenvalues. Writing P = Q^(-1), we have A = P^T·Λ·P. Substituting A = P^T·Λ·P into x^T·A·x > 0 and setting y = P·x turns the quadratic form into the standard form λ1·y1^2 + ... + λn·yn^2 > 0, which must hold for every nonzero y (P is invertible, so y ranges over all nonzero vectors). Taking y to be each standard unit vector in turn shows that all n eigenvalues must be greater than 0.
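A minimal numerical illustration of this statement, assuming a randomly constructed symmetric positive definite A (the size and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)         # symmetric positive definite (illustrative construction)

x = rng.standard_normal(n)      # a random nonzero vector
print(x @ A @ x > 0)            # True: x^T A x > 0
print(np.linalg.eigvalsh(A))    # all eigenvalues are positive
```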



Let λ1 and λ2 be two different eigenvalues of matrix A, with corresponding eigenvectors α1 and α2 respectively. Find a necessary and sufficient condition for α1 and A(α1 + α2) to be linearly independent.


Proof: the eigenvectors of A belonging to different eigenvalues are linearly independent,
so α1 and α2 are linearly independent.
Also, A(α1 + α2) = Aα1 + Aα2 = λ1·α1 + λ2·α2.
So α1 and A(α1 + α2) are linearly independent if and only if the determinant of their coordinate matrix with respect to the basis α1, α2 is nonzero:
| 1   0  |
| λ1  λ2 |  ≠ 0,
that is, if and only if λ2 ≠ 0.
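A small sketch of this criterion, using the simplest possible model where A is diagonal and α1, α2 are the standard basis vectors (an illustrative choice, not the general case):

```python
import numpy as np

def independent(lam1, lam2):
    # Simplest model: A diagonal, alpha1 = e1, alpha2 = e2 (illustrative choice)
    A = np.diag([lam1, lam2])
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.0, 1.0])
    v = A @ (a1 + a2)                       # = lam1*alpha1 + lam2*alpha2
    # Independent iff the determinant of the 2x2 coordinate matrix is nonzero (= lam2)
    return abs(np.linalg.det(np.column_stack([a1, v]))) > 1e-12

print(independent(3.0, 2.0))   # True:  lambda2 != 0
print(independent(3.0, 0.0))   # False: lambda2 == 0
```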



Let λ1 and λ2 be two different eigenvalues of matrix A, with corresponding eigenvectors α1 and α2 respectively. Prove that α1 and α2 are linearly independent.


Proof: suppose k1·α1 + k2·α2 = 0.  (1)
Multiplying both sides of the equation by A gives k1·Aα1 + k2·Aα2 = 0,
that is, k1·λ1·α1 + k2·λ2·α2 = 0.  (2)
Computing λ1·(1) - (2) gives
k2·(λ1 - λ2)·α2 = 0.
Because α2 is an eigenvector, it is not the zero vector,
so k2·(λ1 - λ2) = 0.
Since λ1 and λ2 are two different eigenvalues of matrix A,
k2 = 0.
Substituting into (1) gives k1·α1 = 0, and α1 ≠ 0, so k1 = 0.
Therefore, α1 and α2 are linearly independent.
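A quick numerical check of this fact on a concrete matrix (the entries below are just an example with two distinct eigenvalues):

```python
import numpy as np

# A matrix with two distinct eigenvalues (the numbers are illustrative)
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])
lams, vecs = np.linalg.eig(A)
print(lams)                               # 2.0 and 5.0, distinct
# The corresponding eigenvectors are linearly independent: the column matrix has full rank
print(np.linalg.matrix_rank(vecs) == 2)   # True
```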



Let λ1 and λ2 be two different eigenvalues of matrix A, and let α1 and α2 be eigenvectors belonging to λ1 and λ2 respectively. Prove that α1 and α2 are linearly independent.


Suppose, for contradiction, that α1 and α2 are linearly dependent; since both are nonzero, we may write α2 = k·α1 with k ≠ 0.
Then λ1·α1 = A·α1,
and λ2·α2 = A·α2 = A·(k·α1) = k·(A·α1) = k·λ1·α1.
On the other hand, λ2·α2 = λ2·k·α1, so k·λ1·α1 = k·λ2·α1.
Since k ≠ 0 and α1 ≠ 0, this forces λ1 = λ2, which contradicts the assumption that λ1 and λ2 are different. Hence α1 and α2 are linearly independent.



Let α be an eigenvector of matrix A belonging to the eigenvalue λ, and let P be an invertible matrix of order n. Then α is also an eigenvector of which of the following matrices? ( )
A. P^-1AP    B. A^2+3A    C. A^2    D. P^TAP


Aα = λα, so A^2·α = A(λα) = λ(Aα) = λ·λ·α = λ^2·α,
so λ^2 is an eigenvalue of A^2 and α is a corresponding eigenvector.
The answer is C.
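A small numerical illustration of this, using an illustrative diagonal A and an arbitrary invertible P (the specific numbers are not from the problem): α stays an eigenvector of A^2, but in general not of P^-1AP.

```python
import numpy as np

A = np.diag([2.0, 3.0])                 # illustrative matrix with eigenvalues 2 and 3
lam = 2.0
alpha = np.array([1.0, 0.0])            # A @ alpha = 2 * alpha

print(np.allclose(A @ alpha, lam * alpha))          # True
print(np.allclose(A @ A @ alpha, lam**2 * alpha))   # True: alpha is an eigenvector of A^2

P = np.array([[1.0, 0.0],
              [1.0, 1.0]])              # some invertible P
B = np.linalg.inv(P) @ A @ P
print(B @ alpha)                        # [2., 1.] is not a multiple of alpha:
                                        # alpha is generally not an eigenvector of P^-1AP
```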



Are the eigenvectors of an orthogonal matrix belonging to different eigenvalues necessarily orthogonal?
What about a general matrix, and what about an orthogonal matrix? Will the eigenvectors belonging to their different eigenvalues be orthogonal?


Yes. The eigenvectors of an orthogonal matrix belonging to different eigenvalues must be orthogonal. Convention: the complex conjugate of a complex number λ is written λ', and the conjugate transpose of a matrix (or vector) X is written X*. If A is an orthogonal matrix, then A* = A^(-1). Let λ1 and λ2 be two different eigenvalues of A, with Ax1 = λ1·x1 and Ax2 = λ2·x2. Every eigenvalue of an orthogonal matrix has modulus 1, so λ2·λ2' = 1 [λ2' = 1/λ2], and since λ1 ≠ λ2 this gives λ1·λ2' ≠ 1. Now x2*·x1 = x2*(A*A)x1 = (Ax2)*(Ax1) = λ2'·λ1·x2*·x1, hence (1 - λ1·λ2')·x2*·x1 = 0, and because λ1·λ2' ≠ 1 we conclude x2*·x1 = 0, i.e. x1 and x2 are orthogonal with respect to the complex inner product.
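A numerical sketch of this fact, using a 2x2 rotation matrix as the orthogonal matrix (the angle is an arbitrary choice); np.vdot computes the complex inner product x1*·x2:

```python
import numpy as np

theta = 0.7                                      # illustrative rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal: A.T @ A = I

lams, V = np.linalg.eig(A)                       # complex eigenvalues e^{+i theta}, e^{-i theta}
print(np.abs(lams))                              # both have modulus 1
x1, x2 = V[:, 0], V[:, 1]
# Orthogonality in the complex inner product: conj(x1) . x2 = 0
print(np.vdot(x1, x2))                           # ~0
```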



Are the eigenvectors belonging to two different eigenvalues of a normal matrix orthogonal?


The eigenvectors belonging to different eigenvalues of a symmetric matrix must be pairwise orthogonal.
Let the eigenvalue λ1 of the symmetric matrix A correspond to the eigenvector x1, and let λ2 correspond to the eigenvector x2. We prove that x1'x2 = 0.
Consider λ1·x1'x2 = (λ1·x1)'x2 = (Ax1)'x2 = x1'A'x2,
and λ2·x1'x2 = x1'(λ2·x2) = x1'Ax2.
Since A is symmetric, A' = A, so λ1·x1'x2 = λ2·x1'x2, that is, (λ1 - λ2)·x1'x2 = 0. Because λ1 ≠ λ2 is given, x1'x2 = 0.
Note that in Ax = λx, x1 and x2 are vectors, λ1 and λ2 are numbers, and x1'x2 is the inner product of two vectors, which is also a number; the rest is elementary algebra.
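The same computation can be checked numerically on a random symmetric matrix (the size and the seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # a random symmetric matrix (illustrative)

lams, X = np.linalg.eig(A)
print(abs(lams[0] - lams[1]) > 1e-9)   # these two eigenvalues are distinct
print(X[:, 0] @ X[:, 1])               # ~0: eigenvectors for distinct eigenvalues are orthogonal
```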



In linear algebra, are the eigenvectors corresponding to different eigenvalues necessarily orthogonal? I know that different eigenvectors of the same eigenvalue may not be orthogonal.
Does the matrix need to be real symmetric? Can you briefly explain why?


For a general matrix, the eigenvectors corresponding to different eigenvalues are only guaranteed to be linearly independent, not orthogonal.
For a real symmetric matrix, the eigenvectors corresponding to different eigenvalues are orthogonal.
Different eigenvectors of the same eigenvalue may not be orthogonal, but a set of linearly independent eigenvectors for that eigenvalue can be orthogonalized (for example by Gram-Schmidt), as in the sketch below.
The full proof is somewhat involved and needs at least three theorems; it is best to consult a textbook.
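A minimal sketch of the orthogonalization step for a repeated eigenvalue, assuming a simple diagonal matrix as an example; the QR factorization plays the role of Gram-Schmidt here:

```python
import numpy as np

# Symmetric matrix with a repeated eigenvalue (illustrative): eigenvalue 1 has a 2-dimensional eigenspace
A = np.diag([1.0, 1.0, 4.0])

# Two linearly independent but non-orthogonal eigenvectors for the eigenvalue 1
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
print(np.allclose(A @ v1, v1), np.allclose(A @ v2, v2))   # both are eigenvectors
print(v1 @ v2)                                            # 1.0, not orthogonal

# Gram-Schmidt (done here via QR) gives an orthonormal basis of the same eigenspace
Q, _ = np.linalg.qr(np.column_stack([v1, v2]))
print(np.round(Q.T @ Q, 10))            # 2x2 identity: orthonormal columns
print(np.allclose(A @ Q, Q))            # the columns of Q are still eigenvectors for eigenvalue 1
```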



For a quadratic form, after finding the eigenvalues of its matrix, why do we need to orthogonalize the eigenvectors?
Once I have found the eigenvalues λ1, λ2, λ3 of the matrix of a quadratic form, isn't the canonical form already f = λ1·y1^2 + λ2·y2^2 + λ3·y3^2?
Why is an orthogonal transformation needed?


1. After the eigenvalues are found, we already know the canonical form of the quadratic form. If the task is only to write down the canonical form, we are done.
2. But if we go on to ask which linear substitution actually transforms the quadratic form into that standard form, we must go back to
f(x1, x2, x3) = x'Ax.
Under the substitution x = Py we get g(y1, y2, y3) = (Py)'A(Py) = y'P'APy = y'(P'AP)y,
so we need P'AP to be a diagonal matrix.
3. However, the eigenvectors are found from the condition that P^(-1)AP is a diagonal matrix.
To be able to use the analysis in step 2, note that when P is an orthogonal matrix, P^(-1) = P'.
Therefore, after the eigenvalues are computed, the eigenvectors must be orthogonalized and normalized so that they form an orthogonal matrix P, giving the orthogonal linear substitution x = Py (see the sketch below).
Of course, if we only need some invertible linear substitution that brings the quadratic form to a canonical form (containing only square terms), the problem is simpler and fewer properties of the form are preserved.
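A short NumPy sketch of the whole procedure on an illustrative symmetric matrix: the orthonormal eigenvectors form an orthogonal P, P'AP is diagonal, and the substitution x = Py turns x'Ax into a sum of λi·yi^2 terms.

```python
import numpy as np

# Matrix of a quadratic form f(x) = x'Ax (the entries are illustrative)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lams, P = np.linalg.eigh(A)             # orthonormal eigenvectors as columns of P
print(np.allclose(P.T @ P, np.eye(3)))  # P is orthogonal, so P^(-1) = P'
print(np.round(P.T @ A @ P, 10))        # diagonal matrix of the eigenvalues

# Under the orthogonal substitution x = Py the quadratic form becomes
# f = lam1*y1^2 + lam2*y2^2 + lam3*y3^2
y = np.array([0.3, -1.2, 0.5])          # any y
x = P @ y
print(np.isclose(x @ A @ x, np.sum(lams * y**2)))   # True
```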