A and B are real symmetric matrices of order n, and A and B have the same eigenvalues. Are A and B similar? Why?


Since A and B are real symmetric, each of them can be (orthogonally) diagonalized. Because they have the same eigenvalues, both are similar to the same diagonal matrix D whose diagonal entries are those eigenvalues, and by transitivity of similarity, A and B are similar.
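For instance, here is a minimal numerical sketch in Python (numpy); the eigenvalues 1, 2, 3 and the random orthogonal matrices P and Q are arbitrary choices for illustration:

import numpy as np

D = np.diag([1.0, 2.0, 3.0])                      # the common eigenvalues
rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # a random orthogonal matrix
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # another one
A = P @ D @ P.T                                   # real symmetric, eigenvalues 1, 2, 3
B = Q @ D @ Q.T                                   # real symmetric, same eigenvalues
S = Q @ P.T                                       # candidate similarity transform: B = S A S^{-1}
print(np.allclose(S @ A @ S.T, B))                # True; S is orthogonal, so S^{-1} = S.T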



What are the necessary and sufficient conditions for matrix equivalence?


Two matrices of the same size are equivalent if and only if they have the same rank. (Equal rank also means that the corresponding homogeneous linear systems have solution spaces of the same dimension.)
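A minimal sketch in Python (sympy), with two arbitrarily chosen 2×2 matrices:

from sympy import Matrix

A = Matrix([[1, 2], [2, 4]])   # rank 1
B = Matrix([[0, 1], [0, 0]])   # rank 1
# Same size and same rank, hence A and B are equivalent:
print(A.rank() == B.rank())    # True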



Determine whether the matrix is similar to a diagonal matrix
Matrix A =
2  0  0
1  2 -1
1  0  1
I know that a square matrix A of order n is similar to a diagonal matrix if and only if it has n linearly independent eigenvectors.
There is a sentence in the solution of this problem: the three eigenvalues of the matrix are 1, 2, 2; since the rank of (A - 2E) is 1, there are two linearly independent eigenvectors, so the matrix is similar to a diagonal matrix.
How should I understand this sentence? Is there a theorem I can refer to?
What confuses me in the answer is the claim "when the rank of (A - 2E) is 1, there are two linearly independent eigenvectors, so it is similar to a diagonal matrix." Does it mean that as long as the rank is 1 there must be two linearly independent eigenvectors? How should I understand this?


Eigenvectors belonging to different eigenvalues are always linearly independent, so the question comes down to whether there are two independent eigenvectors for the eigenvalue 2.
The eigenvectors for the eigenvalue 2 are exactly the nonzero solutions of (A - 2E)x = 0.
When r(A - 2E) = 1, the solution space of (A - 2E)x = 0 has dimension 3 - 1 = 2, so there are exactly two linearly independent eigenvectors belonging to 2. Together with an eigenvector for the eigenvalue 1, the matrix then has three linearly independent eigenvectors and is similar to a diagonal matrix.
If you carry out the computation of the eigenvectors for 2, the first step is to form A - 2E; since 2 is a double eigenvalue, diagonalizability requires exactly that r(A - 2E) = 1.
The answer took a small detour; the point is simply that the double eigenvalue 2 must contribute two independent eigenvectors. Try another example with a double eigenvalue λ: compute r(A - λE) and check whether there are two independent eigenvectors, and the statement will become clear. Reading alone may not be enough; have a go yourself.
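Everything above can be verified with a short computation; here is a sketch in Python (sympy):

from sympy import Matrix, eye

A = Matrix([[2, 0, 0], [1, 2, -1], [1, 0, 1]])
print(A.eigenvals())        # {2: 2, 1: 1}, i.e. eigenvalues 1, 2, 2
M = A - 2*eye(3)
print(M.rank())             # 1, so the solution space of (A - 2E)x = 0 has dimension 3 - 1 = 2
print(len(M.nullspace()))   # 2 linearly independent eigenvectors for the eigenvalue 2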



How to judge whether a matrix can be similar to diagonal form


If a square matrix A of order n has n distinct eigenvalues, then A is similar to a diagonal matrix.
More generally, A is similar to a diagonal matrix if and only if r(A - λE) = n - k for every k-fold eigenvalue λ of A;
this is equivalent to there being k linearly independent eigenvectors belonging to the eigenvalue λ.
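The criterion translates directly into a check; here is a sketch in Python (sympy), where is_diagonalizable is written out by hand just to mirror the theorem:

from sympy import Matrix, eye

def is_diagonalizable(A):
    # A is similar to a diagonal matrix iff for every eigenvalue lam
    # with algebraic multiplicity k we have rank(A - lam*E) = n - k.
    n = A.shape[0]
    return all((A - lam*eye(n)).rank() == n - k
               for lam, k in A.eigenvals().items())

print(is_diagonalizable(Matrix([[2, 0, 0], [1, 2, -1], [1, 0, 1]])))  # True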



Can a matrix be similar to a diagonal matrix?
How do we judge whether the following square matrix is similar to a diagonal matrix?
1 1 0
0 2 0
0 0 2


Whether a matrix can be diagonalized or not can be judged by eigenvalues
For a square matrix of order n, if there are n different eigenvalues, then the square matrix can be diagonalized
If there are multiple roots, then judge whether the algebraic multiplicity and geometric multiplicity are equal. If they are equal, they can be diagonalized; otherwise, they cannot be diagonalized
For this problem the matrix is triangular, so the eigenvalues can be read off the diagonal: 1 and 2 (a double root, so its algebraic multiplicity is 2)
Substituting λ = 2 and solving (2E - A)x = 0, the basic solution system contains two solution vectors
It means that its geometric multiplicity is also 2
So the matrix is diagonalizable
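A sympy sketch confirming this example:

from sympy import Matrix, eye

A = Matrix([[1, 1, 0], [0, 2, 0], [0, 0, 2]])
print((2*eye(3) - A).rank())   # 1, so the geometric multiplicity of 2 is 3 - 1 = 2
print(A.is_diagonalizable())   # True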



A problem on similarity to a diagonal matrix
A = (a_ij) is an n×n upper triangular matrix, the main diagonal entries of A are all equal, and at least one entry a_ij with i < j is nonzero. Show that A is not similar to a diagonal matrix.


The main diagonal entries of an upper triangular matrix are its eigenvalues. By the hypothesis, the only eigenvalue of A is the common diagonal entry a, with algebraic multiplicity n. For A to be diagonalizable, the geometric multiplicity must equal the algebraic multiplicity: the solution space of the linear system (aE - A)x = 0 must have dimension n, which requires r(aE - A) = 0, i.e. A - aE = 0 and hence A = aE. But this contradicts the assumption that some entry above the diagonal is nonzero, so A cannot be diagonalized.
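The smallest instance of this situation is a 2×2 Jordan block; a sympy sketch (the diagonal value 3 is an arbitrary choice):

from sympy import Matrix, eye

A = Matrix([[3, 1], [0, 3]])   # equal diagonal entries, one nonzero entry above
print((A - 3*eye(2)).rank())   # 1, not 0, so A != 3E
print(A.is_diagonalizable())   # False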



Matrix A is similar to a diagonal matrix
Matrix A is similar to a diagonal matrix. Is its adjoint matrix similar to this diagonal matrix? Or is it only similar to the adjoint matrix of this diagonal matrix? Why?


The adjoint matrix A* must be similar to the adjoint matrix of the diagonal matrix similar to A (denote that diagonal matrix by M): from A = PMP^{-1} it follows that A* = (P^{-1})* M* P* = P M* P^{-1}, so this part needs no further proof.
The real question is whether A* is similar to M itself.
When A is a full rank matrix, A* = |A| · A^{-1}.
Since A* is always similar to M*, asking whether A* is similar to M amounts to asking whether M and M* are similar.
Take M = diag(1, 2, 3). Then M* = diag(6, 3, 2). The eigenvalues differ, so M and M* are not similar (although in the second-order case they can be proved similar).
Therefore, in general A* is not similar to M.
When the n-order matrix A is not a full rank matrix, write r(X) for the rank of the matrix X. Then
r(A*) = 1 when r(A) = n - 1,
r(A*) = 0 when r(A) < n - 1.
(To see why, go back to the definition of A* and note the relationship between vanishing minors and the rank of a matrix.)
Similar matrices have equal rank, and the diagonal matrix similar to A is still M, so
r(M) = r(A).
For M to be similar to A*, the ranks must be equal: r(A) = r(M) = r(A*).
This obviously holds when r(A) = 0.
When r(A) ≠ 0, the only possibility is r(A) = 1 with n = 2 (since r(A*) ≤ 1 forces r(A) = 1, and r(A*) = 1 requires r(A) = n - 1, so n = 2). In that case M and M* are similar, and by transitivity of similarity A* is similar to M.
In summary: in the second-order case A* is indeed similar to M; beyond second order it is generally not, except in special cases.
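The diag(1, 2, 3) computation above can be checked directly; a sympy sketch:

from sympy import Matrix, diag

M = diag(1, 2, 3)
print(M.adjugate())              # diag(6, 3, 2)
print(M.eigenvals())             # {1: 1, 2: 1, 3: 1}
print(M.adjugate().eigenvals())  # {6: 1, 3: 1, 2: 1}: different spectrum, so not similar to M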



The necessary and sufficient condition for a quasi-diagonal (block diagonal) matrix to be diagonalizable is that every diagonal block can be diagonalized. How do we prove the necessity: that if the quasi-diagonal matrix can be diagonalized, then every block can be?


The approach using invariant subspaces is relatively simple.
We need one auxiliary conclusion here: if the matrix A can be diagonalized, then the whole space V is the direct sum of the eigenspaces of A.
Consequently, for any invariant subspace W of A, we have W = (W ∩ V_{λ1}) ⊕ ... ⊕ (W ∩ V_{λs}), where the V_{λi} are the eigenspaces, so the restriction of A to W can also be diagonalized.
This conclusion looks simple, but it is not so easy to prove!
Now look at the problem again. We know A is a quasi-diagonal matrix,
so V has a direct sum decomposition into invariant subspaces M_i of A, one for each diagonal block A_i.
Since A can be diagonalized, V is the direct sum of the eigenspaces of A. Applying the auxiliary conclusion to each M_i, the restriction of A to M_i, which is exactly the block A_i, again has an eigenspace direct sum decomposition.
Thus the restriction of A to each M_i can be diagonalized, i.e. every block A_i is diagonalizable.
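A concrete instance of the necessity direction; a sympy sketch with two arbitrary blocks:

from sympy import Matrix

# Quasi-diagonal matrix with blocks diag(1, 2) and the Jordan block [[3, 1], [0, 3]]
A = Matrix([[1, 0, 0, 0],
            [0, 2, 0, 0],
            [0, 0, 3, 1],
            [0, 0, 0, 3]])
print(A.is_diagonalizable())   # False: the second block is not diagonalizable, so neither is A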



Let A, B and A + B all be invertible matrices of order n. Prove that A^{-1} + B^{-1} is an invertible matrix, and find the inverse of A^{-1} + B^{-1}.
I can follow each step of the answer below, but I don't understand how the first step comes about. How does one "first notice" this? I would never have thought to multiply like that; could someone explain the idea?
First of all, notice that
A(A^{-1}+B^{-1})B=B+A,
therefore
A^{-1}+B^{-1}=A^{-1}(A+B)B^{-1},
So there is
(A^{-1}+B^{-1})^{-1}=B(A+B)^{-1}A.


In fact, this is quite natural. If you really can't think of the method above, first consider the case where a and b are ordinary numbers; there you additionally have commutativity of multiplication, and ordinary fraction arithmetic gives 1/a + 1/b = (a + b)/(ab), hence (1/a + 1/b)^{-1} = ab/(a + b). The matrix identity is the noncommutative version of this, with ab/(a + b) written carefully as B(A + B)^{-1}A.
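The matrix identity itself is easy to verify; a sympy sketch with arbitrarily chosen invertible A and B:

from sympy import Matrix

A = Matrix([[2, 1], [0, 1]])
B = Matrix([[1, 0], [1, 3]])
# A, B and A + B are all invertible here (determinants 2, 3 and 11)
lhs = (A.inv() + B.inv()).inv()
rhs = B * (A + B).inv() * A
print(lhs == rhs)   # True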



Let A and B be invertible matrices of order n (n ≥ 2), and prove that (AB)* = A*B*


Because A*A = AA* = |A|E, we have A* = |A| · A^{-1}, where A^{-1} is the inverse of A and |A| is the determinant of A.
(AB)* = |AB| (AB)^{-1} = |AB| B^{-1} A^{-1} = (|B| B^{-1})(|A| A^{-1}) = B*A*
This proves (AB)* = B*A*
But your problem asks to prove (AB)* = A*B*.
That would mean the product of the two adjoint matrices can be exchanged, which is not true!
Here is a counterexample: take A = (1, 2; 0, 1) and B = (1, 0; 3, 1), where ";" separates rows, i.e. A is the 2×2 matrix whose first row is 1, 2 and whose second row is 0, 1. A and B satisfy the conditions, but the equation (AB)* = A*B* does not hold.
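A sympy sketch confirming both the identity and the counterexample:

from sympy import Matrix

A = Matrix([[1, 2], [0, 1]])
B = Matrix([[1, 0], [3, 1]])
print((A*B).adjugate() == B.adjugate() * A.adjugate())   # True: (AB)* = B*A*
print((A*B).adjugate() == A.adjugate() * B.adjugate())   # False: (AB)* != A*B*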