Let A be an m × n matrix and X an n × 1 column vector. If the homogeneous system AX = 0 has nonzero solutions, what can be said about R(A)?


If AX = 0 has a nonzero solution, then R(A) < n, that is, the rank of the coefficient matrix is less than the number of unknowns.
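For a small illustration (my own example, not part of the original question): take

\[
A=\begin{pmatrix}1&1&0\\0&1&1\end{pmatrix},\qquad
X=\begin{pmatrix}1\\-1\\1\end{pmatrix}.
\]

Then AX = 0 with X ≠ 0, and indeed R(A) = 2 < 3 = n.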



Is every eigenvalue of A^T * A (the product of the transpose of A with A) necessarily the square of some number?


Not necessarily
Counterexample
A =
0 2
3 0
1 2
The eigenvalues of A^T * A are 9 − √5 ≈ 2063/305 and 9 + √5 ≈ 3427/305, neither of which is a perfect square.
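Spelling out the computation behind the counterexample:

\[
A^{T}A=\begin{pmatrix}0&3&1\\2&0&2\end{pmatrix}\begin{pmatrix}0&2\\3&0\\1&2\end{pmatrix}
=\begin{pmatrix}10&2\\2&8\end{pmatrix},\qquad
\det(A^{T}A-\lambda I)=\lambda^{2}-18\lambda+76=0\;\Longrightarrow\;\lambda=9\pm\sqrt{5}.
\]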



Let A be an m × n matrix and B an n × m matrix. Then for the system of linear equations (AB)x = 0 ( )
A. When n > m, it has only the zero solution.
B. When n > m, it must have nonzero solutions.
C. When m > n, it has only the zero solution.
D. When m > n, it must have nonzero solutions.


Because AB is an m × m square matrix, the system (AB)x = 0 has m unknowns, and R(AB) ≤ R(A) ≤ min(m, n).
(1) When m > n, R(AB) ≤ R(A) ≤ n < m, that is, the rank of the coefficient matrix is less than the number of unknowns, so the system must have nonzero solutions.
(2) When m < n, we only get R(AB) ≤ R(A) ≤ m, so R(AB) may or may not equal m, and nothing definite can be concluded.
Therefore the answer is D.
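A concrete instance of case (1), with m = 2 > n = 1 (my own example):

\[
A=\begin{pmatrix}1\\0\end{pmatrix},\quad B=\begin{pmatrix}1&0\end{pmatrix},\quad
AB=\begin{pmatrix}1&0\\0&0\end{pmatrix},
\]

so R(AB) = 1 < 2, and x = (0, 1)^T is a nonzero solution of (AB)x = 0.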



C language problem: given the values of matrices A and B, compute their product C.
 


The product is formed with three nested loops: the outer loop for (i = 0; i < m; i++) runs over the rows of A, the middle loop runs over the columns of B, and the innermost loop accumulates C[i][j] += A[i][k] * B[k][j].
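Since the original answer is cut off after the first for loop, the following is a minimal sketch; the dimensions (2 × 3 and 3 × 2) and the example values of A and B are my assumptions, not part of the problem.

#include <stdio.h>

#define M 2   /* rows of A */
#define N 3   /* columns of A = rows of B */
#define P 2   /* columns of B */

int main(void)
{
    /* example values; the actual problem supplies its own A and B */
    int A[M][N] = { {1, 2, 3}, {4, 5, 6} };
    int B[N][P] = { {1, 0}, {0, 1}, {1, 1} };
    int C[M][P];
    int i, j, k;

    for (i = 0; i < M; i++) {          /* rows of A */
        for (j = 0; j < P; j++) {      /* columns of B */
            C[i][j] = 0;
            for (k = 0; k < N; k++)    /* accumulate the dot product */
                C[i][j] += A[i][k] * B[k][j];
        }
    }

    for (i = 0; i < M; i++) {          /* print the result */
        for (j = 0; j < P; j++)
            printf("%d ", C[i][j]);
        printf("\n");
    }
    return 0;
}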



Finding the diagonalization of the product of two diagonalizable matrices
Matrix A1 and matrix A2 are two diagonalizable matrices
A1 = V1 * D1 * inverse(V1)
A2 = V2 * D2 * inverse(V2)
and
Diagonal matrix D1 = D2 = D
Find the eigenvector matrix and eigenvalue matrix of A1 * A2
The eigenvalues and eigenvectors of A1 and A2 are the same, but the columns of the eigenvector matrices V1 and V2 are in a different order.


Note that the condition of identical eigenvalues matters less here than the condition of identical eigenvectors.
Since the columns of V2 are those of V1 in a different order, V2 = V1 * P for a permutation matrix P, so we can write A2 = V1 * P * D2 * P^{-1} * V1^{-1} = V1 * D3 * V1^{-1}, where D3 = P * D2 * P^{-1} is still a diagonal matrix, just a rearrangement of the diagonal entries of D.
So A1 * A2 = V1 * (D1 * D3) * V1^{-1} is the desired eigendecomposition: V1 is the eigenvector matrix and D1 * D3 is the diagonal eigenvalue matrix.
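A small numerical check (my own example, with D = diag(2, 3) and P swapping the two columns):

\[
V_1=\begin{pmatrix}1&1\\0&1\end{pmatrix},\quad
V_2=V_1P=\begin{pmatrix}1&1\\1&0\end{pmatrix},\quad
A_1=V_1DV_1^{-1}=\begin{pmatrix}2&1\\0&3\end{pmatrix},\quad
A_2=V_2DV_2^{-1}=\begin{pmatrix}3&-1\\0&2\end{pmatrix},
\]
\[
D_3=PDP^{-1}=\begin{pmatrix}3&0\\0&2\end{pmatrix},\qquad
A_1A_2=\begin{pmatrix}6&0\\0&6\end{pmatrix}=V_1(D_1D_3)V_1^{-1}.
\]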



If A and B commute, that is, AB = BA, prove that A and B have at least one common eigenvector.


First, we may as well translate the problem into the language of linear transformations: fix a basis, and denote the linear transformations whose matrices are A and B again by A and B.
Over the complex field the characteristic polynomial always has a root, and every eigenvalue has a corresponding eigenvector.
Take any eigenvalue λ of A and consider the corresponding eigenspace W of A (that is, the solution space of AX = λX; W ≠ 0).
For any X ∈ W, A(BX) = B(AX) = λ(BX), so BX ∈ W; that is, W is an invariant subspace of B.
Consider the restriction of B to W: as a linear transformation of a nonzero linear space over the complex field, it must have an eigenvalue and a corresponding eigenvector.
This eigenvector lies in the eigenspace W of A, so it is a common eigenvector of A and B.
Without the language of linear transformations, one would have to express the restriction of B to W as a block matrix.
However, the linear-transformation formulation is more convenient, so I will not write out the details.
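A concrete illustration (my own example): the matrices

\[
A=\begin{pmatrix}2&1\\0&2\end{pmatrix},\qquad B=\begin{pmatrix}3&1\\0&3\end{pmatrix}
\]

satisfy AB = BA (both products equal \(\begin{pmatrix}6&5\\0&6\end{pmatrix}\)), and X = (1, 0)^T satisfies AX = 2X and BX = 3X, so it is a common eigenvector, even though the corresponding eigenvalues differ.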



If A and B commute, that is, AB = BA, prove that A and B have at least one common eigenvalue.


One can only say that A and B have at least one common eigenvector; a common eigenvalue cannot be guaranteed. For example, A = I and B = 0 commute, but their only eigenvalues are 1 and 0 respectively.
As for the existence of a common eigenvector: take any eigenvalue λ of A and its eigenspace W; then for any vector X in W, A(BX) = B(AX) = λ(BX), so BX also belongs to W. That is, W is an invariant subspace of B, and inside W there must be an eigenvector of B, which is then a common eigenvector of A and B.



If AB = BA, then B is said to commute with A. Find all matrices B that commute with A, where
A =
1 1
0 0


Suppose B=
b1 b2
b3 b4
Because AB = BA, we have
AB =
b1 + b3   b2 + b4
0         0
=
b1 b1
b3 b3
= BA
So b1 + b3 = b1
b2 + b4 = b1
b3 = 0
So B=
a+b a
0 b
where a and b are arbitrary constants (a = b2, b = b4).
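As a check (a verification not in the original answer), substituting this B back in gives

\[
AB=\begin{pmatrix}1&1\\0&0\end{pmatrix}\begin{pmatrix}a+b&a\\0&b\end{pmatrix}
=\begin{pmatrix}a+b&a+b\\0&0\end{pmatrix}
=\begin{pmatrix}a+b&a\\0&b\end{pmatrix}\begin{pmatrix}1&1\\0&0\end{pmatrix}=BA.
\]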



If AB = BA for matrices A and B, can we conclude that A = B? Why?


No. Let A = E (the identity matrix) and B be any matrix of the same order that is not the identity; then AB = BA, but A = B does not hold.
However, it should be pointed out that A and B must be of the same type, i.e. square matrices with the same number of rows and columns, for AB = BA to make sense.



Delete one row from matrix A to obtain matrix B. What is the relationship between the ranks of A and B?


R(B) ≤ R(A) ≤ R(B) + 1: deleting a row cannot increase the rank, and it can lower the rank by at most 1.
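Both bounds can occur, for example (my own example): for

\[
A=\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix},
\]

deleting the third row leaves R(B) = 2 = R(A), while deleting the first row leaves B = \(\begin{pmatrix}0&1\\0&0\end{pmatrix}\) with R(B) = 1 = R(A) − 1.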