An invertible matrix is a full-rank matrix. Is the converse also true? This is a linear algebra problem


The answer is yes: the converse also holds. The determinant of a full-rank square matrix is nonzero, so by the criterion |A| ≠ 0 for the existence of an inverse matrix, a full-rank matrix is invertible
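A quick numerical check of the equivalence above (this sketch assumes NumPy): for a square matrix, full rank, nonzero determinant, and invertibility go together.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # a full-rank 2x2 matrix

rank = np.linalg.matrix_rank(A)
det = np.linalg.det(A)
A_inv = np.linalg.inv(A)     # would raise LinAlgError if A were singular

print(rank)                               # 2: full rank
print(abs(det) > 1e-12)                   # True: nonzero determinant
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A_inv really inverts A
```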



What is a full-rank matrix?


This takes some linear algebra background, so let me give you the concepts and you can digest them slowly. First, the rank of a matrix: use elementary row transformations to reduce matrix A to row echelon form; the number of nonzero rows in that echelon form is defined as the rank of the matrix, written R(A). A full-rank matrix is then a square matrix of order n with R(A) = n (for a rectangular matrix, full row rank means R(A) equals the number of rows, and full column rank means it equals the number of columns).
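The definition above can be sketched directly in code: reduce the matrix to row echelon form with elementary row operations and count the nonzero rows. This is a minimal illustration in NumPy (an assumption of this sketch), without the pivoting refinements a production routine would use.

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Count nonzero rows of the row echelon form of A (i.e., R(A))."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0  # index of the next pivot row
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if abs(A[i, c]) > tol), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]   # swap rows (elementary operation)
        A[r] = A[r] / A[r, c]           # scale the pivot row
        for i in range(r + 1, rows):
            A[i] -= A[i, c] * A[r]      # eliminate below the pivot
        r += 1
    return r  # number of nonzero rows in the echelon form

A = np.array([[1, 2, 3],
              [2, 4, 6],    # a multiple of row 1
              [1, 0, 1]])
print(rank_by_elimination(A))     # 2
print(np.linalg.matrix_rank(A))   # 2, agrees with the library routine
```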



Is a column-full-rank matrix invertible?


No
An invertible matrix must be a square matrix, that is, the number of rows equals the number of columns
For a rectangular matrix with full column (or row) rank, one generally considers its left inverse (or right inverse) instead
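A sketch of the left inverse mentioned above (NumPy assumed): a tall matrix A with full column rank has no two-sided inverse, but L = (AᵀA)⁻¹Aᵀ satisfies LA = I.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # 3x2, full column rank (rank 2)

L = np.linalg.inv(A.T @ A) @ A.T        # a left inverse of A

print(np.allclose(L @ A, np.eye(2)))    # True: L A is the 2x2 identity
# A @ L is NOT the 3x3 identity, so L is not a two-sided inverse:
print(np.allclose(A @ L, np.eye(3)))    # False
```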



Is a full-rank matrix invertible?


Yes. Invertibility only requires |A| ≠ 0, and a full-rank square matrix satisfies this condition



Proof that multiplying A by an invertible matrix does not change its rank


Two methods
1. Elementary transformations do not change the rank of a matrix
Because an invertible matrix can be expressed as a product of elementary matrices,
multiplying A by an invertible matrix is equivalent to performing elementary transformations on A,
so the rank of A is unchanged
--This method covers left multiplication of A, right multiplication of A, or both at once
2. Use the inequality R(AB) ≤ min(R(A), R(B)): for invertible P, R(PA) ≤ R(A), and also R(A) = R(P⁻¹(PA)) ≤ R(PA), so R(PA) = R(A); the same argument works for multiplication on the right
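Both methods can be illustrated numerically (NumPy assumed): multiplying a rank-2 matrix by a random invertible P, on either side or both, leaves the rank unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # a multiple of row 1
              [0.0, 1.0, 1.0]])  # rank 2

P = rng.standard_normal((3, 3))
while abs(np.linalg.det(P)) < 1e-6:   # re-draw until P is (numerically) invertible
    P = rng.standard_normal((3, 3))

print(np.linalg.matrix_rank(A))          # 2
print(np.linalg.matrix_rank(P @ A))      # 2: left multiplication
print(np.linalg.matrix_rank(A @ P))      # 2: right multiplication
print(np.linalg.matrix_rank(P @ A @ P))  # 2: both sides
```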



On understanding a property formula of matrix rank
Self-study for the postgraduate entrance exam mathematics: R(A, B) ≤ R(A) + R(B)
Let R(A) = r and R(B) = t. Transform A and B by column operations into A′ and B′ respectively, so that (A, B) is equivalent to (A′, B′). Since (A′, B′) contains only r + t nonzero columns, R(A′, B′) ≤ r + t, and R(A, B) = R(A′, B′), so R(A, B) ≤ r + t, that is, R(A, B) ≤ R(A) + R(B)
How should this proof be understood? In particular, how to understand the sentence "R(A′, B′) ≤ r + t because (A′, B′) contains only r + t nonzero columns"?


It's easy to understand: column transformations of a matrix do not change its rank. Since R(A) = r and R(B) = t, there exist column transformations that turn A and B into matrices with only r and t nonzero columns respectively (for example, their column echelon forms), so the transformed pair (A′, B′) contains only r + t nonzero columns in total. And how could the rank of a matrix containing only r + t nonzero columns be greater than r + t?
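The inequality is easy to check numerically (NumPy assumed), where (A, B) means the matrix formed by placing the columns of A and B side by side.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 0.0],
              [1.0, 2.0]])   # rank 1 (column 2 = 2 * column 1)
B = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.0, 0.0]])   # rank 2

AB = np.hstack([A, B])       # the block matrix (A, B)
r = np.linalg.matrix_rank(A)
t = np.linalg.matrix_rank(B)

print(r, t)                                # 1 2
print(np.linalg.matrix_rank(AB))           # 3 here, and always <= r + t
print(np.linalg.matrix_rank(AB) <= r + t)  # True
```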



Rank of a matrix in linear algebra
As far as I know, if 0 is an eigenvalue of a third-order matrix A, then |A| = 0, that is, its rank is less than 3.
If an n-order matrix has k nonzero eigenvalues, can we infer that the rank of the matrix is k?
For example, A is a third-order matrix whose eigenvalues are 0 and 2. Judge its rank. These are the only known conditions, so we cannot carry out elementary transformations


Yes, provided the matrix is diagonalizable (a real symmetric matrix, for instance); in general one can only conclude that the rank is at least k.
The rank of a matrix is invariant under elementary transformations; that is a basic property. Elementary transformation is just a tool, so why shouldn't it be used as an auxiliary step in the reasoning?
A diagonalizable matrix with k nonzero eigenvalues is similar to a diagonal matrix with k nonzero diagonal entries, which can be elementary-transformed into an identity matrix of order k padded with zeros; hence its rank is k
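A sketch of both cases (NumPy assumed): for a diagonalizable matrix such as a real symmetric one, the rank equals the number of nonzero eigenvalues; a nilpotent counterexample shows why diagonalizability matters.

```python
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])     # symmetric, eigenvalues 0, 2, 2

eig = np.linalg.eigvalsh(S)
k = int(np.sum(np.abs(eig) > 1e-10))  # number of nonzero eigenvalues
print(k, np.linalg.matrix_rank(S))    # 2 2: they agree

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # nilpotent: both eigenvalues are 0
print(np.linalg.matrix_rank(N))     # 1, even though k = 0 for this matrix
```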



Is an idempotent matrix in linear algebra necessarily a symmetric matrix?


No
For example,
1 -1
0  0
is an idempotent matrix (its square equals itself), but it is not symmetric
In addition, an idempotent matrix is always diagonalizable, and its eigenvalues can only be 1 or 0
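The counterexample above is easy to verify (NumPy assumed): A is idempotent but not symmetric, and its eigenvalues are 0 and 1.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  0.0]])

print(np.allclose(A @ A, A))   # True: idempotent
print(np.allclose(A, A.T))     # False: not symmetric
print(np.allclose(sorted(np.linalg.eigvals(A).real), [0.0, 1.0]))  # True
```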



What is a symmetric positive definite matrix?


Let A be a symmetric matrix of order n. If xᵀAx > 0 (respectively ≥ 0) for every n-dimensional vector x ≠ 0, then A is called a positive definite (semi-positive definite) matrix. Likewise, if xᵀAx < 0 (respectively ≤ 0) for every n-dimensional vector x ≠ 0, then A is called a negative definite (semi-negative definite) matrix
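One common numerical test of the definition (NumPy assumed): a symmetric matrix is positive definite iff all its eigenvalues are positive, equivalently iff its Cholesky factorization succeeds.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric

print(np.all(np.linalg.eigvalsh(A) > 0))  # True: eigenvalues are 1 and 3

np.linalg.cholesky(A)        # succeeds; would raise LinAlgError if A were not PD

x = np.array([3.0, -1.0])    # any nonzero x gives x^T A x > 0
print(x @ A @ x > 0)         # True (here x^T A x = 14)
```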



Why is the rank of the matrix formed by adding two matrices less than or equal to the sum of the ranks of the two original matrices?
Matrices A and B are both s × n matrices, and A + B gives matrix C. Why is rank(C) ≤ rank(A) + rank(B)?


Memorizing this by rote makes it hard to reason about, so here is a way to see it geometrically, in a loose sense
First, rank can be understood as the maximal number of linearly independent column vectors
If matrices A and B have ranks a and b respectively, then A has a linearly independent columns and B has b, and every column of each matrix is a linear combination of them
Each column of A + B is the sum of the corresponding columns of A and B, so it is a linear combination of those a + b vectors
Hence the columns of A + B contain at most a + b linearly independent vectors, and nothing beyond that can appear, so rank(A + B) ≤ a + b
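A numerical check of the inequality (NumPy assumed), building low-rank matrices explicitly as sums of outer products of random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# an outer product u v^T always has rank 1; a sum of two is (generically) rank 2
A = np.outer(rng.standard_normal(4), rng.standard_normal(4))       # rank 1
B = (np.outer(rng.standard_normal(4), rng.standard_normal(4))
     + np.outer(rng.standard_normal(4), rng.standard_normal(4)))   # rank 2

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rC = np.linalg.matrix_rank(A + B)

print(rA, rB, rC)      # e.g. 1 2 3
print(rC <= rA + rB)   # True: rank(A + B) <= rank(A) + rank(B)
```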