On matrices in linear algebra~
Given that the rank of the matrix
A =
a 1 1 1
1 a 1 1
1 1 a 1
1 1 1 a
is 3, what is a, and why?



Write the matrix out and apply elementary row operations (reorder the rows so that a row beginning with 1 comes first, then clear out each column below the pivot). The matrix reduces to the triangular form
1   a     1     1
0   1-a   a-1   0
0   0     1-a   a-1
0   0     0     -(a²+2a-3)
Because the rank is 3, exactly one row must become the zero row, so a²+2a-3 = (a+3)(a-1) = 0, giving a = 1 or a = -3. But if a = 1, every entry of A equals 1 and the rank would be 1, not 3. Therefore a = -3.
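As a quick numerical cross-check of this answer (not part of the original post; the elimination routine, the tolerance and the test values below are my own illustrative choices), the following C++ sketch builds the 4 × 4 matrix for a given a and computes its rank by Gaussian elimination. It should report rank 3 for a = -3 and rank 1 for a = 1.

// Sketch: numerically estimate the rank of the 4x4 matrix that has a on the
// diagonal and 1 everywhere else. Double precision and a crude tolerance
// are assumed to be adequate for this illustration.
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>

int rank4(std::array<std::array<double, 4>, 4> m) {
    const double eps = 1e-9;          // tolerance for treating a pivot as zero
    int rank = 0, row = 0;
    for (int col = 0; col < 4 && row < 4; ++col) {
        int pivot = row;              // partial pivoting: largest entry in column
        for (int r = row + 1; r < 4; ++r)
            if (std::fabs(m[r][col]) > std::fabs(m[pivot][col])) pivot = r;
        if (std::fabs(m[pivot][col]) < eps) continue;   // column already cleared
        std::swap(m[row], m[pivot]);
        for (int r = row + 1; r < 4; ++r) {
            double f = m[r][col] / m[row][col];
            for (int c = col; c < 4; ++c) m[r][c] -= f * m[row][c];
        }
        ++row;
        ++rank;
    }
    return rank;
}

int main() {
    const double values[] = {-3.0, 1.0};
    for (double a : values) {
        std::array<std::array<double, 4>, 4> m{};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) m[i][j] = (i == j) ? a : 1.0;
        std::printf("a = %g  ->  rank %d\n", a, rank4(m));
    }
}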



What is the most convenient way to convert an invertible matrix into the identity matrix?


The identity matrix has ones on the main diagonal and zeros everywhere else. Not every matrix can be transformed into the identity matrix; only invertible matrices can. When it is possible, it is done with elementary row operations: adding or subtracting multiples of one row from another, scaling rows, and swapping rows (Gauss-Jordan elimination).
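A minimal sketch of the process, assuming a square invertible input (the tolerance and the 2 × 2 example matrix are arbitrary choices for illustration): the loop applies exactly those elementary row operations until the input becomes the identity, and applying the same operations to a copy of the identity yields the inverse as a by-product.

// Sketch: reduce an invertible matrix to the identity by Gauss-Jordan
// elimination, mirroring every row operation on `inv` so that `inv`
// ends up holding the inverse. Illustrative only, not production code.
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<double>>;

bool gauss_jordan(Mat a, Mat& inv) {
    const int n = static_cast<int>(a.size());
    inv.assign(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) inv[i][i] = 1.0;            // start from E

    for (int col = 0; col < n; ++col) {
        int pivot = col;                                    // pick a usable pivot
        for (int r = col; r < n; ++r)
            if (std::fabs(a[r][col]) > std::fabs(a[pivot][col])) pivot = r;
        if (std::fabs(a[pivot][col]) < 1e-12) return false; // not invertible
        std::swap(a[col], a[pivot]);                        // swap rows
        std::swap(inv[col], inv[pivot]);

        double p = a[col][col];                             // scale pivot row to 1
        for (int c = 0; c < n; ++c) { a[col][c] /= p; inv[col][c] /= p; }

        for (int r = 0; r < n; ++r) {                       // clear the other rows
            if (r == col) continue;
            double f = a[r][col];
            for (int c = 0; c < n; ++c) {
                a[r][c]   -= f * a[col][c];
                inv[r][c] -= f * inv[col][c];
            }
        }
    }
    return true;                    // `a` is now the identity, `inv` the inverse
}

int main() {
    Mat m = {{2.0, 1.0}, {5.0, 3.0}}, inv;                  // arbitrary 2x2 example
    if (gauss_jordan(m, inv))
        std::printf("inverse:\n%g %g\n%g %g\n",
                    inv[0][0], inv[0][1], inv[1][0], inv[1][1]);
}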



In linear algebra, why is an invertible matrix equivalent to the identity matrix? Please give a proof or a simple explanation.


Proof: because an invertible matrix is a full-rank matrix, its equivalent standard form is En.
That is, A is equivalent to the identity matrix.
Note: the equivalent standard form of any matrix A is the block matrix
Er 0
0  0
where r is the rank of A. When rank(A) = n, the block Er in the upper left corner becomes En.
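Written out in symbols (P and Q stand for the invertible matrices that record the row and column operations; these names are not in the original answer):

P A Q = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix},
\qquad r = \operatorname{rank}(A),
\qquad r = n \;\Rightarrow\; P A Q = E_n .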



Use a simple class to implement matrix addition, subtraction and inversion,
as well as simple multiplication and matrix transpose, with the implementation split into a header file and a .cpp file.


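A minimal single-file sketch of one way to do this (in a real project the class declaration would go into Matrix.h and the member definitions into Matrix.cpp; they are shown together here for brevity, and inversion is omitted since it can be added with a Gauss-Jordan routine like the one sketched earlier):

// Sketch: a small matrix class supporting addition, subtraction,
// multiplication and transpose, stored row by row in a flat vector.
#include <cstdio>
#include <vector>

class Matrix {
public:
    Matrix(int rows, int cols) : r_(rows), c_(cols), d_(rows * cols, 0.0) {}
    double& at(int i, int j)       { return d_[i * c_ + j]; }
    double  at(int i, int j) const { return d_[i * c_ + j]; }
    int rows() const { return r_; }
    int cols() const { return c_; }

    Matrix operator+(const Matrix& o) const;   // element-wise sum
    Matrix operator-(const Matrix& o) const;   // element-wise difference
    Matrix operator*(const Matrix& o) const;   // matrix product
    Matrix transpose() const;

private:
    int r_, c_;
    std::vector<double> d_;
};

Matrix Matrix::operator+(const Matrix& o) const {
    Matrix m(r_, c_);
    for (int i = 0; i < r_; ++i)
        for (int j = 0; j < c_; ++j) m.at(i, j) = at(i, j) + o.at(i, j);
    return m;
}

Matrix Matrix::operator-(const Matrix& o) const {
    Matrix m(r_, c_);
    for (int i = 0; i < r_; ++i)
        for (int j = 0; j < c_; ++j) m.at(i, j) = at(i, j) - o.at(i, j);
    return m;
}

Matrix Matrix::operator*(const Matrix& o) const {
    Matrix m(r_, o.c_);                        // (r_ x c_) * (c_ x o.c_)
    for (int i = 0; i < r_; ++i)
        for (int k = 0; k < c_; ++k)
            for (int j = 0; j < o.c_; ++j)
                m.at(i, j) += at(i, k) * o.at(k, j);
    return m;
}

Matrix Matrix::transpose() const {
    Matrix m(c_, r_);
    for (int i = 0; i < r_; ++i)
        for (int j = 0; j < c_; ++j) m.at(j, i) = at(i, j);
    return m;
}

int main() {
    Matrix a(2, 2), b(2, 2);
    a.at(0, 0) = 1; a.at(0, 1) = 2; a.at(1, 0) = 3; a.at(1, 1) = 4;
    b.at(0, 0) = 5; b.at(0, 1) = 6; b.at(1, 0) = 7; b.at(1, 1) = 8;
    Matrix c = a * b;                          // expected: 19 22 / 43 50
    std::printf("%g %g\n%g %g\n", c.at(0, 0), c.at(0, 1), c.at(1, 0), c.at(1, 1));
}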



Inverse operation of matrix
Is the inverse of (matrix A + matrix B) equal to the inverse of matrix A plus the inverse of matrix B? That is, does (A + B)^-1 = A^-1 + B^-1?


No. In general the inverse of a sum of matrices is not the sum of their inverses: A + B need not even be invertible when A and B are, and even when it is, (A + B)^-1 ≠ A^-1 + B^-1 except in special cases.
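A one-line counterexample (taking A = B = E for illustration):

(A + B)^{-1} = (2E)^{-1} = \tfrac{1}{2}E
\qquad\text{but}\qquad
A^{-1} + B^{-1} = E + E = 2E .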



What is the identity matrix of a complex matrix?


It is the same as the identity matrix for real matrices: ones on the main diagonal and zeros elsewhere.



Write out a system of linear equations whose coefficient matrix is the identity matrix and whose solution is the 3 × 1 column matrix (1, 3, 5)!


x1+0x2+0x3=1
0x1+x2+0x3=3
0x1+0x2+x3=5
The coefficient matrix is E and the solution is (1, 3, 5).
Is that what you mean?
The question is worded a little oddly.
If you have any questions, please ask.
Take whatever part of this is useful to you.



Manipulating a matrix formula
Why is E + BA - B(E+AB)^-1 A - BAB(E+AB)^-1 A equal to
E + BA - B(E+AB)(E+AB)^-1 A ?
(Here ^-1 denotes the inverse of the expression just before it.) Could some kind person explain? Thank you very much!


E + BA - B(E+AB)^-1 A - BAB(E+AB)^-1 A
= E + BA - B[(E+AB)^-1 A + AB(E+AB)^-1 A]
= E + BA - B[E + AB](E+AB)^-1 A
Factor out -B on the left and (E+AB)^-1 A on the right. (And since (E+AB)(E+AB)^-1 = E, the whole expression simplifies further to E + BA - BA = E.)
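As a quick numerical sanity check of this identity (the 2 × 2 matrices A and B below are arbitrary sample choices, and the closed-form 2 × 2 inverse is used instead of a general routine), the sketch below evaluates the left-hand side and should print the identity matrix:

// Sketch: verify numerically that E + BA - B(E+AB)^-1 A - BAB(E+AB)^-1 A = E
// for one pair of sample 2x2 matrices. Illustrative only.
#include <array>
#include <cstdio>

using Mat = std::array<std::array<double, 2>, 2>;

Mat mul(const Mat& x, const Mat& y) {
    Mat z{};
    for (int i = 0; i < 2; ++i)
        for (int k = 0; k < 2; ++k)
            for (int j = 0; j < 2; ++j) z[i][j] += x[i][k] * y[k][j];
    return z;
}

Mat add(const Mat& x, const Mat& y) {
    Mat z{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) z[i][j] = x[i][j] + y[i][j];
    return z;
}

Mat sub(const Mat& x, const Mat& y) {
    Mat z{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) z[i][j] = x[i][j] - y[i][j];
    return z;
}

Mat inv2(const Mat& x) {                 // closed-form inverse of a 2x2 matrix
    double det = x[0][0] * x[1][1] - x[0][1] * x[1][0];
    Mat z{};
    z[0][0] =  x[1][1] / det;  z[0][1] = -x[0][1] / det;
    z[1][0] = -x[1][0] / det;  z[1][1] =  x[0][0] / det;
    return z;
}

int main() {
    const Mat E = {{{1.0, 0.0}, {0.0, 1.0}}};
    const Mat A = {{{1.0, 2.0}, {0.0, 1.0}}};      // arbitrary sample matrices
    const Mat B = {{{0.0, 1.0}, {1.0, 1.0}}};
    Mat inv = inv2(add(E, mul(A, B)));             // (E+AB)^-1
    Mat lhs = sub(sub(add(E, mul(B, A)),
                      mul(B, mul(inv, A))),                  // B(E+AB)^-1 A
                  mul(mul(B, A), mul(B, mul(inv, A))));      // BAB(E+AB)^-1 A
    std::printf("%g %g\n%g %g\n", lhs[0][0], lhs[0][1], lhs[1][0], lhs[1][1]);
    // expected output: 1 0 / 0 1 (up to rounding)
}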



What operations can combine two matrices, and how are they written?


Any rectangular array of numbers can be called a matrix, and matrices obey their own arithmetic rules.
First of all, we need to be clear about what a matrix is.
A matrix can be formed, for example, from the coefficients and constants of a system of linear equations.
It is a convenient and intuitive tool for solving such systems. Take the system
a1x+b1y+c1z=d1
a2x+b2y+c2z=d2
a3x+b3y+c3z=d3
For example, we can construct two matrices:
a1 b1 c1        a1 b1 c1 d1
a2 b2 c2        a2 b2 c2 d2
a3 b3 c3        a3 b3 c3 d3
Because these numbers are arranged in a regular pattern whose shape is like a rectangle, mathematicians call such an array a matrix. By transforming the matrix we can obtain the solution of the system of equations.
The explicit concept of a matrix was first put forward by the 19th-century British mathematician Cayley, who developed the systematic theory of matrix algebra.
Tracing back to the source, however, matrices first appeared in China's Nine Chapters on the Mathematical Art. In its chapter on systems of equations, the coefficients and constants of the linear equations are arranged in order into a rectangular array, and by manipulating the counting rods one can then find the solution. In Europe, this method of solving linear equations appeared more than 2000 years later than in China.
Mathematically, an m × n matrix is a rectangular array with m rows and n columns. Its entries are numbers or, more generally, elements of a ring.
Matrices are commonly used in linear algebra, linear programming, statistical analysis, combinatorics, and so on; see matrix theory.
History
The study of matrices has a long history; Latin squares and magic squares were already being studied in prehistoric times.
As a tool for solving linear equations, the matrix also has a long history. In 1693, Gottfried Wilhelm Leibniz, one of the founders of calculus, established the theory of determinants. In 1750, Gabriel Cramer established Cramer's rule. In the 1800s, Gauss and Wilhelm Jordan developed the Gauss-Jordan elimination method.
In 1848, James Joseph Sylvester coined the word "matrix". Famous mathematicians who have worked on matrix theory include Cayley, William Rowan Hamilton, Grassmann, Frobenius, and von Neumann.
Definitions and related symbols
Consider a 4 × 3 matrix A (4 rows, 3 columns), and suppose its entry in row 2, column 3 is A[2,3] = 7.
In the C language the same entry is written with square brackets, as a[i][j].
In addition, the notation A = (a_ij), meaning A[i,j] = a_ij for all i and j, is common in mathematical writing.
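A small illustration of the two indexing conventions, using a made-up 4 × 3 matrix whose row-2, column-3 entry is set to 7:

// Sketch: mathematical 1-based indexing A[i,j] versus C/C++ 0-based a[i][j].
#include <cstdio>

int main() {
    // A made-up 4 x 3 matrix, stored row by row.
    double a[4][3] = {
        {1, 2, 3},
        {4, 5, 7},
        {6, 0, 5},
        {9, 8, 2},
    };
    // The mathematical entry A[2,3] (row 2, column 3, counting from 1)
    // is a[1][2] in C/C++, which counts rows and columns from 0.
    std::printf("A[2,3] = %g\n", a[1][2]);   // prints 7 for this sample
}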
Matrices constructed over general rings
Given a ring R, M(m, n, R) is the set of m × n matrices whose entries lie in R. When m = n it is usually written M(n, R). These matrices can be added and multiplied (see below), so M(n, R) is itself a ring, and this ring is isomorphic to the endomorphism ring of the left R-module R^n.
If R is commutative, then M(n, R) is an R-algebra with an identity element. The determinant can then be defined by the Leibniz formula: a matrix is invertible if and only if its determinant is invertible in R.
In Wikipedia, matrices are real or complex unless stated otherwise.
Block matrix
A block (partitioned) matrix is a large matrix viewed as a "matrix of matrices". For example, a 4 × 4 matrix can be divided into four 2 × 2 blocks.
This technique is used to simplify calculations, to simplify mathematical proofs, and in computer applications such as VLSI chip design.
Special matrix categories
A symmetric matrix is symmetric with respect to its main diagonal (from top left to bottom right), that is, a[i,j] = a[j,i].
A Hermitian (or self-conjugate) matrix is symmetric with respect to its main diagonal up to complex conjugation, that is, a[i,j] = a*[j,i].
A Toeplitz matrix has equal entries along each of its diagonals, that is, a[i,j] = a[i+1,j+1].
A stochastic matrix is one whose columns are all probability vectors; such matrices are used for Markov chains.
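As a small illustration of the first of these definitions, here is a C++ sketch of a check that a square matrix is symmetric in the sense a[i,j] = a[j,i]:

// Sketch: check whether a square matrix satisfies a[i][j] == a[j][i].
#include <cstdio>
#include <vector>

bool is_symmetric(const std::vector<std::vector<double>>& a) {
    const std::size_t n = a.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = i + 1; j < n; ++j)
            if (a[i][j] != a[j][i]) return false;
    return true;
}

int main() {
    std::vector<std::vector<double>> m = {{1.0, 2.0}, {2.0, 5.0}};   // sample
    std::printf("%s\n", is_symmetric(m) ? "symmetric" : "not symmetric");
}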
Matrix operation
Given two m × n matrices A and B, their sum A + B is the m × n matrix whose (i, j) entry is (A + B)[i, j] = A[i, j] + B[i, j].
Another kind of addition can be found under matrix addition.
Given a matrix A and a number c, the scalar multiple cA is defined by (cA)[i, j] = c * A[i, j].
These two operations turn M(m, n, R) into a real linear space of dimension mn.
If the number of columns of one matrix equals the number of rows of another, their product can be defined: if A is an m × n matrix and B is an n × p matrix, the product AB is an m × p matrix, where
(AB)[i, j] = A[i, 1] * B[1, j] + A[i, 2] * B[2, j] + ... + A[i, n] * B[n, j] for all i and j.
For example:
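(The following 2 × 2 illustration uses entries chosen arbitrarily for this sketch.)

\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}
=
\begin{pmatrix} 1\cdot 0 + 2\cdot 1 & 1\cdot 1 + 2\cdot 1 \\ 3\cdot 0 + 4\cdot 1 & 3\cdot 1 + 4\cdot 1 \end{pmatrix}
=
\begin{pmatrix} 2 & 3 \\ 4 & 7 \end{pmatrix}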
This multiplication has the following properties:
(AB)C = A(BC) for all k × m matrices A, m × n matrices B and n × p matrices C ("associative law").
(A + B)C = AC + BC for all m × n matrices A and B and n × k matrices C ("distributive law").
C(A + B) = CA + CB for all m × n matrices A and B and k × m matrices C ("distributive law").
Note that commutativity does not necessarily hold: there are matrices A and B such that AB ≠ BA.
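A standard 2 × 2 illustration (these particular matrices are a common textbook choice, not taken from this article):

A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad
AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA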
For other special kinds of multiplication, see matrix multiplication.
Linear transformation, rank, transpose
Matrices are a convenient way to express linear transformations, because matrix multiplication and the composition of linear transformations are connected as follows:
Let R^n denote the set of n × 1 matrices (that is, column vectors of length n). For every linear transformation f: R^n → R^m there exists a unique m × n matrix A such that f(x) = Ax for all x in R^n; this matrix A "represents" the linear transformation f. If another k × m matrix B represents a linear transformation g: R^m → R^k, then the matrix product BA represents the composition g ∘ f.
The dimension of the image of the linear map represented by A is called the rank of A; the rank is also the dimension of the row space (or column space) of A.
The transpose of an m × n matrix A is the n × m matrix Atr (also written A^T or tA) obtained by exchanging rows and columns, that is, Atr[i, j] = A[j, i] for all i and j. If A represents a linear transformation, Atr represents its dual operator.
(A + B)tr = Atr + Btr, (AB)tr = Btr Atr.
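A quick check of the second rule, reusing the 2 × 2 matrices from the multiplication illustration above:

(AB)^{tr} = \begin{pmatrix} 2 & 3 \\ 4 & 7 \end{pmatrix}^{tr}
          = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix},
\qquad
B^{tr} A^{tr} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}
                \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}
              = \begin{pmatrix} 2 & 4 \\ 3 & 7 \end{pmatrix}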



If I have two matrices and, after some calculation, I get another matrix, what mathematical symbol should I use to express the calculation process?
The resulting matrix is not necessarily the product of the original two matrices


Use an arrow. Remember that it is not an equal sign.