Examples of inverse matrix in the following topics:
-
- A system of equations can be readily solved using the concepts of the inverse matrix and matrix multiplication.
- This can be done by hand: find the inverse of matrix $[A]$, then perform the appropriate matrix multiplication with $[B]$.
- Then calculate $[A^{-1}][B]$, that is, the inverse of matrix $[A]$ multiplied by matrix $[B]$ (see the sketch after this list).
- Practice using inverse matrices to solve a system of linear equations
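As a minimal sketch of this procedure in NumPy (the system and its numbers are illustrative, not from the original text):

```python
import numpy as np

# Illustrative system:  2x + y = 5,  x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # coefficient matrix [A]
B = np.array([[5.0],
              [10.0]])       # constant matrix [B]

A_inv = np.linalg.inv(A)     # inverse of [A]
X = A_inv @ B                # [A^{-1}][B] yields the solution matrix
print(X)                     # [[1.] [3.]]  ->  x = 1, y = 3
```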
-
- The matrix $B$ is the inverse of the matrix $A$ if, when the two are multiplied together, both $A\cdot B$ and $B\cdot A$ give the identity matrix.
- The inverse of matrix $[A]$, designated as $[A]^{-1}$, is defined by the property $[A][A]^{-1} = [A]^{-1}[A] = [I]$, where $[I]$ is the identity matrix.
- The definition of an inverse matrix specifies that it must work both ways.
- In some cases, the inverse of a square matrix does not exist; such a matrix is called singular.
- Practice finding the inverse of a matrix and describe its properties
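A minimal check of this two-sided definition, using an illustrative matrix (not from the original text):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.linalg.inv(A)              # candidate inverse of A

# The definition requires BOTH products to give the identity matrix.
I = np.eye(2)
print(np.allclose(A @ B, I))      # True
print(np.allclose(B @ A, I))      # True

# A singular square matrix (second row is twice the first) has no
# inverse, and np.linalg.inv raises an error for it.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("S is singular: no inverse exists")
```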
-
- Solving a system of linear equations using the inverse of a matrix requires the definition of two new matrices: $X$ is the matrix representing the variables of the system, and $B$ is the matrix representing the constants.
- Using matrix multiplication, we may define a system of equations with the same number of equations as variables as $AX=B$.
- To solve a system of linear equations using an inverse matrix, let $A$ be the coefficient matrix, let $X$ be the variable matrix, and let $B$ be the constant matrix.
- Thus, to solve a system $AX=B$ for $X$, multiply both sides by the inverse of $A$ and we shall obtain the solution $X=\left( A^{-1} \right)B$.
- Provided the inverse $\left( A^{-1} \right)$ exists, this formula will solve the system.
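The derivation behind this formula is a single left-multiplication by $A^{-1}$:

$$A^{-1}(AX) = A^{-1}B \;\Rightarrow\; (A^{-1}A)X = A^{-1}B \;\Rightarrow\; IX = A^{-1}B \;\Rightarrow\; X = A^{-1}B$$

Note that $A^{-1}$ must multiply both sides from the left, since matrix multiplication is not commutative.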
-
- A left inverse of a matrix $A\in \mathbf{R}^{n \times m}$ is defined to be a matrix $B$ such that $BA = I$.
- If there exists a left inverse $B$ and a right inverse $C$ of $A$, then they must be equal, since matrix multiplication is associative: $B = B(AC) = (BA)C = C$.
- In concrete examples you can readily verify, by direct multiplication, that any matrix of the appropriate form is a left inverse or a right inverse; one general construction of a left inverse is sketched below.
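One standard construction, sketched here under the assumption that $A$ is tall with linearly independent columns (the matrix below is illustrative): $B = (A^{\mathsf T}A)^{-1}A^{\mathsf T}$ is a left inverse of $A$.

```python
import numpy as np

# A tall 3x2 matrix with linearly independent columns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Left inverse: B = (A^T A)^{-1} A^T, a 2x3 matrix.
B = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(B @ A, np.eye(2)))   # True: B A = I, so B is a left inverse
# A @ B is 3x3 and is not the identity, so B is not a right inverse;
# a non-square matrix cannot have both.
```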
-
- Social network analysts use a number of other mathematical operations that can be performed on matrices for a variety of purposes (matrix addition and subtraction, transposes, inverses, matrix multiplication, and some other more exotic stuff like determinants and eigenvalues).
- Matrix inversion is the mathematical operation that finds a matrix which, when multiplied by the original matrix, yields a new matrix with ones in the main diagonal and zeros elsewhere (which is called an identity matrix).
- Without going any further into this, you can think of the inverse of a matrix as being sort of the "opposite of" the original matrix.
- Matrix inverses are used mostly in calculating other things in social network analysis.
- Inverses are calculated with Tools>Matrix Algebra.
-
- An adjacency matrix is a square actor-by-actor ($i=j$) matrix where the presence of pairwise ties is recorded as elements.
- The main diagonal, or "self-tie," of an adjacency matrix is often ignored in network analysis.
- Vector operations, blocking and partitioning, and matrix mathematics (inverses, transposes, addition, subtraction, multiplication, and Boolean multiplication) are mathematical operations that sometimes help us see certain things about the patterns of ties in social networks.
- Such data are represented as a series of matrices of the same dimension with the actors in the same position in each matrix.
- Many of the same tools that we can use for working with a single matrix (matrix addition and correlation, blocking, etc.) can also be applied to such a series of matrices.
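A small illustrative sketch (the four-actor network here is made up): building an adjacency matrix and zeroing the often-ignored main diagonal.

```python
import numpy as np

# Adjacency matrix for 4 actors: entry (i, j) is 1 if actor i has a tie to actor j.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])

# The main diagonal records "self-ties", which network analysts often ignore.
np.fill_diagonal(adj, 0)

print(adj.sum(axis=1))   # number of ties each actor sends: [2 2 2 2]
```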
-
- It can be proven that a square matrix has a unique inverse if and only if its determinant is nonzero.
- The determinant of a matrix $[A]$ is denoted $\det(A)$, $\det\ A$, or $\left | A \right |$.
- In the case where the matrix entries are written out in full, the determinant is denoted by surrounding the matrix entries by vertical bars instead of the brackets or parentheses of the matrix.
- In linear algebra, the determinant is a value associated with a square matrix.
- For a $2 \times 2$ matrix $\begin{bmatrix} a & b\\ c & d \end{bmatrix}$, the determinant is $ad - bc$.
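A quick check of the $2 \times 2$ rule against NumPy's general determinant routine (the entries are illustrative):

```python
import numpy as np

a, b, c, d = 3.0, 8.0, 4.0, 6.0
A = np.array([[a, b],
              [c, d]])

det_formula = a * d - b * c            # the 2x2 rule: ad - bc
det_numpy = np.linalg.det(A)

print(det_formula, det_numpy)          # -14.0 and -14.0 (up to rounding)
print(not np.isclose(det_formula, 0))  # True: nonzero, so A has a unique inverse
```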
-
- The cofactor of an entry $(i,j)$ of a matrix $A$ is the signed minor of that entry.
- In linear algebra, the cofactor (sometimes called adjunct) describes a particular construction that is useful for calculating both the determinant and inverse of square matrices.
- Specifically, the cofactor of the $(i,j)$ entry of a matrix, also known as the $(i,j)$ cofactor of that matrix, is the signed minor of that entry.
- If $i+j$ is an even number, the cofactor equals its minor: $C_{ij}=M_{ij}$. Otherwise, it is equal to the additive inverse of its minor: $C_{ij}=-M_{ij}$.
- For example, since $i+j=5$ is an odd number, the cofactor is the additive inverse of its minor: $-(13)=-13$.
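A sketch of the sign rule in code, using an illustrative $3 \times 3$ matrix and 0-based indices (the sign $(-1)^{i+j}$ is the same in either indexing convention):

```python
import numpy as np

def cofactor(A, i, j):
    """The (i, j) cofactor: the signed minor of entry (i, j)."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # drop row i, column j
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])

# Cofactor expansion along the first row reproduces the determinant.
det = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
print(det, np.linalg.det(A))   # both are 22.0 (up to rounding)
```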
-
- The set of all linear combinations of the columns of a matrix forms a subspace; this subspace is called the column space of the matrix and is usually denoted by $R(A)$, for "range".
- The dimension of the column space is called the rank of the matrix.
- The set of vectors that the matrix maps to zero also forms a subspace; this subspace is called the nullspace or kernel and is extremely important from the point of view of inverse theory.
- As we shall see, in an inverse calculation the right-hand side of a matrix equation is usually associated with perturbations to the data.
- Figuring out what features of a model are unresolved is a major goal of inversion.
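A brief sketch of these two subspaces in NumPy (the matrix is illustrative): the rank measures the column space, and the small singular vectors span the nullspace.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # dimension of the column space R(A)
print(rank)                       # 2

# Right singular vectors beyond the rank span the nullspace.
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T          # each column is a nullspace vector
print(np.allclose(A @ null_basis, 0))   # True: A maps these vectors to zero
```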
-
- Matrix addition is commutative and also associative, so the following is true: $A+B=B+A$ and $(A+B)+C=A+(B+C)$.
- Just add each element in the first matrix to the corresponding element in the second matrix.
- Note that element $x_{11}$ in the first matrix, $1$, adds to element $x_{11}$ in the second matrix, $10$, to produce element $x_{11}$ in the resulting matrix, $11$.
- Multiplying a matrix by a constant such as $3$ means the same thing as repeated addition: you add the matrix to itself $3$ times, or simply multiply each element by that constant.
- The resulting matrix has the same dimensions as the original.
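The element-wise rules above in code (the matrices are illustrative):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

# Element-wise addition: entry (1,1) is 1 + 10 = 11, and so on.
print(A + B)                          # [[11 22] [33 44]]
print(np.array_equal(A + B, B + A))   # True: matrix addition is commutative

# Scalar multiplication: each element is multiplied by the constant,
# the same result as A + A + A.
print(3 * A)                          # [[ 3  6] [ 9 12]]
```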