Term | Definition | Symbol
---|---|---
Vector | An ordered set of numbers or symbols, typically arranged in a column or row. | \(\vec{v}\)
Unit Vector | A vector with a magnitude of 1. | \(\hat{v}\)
Scalar | A single, standalone numerical value, typically used to represent a quantity such as a magnitude. | \(\lambda\)
Vector Magnitude | The length of a vector, computed by squaring every element, summing the squares, and taking the square root of the sum. For a two-dimensional vector \(\vec{v} = [x, y]\), \(|\vec{v}| = \sqrt{x^2 + y^2}\); see the sketch below this table. | \(|\vec{v}|\)
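A minimal NumPy sketch of the magnitude formula above; the vector values are arbitrary examples, not from the glossary.

```python
import numpy as np

v = np.array([3.0, 4.0])           # example 2-D vector [x, y]

# Magnitude: square each element, sum the squares, take the square root.
manual_mag = np.sqrt(np.sum(v ** 2))
builtin_mag = np.linalg.norm(v)    # the same computation via NumPy

v_hat = v / builtin_mag            # unit vector: magnitude rescaled to 1

print(manual_mag, builtin_mag)     # both 5.0
print(np.linalg.norm(v_hat))       # 1.0
```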
Term | Definition | Symbol
---|---|---
Matrix | A rectangular array of numbers, symbols, or expressions arranged in rows and columns. | \(A\)
Identity Matrix | A square matrix with ones on the main diagonal and zeros elsewhere. | \(I\)
Determinant | A scalar value that can be computed from the elements of a square matrix. | \(\text{det}(A)\)
Matrix Inverse | If a matrix has an inverse, multiplying the matrix by its inverse results in the identity matrix. | \(AA^{-1} = A^{-1}A = I\)
Pseudoinverse | A generalization of the matrix inverse to matrices that are not necessarily square or invertible; see the sketch below this table. | \(A^+\)
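To make the inverse and pseudoinverse properties concrete, a short NumPy check (both matrices are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # invertible square matrix

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))   # True: A A^{-1} = I

B = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # non-square: no ordinary inverse

B_pinv = np.linalg.pinv(B)          # Moore-Penrose pseudoinverse B^+
print(np.allclose(B @ B_pinv @ B, B))      # True: defining property of B^+
```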
Term | Definition
---|---
Square Matrix | A matrix with the same number of rows and columns.
Diagonal Matrix | A matrix where all elements outside the main diagonal are zero.
Symmetric Matrix | A square matrix that is equal to its transpose.
Dense Matrix | A matrix in which most of the elements are non-zero.
Sparse Matrix | A matrix in which most of the elements are zero.
Cofactor Matrix | A matrix in which each element is the cofactor of the corresponding element of the original matrix, often used in computing determinants and matrix inverses.
Coefficient Matrix | In a system of linear equations, the matrix formed by the coefficients of the variables.
Hilbert Matrix | A square matrix whose entries are \(H_{ij} = \frac{1}{i + j - 1}\) for 1-based indices, i.e., the reciprocal of the sum of the row and column indices minus one; see the sketch below this table.
Submatrix | A matrix formed by selecting certain rows and columns from a larger matrix.
Singular Matrix | A square matrix that is not invertible, meaning its determinant is zero.
Markov Matrix | A square matrix used to describe transitions between states in a Markov chain, where each element represents the probability of transitioning from one state to another.
Consumption Matrix | In economics or mathematical modeling, a matrix representing the consumption rates of different resources by different entities.
Lower Triangular Matrix | A square matrix in which all the entries above the main diagonal are zero.
Change of Basis Matrix | A matrix that represents the linear transformation between two different bases in a vector space.
Upper Triangular Matrix | A square matrix in which all the entries below the main diagonal are zero.
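Several of these matrix types can be constructed in a few lines of NumPy; the size `n` and the values are arbitrary examples.

```python
import numpy as np

n = 4
I = np.eye(n)                         # identity matrix
D = np.diag([1, 2, 3, 4])             # diagonal matrix
L = np.tril(np.ones((n, n)))          # lower triangular matrix
U = np.triu(np.ones((n, n)))          # upper triangular matrix

# Hilbert matrix with 0-based indices: H[i, j] = 1 / (i + j + 1),
# which is 1 / (i + j - 1) in the 1-based convention of the table.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

S = H + H.T                           # symmetric: equal to its transpose
print(np.allclose(S, S.T))            # True
```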
Term | Definition | Formula
---|---|---
Transpose | The result of swapping a matrix's rows with its columns. | \(A^T\)
Inner Product | A binary operation that takes two vectors and produces a scalar. | \(\vec{v} \cdot \vec{u}\)
Outer Product | An operation that takes two vectors as input and produces a matrix as output. | \(\vec{v} \otimes \vec{u}\)
Matrix Multiplication | An operation that takes two matrices and produces another matrix in which each entry \(c_{ij}\) is the dot product of row \(i\) of \(A\) and column \(j\) of \(B\). | \(C = AB\)
Hadamard Product | An element-wise product of two matrices of the same dimensions. | \(A \circ B\)
Trace | The sum of the diagonal elements of a square matrix. | \(\text{tr}(A)\)
Norm | A measure of a vector's (or matrix's) size. | \(\|\vec{v}\|\)
Matrix Conjugate | The matrix obtained by taking the complex conjugate of each entry, i.e., changing the sign of each imaginary part. | \(A^*\)
Singular Value | A scalar \(\sigma_i\) that represents the magnitude of stretching or scaling associated with a particular direction in the transformation defined by a matrix. |
Singular Value Decomposition | A factorization of a matrix into three other matrices, including a diagonal matrix of singular values; see the sketch below this table. | \(A = U\Sigma V^T\)
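The operations above map directly onto NumPy calls; a quick sketch with arbitrary example matrices and vectors:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
v = np.array([1.0, 2.0])
u = np.array([3.0, 4.0])

print(A.T)                 # transpose
print(np.dot(v, u))        # inner product (a scalar)
print(np.outer(v, u))      # outer product (a matrix)
print(A @ B)               # matrix multiplication
print(A * B)               # Hadamard (element-wise) product
print(np.trace(A))         # trace: sum of the diagonal elements

# Singular value decomposition: A = U Sigma V^T
U_, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U_ @ np.diag(s) @ Vt))   # True: reconstruction
print(s)                   # singular values, largest first
```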
Term | Definition
---|---
QR Decomposition | A decomposition of a matrix into the product of an orthogonal matrix (Q) and an upper triangular matrix (R); see the sketch below this table.
LU Decomposition | A factorization of a square matrix into a product of a lower triangular matrix (L) and an upper triangular matrix (U).
Jordan Decomposition | A representation of a square matrix as the sum of a diagonalizable matrix and a nilpotent matrix that commute.
Free Variables | Variables in a linear system of equations that can take on any value; they correspond to columns without pivots, and their presence means the system has no unique solution.
Free Columns | Columns in a matrix that do not contain a pivot element when the matrix is reduced to row-echelon form.
Pivot Variables | Variables in a linear system of equations corresponding to columns containing pivot elements.
Row-Echelon Form | A matrix is in row-echelon form if: (1) all zero rows, if any, are at the bottom of the matrix; (2) the leading entry of each nonzero row is in a column to the right of the leading entry of the previous row; (3) the leading entry in each nonzero row is 1; and (4) all entries in the column below a leading 1 are zero.
Reduced Row-Echelon Form | A matrix is in reduced row-echelon form if it satisfies the conditions of row-echelon form and, in addition, the leading 1 in each nonzero row is the only nonzero entry in its column.
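A sketch of QR decomposition with NumPy and reduced row-echelon form with SymPy, using arbitrary example matrices; SymPy's `rref` also reports the pivot columns, which in turn identifies the free columns.

```python
import numpy as np
import sympy as sp

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# QR decomposition: Q orthogonal, R upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))             # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q^T Q = I

# Reduced row-echelon form with SymPy; `pivots` lists the pivot
# columns, and the remaining columns are the free columns.
M = sp.Matrix([[1, 2, 3], [2, 4, 6], [1, 1, 1]])
rref_M, pivots = M.rref()
print(rref_M)     # RREF of M
print(pivots)     # (0, 1): columns 0 and 1 are pivots; column 2 is free
```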
Term | Definition | Symbol
---|---|---
Linear Independence | A set of vectors is linearly independent if no vector in the set is a linear combination of the others. | \(\text{det}(V) \neq 0\) (for a square matrix \(V\) whose columns are the vectors)
Vector Space | A set of vectors closed under vector addition and scalar multiplication. |
Row Space | The vector space spanned by the rows of a matrix. | \(\text{span}(A^T)\)
Column Space | The vector space spanned by the columns of a matrix. | \(\text{span}(A)\)
Nullspace | The set of all solutions to the homogeneous equation \(A\vec{x} = 0\), where \(A\) is a matrix. | \(N(A)\)
Rank | The maximum number of linearly independent rows or columns in a matrix; see the sketch below this table. | \(\text{rank}(A)\)
Linear Combination | An expression constructed from vectors by multiplying them by scalars and adding the results. | \(\sum_{i=1}^{n} c_i\vec{v}_i\)
Span | The set of all possible linear combinations of vectors in a vector space. | \(\text{span}(\vec{v}_1, \vec{v}_2, ..., \vec{v}_n)\)
Basis | A set of vectors that spans a vector space and is linearly independent. | \(\vec{v}_1, \vec{v}_2, ..., \vec{v}_n\)
Dimension | The number of vectors in a basis for a vector space. | \(\text{dim}(\mathbb{V})\)
Full Column Rank | A matrix whose columns are linearly independent, meaning the rank of the matrix equals the number of columns. |
Full Row Rank | A matrix whose rows are linearly independent, meaning the rank of the matrix equals the number of rows. |
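Rank, nullity, and linear independence can be spot-checked numerically; SciPy's `null_space` returns an orthonormal basis of the nullspace. The matrices are arbitrary examples.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # multiple of row 1: rows are dependent
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # max number of independent rows/cols
print(rank)                       # 2

N = null_space(A)                 # orthonormal basis of the nullspace
print(N.shape[1])                 # nullity = 1
print(rank + N.shape[1])          # 3 = number of columns (rank-nullity)

# Square-matrix test for linear independence: nonzero determinant.
V = np.array([[1.0, 0.0], [1.0, 1.0]])
print(np.linalg.det(V) != 0)      # True: the two columns are independent
```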
Term | Definition | Formula
---|---|---
Orthogonal Matrix | A square matrix whose rows and columns each form an orthonormal set: mutually perpendicular unit vectors. | \(Q^T = Q^{-1}\)
Orthonormal Matrix | A matrix whose column vectors are not only orthogonal to each other but also normalized to have a length of 1. | \(Q^TQ = QQ^T = I\)
Projection | A geometric operation that finds the component of one vector along another. | \(\text{proj}_{\vec{u}}(\vec{v}) = \frac{\vec{v} \cdot \vec{u}}{\|\vec{u}\|^2}\vec{u}\)
Change of Basis | The process of expressing vectors in one basis using coordinates from another basis. | \(P^{-1}AP\)
Gram-Schmidt | A process that orthonormalizes a linearly independent set of vectors to form an orthonormal basis; see the sketch below this table. |
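A minimal implementation of the Gram-Schmidt process; `gram_schmidt` is a name chosen here, and the input vectors are arbitrary examples assumed to be linearly independent.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # Subtract the projection of v onto each basis vector so far.
        w = v - sum(np.dot(v, q) * q for q in basis)
        basis.append(w / np.linalg.norm(w))   # normalize to unit length
    return np.array(basis)

vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]

Q = gram_schmidt(vectors)
# The rows of Q form an orthonormal basis: Q Q^T = I.
print(np.allclose(Q @ Q.T, np.eye(3)))   # True
```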
Term | Definition | Formula
---|---|---
Eigenvalue | A scalar \(\lambda\) such that there exists a non-zero vector \(\vec{v}\) for which \(A\vec{v} = \lambda \vec{v}\). | \(\text{det}(A - \lambda I) = 0\)
Eigenvector | A non-zero vector \(\vec{v}\) that is only scaled, by a factor called the eigenvalue, when multiplied by a matrix \(A\). | \((A - \lambda I)\vec{v} = 0\)
Principal Component | The direction in which the data in a matrix varies the most: the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the data; see the sketch below this table. |
Covariance Matrix | A matrix that summarizes covariance relationships between variables in a dataset. The diagonal elements represent variances, and off-diagonal elements represent covariances. |
Repeated Eigenvalues | Eigenvalues that occur more than once, i.e., with algebraic multiplicity greater than one; the matrix may or may not have a full set of linearly independent eigenvectors for them. |
Sum of Eigenvalues | For a square matrix, the sum of the eigenvalues equals the trace of the matrix. | \(\sum_i \lambda_i = \text{tr}(A)\)
Positive Definite Matrix | A symmetric matrix whose eigenvalues are all positive. |
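Eigenpairs, the trace identity, and the principal component of a covariance matrix, sketched with NumPy; the matrix and the randomly generated data are arbitrary examples.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam, v = eigvals[0], eigvecs[:, 0]
print(np.allclose(A @ v, lam * v))             # True: A v = lambda v
print(np.isclose(eigvals.sum(), np.trace(A)))  # True: sum equals trace

# Principal component: eigenvector of the covariance matrix with the
# largest eigenvalue. The data here are randomly generated.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
C = np.cov(X, rowvar=False)                    # 2x2 covariance matrix
w, V = np.linalg.eigh(C)                       # eigh: C is symmetric
principal = V[:, np.argmax(w)]                 # direction of max variance
print(principal)
```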
Term | Definition
---|---
Rank-Nullity Theorem | A theorem stating that the dimension of the column space plus the dimension of the null space of a matrix equals the number of columns in the matrix.
Uniqueness of Reduced Row-Echelon Form | Every matrix \(A\) is row equivalent to exactly one matrix in reduced row-echelon form.
Spectral Theorem | A theorem stating that every symmetric matrix is diagonalizable and its eigenvalues are real; see the sketch below this table.
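The spectral theorem can be spot-checked with `np.linalg.eigh`, which is designed for symmetric matrices; the matrix below is an arbitrary example.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])      # symmetric: S == S.T

# Spectral theorem: S = Q diag(w) Q^T with real eigenvalues w and an
# orthogonal matrix Q whose columns are eigenvectors.
w, Q = np.linalg.eigh(S)
print(np.all(np.isreal(w)))                   # True: real eigenvalues
print(np.allclose(Q @ np.diag(w) @ Q.T, S))   # True: diagonalization
print(np.allclose(Q.T @ Q, np.eye(3)))        # True: Q is orthogonal
```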
Term | Definition
---|---
Kernel | The set of all vectors that map to the zero vector in the codomain.
Image (Range) | The set of all possible outputs that can be obtained by applying a transformation to the vectors in its domain.
Onto (Surjective) Matrix | A matrix transformation in which every element of the codomain is mapped to by at least one element of the domain.
One-to-One (Injective) Matrix | A matrix transformation in which distinct elements of the domain map to distinct elements of the codomain; no two domain elements share an image. See the sketch below this table.
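Injectivity and surjectivity of a matrix transformation reduce to rank conditions: an \(m \times n\) matrix is one-to-one when its rank is \(n\) and onto when its rank is \(m\). The helper names and example matrices below are arbitrary.

```python
import numpy as np

def is_injective(A):
    # One-to-one: the columns are independent, so rank equals the column
    # count (equivalently, the kernel contains only the zero vector).
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):
    # Onto: the image fills the codomain, so rank equals the row count.
    return np.linalg.matrix_rank(A) == A.shape[0]

tall = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3x2
wide = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])    # 2x3

print(is_injective(tall), is_surjective(tall))  # True False
print(is_injective(wide), is_surjective(wide))  # False True
```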
Term | Definition
---|---
Least Squares Regression | Fits a linear model to data by minimizing the sum of squared residuals, \(\min_{\vec{x}} \|A\vec{x} - \vec{b}\|^2\); the solution satisfies the normal equations \(A^TA\vec{x} = A^T\vec{b}\). See the sketch below this table.
SVM Classification | Finds the separating hyperplane between two classes that maximizes the margin to the nearest data points.
SVM Dual | The dual formulation of the SVM optimization problem, written in terms of Lagrange multipliers and inner products between data points.
Logistic Regression | Models class probabilities by applying the logistic (sigmoid) function to a linear combination of the input features.
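As one worked example, least squares regression is a single NumPy call; the noisy data below are generated for illustration with true slope 2 and intercept 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=50)  # noisy line

# Design matrix with a column of ones for the intercept term.
A = np.column_stack([x, np.ones_like(x)])

# Minimize ||A c - y||^2; lstsq computes the least-squares solution
# via SVD, which is numerically stabler than forming A^T A directly.
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # approximately [2.0, 1.0]: recovered slope and intercept
```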