
This topic contains an introduction to creating matrices and performing basic matrix calculations in MATLAB®.

The MATLAB environment uses the term *matrix* to indicate a variable containing real or complex numbers arranged in a two-dimensional grid. An *array* is, more generally, a vector, matrix, or higher dimensional grid of numbers. All arrays in MATLAB are rectangular, in the sense that the component vectors along any dimension are all the same length. The mathematical operations defined on matrices are the subject of linear algebra.

MATLAB has many functions that create different kinds of matrices. For example, you can create a symmetric matrix with entries based on Pascal's triangle:

A = pascal(3)

A =
     1     1     1
     1     2     3
     1     3     6

Or, you can create a nonsymmetric *magic square* matrix, which has equal row and column sums:

B = magic(3)

B =
     8     1     6
     3     5     7
     4     9     2

Another example is a 3-by-2 rectangular matrix of random integers. In this case, the first input to `randi` describes the range of possible values for the integers, and the second two inputs describe the number of rows and columns.

C = randi(10,3,2)

C =
     9    10
    10     7
     2     1

A column vector is an *m*-by-1 matrix, a row vector is a 1-by-*n* matrix, and a scalar is a 1-by-1 matrix. To define a matrix manually, use square brackets `[ ]` to denote the beginning and end of the array. Within the brackets, use a semicolon `;` to denote the end of a row. In the case of a scalar (1-by-1 matrix), the brackets are not required. For example, these statements produce a column vector, a row vector, and a scalar:

u = [3; 1; 4]
v = [2 0 -1]
s = 7

u =
     3
     1
     4

v =
     2     0    -1

s =
     7

For more information about creating and working with matrices, see Creating, Concatenating, and Expanding Matrices.

Addition and subtraction of matrices and arrays is performed element-by-element, or *element-wise*. For example, adding `A` to `B` and then subtracting `A` from the result recovers `B`:

X = A + B

X =
     9     2     7
     4     7    10
     5    12     8

Y = X - A

Y =
     8     1     6
     3     5     7
     4     9     2

Addition and subtraction require both matrices to have compatible dimensions. If the dimensions are incompatible, an error results:

X = A + C

Error using +
Matrix dimensions must agree.

For more information, see Array vs. Matrix Operations.

A row vector and a column vector of the same length can be multiplied in either order. The result is either a scalar, called the *inner product*, or a matrix, called the *outer product*:

u = [3; 1; 4];
v = [2 0 -1];
x = v*u

x = 2

X = u*v

X =
     6     0    -3
     2     0    -1
     8     0    -4

For real matrices, the *transpose* operation interchanges *a*_{ij} and *a*_{ji}. For complex matrices, another consideration is whether to take the complex conjugate of complex entries in the array to form the *complex conjugate transpose*. MATLAB uses the apostrophe operator (`'`) to perform a complex conjugate transpose, and the dot-apostrophe operator (`.'`) to transpose without conjugation. For matrices containing all real elements, the two operators return the same result.

The example matrix `A = pascal(3)` is *symmetric*, so `A'` is equal to `A`. However, `B = magic(3)` is not symmetric, so `B'` has the elements reflected along the main diagonal:

B = magic(3)

B =
     8     1     6
     3     5     7
     4     9     2

X = B'

X =
     8     3     4
     1     5     9
     6     7     2

For vectors, transposition turns a row vector into a column vector (and vice-versa):

x = v'

x =
     2
     0
    -1

If `x` and `y` are both real column vectors, then the product `x*y` is not defined, but the two products `x'*y` and `y'*x` produce the same scalar result. This quantity is used so frequently, it has three different names: *inner* product, *scalar* product, or *dot* product. There is even a dedicated function for dot products named `dot`.
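As a quick sketch, using the column-vector versions of the example vectors from earlier, all three forms produce the same scalar:

```matlab
% Inner product of two real column vectors, three equivalent ways.
x = [3; 1; 4];
y = [2; 0; -1];
p1 = x'*y;      % 3*2 + 1*0 + 4*(-1) = 2
p2 = y'*x;      % the same scalar for real vectors
p3 = dot(x,y);  % the dedicated dot product function
```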

For a complex vector or matrix, `z`, the quantity `z'` not only transposes the vector or matrix, but also converts each complex element to its complex conjugate. That is, the sign of the imaginary part of each complex element changes. For example, consider the complex matrix

z = [1+2i 7-3i 3+4i; 6-2i 9i 4+7i]

z =
   1.0000 + 2.0000i   7.0000 - 3.0000i   3.0000 + 4.0000i
   6.0000 - 2.0000i   0.0000 + 9.0000i   4.0000 + 7.0000i

The complex conjugate transpose of `z` is:

z'

ans =
   1.0000 - 2.0000i   6.0000 + 2.0000i
   7.0000 + 3.0000i   0.0000 - 9.0000i
   3.0000 - 4.0000i   4.0000 - 7.0000i

The unconjugated complex transpose, where the complex part of each element retains its sign, is denoted by `z.'`:

z.'

ans =
   1.0000 + 2.0000i   6.0000 - 2.0000i
   7.0000 - 3.0000i   0.0000 + 9.0000i
   3.0000 + 4.0000i   4.0000 + 7.0000i

For complex vectors, the two scalar products `x'*y` and `y'*x` are complex conjugates of each other, and the scalar product `x'*x` of a complex vector with itself is real.
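A small sketch illustrating these identities (the vectors here are made up for illustration):

```matlab
% For complex vectors, x'*y and y'*x are complex conjugates of
% each other, and x'*x is always real.
x = [1+2i; 3-1i];
y = [2-1i; 1+1i];
p = x'*y   % 2.0000 - 1.0000i
q = y'*x   % 2.0000 + 1.0000i, the conjugate of p
r = x'*x   % 15, a real number: |1+2i|^2 + |3-1i|^2
```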

Multiplication of matrices is defined in a way that reflects composition of the underlying linear transformations and allows compact representation of systems of simultaneous linear equations. The matrix product *C* = *AB* is defined when the column dimension of *A* is equal to the row dimension of *B*, or when one of them is a scalar. If *A* is *m*-by-*p* and *B* is *p*-by-*n*, their product *C* is *m*-by-*n*. The product can actually be defined using MATLAB `for` loops, `colon` notation, and vector dot products:

A = pascal(3);
B = magic(3);
m = 3;
n = 3;
for i = 1:m
    for j = 1:n
        C(i,j) = A(i,:)*B(:,j);
    end
end

MATLAB uses an asterisk to denote matrix multiplication, as in `C = A*B`. Matrix multiplication is not commutative; that is, `A*B` is typically not equal to `B*A`:

X = A*B

X =
    15    15    15
    26    38    26
    41    70    39

Y = B*A

Y =
    15    28    47
    15    34    60
    15    28    43

A matrix can be multiplied on the right by a column vector and on the left by a row vector:

u = [3; 1; 4];
x = A*u

x =
     8
    17
    30

v = [2 0 -1];
y = v*B

y =
    12    -7    10

Rectangular matrix multiplications must satisfy the dimension compatibility conditions. Since `A` is 3-by-3 and `C` is 3-by-2, you can multiply them to get a 3-by-2 result (the common inner dimension cancels):

X = A*C

X =
    24    17
    47    42
    79    77

However, the multiplication does not work in the reverse order:

Y = C*A

Error using *
Incorrect dimensions for matrix multiplication. Check that the number of
columns in the first matrix matches the number of rows in the second
matrix. To perform elementwise multiplication, use '.*'.

You can multiply anything with a scalar:

s = 10;
w = s*y

w =
   120   -70   100

When you multiply an array by a scalar, the scalar implicitly expands to be the same size as the other input. This is often referred to as *scalar expansion*.

Generally accepted mathematical notation uses the capital letter *I* to denote identity matrices, matrices of various sizes with ones on the main diagonal and zeros elsewhere. These matrices have the property that *AI* = *A* and *IA* = *A* whenever the dimensions are compatible.

The original version of MATLAB could not use *I* for this purpose because it did not distinguish between uppercase and lowercase letters and *i* already served as a subscript and as the complex unit. So an English language pun was introduced. The function `eye(m,n)` returns an *m*-by-*n* rectangular identity matrix, and `eye(n)` returns an *n*-by-*n* square identity matrix.
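As a quick check of the identity property, using the `magic(3)` matrix from earlier:

```matlab
% Multiplying by the identity matrix leaves a matrix unchanged.
B = magic(3);
I = eye(3);
isequal(B*I, B)   % returns logical 1 (true)
isequal(I*B, B)   % returns logical 1 (true)
```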

If a matrix `A` is square and nonsingular (nonzero determinant), then the equations *AX* = *I* and *XA* = *I* have the same solution *X*. This solution is called the *inverse* of `A` and is denoted *A*^{-1}. The `inv` function and the expression `A^-1` both compute the matrix inverse.

A = pascal(3)

A =
     1     1     1
     1     2     3
     1     3     6

X = inv(A)

X =
    3.0000   -3.0000    1.0000
   -3.0000    5.0000   -2.0000
    1.0000   -2.0000    1.0000

A*X

ans =
    1.0000         0         0
    0.0000    1.0000   -0.0000
   -0.0000    0.0000    1.0000

The *determinant* calculated by `det` is a measure of the scaling factor of the linear transformation described by the matrix. When the determinant is exactly zero, the matrix is *singular* and no inverse exists.

d = det(A)

d = 1

Some matrices are *nearly singular*, and despite the fact that an inverse matrix exists, the calculation is susceptible to numerical errors. The `cond` function computes the *condition number for inversion*, which gives an indication of the accuracy of the results from matrix inversion. The condition number ranges from `1` for a numerically stable matrix to `Inf` for a singular matrix.

c = cond(A)

c = 61.9839

It is seldom necessary to form the explicit inverse of a matrix. A frequent misuse of `inv` arises when solving the system of linear equations *Ax* = *b*. The best way to solve this equation, from the standpoint of both execution time and numerical accuracy, is to use the matrix backslash operator `x = A\b`. See `mldivide` for more information.

The Kronecker product, `kron(X,Y)`, of two matrices is the larger matrix formed from all possible products of the elements of `X` with those of `Y`. If `X` is *m*-by-*n* and `Y` is *p*-by-*q*, then `kron(X,Y)` is *mp*-by-*nq*. The elements are arranged such that each element of `X` is multiplied by the entire matrix `Y`:

[X(1,1)*Y  X(1,2)*Y  . . .  X(1,n)*Y
                . . .
 X(m,1)*Y  X(m,2)*Y  . . .  X(m,n)*Y]

The Kronecker product is often used with matrices of zeros and ones to build up repeated copies of small matrices. For example, if `X` is the 2-by-2 matrix

X = [1 2; 3 4]

and `I = eye(2,2)` is the 2-by-2 identity matrix, then:

kron(X,I)

ans =
     1     0     2     0
     0     1     0     2
     3     0     4     0
     0     3     0     4

and

kron(I,X)

ans =
     1     2     0     0
     3     4     0     0
     0     0     1     2
     0     0     3     4

Aside from `kron`, some other functions that are useful to replicate arrays are `repmat`, `repelem`, and `blkdiag`.

The *p*-norm of a vector *x*,

$$\Vert x \Vert_p = \left( \sum_i \left| x_i \right|^p \right)^{1/p},$$

is computed by `norm(x,p)`. This operation is defined for any value of *p* > 1, but the most common values of *p* are 1, 2, and ∞. The default value is *p* = 2, which corresponds to *Euclidean length* or *vector magnitude*:

v = [2 0 -1];
[norm(v,1) norm(v) norm(v,inf)]

ans =
    3.0000    2.2361    2.0000

The *p*-norm of a matrix *A*,

$$\Vert A \Vert_p = \max_x \frac{\Vert Ax \Vert_p}{\Vert x \Vert_p},$$

can be computed for *p* = 1, 2, and ∞ by `norm(A,p)`. Again, the default value is *p* = 2:

A = pascal(3);
[norm(A,1) norm(A) norm(A,inf)]

ans =
   10.0000    7.8730   10.0000

In cases where you want to calculate the norm of each row or column of a matrix, you can use `vecnorm`:

vecnorm(A)

ans =
    1.7321    3.7417    6.7823

MATLAB supports multithreaded computation for a number of linear algebra and element-wise numerical functions. These functions automatically execute on multiple threads. For a function or expression to execute faster on multiple CPUs, a number of conditions must be true:

The function performs operations that easily partition into sections that execute concurrently. These sections must be able to execute with little communication between processes. They should require few sequential operations.

The data size is large enough so that any advantages of concurrent execution outweigh the time required to partition the data and manage separate execution threads. For example, most functions speed up only when the array contains several thousand elements or more.

The operation is not memory-bound; processing time is not dominated by memory access time. As a general rule, complicated functions speed up more than simple functions.

The matrix multiply (`X*Y`) and matrix power (`X^p`) operators show significant increase in speed on large double-precision arrays (on the order of 10,000 elements). The matrix analysis functions `det`, `rcond`, `hess`, and `expm` also show significant increase in speed on large double-precision arrays.