## Principal Component Analysis (PCA)

One of the difficulties inherent in multivariate statistics is the problem of visualizing data that has many variables. The function `plot` displays a graph of the relationship between two variables. The `plot3` and `surf` commands display different three-dimensional views. But when there are more than three variables, it is more difficult to visualize their relationships.

Fortunately, in data sets with many variables, groups of variables often move together. One reason for this is that more than one variable might be measuring the same driving principle governing the behavior of the system. In many systems there are only a few such driving forces. But an abundance of instrumentation enables you to measure dozens of system variables. When this happens, you can take advantage of this redundancy of information. You can simplify the problem by replacing a group of variables with a single new variable.

Principal component analysis is a quantitatively rigorous method for achieving this simplification. The method generates a new set of variables, called *principal components*. Each principal component is a linear combination of the original variables. All the principal components are orthogonal to each other, so there is no redundant information. The principal components as a whole form an orthogonal basis for the space of the data.

There are infinitely many ways to construct an orthogonal basis for several columns of data. What is so special about the principal component basis?

The first principal component is a single axis in space. When you project each observation on that axis, the resulting values form a new variable. And the variance of this variable is the maximum among all possible choices of the first axis.

The second principal component is another axis in space, perpendicular to the first. Projecting the observations on this axis generates another new variable. The variance of this variable is the maximum among all possible choices of this second axis.

The full set of principal components is as large as the original set of variables. But it is commonplace for the sum of the variances of the first few principal components to exceed 80% of the total variance of the original data. By examining plots of these few new variables, researchers often develop a deeper understanding of the driving forces that generated the original data.
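The mechanics behind this description can be sketched numerically. The following Python/NumPy example (a conceptual illustration, not MATLAB code) builds data driven by two underlying factors, diagonalizes the sample covariance matrix to obtain the principal component coefficients, and checks that the components are orthogonal and that the first few capture most of the total variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 observations of 4 variables, but driven by
# only 2 underlying factors plus a little measurement noise.
factors = rng.standard_normal((200, 2))
mixing = np.array([[1.0, 0.5, 0.3, 0.8],
                   [0.2, 1.0, 0.9, 0.1]])
X = factors @ mixing + 0.05 * rng.standard_normal((200, 4))

# Center the data, then diagonalize the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Sort components by descending variance (eigh returns ascending order).
order = np.argsort(eigvals)[::-1]
variances = eigvals[order]        # variance along each principal axis
components = eigvecs[:, order]    # columns are the principal component coefficients

# The components form an orthonormal basis for the 4-dimensional space,
# so projections onto them carry no redundant information.
assert np.allclose(components.T @ components, np.eye(4), atol=1e-10)

# Fraction of total variance explained by each component; with only two
# driving factors, the first two components dominate.
explained = variances / variances.sum()
print(explained)
```

Projecting the centered observations onto the first two columns of `components` yields two new variables that summarize nearly all of the variation in the original four.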

You can use the function `pca` to find the principal components. To use `pca`, you need to have the actual measured data you want to analyze. However, if you lack the actual data, but have the sample covariance or correlation matrix for the data, you can still use the function `pcacov` to perform a principal components analysis. See the reference page for `pcacov` for a description of its inputs and outputs.
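The covariance-only case rests on the fact that the principal component coefficients and variances are exactly the eigenvectors and eigenvalues of the covariance matrix, so no raw observations are needed. This Python/NumPy sketch shows that computation conceptually (it is not the `pcacov` implementation itself, and the example matrix is made up for illustration):

```python
import numpy as np

# A sample covariance matrix for three variables, standing in for the
# situation where only the covariance survives and the raw data is gone.
S = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 1.0]])

# Diagonalize the covariance matrix and order the eigenpairs by
# descending eigenvalue.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
latent = eigvals[order]      # variances of the principal components
coeff = eigvecs[:, order]    # principal component coefficients
explained = 100 * latent / latent.sum()

print(latent)
print(explained)
```

Note that the total variance is preserved: the eigenvalues sum to the trace of the covariance matrix, so `explained` partitions 100% of the original variance among the components.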

When you need to process incoming data from a data stream, you can perform incremental PCA by creating an incremental PCA model object using the `incrementalPCA` function. When you create the model object, you can specify a default model, or specify the initial principal component coefficients and variances. The `fit` function fits the model to an incoming data chunk, and stores the updated PCA properties in the output model. After the model is warm, the `fit` function can optionally return the principal component scores. The `transform` function accepts an input data chunk and transforms it using the incremental PCA model.
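The essential idea of incremental fitting is that the model keeps running statistics (count, mean, and scatter matrix) that can be merged with each incoming chunk, so the principal components are always available without revisiting old data. The following Python/NumPy sketch is a hand-rolled illustration of that idea, not the `incrementalPCA` implementation; `fit_chunk` plays the role of fitting a chunk, and the final projection plays the role of transforming one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Streaming state: observation count, running mean, and scatter matrix.
n, mean = 0, np.zeros(3)
scatter = np.zeros((3, 3))

def fit_chunk(chunk):
    """Merge one incoming data chunk into the running PCA statistics."""
    global n, mean, scatter
    m = len(chunk)
    chunk_mean = chunk.mean(axis=0)
    chunk_c = chunk - chunk_mean
    delta = chunk_mean - mean
    # Pairwise update: chunk's own scatter plus a correction for the
    # shift between the running mean and the chunk mean.
    scatter += chunk_c.T @ chunk_c + np.outer(delta, delta) * n * m / (n + m)
    mean += delta * m / (n + m)
    n += m

# Feed the stream chunk by chunk, updating the model each time.
stream = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.2])
for chunk in np.array_split(stream, 10):
    fit_chunk(chunk)

# Current principal components recovered from the accumulated statistics.
eigvals, eigvecs = np.linalg.eigh(scatter / (n - 1))
order = np.argsort(eigvals)[::-1]
variances, coeff = eigvals[order], eigvecs[:, order]

# Transform a new chunk into principal component scores.
new_chunk = rng.standard_normal((5, 3))
scores = (new_chunk - mean) @ coeff
print(variances)
```

After all ten chunks, the accumulated scatter matrix matches what a batch computation over the whole stream would give, which is the property that makes chunk-at-a-time processing equivalent to fitting the full data.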

## See Also

`pca` | `pcacov` | `pcares` | `ppca` | `incrementalPCA`