# iforest

Fit isolation forest for anomaly detection

## Syntax

```
forest = iforest(Tbl)
forest = iforest(X)
forest = iforest(___,Name=Value)
[forest,tf] = iforest(___)
[forest,tf,scores] = iforest(___)
```

## Description

Use the `iforest` function to fit an isolation forest model for outlier detection and novelty detection.

• Outlier detection (detecting anomalies in training data) — Use the output argument `tf` of `iforest` to identify anomalies in training data.

• Novelty detection (detecting anomalies in new data with uncontaminated training data) — Create an `IsolationForest` object by passing uncontaminated training data (data with no outliers) to `iforest`. Detect anomalies in new data by passing the object and the new data to the object function `isanomaly`.

`forest = iforest(Tbl)` returns an `IsolationForest` object for predictor data in the table `Tbl`.

`forest = iforest(X)` uses predictor data in the matrix `X`.

`forest = iforest(___,Name=Value)` specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, `ContaminationFraction=0.1` instructs the function to process 10% of the training data as anomalies.

`[forest,tf] = iforest(___)` also returns the logical array `tf`, whose elements are `true` when an anomaly is detected in the corresponding row of `Tbl` or `X`.

`[forest,tf,scores] = iforest(___)` also returns an anomaly score in the range `[0,1]` for each observation in `Tbl` or `X`. A score value close to 0 indicates a normal observation, and a value close to 1 indicates an anomaly.

## Examples


### Detect Outliers

Detect outliers (anomalies in training data) by using the `iforest` function.

Load the sample data set `NYCHousing2015`.

`load NYCHousing2015`

The data set includes 10 variables with information on the sales of properties in New York City in 2015. Display a summary of the data set.

`summary(NYCHousing2015)`
```
Variables:

    BOROUGH: 91446x1 double
        Values:
            Min       1
            Median    3
            Max       5
    NEIGHBORHOOD: 91446x1 cell array of character vectors
    BUILDINGCLASSCATEGORY: 91446x1 cell array of character vectors
    RESIDENTIALUNITS: 91446x1 double
        Values:
            Min       0
            Median    1
            Max       8759
    COMMERCIALUNITS: 91446x1 double
        Values:
            Min       0
            Median    0
            Max       612
    LANDSQUAREFEET: 91446x1 double
        Values:
            Min       0
            Median    1700
            Max       2.9306e+07
    GROSSSQUAREFEET: 91446x1 double
        Values:
            Min       0
            Median    1056
            Max       8.9422e+06
    YEARBUILT: 91446x1 double
        Values:
            Min       0
            Median    1939
            Max       2016
    SALEPRICE: 91446x1 double
        Values:
            Min       0
            Median    3.3333e+05
            Max       4.1111e+09
    SALEDATE: 91446x1 datetime
        Values:
            Min       01-Jan-2015
            Median    09-Jul-2015
            Max       31-Dec-2015
```

The `SALEDATE` column is a `datetime` array, which is not supported by `iforest`. Create columns for the month and day numbers of the `datetime` values, and delete the `SALEDATE` column.

```matlab
[~,NYCHousing2015.MM,NYCHousing2015.DD] = ymd(NYCHousing2015.SALEDATE);
NYCHousing2015.SALEDATE = [];
```

The columns `BOROUGH`, `NEIGHBORHOOD`, and `BUILDINGCLASSCATEGORY` contain categorical predictors. Display the number of categories for the categorical predictors.

`length(unique(NYCHousing2015.BOROUGH))`
```ans = 5 ```
`length(unique(NYCHousing2015.NEIGHBORHOOD))`
```ans = 254 ```
`length(unique(NYCHousing2015.BUILDINGCLASSCATEGORY))`
```ans = 48 ```

For a categorical variable with more than 64 categories, the `iforest` function uses an approximate splitting method that can reduce the accuracy of the isolation forest model. Remove the `NEIGHBORHOOD` column, which contains a categorical variable with 254 categories.

`NYCHousing2015.NEIGHBORHOOD = [];`

Train an isolation forest model for `NYCHousing2015`. Specify the fraction of anomalies in the training observations as 0.1, and specify the first variable (`BOROUGH`) as a categorical predictor. The first variable is a numeric array, so `iforest` assumes it is a continuous variable unless you specify the variable as a categorical variable.

```matlab
rng("default") % For reproducibility
[Mdl,tf,scores] = iforest(NYCHousing2015,ContaminationFraction=0.1, ...
    CategoricalPredictors=1);
```

`Mdl` is an `IsolationForest` object. `iforest` also returns the anomaly indicators (`tf`) and anomaly scores (`scores`) for the training data `NYCHousing2015`.

Plot a histogram of the score values. Create a vertical line at the score threshold corresponding to the specified fraction.

```matlab
histogram(scores)
xline(Mdl.ScoreThreshold,"r-",["Threshold" Mdl.ScoreThreshold])
```

If you want to identify anomalies with a different contamination fraction (for example, 0.01), you can train a new isolation forest model.

```matlab
rng("default") % For reproducibility
[newMdl,newtf,scores] = iforest(NYCHousing2015, ...
    ContaminationFraction=0.01,CategoricalPredictors=1);
```

If you want to identify anomalies with a different score threshold value (for example, 0.65), you can pass the `IsolationForest` object, the training data, and a new threshold value to the `isanomaly` function.

```[newtf,scores] = isanomaly(Mdl,NYCHousing2015,ScoreThreshold=0.65); ```

Note that changing the contamination fraction or score threshold changes the anomaly indicators only, and does not affect the anomaly scores. Therefore, if you do not want to compute the anomaly scores again by using `iforest` or `isanomaly`, you can obtain a new anomaly indicator with the existing score values.

Change the fraction of anomalies in the training data to 0.01.

`newContaminationFraction = 0.01;`

Find a new score threshold by using the `quantile` function.

`newScoreThreshold = quantile(scores,1-newContaminationFraction)`
```newScoreThreshold = 0.7045 ```

Obtain a new anomaly indicator.

`newtf = scores > newScoreThreshold;`

### Detect Novelties

Create an `IsolationForest` object for uncontaminated training observations by using the `iforest` function. Then detect novelties (anomalies in new data) by passing the object and the new data to the object function `isanomaly`.

Load the 1994 census data stored in `census1994.mat`. The data set consists of demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.

`load census1994`

`census1994` contains the training data set `adultdata` and the test data set `adulttest`.

Train an isolation forest model for `adultdata`. Assume that `adultdata` does not contain outliers.

```matlab
rng("default") % For reproducibility
[Mdl,tf,s] = iforest(adultdata);
```

`Mdl` is an `IsolationForest` object. `iforest` also returns the anomaly indicators `tf` and anomaly scores `s` for the training data `adultdata`. If you do not specify the `ContaminationFraction` name-value argument as a value greater than 0, then `iforest` treats all training observations as normal observations, meaning all the values in `tf` are logical 0 (`false`). The function sets the score threshold to the maximum score value. Display the threshold value.

`Mdl.ScoreThreshold`
```ans = 0.8600 ```

Find anomalies in `adulttest` by using the trained isolation forest model.

`[tf_test,s_test] = isanomaly(Mdl,adulttest);`

The `isanomaly` function returns the anomaly indicators `tf_test` and scores `s_test` for `adulttest`. By default, `isanomaly` identifies observations with scores above the threshold (`Mdl.ScoreThreshold`) as anomalies.

Create histograms for the anomaly scores `s` and `s_test`. Create a vertical line at the threshold of the anomaly scores.

```matlab
histogram(s,Normalization="probability")
hold on
histogram(s_test,Normalization="probability")
xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))
legend("Training Data","Test Data",Location="northwest")
hold off
```

Display the observation index of the anomalies in the test data.

`find(tf_test)`
```ans = 15655 ```

The anomaly score distribution of the test data is similar to that of the training data, so `isanomaly` detects a small number of anomalies in the test data with the default threshold value. You can specify a different threshold value by using the `ScoreThreshold` name-value argument. For an example, see Specify Anomaly Score Threshold.

## Input Arguments

### `Tbl`

Predictor data, specified as a table. Each row of `Tbl` corresponds to one observation, and each column corresponds to one predictor variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

To use a subset of the variables in `Tbl`, specify the variables by using the `PredictorNames` name-value argument.

Data Types: `table`

### `X`

Predictor data, specified as a numeric matrix. Each row of `X` corresponds to one observation, and each column corresponds to one predictor variable.

You can use the `PredictorNames` name-value argument to assign names to the predictor variables in `X`.

Data Types: `single` | `double`

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: `NumLearners=50,NumObservationsPerLearner=100` specifies to train an isolation forest using 50 isolation trees and 100 observations for each isolation tree.

#### `CategoricalPredictors`

List of categorical predictors, specified as one of the values in this table.

| Value | Description |
| --- | --- |
| Vector of positive integers | Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and `p`, where `p` is the number of predictors used to train the model. If `iforest` uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The `CategoricalPredictors` values do not count any variables that the function does not use. |
| Logical vector | A `true` entry means that the corresponding predictor is categorical. The length of the vector is `p`. |
| Character matrix | Each row of the matrix is the name of a predictor variable. The names must match the entries in `PredictorNames`. Pad the names with extra blanks so each row of the character matrix has the same length. |
| String array or cell array of character vectors | Each element in the array is the name of a predictor variable. The names must match the entries in `PredictorNames`. |
| `"all"` | All predictors are categorical. |

By default, if the predictor data is in a table (`Tbl`), `iforest` assumes that a variable is categorical if it is a logical vector, unordered categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (`X`), `iforest` assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the `CategoricalPredictors` name-value argument.

For a categorical variable with more than 64 categories, the `iforest` function uses an approximate splitting method that can reduce the accuracy of the isolation forest model.

Example: `CategoricalPredictors='all'`

Data Types: `single` | `double` | `logical` | `char` | `string` | `cell`

#### `ContaminationFraction`

Fraction of anomalies in the training data, specified as a numeric scalar in the range `[0,1]`.

• If the `ContaminationFraction` value is 0 (default), then `iforest` treats all training observations as normal observations, and sets the score threshold (`ScoreThreshold` property value of `forest`) to the maximum value of `scores`.

• If the `ContaminationFraction` value is in the range (`0`,`1`], then `iforest` determines the threshold value so that the function detects the specified fraction of training observations as anomalies.

Example: `ContaminationFraction=0.1`

Data Types: `single` | `double`

#### `NumLearners`

Number of isolation trees, specified as a positive integer scalar.

The average path lengths that the isolation forest algorithm uses to compute anomaly scores usually converge, for both normal points and anomalies, well before the algorithm grows 100 isolation trees [1].

Example: `NumLearners=50`

Data Types: `single` | `double`

#### `NumObservationsPerLearner`

Number of observations to draw from the training data without replacement for each isolation tree, specified as a positive integer scalar greater than or equal to 3.

The isolation forest algorithm performs well with a small `NumObservationsPerLearner` value, because a small sample size helps to detect dense anomalies and anomalies close to normal points. However, you need to experiment with the sample size if the number of training observations is small. For an example, see Examine NumObservationsPerLearner for Small Data.

Example: `NumObservationsPerLearner=100`

Data Types: `single` | `double`
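To illustrate this sensitivity, the following sketch (hypothetical data; the sample sizes and shift are chosen only for illustration) fits forests with several per-tree sample sizes and compares the mean score of a few injected outliers:

```matlab
rng("default")                              % For reproducibility
X = randn(200,2);                           % Hypothetical training data
X(1:5,:) = X(1:5,:) + 6;                    % Inject a few obvious outliers
for m = [16 64 128]                         % Per-tree sample sizes to compare
    [~,~,scores] = iforest(X,NumObservationsPerLearner=m);
    fprintf("m = %3d, mean score of injected points: %.3f\n", ...
        m,mean(scores(1:5)))
end
```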

#### `PredictorNames`

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of `PredictorNames` depends on how you supply the predictor data.

• If you supply `Tbl`, then you can use `PredictorNames` to specify which predictor variables to use. That is, `iforest` uses only the predictor variables in `PredictorNames`.

• `PredictorNames` must be a subset of `Tbl.Properties.VariableNames`.

• By default, `PredictorNames` contains the names of all predictor variables in `Tbl`.

• If you supply `X`, then you can use `PredictorNames` to assign names to the predictor variables in `X`.

• The order of the names in `PredictorNames` must correspond to the column order of `X`. That is, `PredictorNames{1}` is the name of `X(:,1)`, `PredictorNames{2}` is the name of `X(:,2)`, and so on. Also, `size(X,2)` and `numel(PredictorNames)` must be equal.

• By default, `PredictorNames` is `{'x1','x2',...}`.

Example: `PredictorNames=["SepalLength" "SepalWidth" "PetalLength" "PetalWidth"]`

Data Types: `string` | `cell`

#### `UseParallel`

Flag to run in parallel, specified as `true` or `false`. If you specify `UseParallel=true`, the `iforest` function executes for-loop iterations in parallel by using `parfor`. This option requires Parallel Computing Toolbox™.

Example: `UseParallel=true`

Data Types: `logical`

## Output Arguments

### `forest`

Trained isolation forest model, returned as an `IsolationForest` object.

You can use the object function `isanomaly` with `forest` to find anomalies in new data.

### `tf`

Anomaly indicators, returned as a logical column vector. An element of `tf` is `true` when the observation in the corresponding row of `Tbl` or `X` is an anomaly, and `false` otherwise. `tf` has the same length as `Tbl` or `X`.

`iforest` identifies observations with `scores` above the threshold (`ScoreThreshold` property value of `forest`) as anomalies. The function determines the threshold value to detect the specified fraction (`ContaminationFraction` name-value argument) of training observations as anomalies.
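A minimal sketch of this relationship, assuming a hypothetical numeric matrix `X` of training data:

```matlab
rng("default")                                    % For reproducibility
X = randn(500,4);                                 % Hypothetical training data
contamination = 0.05;
[forest,tf,scores] = iforest(X,ContaminationFraction=contamination);
% Per the description above, tf marks scores above the stored threshold,
% so the indicator can be recomputed from scores without refitting.
tfRecomputed = scores > forest.ScoreThreshold;
isequal(tf,tfRecomputed)                          % Per the description: true
```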

### `scores`

Anomaly scores, returned as a numeric column vector whose values are in the range `[0,1]`. `scores` has the same length as `Tbl` or `X`, and each element of `scores` contains an anomaly score for the observation in the corresponding row of `Tbl` or `X`. A score value close to 0 indicates a normal observation, and a value close to 1 indicates an anomaly.

## More About

### Isolation Forest

The isolation forest algorithm [1] detects anomalies by isolating anomalies from normal points using an ensemble of isolation trees.

The `iforest` function builds an isolation forest (an ensemble of isolation trees) for training observations and detects outliers (anomalies in the training data). Each isolation tree is trained on a subset of training observations, sampled without replacement. `iforest` grows an isolation tree by choosing a split variable and split position at random until every observation in the subset lands in a separate leaf node. Anomalies are few and different; therefore, an anomaly tends to land in a leaf node closer to the root node and has a shorter path length (the distance from the root node to the leaf node) than a normal point. The function identifies outliers using anomaly scores based on the average path lengths over all isolation trees.

The `isanomaly` function uses a trained isolation forest to detect anomalies in data. For novelty detection (detecting anomalies in new data with uncontaminated training data), you can train an isolation forest with uncontaminated training data (data with no outliers) and use it to detect anomalies in new data. For each observation of the new data, the function finds the average path length to reach a leaf node from the root node in the trained isolation forest, and returns an anomaly indicator and score.

For more details, see Anomaly Detection with Isolation Forest.

### Anomaly Scores

The isolation forest algorithm computes the anomaly score s(x) of an observation x by normalizing the path length h(x):

$$s(x)={2}^{-\frac{E[h(x)]}{c(n)}},$$

where E[h(x)] is the average path length over all isolation trees in the isolation forest, and c(n) is the average path length of unsuccessful searches in a binary search tree of n observations.

• The score approaches 1 as E[h(x)] approaches 0. Therefore, a score value close to 1 indicates an anomaly.

• The score approaches 0 as E[h(x)] approaches n – 1. Also, the score approaches 0.5 when E[h(x)] approaches c(n). Therefore, a score value smaller than 0.5 and close to 0 indicates a normal point.
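A numeric sketch of the formula, assuming the expression for c(n) given in [1], c(n) = 2H(n − 1) − 2(n − 1)/n with H(i) ≈ ln(i) + 0.5772 (Euler's constant); the values of `n` and `Eh` are illustrative:

```matlab
n  = 256;                                  % Observations per isolation tree
c  = 2*(log(n-1) + 0.5772156649) - 2*(n-1)/n;  % c(n) from [1]
Eh = 4;                                    % Example average path length E[h(x)]
s  = 2^(-Eh/c)                             % Score above 0.5: shorter-than-average path
shalf = 2^(-c/c)                           % E[h(x)] = c(n) gives a score of 0.5
```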

## Tips

• After training a model, you can generate C/C++ code that finds anomalies for new data. Generating C/C++ code requires MATLAB® Coder™. For details, see Code Generation of the `isanomaly` function and Introduction to Code Generation.

## Algorithms

`iforest` considers `NaN`, `''` (empty character vector), `""` (empty string), `<missing>`, and `<undefined>` values in `Tbl` and `NaN` values in `X` to be missing values.

• `iforest` does not use observations with all missing values. The function assigns an anomaly score of 1 and an anomaly indicator of `false` (logical 0) to such observations.

• `iforest` uses observations with some missing values to find splits on variables for which these observations have valid values.
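A small sketch of the documented behavior for an observation with all missing values (hypothetical data; the comments restate the bullets above rather than assert computed output):

```matlab
rng("default")                       % For reproducibility
X = randn(100,3);                    % Hypothetical training data
X(1,:) = NaN;                        % All values missing in row 1
[forest,tf,scores] = iforest(X);
scores(1)                            % Per the bullets above: score of 1
tf(1)                                % Per the bullets above: false (logical 0)
```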

## References

[1] Liu, F. T., K. M. Ting, and Z. Zhou. "Isolation Forest," 2008 Eighth IEEE International Conference on Data Mining. Pisa, Italy, 2008, pp. 413-422.

## Version History

Introduced in R2021b