perObservationLoss

Per observation regression error of model for incremental learning

Since R2022a

    Description


    Err = perObservationLoss(Mdl,X,Y) returns the per observation squared error for the model Mdl trained using the predictors in X and the true responses in Y.

    Err is an n-by-1 vector, where n is the number of observations.

    Err = perObservationLoss(Mdl,X,Y,Name=Value) specifies additional options using one or more Name=Value arguments.
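    For example, assuming Mdl is an SVM-based incremental regression model and X and Y hold a batch of data, this sketch computes the per observation epsilon-insensitive loss instead of the default squared error (see the LossFun name-value argument below):

    Err = perObservationLoss(Mdl,X,Y,LossFun="epsiloninsensitive");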

    Examples


    Load the robot arm data set. Obtain the sample size n and the number of predictor variables p.

    load robotarm
    n = numel(ytrain);
    p = size(Xtrain,2);

    For details on the data set, enter Description at the command line.

    Create an incremental linear model for regression. Configure the model as follows:

    • Specify a metrics warm-up period of 1000 observations.

    • Specify a metrics window size of 500 observations.

    • Configure the model to predict responses by specifying that all regression coefficients and the bias are 0.

    Mdl = incrementalRegressionLinear('MetricsWarmupPeriod',1000,'MetricsWindowSize',500,...
       'Beta',zeros(p,1),'Bias',0,'EstimationPeriod',0)
    Mdl = 
      incrementalRegressionLinear
    
                   IsWarm: 0
                  Metrics: [1x2 table]
        ResponseTransform: 'none'
                     Beta: [32x1 double]
                     Bias: 0
                  Learner: 'svm'
    
    
    

    Mdl is an incrementalRegressionLinear model object configured for incremental learning. All properties are read-only.

    Specify the number of observations in each chunk for simulating a data stream, and preallocate the variables that store the performance metrics.

    numObsPerChunk = 50;
    nchunk = floor(n/numObsPerChunk);
    L = zeros(nchunk,1);                % Stores the per-chunk loss values
    PoL = zeros(nchunk,numObsPerChunk); % Stores the per observation loss values

    Simulate a data stream with incoming chunks of 50 observations each. For each iteration:

    1. Call updateMetricsAndFit to measure the cumulative performance and the performance within a window of observations, and then fit the model to the incoming chunk. Overwrite the previous incremental model with the new one.

    2. Call loss to compute the mean squared error on the incoming chunk and perObservationLoss to compute the squared error for each observation in the chunk, and store both performance metrics.

    for j = 1:nchunk
        ibegin = min(n,numObsPerChunk*(j-1) + 1);
        iend   = min(n,numObsPerChunk*j);
        idx = ibegin:iend;    
        Mdl = updateMetricsAndFit(Mdl,Xtrain(idx,:),ytrain(idx));
        L(j) = loss(Mdl,Xtrain(idx,:),ytrain(idx));
        PoL(j,:) = perObservationLoss(Mdl,Xtrain(idx,:),ytrain(idx));
    end

    perObservationLoss computes the regression loss (squared error) for each observation in each chunk of data after the warm-up period, that is, after the IsWarm property is 1 (true). PoL is an nchunk-by-numObsPerChunk matrix, which in this example is a 143-by-50 matrix. Each row corresponds to a chunk of observations in the stream, and each column corresponds to an observation within that chunk. The metrics warm-up period is 1000 observations, which corresponds to the first 20 chunks of incoming data. Because updateMetricsAndFit fits the model before perObservationLoss is called in each iteration, the model becomes warm during iteration 20, so the first 19 rows of PoL contain only NaN values.

    loss computes the mean squared error for each chunk of data whether or not the model is warm, so it also returns the regression error for the first 19 chunks. L is a 143-by-1 vector, and each value in L is the mean of the squared error values in the corresponding row of PoL.
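    As a quick check, this sketch (using the variables from this example) counts the rows of PoL that contain only NaN values:

    nnz(all(isnan(PoL),2)) % Number of all-NaN rows; 19 given the 1000-observation warm-up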

    Compute the difference between L and the row mean of PoL, and display the values 20 to 25.

    diff = abs(L-mean(PoL,2));
    diff(20:25)
    ans = 6×1
    10^(-15) ×
    
        0.2220
             0
        0.2220
        0.1110
        0.1110
        0.2220
    
    

    The difference between the two vectors is negligible, on the order of floating-point round-off.

    Input Arguments


    Incremental learning model, specified as an incrementalRegressionKernel or incrementalRegressionLinear model object. You can create Mdl directly or by converting a supported, traditionally trained machine learning model using the incrementalLearner function. For more details, see the corresponding reference page.

    Batch of predictor data with which to compute the per observation loss, specified as a floating-point matrix of n observations and Mdl.NumPredictors predictor variables. The value of the ObservationsIn name-value argument determines the orientation of the variables and observations.

    The length of the response vector Y and the number of observations in X must be equal; Y(j) is the response for observation j (row or column) in X.

    Note

    perObservationLoss supports only floating-point input predictor data. If your input data includes categorical data, you must prepare an encoded version of the categorical data. Use dummyvar to convert each categorical variable to a numeric matrix of dummy variables. Then, concatenate all dummy variable matrices and any other numeric predictors. For more details, see Dummy Variables.
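    For example, the following is a minimal sketch of this encoding, using a hypothetical categorical predictor g and a hypothetical numeric predictor x:

    g = categorical(["a";"b";"a";"c"]); % Hypothetical categorical predictor
    x = [0.5; 1.2; -0.3; 2.1];          % Hypothetical numeric predictor
    D = dummyvar(g);                    % One dummy column per category
    X = [D x];                          % Concatenate dummy variables and numeric predictors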

    Data Types: single | double

    Batch of responses with which to compute the per observation loss, specified as a floating-point vector.

    The length of Y and the number of observations in X must be equal; Y(j) is the response for observation j (row or column) in X.

    Data Types: single | double

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: ObservationsIn="columns",LossFun="epsiloninsensitive" specifies that the observations are in columns and the loss function is the built-in epsilon-insensitive loss.

    Orientation of the data in X, specified as either "rows" or "columns". The default is "rows".

    Example: ObservationsIn="columns"
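    For example, this sketch (reusing Mdl, Xtrain, and ytrain from the example above) passes a transposed chunk of predictor data with observations oriented in columns:

    Err = perObservationLoss(Mdl,Xtrain(1:50,:)',ytrain(1:50),ObservationsIn="columns");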

    Loss function, specified as a built-in loss function name or a function handle.

    The available built-in loss functions for regression are "squarederror" (the default) and "epsiloninsensitive".

    To specify a custom loss function, use function handle notation. The function must have this form:

    lossval = lossfcn(Y,YFit)

    • The output argument lossval is a floating-point scalar.

    • You specify the function name (lossfcn).

    • Y is a length n numeric vector of observed responses.

    • YFit is a length n numeric vector of corresponding predicted responses.
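    For instance, this sketch defines a hypothetical custom loss and passes it as a function handle, assuming Mdl, X, and Y as above:

    lossfcn = @(Y,YFit) mean(abs(Y - YFit)); % Hypothetical custom loss: mean absolute error
    Err = perObservationLoss(Mdl,X,Y,LossFun=lossfcn);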

    Example: LossFun="epsiloninsensitive"

    Example: LossFun=@lossfcn

    Data Types: char | string | function_handle

    Version History

    Introduced in R2022a