rls
(Removed) Construct recursive least squares (RLS) adaptive algorithm object
rls has been removed. Use comm.LinearEqualizer or comm.DecisionFeedbackEqualizer instead.
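As a rough guide to the replacement, the sketch below equalizes a QPSK signal with the comm.LinearEqualizer System object configured for RLS adaptation. The channel, the parameter values, and the exact property names used here (Algorithm, NumTaps, ForgettingFactor, Constellation) reflect one reading of the current Communications Toolbox interface and should be checked against its documentation.

% Sketch (assumed replacement API): RLS linear equalization with comm.LinearEqualizer
M = 4;
data = randi([0 M-1],1000,1);
txSig = pskmod(data,M,pi/4);                  % QPSK symbols
rxSig = filter([1 0.3+0.3i 0.1],1,txSig);     % hypothetical channel
rlseq = comm.LinearEqualizer('Algorithm','RLS', ...
    'NumTaps',8,'ForgettingFactor',0.99, ...
    'Constellation',pskmod(0:M-1,M,pi/4));
[eqSig,err] = rlseq(rxSig,txSig(1:200));      % train on the first 200 symbols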
Syntax
alg = rls(forgetfactor)
alg = rls(forgetfactor,invcorr0)
Description
The rls function creates an adaptive algorithm object that you can use with the lineareq function or dfe function to create an equalizer object. You can then use the equalizer object with the equalize function to equalize a signal. To learn more about the process for equalizing a signal, see Equalization.
alg = rls(forgetfactor) constructs an adaptive algorithm object based on the recursive least squares (RLS) algorithm. The forgetting factor is forgetfactor, a real number between 0 and 1. The inverse correlation matrix is initialized to a scalar value.

alg = rls(forgetfactor,invcorr0) sets the initialization parameter for the inverse correlation matrix. This scalar value is used to initialize or reset the diagonal elements of the inverse correlation matrix.
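For example, with arbitrary parameter values:

alg1 = rls(0.99);       % forgetting factor only; default inverse correlation matrix initialization
alg2 = rls(0.99,0.1);   % also set the diagonal of the inverse correlation matrix to 0.1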
Properties
The table below describes the properties of the RLS adaptive algorithm object. To learn how to view or change the values of an adaptive algorithm object, see Equalization.
Property | Description
---|---
AlgType | Fixed value, 'RLS'
ForgetFactor | Forgetting factor
InvCorrInit | Scalar value used to initialize or reset the diagonal elements of the inverse correlation matrix
Also, when you use this adaptive algorithm object to create an equalizer object (via the lineareq function or dfe function), the equalizer object has an InvCorrMatrix property that represents the inverse correlation matrix for the RLS algorithm. The initial value of InvCorrMatrix is InvCorrInit*eye(N), where N is the total number of equalizer weights.
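For instance, a linear equalizer with 8 weights built from rls(0.99,0.1) (illustrative values) starts with an 8-by-8 inverse correlation matrix:

alg = rls(0.99,0.1);
eqobj = lineareq(8,alg);      % 8 equalizer weights
% eqobj.InvCorrMatrix is initialized to 0.1*eye(8)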
Examples
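The sketch below reconstructs the legacy workflow this page describes: build the RLS algorithm object, use it to build a linear equalizer, and equalize a channel-distorted QPSK signal. The channel coefficients, training length, and other parameter values are illustrative assumptions, and the code runs only in releases that still include rls, lineareq, and equalize.

% Legacy workflow sketch: linearly equalize a QPSK signal with the RLS algorithm
M = 4;
msg = randi([0 M-1],1500,1);                 % random message symbols
txSig = pskmod(msg,M,pi/4);                  % QPSK modulation
chan = [0.986 0.845 0.237 0.123+0.31i];      % hypothetical channel impulse response
rxSig = filter(chan,1,txSig);                % channel-distorted signal
alg = rls(0.99,0.1);                         % RLS adaptive algorithm object
eqobj = lineareq(8,alg);                     % linear equalizer with 8 weights
eqobj.SigConst = pskmod(0:M-1,M,pi/4);       % equalizer signal constellation
trainLen = 200;
[eqSig,yd] = equalize(eqobj,rxSig,txSig(1:trainLen));  % train, then equalize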
Algorithms
Referring to the schematics presented in Equalization, define w as the vector of all weights w_i and define u as the vector of all inputs u_i. Based on the current set of inputs, u, and the current inverse correlation matrix, P, this adaptive algorithm first computes the Kalman gain vector

K = Pu / (ForgetFactor + u^H P u)

where ^H denotes the Hermitian transpose.

Then the new inverse correlation matrix is given by

(ForgetFactor)^(-1) (P - K u^H P)

and the new set of weights is given by

w + K*e

where the * operator denotes the complex conjugate and e is the error between the reference (training or decision) symbol and the equalizer output.
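The following sketch expresses one iteration of these updates in MATLAB, assuming the equalizer output is formed as the unconjugated weighted sum w.'*u; the variable names (u, d, w, P, lambda) are illustrative and not part of the removed API.

% One RLS update step, following the equations above
% u: column vector of current inputs, d: desired (training/decision) symbol
% w: current weights, P: inverse correlation matrix, lambda: ForgetFactor
K = (P*u)/(lambda + u'*P*u);     % Kalman gain vector (u' is the Hermitian transpose)
e = d - w.'*u;                   % estimation error for the current symbol
P = (P - K*(u'*P))/lambda;       % new inverse correlation matrix
w = w + conj(K)*e;               % new weights, w + K*e with * the complex conjugate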
References
[1] Farhang-Boroujeny, B., Adaptive Filters: Theory and Applications, Chichester, England, John Wiley & Sons, 1998.
[2] Haykin, S., Adaptive Filter Theory, Third Ed., Upper Saddle River, NJ, Prentice-Hall, 1996.
[3] Kurzweil, J., An Introduction to Digital Communications, New York, John Wiley & Sons, 2000.
[4] Proakis, John G., Digital Communications, Fourth Ed., New York, McGraw-Hill, 2001.