Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor
Q-less QR decomposition for real-valued matrices with an infinite number of rows
Since R2020b
Libraries:
Fixed-Point Designer HDL Support /
Matrices and Linear Algebra /
Matrix Factorizations
Description
The Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor block uses QR decomposition to compute the economy-size upper-triangular R factor of the QR decomposition A = QR, without computing Q. A is an infinitely tall real-valued matrix representing streaming data.
When the regularization parameter is nonzero, the Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor block initializes the first upper-triangular factor R to λI_{n} before factoring in the rows of A, where λ is the regularization parameter and I_{n} = eye(n).
Ports
Input
A(i,:) — Rows of real matrix A
vector
Rows of real matrix A, specified as a vector. A is an infinitely tall matrix of streaming data. If A uses a fixed-point data type, A must be signed and use binary-point scaling. Slope-bias representation is not supported for fixed-point data types.
Data Types: single | double | fixed point
validIn — Whether inputs are valid
Boolean
scalar
Whether inputs are valid, specified as a Boolean scalar. This control signal indicates when the data from the A(i,:) input port is valid. When this value is 1 (true) and the value of ready is 1 (true), the block captures the values at the A(i,:) input port. When this value is 0 (false), the block ignores the input samples.
After sending a true validIn signal, there may be some delay before ready is set to false. To ensure all data is processed, you must wait until ready is set to false before sending another true validIn signal.
Data Types: Boolean
restart — Whether to clear internal states
Boolean
scalar
Whether to clear internal states, specified as a Boolean scalar. When this value is 1 (true), the block stops the current calculation and clears all internal states. When this value is 0 (false) and the value at validIn is 1 (true), the block begins a new subframe.
Data Types: Boolean
Output
R — Upper-triangular matrix R
matrix
Economy-size QR decomposition matrix R multiplied by the Forgetting factor parameter, returned as a matrix. R is an upper-triangular matrix. The size of matrix R is n-by-n. The output at R has the same data type as the input at A(i,:).
Data Types: single | double | fixed point
validOut — Whether output data is valid
Boolean
scalar
Whether the output data is valid, returned as a Boolean scalar. This control signal indicates when the data at output port R is valid. When this value is 1 (true), the block has successfully computed the matrix R. When this value is 0 (false), the output data is not valid.
Data Types: Boolean
ready — Whether block is ready
Boolean
scalar
Whether the block is ready, returned as a Boolean scalar. This control signal indicates when the block is ready for new input data. When this value is 1 (true) and validIn is 1 (true), the block accepts input data in the next time step. When this value is 0 (false), the block ignores input data in the next time step.
After sending a true validIn signal, there may be some delay before ready is set to false. To ensure all data is processed, you must wait until ready is set to false before sending another true validIn signal.
Data Types: Boolean
Parameters
Number of columns in matrix A — Number of columns in input matrix A
4 (default) | positive integer-valued scalar
Number of columns in input matrix A, specified as a positive integer-valued scalar.
Programmatic Use
Block Parameter: n
Type: character vector
Values: positive integer-valued scalar
Default: 4
Forgetting factor — Forgetting factor applied after each row of the matrix is factored
0.99 (default) | real positive scalar
Forgetting factor applied after each row of the matrix is factored, specified as a real positive scalar. The output is updated as each row of A streams in, indefinitely.
Programmatic Use
Block Parameter: forgetting_factor
Type: character vector
Values: real positive scalar
Default: 0.99
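For intuition on how the forgetting factor keeps the factor bounded, consider the scalar (n = 1) case of the block's recursion, where each update reduces to r ← α·sqrt(r² + a²). The following pure-Python sketch is illustrative only (it is not the block's fixed-point implementation); with a constant input a and 0 < α < 1, r converges to α·a/√(1 − α²) instead of growing without bound.

```python
import math

def steady_state_r(alpha, a, iters=2000):
    """Scalar (n = 1) case of the recursion: r <- alpha*sqrt(r^2 + a^2).

    Illustrative sketch only. With constant input a and 0 < alpha < 1,
    r converges to alpha*a/sqrt(1 - alpha**2) rather than integrating
    without bound.
    """
    r = 0.0
    for _ in range(iters):
        r = alpha * math.hypot(r, a)  # one QR update step, then forget
    return r
```

With the default α = 0.99 and a = 1, r settles near 0.99/√(1 − 0.99²) ≈ 7.02.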
Regularization parameter — Regularization parameter
0 (default) | real nonnegative scalar
Regularization parameter, specified as a real nonnegative scalar. Small, positive values of the regularization parameter can improve the conditioning of the problem and reduce the variance of the estimates. While biased, the reduced variance of the estimate often results in a smaller mean squared error when compared to least-squares estimates.
Programmatic Use
Block Parameter: regularizationParameter
Type: character vector
Values: real nonnegative scalar
Default: 0
Algorithms
Q-less QR Decomposition with Forgetting Factor
The Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor block implements the following recursion to compute the upper-triangular factor R of continuously streaming 1-by-n row vectors A(k,:) using forgetting factor α, as though matrix A were infinitely tall. A forgetting factor in the range 0 < α < 1 prevents R from growing without bound.
$$\begin{array}{c}{R}_{0}=\mathrm{zeros}(n,n)\\ \left[\sim ,{R}_{1}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{0}\\ A\left(1,:\right)\end{array}\right],0\right)\\ {R}_{1}=\alpha {R}_{1}\\ \left[\sim ,{R}_{2}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{1}\\ A\left(2,:\right)\end{array}\right],0\right)\\ {R}_{2}=\alpha {R}_{2}\\ \vdots \\ \left[\sim ,{R}_{k}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{k-1}\\ A\left(k,:\right)\end{array}\right],0\right)\\ {R}_{k}=\alpha {R}_{k}\\ \vdots \end{array}$$
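A software model of this recursion can re-triangularize [R; A(k,:)] at each step with Givens rotations, which mirrors the systolic hardware structure. The sketch below is a pure-Python illustration, not the block's implementation; the signs of the rows of R may differ from MATLAB's qr, but R'R is the same.

```python
import math

def givens_update(R, row, alpha):
    """One step of the Q-less QR recursion with forgetting factor.

    Stacks the new row under the current n-by-n upper-triangular R,
    re-triangularizes with Givens rotations, then scales by alpha.
    R is a list of lists; row is a length-n list. Illustrative model
    only, not the block's fixed-point implementation.
    """
    n = len(R)
    R = [r[:] for r in R]   # work on a copy of the current factor
    v = row[:]              # the incoming row A(k,:)
    for j in range(n):
        # Choose c, s to zero out v[j] against the diagonal R[j][j].
        r = math.hypot(R[j][j], v[j])
        if r == 0.0:
            continue
        c, s = R[j][j] / r, v[j] / r
        for k in range(j, n):
            t = c * R[j][k] + s * v[k]
            v[k] = -s * R[j][k] + c * v[k]
            R[j][k] = t
    # Apply the forgetting factor: R_k = alpha * R_k.
    return [[alpha * x for x in r] for r in R]
```

After k updates starting from R_0 = zeros(n,n), R_k'R_k equals the weighted Gram matrix of the rows, with row i weighted by α^{2(k−i+1)}, matching the equivalence stated in the regularized case below with λ = 0.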
Q-less QR Decomposition with Forgetting Factor and Tikhonov Regularization
The upper-triangular factor R_{k} after processing the k^{th} input A(k,:) is computed using the following iteration.
$$\begin{array}{c}{R}_{0}=\lambda {I}_{n}\\ \left[\sim ,{R}_{1}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{0}\\ A\left(1,:\right)\end{array}\right],0\right)\\ {R}_{1}=\alpha {R}_{1}\\ \left[\sim ,{R}_{2}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{1}\\ A\left(2,:\right)\end{array}\right],0\right)\\ {R}_{2}=\alpha {R}_{2}\\ \vdots \\ \left[\sim ,{R}_{k}\right]=\mathrm{qr}\left(\left[\begin{array}{c}{R}_{k-1}\\ A\left(k,:\right)\end{array}\right],0\right)\\ {R}_{k}=\alpha {R}_{k}\\ \vdots \end{array}$$
This is mathematically equivalent to computing the upper-triangular factor R_{k} of matrix A_{k}, defined as follows, though the block never actually creates A_{k}.
$${A}_{k}=\left[\begin{array}{c}{\alpha}^{k}\lambda {I}_{n}\\ \left[\begin{array}{cccc}{\alpha}^{k}& & & \\ & {\alpha}^{k-1}& & \\ & & \ddots & \\ & & & \alpha \end{array}\right]A\left(1:k,:\right)\end{array}\right]$$
Forward and Backward Substitution
When an upper-triangular factor is ready, forward and backward substitution are computed with the current input B to produce output X.
$$X={R}_{k}\backslash \left({R}_{k}^{\prime}\backslash B\right)$$
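The two triangular solves can be sketched as follows. This is a hypothetical helper for illustration, not a port of the block: a forward substitution with R' (lower triangular), followed by a backward substitution with R.

```python
def solve_qless(R, b):
    r"""Solve (R'R)x = b by forward, then backward, substitution.

    Equivalent to X = R \ (R' \ B) in the text. R is an n-by-n
    upper-triangular factor (list of lists); b is a length-n list.
    Hypothetical helper, not the block's implementation.
    """
    n = len(R)
    # Forward substitution: R'y = b (R' is lower triangular).
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(R[k][i] * y[k] for k in range(i))) / R[i][i]
    # Backward substitution: Rx = y.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x
```

For example, with R = [[2, 1], [0, 3]], R'R = [[4, 2], [2, 10]], and b = [6, 12] recovers x = [1, 1].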
Choosing the Implementation Method
Partial-systolic implementations prioritize speed of computation over space constraints, while burst implementations prioritize space constraints at the expense of speed. The following table illustrates the trade-offs between the implementations available for matrix decompositions and solving systems of linear equations.
Implementation | Ready | Latency | Area
Systolic | C | O(n) | O(mn^{2})
Partial-Systolic | C | O(m) | O(n^{2})
Partial-Systolic with Forgetting Factor | C | O(n) | O(n^{2})
Burst | O(n) | O(mn^{2}) | O(n)
In the table, C is a constant proportional to the word length of the data, m is the number of rows in matrix A, and n is the number of columns in matrix A.
For additional considerations in selecting a block for your application, see Choose a Block for HDL-Optimized Fixed-Point Matrix Operations.
AMBA AXI Handshake Process
This block uses the AMBA AXI handshake protocol [1]. The valid/ready handshake process is used to transfer data and control information. This two-way control mechanism allows both the manager and subordinate to control the rate at which information moves between manager and subordinate. A valid signal indicates when data is available. The ready signal indicates that the block can accept the data. Transfer of data occurs only when both the valid and ready signals are high.
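As a minimal illustration of that rule (not generated HDL), a cycle-by-cycle model of which beats transfer might look like this; the function name is hypothetical.

```python
def axi_transfers(valid, ready, data):
    """Minimal model of the AXI valid/ready handshake rule.

    A beat of data transfers on a cycle only when both valid and
    ready are high; all other cycles transfer nothing. Hypothetical
    helper for illustration only.
    """
    return [d for v, r, d in zip(valid, ready, data) if v and r]
```

For example, with valid = [1, 1, 0, 1] and ready = [0, 1, 1, 1], only the second and fourth beats transfer.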
Block Timing
The Partial-Systolic QR Decomposition with Forgetting Factor blocks accept and process the matrix A row by row. After accepting the first m rows, the block starts to output the R matrix as a single vector. From this point on, the block calculates an R matrix for each row input. The partial-systolic implementation uses a pipelined structure, so the block can accept new matrix inputs before outputting the result of the current matrix.
For example, assume that the input matrix A is 3-by-3. Additionally, assume that validIn asserts before ready, meaning that the upstream data source is faster than the Q-less QR decomposition. In the figure, A1r1 is the first row of the first A matrix, R1 is the first R matrix, and so on.
validIn to ready — From a successful row input to the block being ready to accept the next row.
validIn to validOut — From a successful row input to the block starting to output the corresponding solution.
The following table provides details of the timing for the Partial-Systolic Q-less QR Decomposition with Forgetting Factor blocks.
Block | validIn to ready (cycles) | validIn to validOut (cycles)
Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor | wl + 7 | (wl + 6)*n + 3
Complex Partial-Systolic Q-less QR Decomposition with Forgetting Factor | wl + 9 | (wl + 7.5)*2*n + 3
In the table, n is the number of columns in matrix A, and wl represents the word length of A.
If the data type of A is fixed point, then wl is the word length.
If the data type of A is double, then wl is 53.
If the data type of A is single, then wl is 24.
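The table's formulas for the real block can be evaluated directly. The sketch below is a convenience helper (the name is hypothetical), applying the cycle counts from the table above.

```python
def timing_real_qless_ff(wl, n):
    """Cycle counts from the block-timing table (real block row).

    wl is the word length of A: the fixed-point word length, 53 for
    double, or 24 for single. n is the number of columns of A.
    """
    valid_in_to_ready = wl + 7
    valid_in_to_valid_out = (wl + 6) * n + 3
    return valid_in_to_ready, valid_in_to_valid_out
```

For an sfix18 input with n = 10, this gives 25 cycles from validIn to ready and 243 cycles from validIn to validOut.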
Hardware Resource Utilization
This block supports HDL code generation using the Simulink^{®} HDL Workflow Advisor. For an example, see HDL Code Generation and FPGA Synthesis from Simulink Model (HDL Coder) and Implement Digital Downconverter for FPGA (DSP HDL Toolbox).
In R2023a: The table below shows a summary of the resource utilization results.
This example data was generated by synthesizing the block on a Xilinx^{®} Zynq^{®}-7000 ZC706 evaluation board (-2 speed grade).
The following parameters were used for synthesis.
Block parameters:
m = 10
n = 10
p = 1
Matrix A dimension: 10-by-10
Matrix B dimension: 10-by-1
Input data type:
sfix18_En12
Resource | Usage
LUT | 30190
LUTRAM | 10
Flip Flop | 17570
BRAM | 31
In R2022b: The following tables show the post place-and-route resource utilization results and timing summary, respectively.
This example data was generated by synthesizing the block on a Xilinx Zynq UltraScale+™ RFSoC ZCU111 evaluation board. The synthesis tool was Vivado^{®} v.2020.2 (win64).
The following parameters were used for synthesis.
Block parameters:
n = 16
p = 1
Matrix A dimension: inf-by-16
Matrix B dimension: 16-by-1
Input data type:
sfix16_En14
Target frequency: 300 MHz
Resource | Usage | Available | Utilization (%)
CLB LUTs | 112218 | 425280 | 26.39
CLB Registers | 77563 | 850560 | 9.12
DSPs | 0 | 4272 | 0.00
Block RAM Tile | 0 | 1080 | 0.00
URAM | 0 | 80 | 0.00
Metric | Value
Requirement | 3.3333 ns
Data Path Delay | 3.191 ns
Slack | 0.125 ns
Clock Frequency | 311.69 MHz
References
[1] "AMBA AXI and ACE Protocol Specification Version E." https://developer.arm.com/documentation/ihi0022/e/AMBA-AXI3-and-AXI4-Protocol-Specification/Single-Interface-Requirements/Basic-read-and-write-transactions/Handshake-process
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using Simulink® Coder™.
Slope-bias representation is not supported for fixed-point data types.
HDL Code Generation
Generate VHDL, Verilog and SystemVerilog code for FPGA and ASIC designs using HDL Coder™.
HDL Coder™ provides additional configuration options that affect HDL implementation and synthesized logic.
This block has one default HDL architecture.
General
ConstrainedOutputPipeline | Number of registers to place at the outputs by moving existing delays within your design. Distributed pipelining does not redistribute these registers. The default is 0.
InputPipeline | Number of input pipeline stages to insert in the generated code. Distributed pipelining and constrained output pipelining can move these registers. The default is 0.
OutputPipeline | Number of output pipeline stages to insert in the generated code. Distributed pipelining and constrained output pipelining can move these registers. The default is 0.
Supports fixed-point data types only.
Version History
Introduced in R2020b
R2023a: Smart unrolling for improved resource utilization
When you update the diagram, the loop that composes the partial-systolic pipeline is unrolled. This updated internal architecture removes dead operations in simulation and generated code, resulting in a significant decrease in the number of hardware resources required. This block simulates with clock- and bit-true fidelity with respect to library versions of these blocks in previous releases.
Resource | R2022b | R2023a
LUT | 55482 | 30190
LUTRAM | 10 | 10
Flip Flop | 32375 | 17570
BRAM | 45 | 31
This example data was generated by synthesizing the block on a Xilinx Zynq-7000 ZC706 evaluation board (-2 speed grade).
The following parameters were used for synthesis.
Block parameters:
m = 10
n = 10
p = 1
Matrix A dimension: 10-by-10
Matrix B dimension: 10-by-1
Input data type:
sfix18_En12
R2022a: Support for Tikhonov regularization parameter
The Real Partial-Systolic Q-less QR Decomposition with Forgetting Factor block now supports the Regularization parameter for Tikhonov regularization.