Fast Wavelet Transform (FWT) Algorithm
In 1988, Mallat produced a fast wavelet decomposition and reconstruction algorithm [1]. The Mallat algorithm for discrete wavelet transform (DWT) is, in fact, a classical scheme in the signal processing community, known as a two-channel subband coder using conjugate quadrature filters or quadrature mirror filters (QMFs).
The decomposition algorithm starts with signal s, next calculates the coordinates of A_{1} and D_{1}, and then those of A_{2} and D_{2}, and so on.
The reconstruction algorithm, called the inverse discrete wavelet transform (IDWT), starts from the coordinates of A_{J} and D_{J}, next calculates the coordinates of A_{J–1}, and then, using the coordinates of A_{J–1} and D_{J–1}, calculates those of A_{J–2}, and so on.
Filters Used to Calculate the DWT and IDWT
For an orthogonal wavelet, in the multiresolution framework, we start with the scaling function φ and the wavelet function ψ. One of the fundamental relations is the twin-scale relation (dilation equation or refinement equation):
$$\frac{1}{2}\varphi \left(\frac{x}{2}\right)={\displaystyle \sum _{n\in Z}{w}_{n}}\varphi (x-n)$$
All the filters used in DWT and IDWT are intimately related to the sequence
(w_{n})_{n∊Z}
Clearly if φ is compactly supported, the sequence (w_{n}) is finite and can be viewed as a filter. The filter W, which is called the scaling filter (nonnormalized), is
Finite Impulse Response (FIR)
Of length 2N
Of sum 1
Of norm $$\frac{1}{\sqrt{2}}$$
A low-pass filter
For example, for the db3 scaling filter:

load db3
db3
db3 =
    0.2352    0.5706    0.3252   -0.0955   -0.0604    0.0249
sum(db3)
ans =
    1.0000
norm(db3)
ans =
    0.7071
From filter W, we define four FIR filters, of length 2N and of norm 1, organized as follows.
Filters | Low-Pass | High-Pass |
---|---|---|
Decomposition | Lo_D | Hi_D |
Reconstruction | Lo_R | Hi_R |
The four filters are computed using the following scheme: Lo_R = $$\sqrt{2}\,W$$ (the normalized scaling filter), Hi_R = qmf(Lo_R), Lo_D = wrev(Lo_R), and Hi_D = wrev(Hi_R), where qmf is such that Hi_R and Lo_R are quadrature mirror filters (i.e., Hi_R(k) = (–1)^{k} Lo_R(2N + 1 – k) for k = 1, 2, ..., 2N) and wrev flips the filter coefficients. So Hi_D and Lo_D are also quadrature mirror filters. The computation of these filters is performed using orthfilt. Next, we illustrate these properties with the db6 wavelet.
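This filter-derivation scheme can be sketched in plain Python (an illustration only, not the toolbox code; the function name `orth_filters` and the Haar example are ours):

```python
import math

def orth_filters(w):
    """Derive the four orthogonal filters from a scaling filter w
    (FIR, length 2N, sum 1, norm 1/sqrt(2)), per the scheme above."""
    lo_r = [math.sqrt(2) * c for c in w]      # Lo_R = sqrt(2)*W, now of norm 1
    n2 = len(lo_r)                            # filter length 2N
    # qmf: Hi_R(k) = (-1)^k * Lo_R(2N + 1 - k) for k = 1..2N (1-based)
    hi_r = [(-1) ** k * lo_r[n2 - k] for k in range(1, n2 + 1)]
    lo_d = lo_r[::-1]                         # wrev(Lo_R)
    hi_d = hi_r[::-1]                         # wrev(Hi_R)
    return lo_d, hi_d, lo_r, hi_r

# Haar scaling filter: sum 1, norm 1/sqrt(2)
Lo_D, Hi_D, Lo_R, Hi_R = orth_filters([0.5, 0.5])
```

The derived pair is orthogonal (Lo_R and Hi_R have zero inner product), and the decomposition filters are the flipped reconstruction filters, as stated above.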
Load the Daubechies’ extremal phase scaling filter and plot the coefficients.
load db6;
subplot(421);
stem(db6,'markerfacecolor',[0 0 1]);
title('Original scaling filter');
Use orthfilt to return the analysis (decomposition) and synthesis (reconstruction) filters.

[Lo_D,Hi_D,Lo_R,Hi_R] = orthfilt(db6);
Obtain the discrete Fourier transforms (DFT) of the lowpass and highpass analysis filters. Plot the modulus of the DFT.
LoDFT = fft(Lo_D,64);
HiDFT = fft(Hi_D,64);
freq = -pi+(2*pi)/64:(2*pi)/64:pi;
subplot(427);
plot(freq,fftshift(abs(LoDFT)));
set(gca,'xlim',[-pi,pi]);
xlabel('Radians/sample');
title('DFT Modulus - Lowpass Filter')
subplot(428);
plot(freq,fftshift(abs(HiDFT)));
set(gca,'xlim',[-pi,pi]);
xlabel('Radians/sample');
title('Highpass Filter');
Algorithms
Given a signal s of length N, the DWT consists of log_{2}N stages at most. Starting from s, the first step produces two sets of coefficients: approximation coefficients cA_{1}, and detail coefficients cD_{1}. These vectors are obtained by convolving s with the low-pass filter Lo_D for approximation, and with the high-pass filter Hi_D for detail, followed by dyadic decimation.
More precisely, the first step convolves s with each filter and downsamples the result by two. If the length of each filter is equal to 2L, convolving a length-N signal with a length-2L filter produces a signal of length N + 2L – 1, so the intermediate signals F and G are of length N + 2L – 1. After downsampling by 2, the coefficient vectors cA_{1} and cD_{1} are of length
$$\left\lfloor \frac{N-1}{2}\right\rfloor +L.$$
The next step splits the approximation coefficients cA_{1} in two parts using the same scheme, replacing s by cA_{1} and producing cA_{2} and cD_{2}, and so on.
So the wavelet decomposition of the signal s analyzed at level j has the following structure: [cA_{j}, cD_{j}, ..., cD_{1}].
For J = 3, this structure contains the terminal nodes of the three-level decomposition tree.
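The convolve-and-downsample step described above can be sketched in plain Python (illustrative only; `conv_full` and `dwt_step` are our names, and the Haar filters in the example stand in for Lo_D and Hi_D):

```python
import math

def conv_full(x, f):
    """Full convolution: a length-N signal with a length-2L filter
    gives N + 2L - 1 output samples."""
    n, m = len(x), len(f)
    return [sum(x[k] * f[i - k]
                for k in range(max(0, i - m + 1), min(n, i + 1)))
            for i in range(n + m - 1)]

def dwt_step(s, lo_d, hi_d):
    """One DWT step: convolve with Lo_D and Hi_D, then downsample by 2.
    Each output has floor((N - 1)/2) + L coefficients."""
    cA = conv_full(s, lo_d)[1::2]   # dyadic decimation
    cD = conv_full(s, hi_d)[1::2]
    return cA, cD

# Haar analysis filters as stand-ins for Lo_D and Hi_D
r2 = math.sqrt(2)
cA1, cD1 = dwt_step([1, 2, 3, 4], [1/r2, 1/r2], [1/r2, -1/r2])
```

With N = 4 and L = 1, each coefficient vector has floor(3/2) + 1 = 2 entries, and the total energy of cA_{1} and cD_{1} equals that of s, as expected for orthogonal filters.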
Conversely, starting from cA_{j} and cD_{j}, the IDWT reconstructs cA_{j–1}, inverting the decomposition step by inserting zeros and convolving the results with the reconstruction filters.
For images, a similar algorithm is possible for two-dimensional wavelets and scaling functions obtained from 1-D wavelets by tensorial product.
This kind of 2-D DWT leads to a decomposition of approximation coefficients at level j in four components: the approximation at level j + 1 and the details in three orientations (horizontal, vertical, and diagonal).
The following charts describe the basic decomposition and reconstruction steps for images.
So, for J = 2, the 2-D wavelet tree has the following form.
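The separable 2-D step can be sketched in plain Python (illustrative; the names are ours, Haar filters stand in for Lo_D and Hi_D, and the assignment of the horizontal/vertical labels is one common convention):

```python
import math

def conv_ds(x, f):
    """Convolve a 1-D sequence with filter f, then downsample by 2."""
    full = [sum(x[k] * f[i - k]
                for k in range(max(0, i - len(f) + 1), min(len(x), i + 1)))
            for i in range(len(x) + len(f) - 1)]
    return full[1::2]

def rows(M, f):
    """Filter and downsample each row of matrix M."""
    return [conv_ds(r, f) for r in M]

def cols(M, f):
    """Filter and downsample each column: transpose, filter rows, transpose back."""
    return [list(c) for c in zip(*rows([list(c) for c in zip(*M)], f))]

def dwt2_step(X, lo, hi):
    """One separable 2-D DWT step: rows first, then columns, producing the
    approximation and the three oriented details."""
    L, H = rows(X, lo), rows(X, hi)
    cA = cols(L, lo)   # low rows,  low cols  -> approximation
    cH = cols(L, hi)   # low rows,  high cols -> horizontal detail
    cV = cols(H, lo)   # high rows, low cols  -> vertical detail
    cD = cols(H, hi)   # high rows, high cols -> diagonal detail
    return cA, cH, cV, cD

r2 = math.sqrt(2)
cA, cH, cV, cD = dwt2_step([[1, 1], [1, 1]], [1/r2, 1/r2], [1/r2, -1/r2])
```

For a constant image, all three detail subbands vanish and only the approximation is nonzero, which matches the interpretation of the four components above.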
Finally, let us mention that, for biorthogonal wavelets, the same algorithms hold but the decomposition filters on one hand and the reconstruction filters on the other hand are obtained from two distinct scaling functions associated with two multiresolution analyses in duality.
In this case, the filters for decomposition and reconstruction are, in general, of different odd lengths. This situation occurs, for example, for “splines” biorthogonal wavelets used in the toolbox. By zero-padding, the four filters can be extended in such a way that they will have the same even length.
Why Does Such an Algorithm Exist?
The previous paragraph describes algorithms designed for finite-length signals or images. To understand the rationale, we must consider infinite-length signals. The methods for the extension of a given finite-length signal are described in Border Effects.
Let us denote h = Lo_R and g = Hi_R and focus on the 1-D case.
We first justify how to go from level j to level j+1, for the approximation vector. This is the main step of the decomposition algorithm for the computation of the approximations. The details are calculated in the same way using the filter g instead of filter h.
Let $$({A}_{k}^{(j)})_{k\in Z}$$ be the coordinates of the vector A_{j}:
$${A}_{j}={\displaystyle \sum _{k}{A}_{k}{}^{(j)}{\varphi}_{j,k}}$$
and $$({A}_{k}^{(j+1)})_{k\in Z}$$ the coordinates of the vector A_{j+1}:
$${A}_{j+1}={\displaystyle \sum _{k}{A}_{k}{}^{(j+1)}{\varphi}_{j+1,k}}$$
A_{k}^{(j+1)} is calculated using the formula
$${A}_{k}{}^{(j+1)}={\displaystyle \sum _{n}{h}_{n-2k}{A}_{n}{}^{(j)}}$$
This formula resembles a convolution formula.
The computation is very simple.
Let us define
$$\tilde{h}(k)=h(-k),\quad {F}_{k}^{(j+1)}={\displaystyle \sum _{n}{\tilde{h}}_{k-n}{A}_{n}^{(j)}}.$$
The sequence F^{(j+1)} is the filtered output of the sequence A^{(j)} by the filter $$\tilde{h}$$.
We obtain
$${A}_{k}^{(j+1)}={F}_{2k}^{(j+1)}$$
We have to take the even index values of F. This is downsampling.
The sequence A^{(j+1)} is the downsampled version of the sequence F^{(j+1)}.
The initialization is carried out using A_{k}^{(0)} = s(k), where s(k) is the signal value at time k.
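This filtering-then-downsampling derivation can be checked numerically with a direct implementation of the formula (a sketch; the Haar filter stands in for h, with h_{n} = √2 w_{n} and w = [0.5, 0.5]):

```python
import math

def analysis_step(A, h):
    """A_k^{(j+1)} = sum_n h_{n-2k} * A_n^{(j)}: filter by h~ (h flipped),
    then keep the even-indexed outputs (downsampling).
    A and h are finite lists; out-of-range terms are treated as zero."""
    N, M = len(A), len(h)
    # F_k = sum_n h~_{k-n} A_n = sum_n h_{n-k} A_n, nonzero for
    # k = -(M-1), ..., N-1 under zero extension
    F = {k: sum(h[n - k] * A[n] for n in range(max(0, k), min(N, k + M)))
         for k in range(-(M - 1), N)}
    # A^{(j+1)} is the downsampled sequence: F at the even indices
    return [F[2 * k] for k in range(-((M - 1) // 2), (N + 1) // 2)]

h = [1 / math.sqrt(2)] * 2          # Haar filter: h_n = sqrt(2) * w_n
A1 = analysis_step([1, 2, 3, 4], h)
```

The result agrees with convolving by the flipped filter and keeping every other sample, as the derivation states.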
There are several reasons for this surprising result, all of which are linked to the multiresolution situation and to a few of the properties of the functions φ_{j,k} and ψ_{j,k}.
Let us now describe some of them.
The family $$({\varphi}_{0,k},k\in Z)$$ is formed of orthonormal functions. As a consequence, for any j, the family $$({\varphi}_{j,k},k\in Z)$$ is orthonormal.
The double indexed family
$$({\psi}_{j,k},j\in Z,k\in Z)$$
is orthonormal.
For any j, the $$({\varphi}_{j,k},k\in Z)$$ are orthogonal to $$({\psi}_{{j}^{\prime},k},{j}^{\prime}\le j,k\in Z)$$.
Between two successive scales, we have a fundamental relation, called the twin-scale relation.
Twin-Scale Relation for $$\varphi $$

$${\varphi}_{1,0}={\displaystyle \sum _{k\in Z}{h}_{k}{\varphi}_{0,k}}$$
$${\varphi}_{j+1,0}={\displaystyle \sum _{k\in Z}{h}_{k}{\varphi}_{j,k}}$$
This relation introduces the algorithm's h filter $$({h}_{n}=\sqrt{2}\,{w}_{n})$$. For more information, see Filters Used to Calculate the DWT and IDWT.
We check that:
The coordinate of φ_{j+1,0} on φ_{j,k} is h_{k} and does not depend on j.
The coordinate of φ_{j+1,n} on φ_{j,k} is equal to $$\langle {\varphi}_{j+1,n},{\varphi}_{j,k}\rangle ={h}_{k-2n}$$.
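The second relation can be checked from the twin-scale relation by a shift of index (a sketch, with the convention $$\varphi_{j,k}(x)=2^{-j/2}\varphi(2^{-j}x-k)$$, so that $$\varphi_{j,k}(x-2^{\,j+1}n)=\varphi_{j,k+2n}(x)$$):

```latex
\varphi_{j+1,n}(x) = \varphi_{j+1,0}\!\left(x - 2^{\,j+1}n\right)
  = \sum_{k\in Z} h_k\,\varphi_{j,k}\!\left(x - 2^{\,j+1}n\right)
  = \sum_{k\in Z} h_k\,\varphi_{j,k+2n}(x)
  = \sum_{k'\in Z} h_{k'-2n}\,\varphi_{j,k'}(x)
```

so, by orthonormality of the family $$({\varphi}_{j,k'},k'\in Z)$$, the coordinate $$\langle {\varphi}_{j+1,n},{\varphi}_{j,k}\rangle $$ is $${h}_{k-2n}$$.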
These relations supply the ingredients for the algorithm.
Up to now we have used the filter h. The high-pass filter g appears in the twin-scale relation linking the ψ and φ functions. Between two successive scales, we have the following fundamental twin-scale relation.
Twin-Scale Relation Between $$\psi $$ and $$\varphi $$
$${\psi}_{1,0}={\displaystyle \sum _{k\in Z}{g}_{k}{\varphi}_{0,k}}$$
$${\psi}_{j+1,0}={\displaystyle \sum _{k\in Z}{g}_{k}{\varphi}_{j,k}}$$
After the decomposition step, we now justify the reconstruction algorithm by building it. Let us simplify the notation: starting from A_{1} and D_{1}, we study A_{0} = A_{1} + D_{1}. The procedure is the same for calculating A_{j} = A_{j+1} + D_{j+1}.
Let us define a_{n}, δ_{n}, and $${a}_{k}^{0}$$ by
$${A}_{1}={\displaystyle \sum _{n}{a}_{n}{\varphi}_{1,n}},\qquad {D}_{1}={\displaystyle \sum _{n}{\delta}_{n}{\psi}_{1,n}},\qquad {A}_{0}={\displaystyle \sum _{k}{a}_{k}^{0}{\varphi}_{0,k}}$$
Let us compute the $${a}_{k}^{0}$$ coordinates as
$$\begin{array}{l}{a}_{k}^{0}=\langle {A}_{0},{\varphi}_{0,k}\rangle =\langle {A}_{1}+{D}_{1},{\varphi}_{0,k}\rangle =\langle {A}_{1},{\varphi}_{0,k}\rangle +\langle {D}_{1},{\varphi}_{0,k}\rangle \\ ={\displaystyle \sum _{n}{a}_{n}}\langle {\varphi}_{1,n},{\varphi}_{0,k}\rangle +{\displaystyle \sum _{n}{\delta}_{n}}\langle {\psi}_{1,n},{\varphi}_{0,k}\rangle \\ ={\displaystyle \sum _{n}{a}_{n}{h}_{k-2n}}+{\displaystyle \sum _{n}{\delta}_{n}{g}_{k-2n}}\end{array}$$
We will focus our study on the first sum $${\sum}_{n}{a}_{n}{h}_{k-2n}$$; the second sum $${\sum}_{n}{\delta}_{n}{g}_{k-2n}$$ is handled in a similar manner.
The calculations are easily organized if we note that (taking k = 0 in the previous formulas simplifies things):
$$\begin{array}{l}{\displaystyle \sum _{n}{a}_{n}{h}_{-2n}=\dots +{a}_{-1}{h}_{2}+}{a}_{0}{h}_{0}+{a}_{1}{h}_{-2}+{a}_{2}{h}_{-4}+\dots \\ =\dots +{a}_{-1}{h}_{2}+0{h}_{1}+{a}_{0}{h}_{0}+0{h}_{-1}+{a}_{1}{h}_{-2}+0{h}_{-3}+{a}_{2}{h}_{-4}+\dots \end{array}$$
If we transform the $$({a}_{n})$$ sequence into a new sequence $$({\tilde{a}}_{n})$$ defined by

..., a_{–1}, 0, a_{0}, 0, a_{1}, 0, a_{2}, 0, ...

that is precisely

$${\tilde{a}}_{2n}={a}_{n},\quad {\tilde{a}}_{2n+1}=0$$
Then
$${\displaystyle \sum _{n}{a}_{n}{h}_{-2n}}={\displaystyle \sum _{n}{\tilde{a}}_{n}{h}_{-n}}$$
and by extension
$${\displaystyle \sum _{n}{a}_{n}{h}_{k-2n}}={\displaystyle \sum _{n}{\tilde{a}}_{n}{h}_{k-n}}$$
Since
$${a}_{k}^{0}={\displaystyle \sum _{n}{\tilde{a}}_{n}{h}_{k-n}}+{\displaystyle \sum _{n}{\tilde{\delta}}_{n}{g}_{k-n}}$$
the reconstruction steps are:
Replace the a and δ sequences by upsampled versions $$\tilde{a}$$ and $$\tilde{\delta}$$ by inserting zeros.
Filter by h and g respectively.
Sum the obtained sequences.
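These three steps can be sketched in plain Python (illustrative; `upsample` and `idwt_step` are our names, the Haar filter pair stands in for h = Lo_R and g = Hi_R, and for longer filters the toolbox additionally keeps only a central portion of the result):

```python
import math

def upsample(x):
    """Insert a zero after each sample: x~_{2n} = x_n, x~_{2n+1} = 0."""
    y = []
    for v in x:
        y += [v, 0.0]
    return y[:-1]                     # drop the trailing zero

def conv_full(x, f):
    """Full convolution of x with filter f."""
    n, m = len(x), len(f)
    return [sum(x[k] * f[i - k]
                for k in range(max(0, i - m + 1), min(n, i + 1)))
            for i in range(n + m - 1)]

def idwt_step(cA, cD, h, g):
    """Upsample the a and delta sequences, filter by h and g, and sum."""
    a = conv_full(upsample(cA), h)
    d = conv_full(upsample(cD), g)
    return [u + v for u, v in zip(a, d)]

# Round trip with Haar filters: coefficients of [1, 2, 3, 4], then reconstruct
r2 = math.sqrt(2)
cA1 = [3/r2, 7/r2]                    # approximation coefficients
cD1 = [1/r2, 1/r2]                    # detail coefficients
s = idwt_step(cA1, cD1, [1/r2, 1/r2], [-1/r2, 1/r2])
```

For the Haar pair the round trip is exact without any trimming, recovering [1, 2, 3, 4].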
1-D Wavelet Capabilities
Basic 1-D Objects
Objects | | Description |
---|---|---|
Signal in original time | s | Original signal |
 | A_{k}, 0 ≤ k ≤ j | Approximation at level k |
 | D_{k}, 1 ≤ k ≤ j | Detail at level k |
Coefficients in scale-related time | cA_{k}, 1 ≤ k ≤ j | Approximation coefficients at level k |
 | cD_{k}, 1 ≤ k ≤ j | Detail coefficients at level k |
 | [cA_{j}, cD_{j}, ..., cD_{1}] | Wavelet decomposition at level j, j ≥ 1 |
Analysis-Decomposition Capabilities
Purpose | Input | Output | File |
---|---|---|---|
Single-level decomposition | s | cA_{1}, cD_{1} | dwt |
Single-level decomposition | cA_{j} | cA_{j+1}, cD_{j+1} | dwt |
Decomposition | s | [cA_{j}, cD_{j}, ..., cD_{1}] | wavedec |
Synthesis-Reconstruction Capabilities
Purpose | Input | Output | File |
---|---|---|---|
Single-level reconstruction | cA_{1}, cD_{1} | s or A_{0} | idwt |
Single-level reconstruction | cA_{j+1}, cD_{j+1} | cA_{j} | idwt |
Full reconstruction | [cA_{j}, cD_{j}, ..., cD_{1}] | s or A_{0} | waverec |
Selective reconstruction | [cA_{j}, cD_{j}, ..., cD_{1}] | A_{l}, D_{m} | wrcoef |
Decomposition Structure Utilities
Purpose | Input | Output | File |
---|---|---|---|
Extraction of detail coefficients | [cA_{j}, cD_{j}, ..., cD_{1}] | cD_{k}, 1 ≤ k ≤ j | detcoef |
Extraction of approximation coefficients | [cA_{j}, cD_{j}, ..., cD_{1}] | cA_{k}, 0 ≤ k ≤ j | appcoef |
Recomposition of the decomposition structure | [cA_{j}, cD_{j}, ..., cD_{1}] | [cA_{k}, cD_{k}, ..., cD_{1}] 1 ≤ k ≤ j | upwlev |
To illustrate command-line mode for 1-D capabilities, see 1-D Analysis Using the Command Line.
2-D Wavelet Capabilities
Basic 2-D Objects
Objects | | Description |
---|---|---|
Image in original resolution | s | Original image |
 | A_{0} | Approximation at level 0 |
 | A_{k}, 1 ≤ k ≤ j | Approximation at level k |
 | D_{k}, 1 ≤ k ≤ j | Details at level k |
Coefficients in scale-related resolution | cA_{k}, 1 ≤ k ≤ j | Approximation coefficients at level k |
 | cD_{k}, 1 ≤ k ≤ j | Detail coefficients at level k |
 | [cA_{j}, cD_{j}, ..., cD_{1}] | Wavelet decomposition at level j |
D_{k} stands for $$\left[{D}_{k}{}^{(h)},{D}_{k}{}^{(v)},{D}_{k}{}^{(d)}\right]$$, the horizontal, vertical, and diagonal details at level k.
The same holds for cD_{k}, which stands for $$\left[c{D}_{k}{}^{(h)},c{D}_{k}{}^{(v)},c{D}_{k}{}^{(d)}\right]$$.
The 2-D files are the same as those for the 1-D case, but with a 2 appended to the end of the command. For example, idwt becomes idwt2.
For more information, see 1-D Wavelet Capabilities.
To illustrate command-line mode for 2-D capabilities, see Wavelet Image Analysis and Compression.
References
[1] Mallat, S. G. “A Theory for Multiresolution Signal Decomposition: The Wavelet Representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 11, Issue 7, July 1989, pp. 674–693.