This package contains functions implementing Principal Component Analysis (PCA) and Independent Component Analysis (ICA), along with multiple examples demonstrating their use.
In PCA, multidimensional data is projected onto the singular vectors corresponding to a few of its largest singular values. This operation effectively decomposes the input signal into orthogonal components in the directions of largest variance in the data. As a result, PCA is often used in dimensionality reduction applications, where it yields a low-dimensional representation of the data that can be inverted to closely reconstruct the original data.
In ICA, multidimensional data is decomposed into components that are maximally independent in an appropriate sense (kurtosis and negentropy, in this package). ICA differs from PCA in that the low-dimensional signals do not necessarily correspond to the directions of maximum variance; rather, the ICA components have maximal statistical independence. In practice, ICA can often uncover disjoint underlying trends in multidimensional data.
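The package itself is MATLAB, but the projection-and-reconstruction idea described above can be sketched in a few lines of NumPy (all variable names here are illustrative, not the package's API):

```python
import numpy as np

# Toy data: d = 5 variables, n = 200 samples
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 200))

# Center the data (PCA operates on mean-centered data)
mu = Z.mean(axis=1, keepdims=True)
Zc = Z - mu

# Project onto the r leading left singular vectors
# (the directions of largest variance)
r = 2
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
Zpca = U[:, :r].T @ Zc        # r x n low-dimensional representation

# Invert the projection to approximately reconstruct the original data
Zrec = U[:, :r] @ Zpca + mu   # d x n approximation of Z
```

The squared reconstruction error equals the energy in the discarded singular values, which is the sense in which the low-dimensional representation "closely reconstructs" the data.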
2.2.0.0  Improving implementation efficiency. 

2.1.0.0  Adding support for audioread() function in loadAudio() 

2.0.0.0  Adding support for kurtosis-based Fast ICA


1.4.0.0  Uploading .zip (omitted in last update) 

1.3.0.0  Improving documentation and code performance 

1.2.0.0  Updating myPCA() documentation 

1.1.0.0  Fixing bug in myMultiGaussian(). Needed to use lower triangular Cholesky factorization, not the upper triangular version. 
zachary
Hi Brian,
Thank you for sharing your code with us. I'm trying to apply PCA to whiten the m×n matrix Z, where the rows of Z represent temporal series and the columns are associated with space. In your code, you calculate the whitened Z as Zw = U * S^-0.5 * U' * Z, where U and S are the eigenvectors and eigenvalues. Alternatively, we can also get the whitened Zw_ = S^-0.5 * U' * Z; what's the difference?
In addition, if we want to whiten Z using just a few eigenvectors and eigenvalues, which can be obtained from PCA, how do we implement that? I have tried the following code, but the results are wrong: Zw = U(:,1:r) * S(1:r,1:r)^-0.5 * U(:,1:r)' * Z, where 1 to r is the order of the selected eigenvectors and eigenvalues. The final warning is that "T \ W' is close to singularity".
Looking forward to your reply, thanks!
Celik
Hello again
I found it here: A. Hyvärinen and E. Oja. A Fast Fixed-Point Algorithm for Independent Component Analysis. Neural Computation, 9(7):1483-1492, October 1997.
On page 5, equation 9, the equation is given as it is presented here. There is a scalar multiplication in the beginning of the equation, which was my main curiosity. Next sentence says that: `Therefore the scalar in eq(9) is not significant and its effect can be replaced by explicit normalization, or the projection of w on the unit sphere.`
This is the end of the question
Celik
Hello there
Although I have read the cited paper several times (chapters 4.2 and 4.2.1 in particular), there is no equation exactly as you presented it (the main equations are there), and I didn't understand how you calculate the kurtosis and the other measure, shown below as you present them:
``
if USE_KURTOSIS
    % Kurtosis
    G = 4 * Sk.^3;
    Gp = 12 * Sk.^2;
else
    % Negentropy
    G = Sk .* exp(-0.5 * Sk.^2);
    Gp = (1 - Sk.^2) .* exp(-0.5 * Sk.^2);
end
W = (G * Zcw') / n - bsxfun(@times,mean(Gp,2),W);
``
Could you please clarify how you chose the coefficients of G and Gp, and what they represent?
Next, why did you subtract them to calculate the matrix W? In which part of the paper are these calculations mentioned?
It would be fantastic if you could make them clearer.
Thanks.
Carlos Andrés Ramos Romero
TKS!!!
omar khalifa
Hi Brian,
Thanks for sharing your code.
My question is: I want to use ICA on EEG signals to remove artifacts, and my data matrix (m x n) contains the channels as columns and their outputs as rows.
First, can we consider the number of channels to be the n samples of d-dimensional data you mention in the help section, with the channels' output data as d?
Second, can r, the number of independent components to compute, be any number we want?
I tried to use it as I mentioned above, and my result matrix contains very small values such as (0.00221, 0.0171). Is that right or wrong?
Nur Qamarina
Hi Brian. Can I know how to apply this in OFDM to reduce PAPR?
Yu Yang
Hi Brian,
Excellent package!
Just found a small bug. In demo_ICA, when adding noise, the codes:
sigma = @(SNR,X) exp(-SNR / 20) * (norm(X(:)) / sqrt(numel(X)));
should be changed to sigma = @(SNR,X) 10^(-SNR / 20) * (norm(X(:)) / sqrt(numel(X)));
if the SNR is in dB units.
You can check the result with MATLAB's snr function:
SNR = 50;
sigma = @(SNR,X) 10^(-SNR / 20) * (norm(X(:)) / sqrt(numel(X)));
Znoisy = Ztrue + sigma(SNR,Ztrue) * randn(size(Ztrue));
snr(Ztrue, Znoisy - Ztrue),
% The output is about 50
But the original "sigma = @(SNR,X) exp(-SNR / 20) * (norm(X(:)) / sqrt(numel(X)));" would give an output of about 20.
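The dB conversion above can be sanity-checked numerically outside MATLAB as well; here is a NumPy sketch of the same computation (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
SNR = 50  # target SNR in dB

# Noise level: 10^(-SNR/20) times the RMS value of the clean signal
sigma = lambda snr_db, x: 10 ** (-snr_db / 20) * (np.linalg.norm(x) / np.sqrt(x.size))

# Clean "signal" and its noisy version
Ztrue = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 10000))
Znoisy = Ztrue + sigma(SNR, Ztrue) * rng.normal(size=Ztrue.shape)

# Measured SNR in dB: 10*log10(signal power / noise power)
noise = Znoisy - Ztrue
measured = 10 * np.log10(np.sum(Ztrue**2) / np.sum(noise**2))
```

With the `10^(-SNR/20)` scaling, `measured` comes out near the requested 50 dB; with `exp(-SNR/20)` it would be roughly 20 log10(e) ≈ 21.7 dB at SNR = 50, matching the "about 20" that Yu Yang observed.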
Tahir Hassan
Dear Brian Moore,
Thank you for this package,
I have a question: is it possible to use your package for blind image separation (ICA)? My data is a mixed image from which I want to recover the original images. If yes, please provide me with some information.
Many thanks
Brian Moore
@myetceteramail the left singular vectors of W are the same as the eigenvectors of W * W', so the two are equivalent.
Brian Moore
@Jason. Great catch! I have no idea why I used permute() when a simple outer product would do the job. I've updated the code to use your better implementation.
myetceteramail
Why are you using [U, S, ~] = svd(R,'econ'); in fastICA to whiten the rows? Because we have to do an eigenvalue decomposition, not a singular value decomposition, as mentioned in https://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf. Can anybody explain?
Jason
Hi,
First, thanks for the code. I gave this a try because I was looking for something less GUI-based than the FastICA 2.5 package. When running this version, it was extremely slow. The culprit seems to be the use of permute. I may have dealt with it correctly or maybe not. Change the lines
Sk = permute(W * Zcw,[1, 3, 2]); to Sk = W * Zcw;
and
W = mean(bsxfun(@times,G,permute(Zcw,[3, 1, 2])),3) - ...
bsxfun(@times,mean(Gp,3),W);
to
W = n.\( G*Zcw' ) - bsxfun( @times, mean( G, 2 ), W );
where n is the 2nd dimension of Zcw, and it's > 100 times as fast.
K>> Sk = permute(W * Zcw,[1, 3, 2]);
K>> G = 4 * Sk.^3;
K>> Gp = 12 * Sk.^2;
K>> tic
K>> W = mean(bsxfun(@times,G,permute(Zcw,[3, 1, 2])),3) - ...
bsxfun(@times,mean(Gp,3),W);
K>> toc
Elapsed time is 191.121200 seconds.
K>> Sk=W*Zcw;
K>> G = 4 * Sk.^3;
K>> Gp = 12 * Sk.^2;
K>> tic
K>> W2 = n.\( G*Zcw' ) - bsxfun( @times, mean( G, 2 ), W );
K>> toc
Elapsed time is 0.377219 seconds.
On toy problems it's the same answer. For large problems like the one above it is of course not identical, but highly correlated.
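Jason's replacement gives the same answer because averaging n rank-1 outer products is algebraically identical to a single matrix product divided by n; a quick NumPy check of that identity (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r, d, n = 3, 5, 1000
G = rng.normal(size=(r, n))     # one column g_i per sample
Zcw = rng.normal(size=(d, n))   # one column z_i per sample

# Slow form: mean of n rank-1 outer products
# (what the original permute/bsxfun code computed)
W_slow = np.mean(G[:, None, :] * Zcw[None, :, :], axis=2)

# Fast form: a single r x d matrix product divided by n
W_fast = (G @ Zcw.T) / n
```

The fast form avoids materializing the r x d x n intermediate array, which is where the speedup comes from.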
myetceteramail
Sir, how do I apply ICA to an image?
Ridiculous2017
Hi, Brian!
I'm new to Matlab and signal processing. I want to separate the gastrointestinal noise from the heart sounds recorded by a digital stethoscope, can I use this algorithm?
Brian Moore
@Ming: The d=1 case isn't very interesting because you must choose r <= d
Ming Gui
Dear Brian, thanks for the toolbox.
If I have data with d=1 and unknown r, can I still use this fastICA?
Thank you
Brian Moore
@james: sorry for the delay. Hopefully you saw that I answered a similar question here some time ago.
ICA has inherent scaling and rotation ambiguity, so the input data has to be whitened (uncorrelated, unit variance) to obtain a unique solution. So the output components do not have any inherent ordering.
However, as per "help fastICA", if you do:
[Zica, A, T] = fastICA(Z,r);
W = T \ pinv(A);
then W is a matrix such that (up to a constant offset)
Z ≈ W * Zica
As such, you can interpret norm(W(:,k))^2 as the "power" of the kth independent component, because, when norm(W(:,k))^2 is large, Zica(k,:) has a large contribution to Z. You could use this to assign an ordering to your components.
I did a quick search on the topic and found the paper below, which proposes a similar procedure:
http://doc.utwente.nl/64279/1/27_17_hendrikse.pdf
Note that, in general, if you are interested in r < d components, I recommend directly trying smaller values of r rather than computing r = d components and then attempting to select components. In general, one expects high-dimensional data to be described by a few "signal" or "informative" components together with many "noise" or "nuisance" components. In ICA, the output components are made to be mutually orthogonal, so it is better to directly compute r < d components (which we hope are informative) rather than computing too many components (some "more" noisy) that interact in complicated ways when orthogonality is imposed.
Hope this helps.
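The ordering heuristic described above (ranking components by norm(W(:,k))^2) can be sketched in NumPy as follows; W and Zica here are illustrative stand-ins for the quantities in the reply, not the package's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 4, 3, 500
W = rng.normal(size=(d, r))     # reconstruction matrix, i.e. Z ≈ W @ Zica
Zica = rng.normal(size=(r, n))  # independent components (unit variance)

# "Power" of the k-th component: squared norm of the k-th column of W
power = np.sum(W**2, axis=0)

# Rank components from strongest to weakest contribution to Z
order = np.argsort(power)[::-1]
Zica_ordered = Zica[order, :]
```

Since each row of Zica has unit variance, the column norms of W are the only place where "how much of Z" each component explains can show up, which is why this is a sensible ordering.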
Brian Moore
@jyoti: sorry for the delay. r < d components is definitely possible (indeed, it is recommended). My interpretation of your question is that you're computing r < d components for purposes of compression, and later you want to reconstruct your signal. You can't reconstruct exactly, but, as per "help kICA", if you run:
[Zica, W, T, mu] = kICA(Z,r);
then
Zr = T \ W' * Zica + repmat(mu,1,n)
is the r-dimensional approximation of your input data, Z.
james djik
Hello
Thank you for sharing your code with us. I am trying to apply ICA on a time series data set which consists of 90 predictors. After obtaining the independent components (I chose 90 components) the goal is to use them as predictors in a linear regression.
I was wondering whether the independent components output by your code are ordered in some way, so I can use CV or ICs to select the best model (with fewer than 90 ICs). Alternatively, is there a recommended way to order the independent components?
Thank you for all your help
jyoti mundra
Hi Brian,
I appreciate your explanatory responses so far.
I just want to know: if there is a case where r < d, how should we proceed?
Because when I implement the r < d case with kICA.m, the W and T matrices are of size d*d (instead of r*d). I have to reconstruct my signal with r2 components only; how can I implement it?
Thanks in advance.
Brian Moore
@Federica: I'm not familiar with your particular application, but if you look at the code, you'll see that if you run
[Zica, W, T] = fastICA(Z,...);
then you get
Zica = W * T * (Z - meanZ)
where meanZ contains the row-wise means of Z. Here, W is the ICA weight matrix (I would call this the "mixing matrix"), and T is the pre-whitening transformation. So I would say that you are interested in either the columns of W or the columns of W * T. Maybe try both and see which seems to work better for you?
Federica Zonzini
@Brian. Forgive me for asking some questions. I'm trying to find the modal shapes in civil engineering using ICA, and I have read that they are contained in the columns of the mixing matrix. So, if I have read the previous comments correctly, W is the estimated mixing matrix computed by your code. Have I understood correctly? I'm asking because I'm doing some practice with ICA and I've run demo_ICA.m from the package. I've noticed that the mixing matrix A (used to mix the signals Znoisy) is different from the W matrix estimated by applying fastICA.m. Shall I de-whiten my matrix using T\W'?
Forgive me for my questions; I hope I've sufficiently explained my problem.
Brian Moore
@anurag. Type "help fastICA" to see how to use my ICA implementation. Basically you just call:
Zica = fastICA(Z,r);
where Z is the matrix whose ith row contains the EMG signal for the ith muscle, and r is the number of independent components you'd like to compute.
anurag sohane
Dear Brian,
Thanks for sharing your code.
I am new to MATLAB.
I want to apply ICA to EMG signals captured from different muscles.
I have all the signals in an Excel file, which I will load into MATLAB.
After that, I am stuck in your code. Can you please help me?
Brian Moore
@ShaneS: keep in mind that corrcoef(-x,y) = -corrcoef(x,y), so the sign probably flipped on the principal component (i.e., the entries of eigVecs(:,1) are probably negative). Principal components are only unique up to scaling. My interpretation of your results is that grey matter and blood flow have a strong negative correlation across brain regions.
Shane S
Hi Brian,
Thanks for the explanation again. I am slowly getting the hang of it.
[r_Zpca_GreyMatter p_BloodFlow] = corrcoef(Zpca_GreyMatter,Zpca_BloodFlow) shows a correlation of r = 0.6, p < 0.0001.
When I did a correlation between the Zpca_GreyMatter values and the raw grey matter volumes (averaged across the subjects), I got a correlation of r = -0.9, so I am still struggling to interpret what the Zpca values mean.
Can I infer that the strong positive correlation between Zpca_GreyMatter and Zpca_BloodFlow means that the "spatial patterns" are similar across the brain? Thanks a lot.
Brian Moore
@ShaneS: yes, mu is a d x 1 vector containing the mean across the rows of the input data, so average grey matter/blood flow for each patient in your case. It sounds like grey matter and blood flow are negatively correlated.
By using the first principal component of your data, you are capturing the dominant trend of your data (in the sense that it describes the most variance in your data) across all patients. You can think of this as a "smarter" alternative to just averaging over the rows of your data
Zavg_GreyMatter = mean(GreyMatter,1);
Zavg_BloodFlow = mean(BloodFlow,1);
since the first principal component will be close to these averages if they are meaningful and something else if the averages are less meaningful somehow
ShaneS
One more question, I noticed that the mu represents the mean grey matter volume across the variables for each subject. Is this correct? Also, after plotting the relationship between the Zpca_GreyMatter and Zpca_BloodFlow, it looks like regions with very little blood flow all have the highest (most positive) Zpca_Bloodflow scores? I am trying to interpret the Z here, thanks!
Shane S
Wow, I did not expect such an in-depth response. Thanks a lot Brian!
To give more details:
Yes, my data is arranged as 25 (patients) x 100 (brain regions), and the question I want to address is whether the dominant pattern of grey matter volume is tightly related to blood flow. The key phrase is dominant pattern/topography. Therefore I am focusing the correlations only on the first PC.
Running the correlations between the eigVecs also yielded a highly significant correlation.
I am sorry if this is off topic for you, but could I get your thoughts on how this approach differs from simply doing a Pearson correlation between grey matter volume and blood flow across the 100 regions?
Have a good day!
Wooyoung Jung
Thank you for the quick reply. So you mean that this code is based on the kurtosis approximation G(y) = y^4.
One more question: how does this FastICA code achieve decorrelation? Are you using a deflation scheme based on Gram-Schmidt-like decorrelation, or something else?
Brian Moore
@Wooyoung: you can choose between kurtosis (default) and negentropy for the non-Gaussianity measure
Wooyoung Jung
Hello Brian,
Thank you for sharing this awesome code!
One question, though. Which non-quadratic function did you use for the FastICA algorithm?
Brian Moore
@Shane S: this is a very reasonable experiment. Did you pass in the data as 25x100 or 100x25 matrices?
If 25x100, you've shown that there is a strong correlation (I'll assume positive) between grey matter volume and blood flow in *brain regions*, i.e., if a given brain region has large grey matter volume, it is likely to have more blood flow. Here, doing PCA(...,1) is "averaging" over patients to find the dominant trend.
If 100x25, you've shown that there's a strong correlation (again assumed positive) between grey matter volume and blood flow in *patients*, i.e., if a patient has larger aggregate grey matter volume, they are also likely to have more blood flow. Here, doing PCA(...,1) is "averaging" over brain regions to find the dominant trend.
There are negative values because the first step of PCA is to subtract the mean of the data (the "mu" vector). So negative values mean "below average".
Shane S
Hello Brian,
Thanks for releasing this.
I have 2 datasets of 25 subjects and 100 variables (brain regions). Dataset A = grey matter volume. Dataset B = blood flow.
[Zpca_DataA, U, mu, eigVecs] = PCA(DataA,1);
[Zpca_DataB, U, mu, eigVecs] = PCA(DataB,1);
[r p] = corrcoef(Zpca_DataA,Zpca_DataB).
Does this look correct?
It is highly significant. Does this mean that the main components in both dataset A and dataset B are correlated across the brain regions? What do negative values in Zpca_DataA mean? Thanks a lot for your kind help!
Zhang CHW
Thank you so much!
Brian Moore
@Md: you are welcome to explore my other File Exchange packages
Md. Tanjin Amin
Thanks again... Do you have any package for modified independent component analysis, where the ICs retain the ranking of PCA? Or any other statistical package, like a Restricted Boltzmann Machine or Support Vector Machine? Your coding is robust.
Brian Moore
@Md: The fastICA function whitens the input data internally, so you don't have to do it yourself. When you call
[Zica, W, T, mu] = fastICA(Z,r);
you get W = mixing matrix and T = whitening transformation
Md. Tanjin Amin
Hi Brian,
I have some questions.
1. Should I preprocess data to zero mean and unit standard deviation before inserting?
2. What is the separating/demixing matrix?
Thanks.
Brian Moore
@Md: As per "help fastICA", you pass your data into my implementation as
Zica = fastICA(Z,r);
where Z is a d x n matrix with d = # variables and n = # samples. The output Zica is an r x n matrix containing the r independent variables for each sample.
I have no experience with your particular application, but I assume you need to (a) determine the correct "variables" to put into Z, and (b) decide how to detect faults from Zica.
Md. Tanjin Amin
Hi, I want to apply ICA for process fault detection. I have 22 variables. I need to incorporate I2 and SPE statistics with ICA. Can you help?
Ahmed Hasan
Thanks Brian for the explanation.
Brian Moore
@Ahmed: Please read the documentation and the code to see exactly what you need, but calling
[Zica, W, T, mu] = fastICA(Z,r);
returns the independent components, Zica, for the data, Z, such that
Zica = W * T * (Z - repmat(mu,1,d))
where W = mixing matrix, T = prewhitening transformation, and mu = data mean
Ahmed Hasan
Hello Brian, I need to find the mixing matrix A for my ICA problem, where X = AS and Y = WX = WAS -> S.
How do I find the mixing matrix using this program?
Thanks
antoine ak
Thank you Brian for your nice and fast answer
Brian Moore
@antoine: There are many variations of ICA. Even "fast ICA" refers to a family of algorithms with many choices (non-Gaussianity measure, preprocessing, etc.), so different code may produce different results. The only way to know is to study the documentation of the code and consult the relevant papers in the literature, e.g.:
http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf
antoine ak
Hello, Brian.
First, let me thank you for your package.
Second, I need your help with your algorithm. I tried 2 ICA algorithms, yours and one from here (https://research.ics.aalto.fi/ica/fastica/), just to compare and understand ICA.
The problem is that I don't obtain the same things from these 2 algorithms. I hope I succeeded in explaining myself.
Thank you
Brian Moore
@Huang: yes it works given only the mixed audio. Run the demo_ICA.m script to see an example of this.
Huang Henry
Can this function work given only the mixed audio? Since in real-world cases, we can't always get the "true" audio.
Brian Moore
@Binh: The variable Znoisy in the first half of the demo_ICA.m code is poorly named: there's really no "noise" at all in the demo; the estimated components still have the same "noise" in them.
The standard Fast ICA algorithm cannot handle noise on its own; modifications are needed. I haven't implemented these changes myself, but visit the webpage below to see how the ICA experts do it:
http://research.ics.aalto.fi/ica/noisyica/
Binh Nguyen
Dear Brian,
In the code, you added noise before mixing the independent sources.
I tried to add noise after the mixing, and the code does not work well.
Do you have any suggestions for this problem?
Thank you.
afira fatima
Does it mean that this formula (Zr = T \ W' * Zica + repmat(mu,1,n)) would be used to exactly reconstruct the signals? And is pinv(W) the function for finding the inverse?
Brian Moore
@afira: W is r x d. You can only exactly "unmix" the signals when r == d, in which case W is square (and invertible). When r < d, you can't exactly "unmix" the signals; the best you can do is use pinv(W). Note that W is a special matrix (it has orthonormal rows), so pinv(W) = W'. That's why I wrote
Zr = T \ W' * Zica + repmat(mu,1,n)
instead of the equivalent formula
Zr = inv(T) * pinv(W) * Zica + repmat(mu,1,n)
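The identity pinv(W) = W' for a matrix with orthonormal rows is easy to verify numerically; a NumPy sketch (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r, d = 3, 5

# Build an r x d matrix with orthonormal rows (QR of a random d x r matrix)
Q, _ = np.linalg.qr(rng.normal(size=(d, r)))  # Q is d x r, orthonormal columns
W = Q.T                                       # r x d, orthonormal rows

# For such a W, the Moore-Penrose pseudoinverse is just the transpose
P = np.linalg.pinv(W)
```

This is why the two reconstruction formulas in the reply are equivalent: pinv(W) can be replaced by W' whenever W W' is the identity.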
afira fatima
@brian do we have to take the inverse of W ourselves? If yes, then please guide me, because the W matrix is not a square matrix.
afira fatima
Thank you, sir. @brian
Brian Moore
@afira: Sorry, I renamed myICA to fastICA in a recent update. Please read the documentation for fastICA to learn about the outputs. For example, the documentation tells you that
[Zica, W, T, mu] = fastICA(Z,r);
Zr = T \ W' * Zica + repmat(mu,1,n)
is the r-dimensional ICA approximation of your input data.
afira fatima
@brian and how can we reconstruct the original signal from the extracted independent components?
afira fatima
@brian one more question, please. In your previous comments you said that the mixing matrix is 'A'? And I cannot find the myICA help; there is no function named myICA, but there is a fastICA function. Are both the same?
Brian Moore
@Afira: The mixing matrix is W, so the "unmixing" matrix would be W^(-1). The fastICA code returns
Zica = W * Zcw
where Zica are the ICA components, W is the mixing matrix, and Zcw is the centered and whitened version of the input data
afira fatima
Hello Brian,
Can you please tell me which variables you used for the mixing and unmixing matrices?
Brian Moore
@David: Sorry for the delay. Hopefully you saw that I answered essentially the same question a few questions below. I'll repeat the response here:
ICA has inherent scaling and rotation ambiguity, so the input data has to be whitened (uncorrelated, unit variance) to obtain a unique solution.
However, as per "help myICA", if you do:
[Zica, A, T] = myICA(Z,r);
W = T \ pinv(A);
then W is a matrix such that (up to a constant offset)
Z ≈ W * Zica
As such, you can interpret norm(W(:,k))^2 as the "power" of the kth independent component, because, when norm(W(:,k))^2 is large, Zica(k,:) has a large contribution to Z.
I did a quick search on the topic and found the paper below, which proposes a similar procedure:
http://doc.utwente.nl/64279/1/27_17_hendrikse.pdf
Hope this helps,
Brian
Brian Moore
@Jeffrey: Thanks for the tip! Fixed in 2.1
Brian Moore
@Binh: The permute() calls are there so I can perform all the necessary computations without any for loops.
In the code, there are three relevant dimensions: r, d, and n. Some arrays are d x n, some are r x n, and sometimes I need to compute an r x d array by computing the average of n matrices which are themselves outer products of r x 1 and d x 1 vectors.
For example, suppose X is r x n, Y is d x n, then the r x d matrix (say Z) I want can be computed as
Xp = permute(X, [1, 3, 2]); % r x 1 x n
Yp = permute(Y, [3, 1, 2]); % 1 x d x n
Z = mean(bsxfun(@times, Xp, Yp), 3); % r x d
Hope this helps!
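The same broadcasting idea translates directly to NumPy, where it can be checked against an explicit loop over the n outer products (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
r, d, n = 2, 4, 50
X = rng.normal(size=(r, n))
Y = rng.normal(size=(d, n))

# Broadcast version: (r x 1 x n) times (1 x d x n), averaged over the sample axis
# (the NumPy analogue of permute + bsxfun + mean along dim 3)
Z = np.mean(X[:, None, :] * Y[None, :, :], axis=2)  # r x d

# Reference version: explicit average of the n outer products
Z_loop = sum(np.outer(X[:, i], Y[:, i]) for i in range(n)) / n
```

Both forms compute the r x d average of outer products described above, with no for loop over samples in the broadcast version.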
Brian Moore
@mas saud: This sounds like a path problem. Make sure the directory where you put kICA.m is on your MATLAB path. Or that you are in that directory when trying to call the function
Binh Nguyen
Dear Mr. Brian,
May I ask what the purpose of the "permute" calls at lines 99 and 111 is?
Thank you.
Jeffrey Tackett
Works great, but you may want to consider replacing wavread() function calls with audioread(). wavread was removed starting in R2015b; any version after this may result in an error.
mas saud
I am trying to use ICA on an image. I first decompose the image to 2D and use
[Zica, W, T, mu] = kICA(g,3);
but I get:
Undefined function 'kICA' for input arguments of type 'int8'.
Undefined function 'kICA' for input arguments of type 'double'.
Any help is appreciated.
chao Lv
@Brian Moore: Dear Mr. Moore, thanks for your software. It's great!
I tested it, but I don't know how to analyze harmonics with it.
Thank you very much!
David Bacci
@Brian Moore. Dear Mr. Moore, really, thanks for your software. I have a couple of questions regarding the output. Is there a way to restore the original scaling of the signals? I mean, during the algorithm the data is centered and whitened, and the components will all have zero mean and std 1. Is there no way to restore their original values? The second question regards the variance associated with each component. Is there a way to calculate it as in the PCA algorithm? Thanks for your attention. David
Kanchan Chougule
Sir, I am working on a project which requires separation of RGB channels using ICA of video images. Being a fresher in this coding field, I need your help to provide a suitable code that can work for our project. It would be of much benefit to get it.
Asad Malik
@Brian: Please ignore my previous message. I was looking at the version of this code from 2015. The 2017 version does indeed have Gp = (1 - u^2) exp(-u^2/2). Thanks for your help.
Asad Malik
@Brian: Many thanks for clarifying this. My mistake: I was interpreting g as G and not as the derivative of G. This does lead to another question, though:
In your code, Gp = Sk .* G. Doesn't this make it (u^2) exp(-u^2/2), not (1 - u^2) exp(-u^2/2)?
Brian Moore
@Asad: Thanks for your interest in my code. I believe it's correct as written. You are correct that the variable G in my code contains g(w^Tx) from the paper, and the variable Gp contains g'(w^Tx). The use of capital letters in my code (chosen since the variables are matrices) might be confusing because there is also a function G(.) in the paper, which is the antiderivative of g. The Hyvarinen paper recommends setting
G(u) = -exp(-u^2/2)
as a robust approximation of negentropy, and so the actual quantities used in the FastICA algorithm would be
g(u) = G'(u) = u exp(-u^2/2)
g'(u) = G''(u) = (1 - u^2) exp(-u^2/2)
Not sure why using g(u) = exp(-u^2/2) works better for your data; the choice of G() is a heuristic, so performance may vary in practice. Hope this helps
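The derivative chain above (G, then g = G', then g' = G'') can be verified with central finite differences; a small NumPy sketch:

```python
import numpy as np

# Negentropy contrast from the Hyvarinen paper, with its first two derivatives
G  = lambda u: -np.exp(-u**2 / 2)
g  = lambda u: u * np.exp(-u**2 / 2)           # g(u) = G'(u)
gp = lambda u: (1 - u**2) * np.exp(-u**2 / 2)  # g'(u) = G''(u)

u = np.linspace(-3.0, 3.0, 101)
h = 1e-6

# Central finite differences should match the closed-form derivatives
dG_num = (G(u + h) - G(u - h)) / (2 * h)
dg_num = (g(u + h) - g(u - h)) / (2 * h)
```

This also makes Asad's confusion concrete: g and g' are the quantities the algorithm actually evaluates, while G itself only defines the contrast.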
Asad Malik
Hi Brian,
Many thanks for sharing this code. It's immensely useful. I have a question:
For the estimation of negentropy, you use G = Sk .* exp(-0.5 * Sk.^2). Based on my current understanding of Hyvarinen & Oja (2000), I think it should be G = exp(-0.5 * Sk.^2), if G refers to g(w'x) in the paper (page 423). For the test data I'm using, I actually get better results with this as well. Could you please explain whether G refers to g(w'x), and if so, why it is computed this way?
Many thanks.
afira fatima
Sir,
how can I use this fastICA algorithm to separate the EOG signal from EEG?
Rajat Singh
Sir,
I am thinking of using this code on EMG data to extract muscle synergies. Do I just have to input the data into your code to get a W matrix, which will be the synergy of muscles?
sahana kp
@Brian Moore
I need to find the ICA of EHG wavelet components. How can this code be applied to my signal? A response is greatly appreciated.
Minkyu
thanks!
Nurul Hidayah Abdul Manan
Hi, does this code require multiple programs to run at once? May I know which one is for which? And how do I run multiple programs in MATLAB?
nellie
Can this code be applied to 4D data, such as fMRI? Thanks.
Brian Moore
@Pär: Thanks for your interest in my code! ICA has inherent scaling and rotation ambiguity, so the input data has to be whitened (uncorrelated, unit variance) to obtain a unique solution.
However, as per "help myICA", if you do:
[Zica, A, T] = myICA(Z,r);
W = T \ pinv(A);
then W is a matrix such that (up to a constant offset)
Z ≈ W * Zica
As such, you can interpret norm(W(:,k))^2 as the "power" of the kth independent component, because, when norm(W(:,k))^2 is large, Zica(k,:) has a large contribution to Z.
I did a quick search on the topic and found the paper below, which proposes a similar procedure:
http://doc.utwente.nl/64279/1/27_17_hendrikse.pdf
Hope this helps,
Brian
Pär Nyström
Thanks Brian for sharing your work! It works excellently on my test data! Just one question: the amplitude of each component is scaled to variance = 1, but is it possible to estimate the relative amplitude between components, or the absolute amplitude?
Nathan Zhang
Thank you for your excellent work.
Jason
@Brian: I've read the paper and understand the basic concept. I am applying your code to systematically test ICA on non-Gaussian random variables. I thought you might know some limitations off the top of your head. I've mixed uniform, exponential, gamma, Rayleigh, and lognormal random variables. Now that I've spent a little more time on it, lognormal doesn't seem any more troublesome than anything else. It seems to depend on the parameters of the distributions, which can adjust their similarity to Gaussian. Like you wrote, that's probably the issue.
Brian Moore (view profile)
@Jason: My code implements the FastICA algorithm from (link below) with Gaussian negentropy. In other words, it produces output components that are as "far from Gaussian" as possible. As you may know, this choice of objective is justified by the Central Limit Theorem (the more independent signals you mix together, the more Gaussian-like the result), so decomposing into non-Gaussian components is a reasonable way to reverse the mixing.
From this intuition, I think FastICA is failing because the lognormal distribution is too "Gaussian-like".
It sounds like you're just experimenting, but if you're really interested in recovering Gaussian-like signals, perhaps the Hyvarinen paper or some references therein propose modifications to tackle this.
http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf
Jason (view profile)
I wrote a script to generate two statistically independent components and mix them to synthesize a two-channel signal. I used the synthesized signal with your myICA code recently. For most cases, myICA computes components that match those used to synthesize the signal, but not in all cases. For example, when I use a lognormal and a uniform distribution for the two components, it fails. For most other cases, it works very well. Can you comment on the limitations of your code?
Brian Moore (view profile)
@Long B: It seems that you're doing the same thing as the previous commenter, JANANI A. That is:
It sounds like you're passing in a 1 x n scalar signal. ICA doesn't work like that. You have to pass in a d x n matrix of signals (plural), and then ICA will separate those into r <= d components.
Long B (view profile)
Hi Brian,
It's my first time using ICA in MATLAB. I used the myICA function to decompose a matrix containing the signals from the mixture. But it turns out that all of the decomposed components in the result are the same. I'm wondering if there is something wrong with my usage. I'm looking forward to your reply. Thanks in advance.
Janani A (view profile)
Thanks for your reply Sir.
Brian Moore (view profile)
@JANANI A: It sounds like you're passing in a 1 x n scalar signal. ICA doesn't work like that. You have to pass in a d x n matrix of signals (plural), and then ICA will separate those into r <= d components
Janani A (view profile)
Sir,
Thanks a lot for your reply.
I am trying to analyze my signal using ICA and want to separate the independent sources present in the signal. According to the algorithm, the input parameters are Z and r, where r is the desired number of independent components. If I give values greater than 1 for 'r', I get the same values for all the ICs! Suppose r=3: the output matrix consists of three rows (the ICs), and I find that the 2nd and 3rd rows are the same as the 1st! Can you please tell me where I am going wrong?
Thanks in advance!
Brian Moore (view profile)
@JANANI A: Type "help myICA" and read the "Outputs" section to see how to (approximately, when r < d) reconstruct the input signal
Janani A (view profile)
Dear Brian,
Thanks so much for the package.
I have a question on the reconstruction of the original signal. The output we get are the independent components. How do we reconstruct the signal from the obtained ICs?
Any help in this regard would be greatly appreciated.
Thank You.
Brian Moore (view profile)
@Mohammed: I see the confusion now: you're thinking of F and D as the eigenvectors and eigenvalues, respectively, of the sample covariance W * W'. For me, U and S are the left-singular vectors and singular values, respectively, of the raw weights W. The two methods are completely equivalent: the eigenvalues of the sample covariance are the *squared* singular values of the raw weights, so there's no square root needed in my formulation.
Here's proof:
% Random weights
W = randn(30,70);
% Method 1
[U, S, ~] = svd(W);
W1 = U * diag(1 ./ diag(S)) * U' * W;
% Method 2
[F, D] = eig(W * W');
W2 = F * diag(1 ./ sqrt(diag(D))) * F' * W;
% Check equivalence
err = norm(W1 - W2) %#ok
mohammad karami (view profile)
@Brian: Thank you very much for your answer.
I think my question was not clear.
My question was about symmetric decorrelation which is based on the following formula:
W = [F * D^(-1/2) * F'] * W
So, I was expecting the following command in the code:
W = U * diag(1 ./ sqrt(diag(S))) * U' * W;
instead of this one:
W = U * diag(1 ./ diag(S)) * U' * W;
Thanks again for your helps.
Mohammad
Brian Moore (view profile)
@mohammad: It's correct as written. Here's proof:
% Random data
W = randn(3,7);
% Whiten
[U, S, ~] = svd(W,'econ');
W = U * diag(1 ./ diag(S)) * U' * W;
% Verify that covariance is identity
covW = W * W' %#ok
mohammad karami (view profile)
Hi Brian,
Thank you very much for uploading the invaluable code.
As you used symmetric decorrelation (Eq. 45), don't we need to take the square root of the diagonal matrix?
W = U * diag(1 ./ diag(S)) * U' * W;
Thanks again
Brian Moore (view profile)
@Vivin: it's up to you to figure out how to import your data into the correct format.
All you have to do is load the data into an m x n matrix containing n (synchronized) samples from each of your m channels and then pass that matrix into myICA.
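For example, with two single-channel recordings stored in plain-text files, the reshaping might look like the sketch below (the file names are hypothetical, and myICA() from this package is assumed to be on the path):

```matlab
% Hypothetical sketch: build the m x n input matrix from two text files
sig1 = load('channel1.txt');          % hypothetical file names, one number per line
sig2 = load('channel2.txt');
n = min(numel(sig1), numel(sig2));    % truncate to a common length
Z = [sig1(1:n)'; sig2(1:n)'];         % 2 x n: one (synchronized) channel per row
Zica = myICA(Z, 2);                   % separate into 2 independent components
```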
Vivin Varghese ARIKKUDIL MATHEW (view profile)
Dear Brian,
Could you kindly answer this question if possible? Suppose I have the values of the signal in a text file, say an array of integers of large length. Is it possible to apply your algorithm to this signal?
Also, if I have 2 channels of wave files of different lengths, would I be able to retrieve a signal from the output?
I have attached a copy of my code to retrieve the original signal from a text file and its corresponding histogram, and I am able to retrieve a signal from it.
%%
clear all;
clc;
x=fopen('C:\Users\pc\Documents\Limoges\My Biblio\CoolTerm Capture 20160518 145233.asc','r');
sig=fscanf(x,'%i');
sig=sig*3.3/1024/1100;
freq=9600/16;
time=1/freq;
n=length(sig);
t=(0:(n-1))*time;
f=fft(sig,n);
fff=-freq/2:freq/(n-1):freq/2;
plot(t,sig);
plot(fff,fftshift(f));
Alger (view profile)
thank you for share
Vivin Varghese ARIKKUDIL MATHEW (view profile)
Dear Brian,
Thank you very much for your quick response. Yes, I would hope that ECG would be independent from EEG and EMG. But the problem is that they are all sometimes mixed together. My main objective is to remove the noise produced during movement; this noise is highly unpredictable and is caused by the wires. So if I were to take sources from 2 different pairs of wires, I would get 3 signals (one from ECG and 2 independent noises induced from each pair of wiring). The goal would be to somehow get the data set into MATLAB and process it live, but I am unable to find any resources.
Brian Moore (view profile)
@Vivin: You can certainly run ICA on your multiple channel ECG data and see what the output looks like. I have no background in medical applications, but if the ECG signal is truly independent of the other signals (EEG, EMG) that are presumably mixed into each channel, then you might see a group separation.
Type "help myICA" to see the syntax of my ICA implementation.
For removing 50Hz noise, it's probably best to use a more classical filtering method, right? Something like this:
http://www.mathworks.com/help/dsp/ug/removing-high-frequency-noise-from-an-ecg-signal.html
If you find that ICA works well on a test dataset, you might want to Google "online independent component analysis" to learn about how people modify ICA to handle streaming data.
Note: I have no experience with online ICA, and, in particular, my PCA and ICA Package doesn't support it.
Vivin Varghese ARIKKUDIL MATHEW (view profile)
Dear Brian,
I am trying to use your code to separate ECG signals from their corresponding noise sources (50 Hz line noise, EEG signals, EMG signals, and movement of the body). Is it possible to use your algorithm to filter out the ECG signal alone?
I have another question. Is it possible to apply your code to real-time processing? For example, I am collecting ECG data from a patient using a measuring device with multiple channels; would it be able to separate the sources and give me the individual outputs?
Dominik Jungtäubl (view profile)
Okay, thank you very much for the immediate response. I think I can continue working with your information ;)
Brian Moore (view profile)
@Dominik: The prewhitening just multiplies your data by a particular matrix so that the sample covariance of the "whitened" data is the identity matrix. I can't say much without seeing your data, but it must be the case that the sample covariance of your data is rank-deficient, i.e., has one or more very small eigenvalues.
Make sure you pass in the data as an m x n matrix, where m = # signals and n = # samples.
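The amplitude blow-up can be seen directly in a small sketch with made-up, nearly rank-deficient data (the whitening shown here is a generic eigendecomposition-based version, not necessarily the package's exact implementation):

```matlab
% Sketch: whitening divides by sqrt(eigenvalues), so tiny eigenvalues
% blow up the amplitude of the whitened components
Z = randn(3, 1000);
Z(3,:) = Z(1,:) + 1e-3 * randn(1, 1000);   % third channel nearly duplicates the first
Zc = Z - mean(Z, 2) * ones(1, size(Z, 2)); % center the data
[V, D] = eig(cov(Zc'));                    % eigendecomposition of the sample covariance
Zw = diag(1 ./ sqrt(diag(D))) * V' * Zc;   % whitened data: cov(Zw') ~ identity
% the row scaled by 1/sqrt(smallest eigenvalue) has a very large amplitude
```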
Dominik Jungtäubl (view profile)
Hi Brian,
I'm about to separate EMG recordings into their original signals. But the independent components have a much greater amplitude (e+005) than the mixed signals. I guess this is because of the preprocessing, i.e., the whitening. Can you tell me why the whitening changes the magnitude to such an extent?
Greetings, Dominik
srinivas balina (view profile)
srinivas balina (view profile)
Hi Brian,
Thank you very much for your work and making it available.
I would like to ask if it is possible to separate instrumental tracks in songs by ICA, and what is the best way to separate vocals and music in an audio signal.
Thanks in advance...
Brian Moore (view profile)
@Andrew: You can certainly use ICA for image analysis, but it sounds like you're embarking on some kind of research project, so I think it's in your best interest to learn how to do it yourself. ICA is a classical technique, and there's tons of literature on the web just a Google search away. If you decide to use my ICA code, feel free to read the function documentation, which will tell you all you need to know about how to use it.
Andrew (view profile)
Hi Brian.
I would like to ask you if it is possible to use your code for image analysis.
What I want to do is to analyze a group of medical images, taken at the same position of the body, at different time points.
I'm trying to analyze these images and separate the background from the original image. After that, I would like to reduce the motion of that part of the body caused by breathing.
Thanks
Brian Moore (view profile)
@Rajkumar: Say you have d "mixed speech" waveforms, each with n samples, and you're trying to recover the r underlying speech waveforms. Put the mixed speech waveforms into a d x n matrix Z and execute the following
Zica = myICA(Z,r);
Zpca = myPCA(Z,r);
Read the documentation of the two functions to see what is returned.
Note: PCA probably won't work for the speech separation problem; this classic problem is often used as motivation for doing ICA.
RAJKUMAR SUBRAMANIAN (view profile)
thanks for your valuable program sir...
how can the ICA and PCA analysis techniques be used for speech detection from multiple mixed speech signals? Please help me, sir.
thanks in advance sir...
Brian Moore (view profile)
@Renan: You're welcome to modify and redistribute the code; just follow the rules of the BSD license included in the download.
As for the details of switching kurtosis, I'll leave that to you. To get started, I recommend the following paper:
http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf
Renan de Padua (view profile)
Hi Brian.
I want to use the kurtosis measure instead of Gaussian negentropy.
First question: can I change your code? I will not redistribute it or anything else; I just want to see the difference between the measures.
Second question: can you guide me? I will use the kurtosis function available in MATLAB. How can I adjust that?
Thanks for sharing!
Darin McCoy (view profile)
Hi Brian,
This is probably a really simple request, but MATLAB's version of PCA has the interface
[coeff, score, latent, tsquared, explained, mu] = pca(x)
versus yours:
[Zpca T U mu eigVecs] = myPCA(x,r)
.... how can you get the same results from your version as from MATLAB's pca?
Brian Moore (view profile)
@Fransesco: ICA is definitely a good approach to decomposing a signal into independent spectra. The nonzero mean of your Lorentzian component is okay: the first step of the ICA algorithm is to center the data (subtract the mean), so, if it is working correctly, ICA should output a zero-mean version of the Lorentzian with no problem.
As I mentioned to Hamed, my implementation of ICA will fail in the presence of Gaussians because its objective function is to minimize the Gaussianity of the output components. Clearly this won't work when one of the output components IS Gaussian.
Here are a few other (probably better) ICA packages I recommend trying:
http://research.ics.aalto.fi/ica/fastica/code/dlcode.shtml
http://www.cis.hut.fi/projects/ica/fastica/
http://cogsys.imm.dtu.dk/toolbox/ica/
Francesco Lelj (view profile)
Hi Brian,
thank you very much for your work and for making it available.
I am trying to use your routines to analyze the data of some UV-Vis spectra of a mixture of chemical compounds. I know that the mixture consists of more than 1 compound but fewer than the number of different spectra I have at different temperatures. Before using the routines, I checked them against synthetic data I generated by modifying demo_ICA, using linear combinations of Gaussian, trigonometric, and Lorentzian functions. I observed that whereas the trigonometric functions are very well identified, the Lorentzian and Gaussian ones are not. I read in the answers to Hamed that this is because a Gaussian can be expressed as a sum of other Gaussians, but this is not true for a Lorentzian. Even with my real data, when I try to analyze it I get negative "spectra" as independent components, which is physically impossible. I am not an expert in signal processing, and I got the impression that part of the problem might be that my signals do not have zero mean as sine and cosine do. Since I have read that the method has also been used in medicine to study ECG and EEG, I thought it was a really good method for analyzing superpositions of independent spectra.
Thank you in advance for any advice.
Best regards
Francesco
Brian Moore (view profile)
@Hamed: The sum of independent Gaussians is again a Gaussian, so your question is ill-posed: there's no unique decomposition.
Even if only one component were Gaussian, ICA still wouldn't work: its objective function is to *minimize* the Gaussianity of the output signals.
Brian Moore (view profile)
@YeonMo Yang: Sorry for the delay... rng() is a built-in MATLAB function that controls its random number generator. I use it to set the generator seed so the results of myICA() are deterministic, but this isn't necessary. You can comment out the line; it's harmless.
rng() was introduced around 2010, so you must have a pre-2010 MATLAB distribution...
Hamed (view profile)
Hello Again,
I forgot to mention that the noises are independent.
Hamed (view profile)
Hello Brian,
I have a summation of two normal distributions, and I was wondering if I can decompose it into its two normal components using ICA.
Thank you.
AJ Zadeh (view profile)
Hi dear Brian, and thanks so much for your code!
I loaded two speech signals with Fs=16000 and n=16000 (1 sec), then mixed them together using a random mixing matrix. I gave the mixed signals as input data (Z) with r=2, d=2, but the separated sources were not good at all, and the estimated mixing matrix (A) was not correct either.
I need your help!
zhou zexun (view profile)
Thanks for your code, it is very helpful for me!
YeonMo Yang (view profile)
Brian!
What is rng(1) in the first line of the demo source code?
When I tried it, it failed. I need your help. Thanks!
Brian Moore (view profile)
@Thomas: The ICA weights have to be decorrelated so they don't converge to the same values. My code implements the FastICA algorithm (reference in multiple previous comments on this package); please refer to the literature for details about the ICA algorithm.
The only restriction on the code is computational complexity: if the code completes, everything should be fine.
In the example you gave, the input to myPCA/myICA needs to be a 3 x 300000 matrix: the columns should contain the samples.
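In other words, if the sound signals are stored one per column, a transpose fixes the orientation. A sketch with made-up data (myICA() from this package is assumed to be on the path):

```matlab
% Hypothetical sketch: sound data stored samples-by-channels must be transposed
X = randn(300000, 3);   % made-up data: one signal per column
Z = X';                 % myPCA/myICA expect d x n: one signal per row
Zica = myICA(Z, 3);     % separate into 3 independent components
```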
Thomas Sim (view profile)
Sorry again Brian, hope I am not bothering you. May I know why you need to decorrelate the weight matrix after the 4-step ICA implementation?
Also, just to check: are there any restrictions for your algorithm to work on large datasets?
I tried replacing your MVG samples with a 300000 x 3 matrix of 3 sound signals in ICA_demo and replaced maxsamples in myICA with my matrix length.
The results of the ICA demixing are very poor both with and without PCA.
On the contrary, the result of your PCA is reasonable on large datasets.
Brian Moore (view profile)
@Thomas: The principal components (PCs) produced by SVD are orthogonal, but they aren't unit norm, so the second whitening step is just normalizing each component to have unit norm (i.e., T is a diagonal matrix).
In many applications, the PC *magnitudes* are important; in those cases, you'll want to comment out the second whitening step.
I chose to normalize the PCs by default to allow direct comparison with ICA, which, by construction, returns unit norm components.
Hope this helps!
Thomas Sim (view profile)
Hi Brian, hope this isn't a stupid question. For PCA, you center and SVD the data once, but why do you need to whiten it again with SVD? Isn't that like doing it twice?
Brian Moore (view profile)
@zongbao: not possible. A quote from Wikipedia:
"In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals."
The first step of ICA is centering + whitening, which removes magnitude information. Indeed, after running myICA(), you can check that cov(z_ic') == identity matrix
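That claim is easy to check numerically. A sketch with made-up data, assuming myICA() from this package is on the path:

```matlab
% Sketch: the independent components come out whitened (unit variance)
Z = randn(3, 5000);     % made-up 3-channel data
z_ic = myICA(Z, 3);
covZic = cov(z_ic')     % should be approximately the 3 x 3 identity matrix
```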
zongbao (view profile)
How can I find z_ic that corresponds to a specific eigenvalue and eigenvector? I mean I want to find the independent component that corresponds to the maximum eigenvalue. Thank you.
TS Sharma (view profile)
Thanks Brian!
Brian Moore (view profile)
@TS Sharma: Yes, the columns of z_ic_new are the ICA coefficients corresponding to each input sample (i.e., columns of z_new)
TS Sharma (view profile)
Referring to your reply to my previous comment, I think it is z_ic_new.
TS Sharma (view profile)
Another basic question. May I know which matrix contains ICA coefficients?
TS Sharma (view profile)
Thanks a lot Brian!! That was very kind of you.
Brian Moore (view profile)
@TS Sharma: First, call myICA() like this:
[z_ic A T mean_z] = myICA(z,NUM);
Then, if new data is in (m x 1) vector z_new, you can project into ICA space by computing
z_ic_new = A * T * (z_new  mean_z);
Here, (A,T,mean_z) are the ICA transformation matrix, whitening matrix, and mean vector, respectively, of the original data
TS Sharma (view profile)
Thank you for the code. This is a rather basic question. How do I project new (test) data onto the newly found ICA basis?
pratika tiwari (view profile)
Can we use this ICA package for feature extraction? Kindly reply.
pratika tiwari (view profile)
pratika tiwari (view profile)
Thanks for the package! But how do I use it for a speech signal? I am too new to this field; that's why I need your guidance.
Bicheng Ying (view profile)
Thanks for this nice package.
Keven Laboy (view profile)
That makes perfect sense. Thank you for the quick response!
Brian Moore (view profile)
@Kevin: The mixing matrix is "A", which is indeed orthogonal. Check out the last line of myICA.m: you'll see the statement
z_ic = A * z_cw;
which computes the independent components (z_ic) from the whitened input data (z_cw).
The matrix "T \ pinv(A)" transforms the independent components back to the original (unwhitened) domain, but it's not, in general, orthogonal because the whitening matrix "T" isn't orthogonal. Hope this helps!
Keven Laboy (view profile)
Thank you for the package.
One question: to my understanding, if you whiten and center the data, the matrix containing the basis functions (the mixing matrix) should be orthogonal. When I run your code and compute the mixing matrix via T \ pinv(A) (as specified in myICA.m), the resulting matrix is not orthogonal. Is this an issue?
Thank you in advance
Marthed (view profile)
Thanks For providing this package ^_^
M'hamed (view profile)
How do I select the best dimension (number of components) for the ICA algorithm?
Thanks
Peixin (view profile)
Brian Moore (view profile)
@LA_2012: Please read my previous comments for more detail about my implementation of ICA and references to the literature.
To apply ICA to images, you could vectorize each image and store them as columns of the input matrix
LA_2012 (view profile)
Is this fixed-point ICA or some other variant?
How can I use it on images?
How can I modify the code for the requirements of another ICA variant?
Where can I find reference material to understand ICA better?
Thanks in advance :)
Thanks for providing this code
Guilherme (view profile)
Thanks for sharing your code Mr. Moore.
I'm using part of your ICA function to implement my own.
I wasn't getting good results using the weight vector decorrelation as you did, so I implemented the Gram-Schmidt decorrelation proposed by Hyvarinen, pg. 15 (http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf).
Here is the code I inserted right after the weight normalization, still inside the for loop:
% Gram-Schmidt
summ = zeros(nOb,1);
for j = 1 : p-1
    wj = W(j,:)';
    summ = summ + W(p,:)*wj*wj;
end
W(p,:) = W(p,:) - summ';
W(p,:) = W(p,:)/norm(W(p,:));
Body (view profile)
I get it. Thanks a lot!
Brian Moore (view profile)
@Susanne: The for-loop containing "gp" is implementing steps 2-3 of the algorithm on pg. 14. So, in particular, gp is related to g'(.) from the document.
Susanne (view profile)
Ok, I think I have a better grasp on it now; thank you so much, your code helped me a lot. I have one question though: what is the 'gp' you're using in your ICA code?
Brian Moore (view profile)
@Body: No, the "err" vector keeps track of how much the weight vectors are changing at each iteration (used only for the stopping criterion). If you do as you suggested, you'll just be measuring how close each weight vector is to unit norm (*spoiler alert* - they're always unit norm).
Brian Moore (view profile)
@Susanne: I suggest you read the help info and comments in the myICA.m function. If you need more detail, you'll need to refer to the literature. For example I recommend
http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf
FYI: I'm using the 4step algorithm on pg. 14 along with the symmetric decorrelation step involving the W matrix from (45) on pg. 15
Body (view profile)
Thanks for your code! ICA is so difficult to understand. You helped a lot! I have a question. Can
err(i) = 1 - w(i,:) * w(i,:)'
be used instead of
err(i) = 1 - w(i,:) * w_old(i,:)'?
Susanne (view profile)
I have some trouble completely understanding the ICA algorithm you are using. Would you mind helping me out?
Susanne (view profile)
Brian Moore (view profile)
@LionelB: Good point; I'm disappointed in myself for not mentioning bsxfun(). It's MATLAB's secret sauce for vectorizing.
LionelB (view profile)
Or even
z0 = bsxfun(@minus,z,sum(z,2)/n);
R = (z0*z0')/(n1);
:) (Matlab uses this idiom internally, see e.g. the 'cov' function).
Anyhow, I only raised this because I almost gave up on your fine package until I realised the fix was trivial.
Brian Moore (view profile)
@LionelB: Yes, I'm aware. I wrote the MATLAB code so it could be directly translated into, e.g., C++. If I were maximizing MATLAB efficiency, I would have written
z0 = (1 / sqrt(n - 1)) * (z - repmat(mean_z,[1 n]));
R = z0 * z0';
instead of
z0 = z - mean_z*ones(1,n);
R = (z0*z0') / (n-1);
LionelB (view profile)
Nice package, but the 'myWhiten' routine is a terrible computational bottleneck for large datasets because of the inefficient sample covariance calculation. This can easily be done without looping: simply demean z, e.g.:
z0 = z - mean_z*ones(1,n);
then calculate
R = (z0*z0')/(n-1);
This delivers an orders-of-magnitude speedup.
Ilia (view profile)
Yes, I was talking about z_LD. I was afraid that it could be too strong a restriction on the solution, because I'm not really an expert with this kind of analysis, and it is the first time I have encountered the possibility of computing the low-dimensional z.
Thank you for the answer.
Brian Moore (view profile)
@Ilia: Are you referring to "z_LD" as described in the myICA help? It's a matrix of the same size as the input "z" that approximates z from a linear combination of the "NUM" independent components in output "z_ic"
Ilia (view profile)
Thank you for sharing the great work. I have a question:
I'm using your 'myICA.m' and I would like to understand what ICA_LD really does.
Can you suggest some reading about it, or explain it in a few words if possible?
Thank you very much
Brian Moore (view profile)
@Prarinya Check out the FastICA algorithm from
http://mlsp.cs.cmu.edu/courses/fall2012/lectures/ICA_Hyvarinen.pdf
I'm using the 4step algorithm on pg. 14 along with the symmetric decorrelation step involving the W matrix from (45) on pg. 15
Cheers!
Prarinya Siritanawan (view profile)
Great contribution. I have a request:
Could you please tell us the reference for the algorithm you use? I'm curious about the update rule for creating the transformation matrix 'A' in the code.
I tried to study the ICA algorithm from the ground up. However, the update rule you use is quite different from the document I have, so I realized that there is more than one approach to updating the transformation matrix.
Arjuna (view profile)
better than FastICA ;)
Bosi (view profile)
Abdelrahman (view profile)
super good
Brian Moore (view profile)
Pierre,
You must be using an old version of MATLAB (<= 2009b, I believe) where ~ is not supported. Please replace the line with something like
[U,S,temp] = svd(w,'econ');
The variable temp is not used; the name is arbitrary.
PierrePascal (view profile)
Hi, thanks for providing this code! Super useful.
Line 97 of myICA returns an error:
[U,S,~] = svd(w,'econ');
It seems to hate the ~ as an output.
Did I miss something?
Morteza Abdollahzadeh (view profile)
Shivakumar (view profile)
Sir, there is an error at line 31. Can you please fix it? Thank you for providing this file.
Jim (view profile)
Wang (view profile)
Brian Moore (view profile)
Why the 2-star rating, Eugene? Please provide feedback so I can improve this package!
Eugene (view profile)