Problem defining the linear regression gradient-descent condition

fima v on 12 Mar 2020
Hello, I have built the first code (the top one), which uses MATLAB's regress command to approximate the vector [m10;m5] (N×1, with N = 2n) with the matrix [repmat([0.04,10],n,1);repmat([0.04,12],n,1)] (N×2), getting a B (2×1) vector. It works great.
Then I tried to do the same thing myself, using linear regression theory and iterations, to get the same result as regress.
However, I am stuck on the gradient-descent update step: D_m(2)=(-2./n)*sum(x.*(y-y_pred));
Is there someone who can help me with the gradient descent?
Thanks.
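For reference, the standard gradient of the mean squared error with respect to each coefficient, with $\hat{y} = x\,m$ and $N$ the number of rows of $x$ (here $N = 2n$), is

$$\frac{\partial}{\partial m_j}\,\frac{1}{N}\sum_{i=1}^{N}\bigl(y_i-\hat{y}_i\bigr)^2 = -\frac{2}{N}\sum_{i=1}^{N} x_{ij}\bigl(y_i-\hat{y}_i\bigr), \qquad j=1,2,$$

so each coefficient's gradient uses its own column of $x$ rather than the whole matrix.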
Regress function code:
n=10000;
min_value = 30+273-5;
max_value = 30+273+5;
% We need all our samples to be in the range [min_value, max_value]
T = min_value + (max_value - min_value) * rand(n,1);
% our emissivity samples
e_delta=0.31622*randn(n,1);
var_e=var(e_delta); % emissivity variance
mean_e=mean(e_delta); % emissivity average
e = 0.9+0.1*e_delta;
var(e)
mean(e)
eT = e.*T;
m10 = 0.04*eT+10*e ;
m5 = 0.04*eT+12*e ;
[B,BINT,R] = regress([m10;m5],[repmat([0.04,10],n,1);repmat([0.04,12],n,1)]);
tested_T=B(1)/B(2)
plot(T);
xlabel('Sample number');
ylabel('Temperature');
ylim([min_value-2, max_value+2]);
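As a cross-check (not part of the original code), the same least-squares coefficients can be obtained with the backslash operator, which solves the identical problem; B_check and tested_T_check are illustrative names:
B_check = [repmat([0.04,10],n,1);repmat([0.04,12],n,1)] \ [m10;m5]; % least-squares fit, same coefficients as regress returns in B
tested_T_check = B_check(1)/B_check(2) % should match tested_T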
My iterative code:
n=100000;
x=[repmat([0.04,10],n,1);repmat([0.04,12],n,1)];
m=[0;0];
L=0.0001;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
min_value = 30+273-5;
max_value = 30+273+5;
T = min_value + (max_value - min_value) * rand(n,1);
e_delta=0.31622*randn(n,1);
e = 0.9+0.1*e_delta;
eT = e.*T;
m10 = 0.04*eT+10*e ;
m5 = 0.04*eT+12*e ;
y=[m10;m5];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%plot(x,y,'.b');
%hold;
for i=1:100
    y_pred=x*m;
    D_m(1)=(-2./n)*sum(x.*(y-y_pred));
    D_m(2)=(-2./n)*sum(x.*(y-y_pred));
    m=m-L*D_m;
end
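
For comparison, here is a minimal sketch of how the two gradients could be formed so that the dimensions match (assuming the goal is to minimize the mean squared error over all 2*n rows; N_rows below is an illustrative name, not from the original code). Note that the two columns of x are very differently scaled and nearly collinear, so plain gradient descent converges only very slowly on this problem; the sketch shows the gradient bookkeeping, not a tuned solver.
N_rows = size(x,1);                       % number of rows is 2*n, not n
m = [0; 0];                               % coefficient column vector, as with regress
L = 1e-4;                                 % learning rate kept from the original code
D_m = zeros(2,1);                         % keep the gradient as a column, like m
for i = 1:100
    y_pred = x*m;                         % predictions, (2*n x 1)
    r = y - y_pred;                       % residuals
    D_m(1) = (-2/N_rows)*sum(x(:,1).*r);  % d(MSE)/d m(1): uses column 1 of x
    D_m(2) = (-2/N_rows)*sum(x(:,2).*r);  % d(MSE)/d m(2): uses column 2 of x
    % equivalently: D_m = (-2/N_rows)*(x.'*r);
    m = m - L*D_m;                        % update both coefficients at once
end
m(1)/m(2)  % with rescaled columns or many more iterations, m approaches x\y and this ratio approaches tested_T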

Answers (0)
