
Accelerate a loop involving the built-in integral command

pluton schmidt on 2 Jun 2023
Commented: pluton schmidt on 6 Jun 2023
A MATLAB script contains the following loop:
for p = 1 : N
Output(2*p-1,1) = integral(@(t) cos((2*p-1)*t).*my_fun(t,param), 0, T);
Output(2*p,1) = integral(@(t) sin((2*p-1)*omega*t).*my_fun(t,param), 0, T);
end
Computing my_fun(t,param) for a given t is costly, and the issue with the loop above is that my_fun(t,param) is called twice per iteration. Assuming that the quadrature points used by the integral command are the same for both calls, evaluating the function twice is not efficient. Is there a remedy for this that keeps the structure of the proposed code?
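To make the cost concrete, here is a rough, hypothetical way to count how many points my_fun actually sees (counted_my_fun and its t.^2 body are made up for illustration; the real my_fun body would go in its place):
global MYFUN_POINTS
MYFUN_POINTS = 0;
N = 4; T = 1; omega = 2; param = [];
Output = zeros(2*N, 1);
for p = 1:N
    Output(2*p-1,1) = integral(@(t) cos((2*p-1)*t).*counted_my_fun(t,param), 0, T);
    Output(2*p,1)   = integral(@(t) sin((2*p-1)*omega*t).*counted_my_fun(t,param), 0, T);
end
fprintf('my_fun evaluated at %d points over %d integral calls\n', MYFUN_POINTS, 2*N);
% local functions at the end of a script require R2016b or newer
function y = counted_my_fun(t, param)
% Hypothetical wrapper: counts how many t points the real my_fun would see.
global MYFUN_POINTS
MYFUN_POINTS = MYFUN_POINTS + numel(t);
y = t.^2;   % cheap stand-in for the costly my_fun(t,param)
end
Each of the 2*N integral calls evaluates its integrand at many quadrature points, so the same t values are passed to my_fun over and over.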

Answers (2)

Steven Lord on 2 Jun 2023
Assuming that param doesn't change, consider using the memoize function to create an object you can use in your integrand function. It caches inputs and their corresponding outputs, so if you call the function multiple times with the same inputs it uses the cached results.
memSin = memoize(@sin);
results = zeros(1, 5);
for k = 1:5
results(k) = integral(@(x) memSin(x).^k, 0, pi);
memSin.stats.Cache % Show cache usage
end
ans = struct with fields:
Inputs: {{1×1 cell}} Nargout: 1 Outputs: {{1×1 cell}} HitCount: 0 TotalHits: 0 TotalMisses: 1
ans = struct with fields:
Inputs: {{1×1 cell}} Nargout: 1 Outputs: {{1×1 cell}} HitCount: 1 TotalHits: 1 TotalMisses: 1
ans = struct with fields:
Inputs: {{1×1 cell}} Nargout: 1 Outputs: {{1×1 cell}} HitCount: 2 TotalHits: 2 TotalMisses: 1
ans = struct with fields:
Inputs: {{1×1 cell}} Nargout: 1 Outputs: {{1×1 cell}} HitCount: 3 TotalHits: 3 TotalMisses: 1
ans = struct with fields:
Inputs: {{1×1 cell}} Nargout: 1 Outputs: {{1×1 cell}} HitCount: 4 TotalHits: 4 TotalMisses: 1
results
results = 1×5
2.0000 1.5708 1.3333 1.1781 1.0667
Note that on every call after the first, TotalMisses (inputs that weren't already cached) didn't change, while HitCount (the number of times an already-cached input was reused) increased.
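As a hedged aside: if param does eventually change, the cached results become stale. The MemoizedFunction object has a clearCache method for exactly that, alongside the session-wide clearAllMemoizedCaches used in the comments below:
clearCache(memSin)       % drop the cached inputs/outputs for this handle only
clearAllMemoizedCaches   % or reset every memoization cache in the session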
  2 Comments
pluton schmidt on 2 Jun 2023
Moved: John D'Errico on 2 Jun 2023
Thank you! Interesting! I tried
mem_my_fun = memoize(@my_fun);
Output = zeros(2*N,1);
for p = 1 : N
Output(2*p-1,1) = integral(@(t) cos((2*p-1)*t).*mem_my_fun(t,param), 0, T);
Output(2*p,1) = integral(@(t) sin((2*p-1)*omega*t).*mem_my_fun(t,param), 0, T);
end
but it is slower, by about 50%. However, I am not sure about
mem_my_fun = memoize(@my_fun);
Is this the right way to define the memoized function when the input param does not change?
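One variant worth trying, as a sketch rather than a tested fix (it assumes my_fun, param, N, omega and T already exist in the workspace): bind param into an anonymous function before memoizing, so the cache is keyed only on the vector of quadrature points t rather than on the (t, param) pair:
mem_my_fun = memoize(@(t) my_fun(t, param));   % param is fixed at creation time
Output = zeros(2*N, 1);
for p = 1:N
    Output(2*p-1,1) = integral(@(t) cos((2*p-1)*t).*mem_my_fun(t), 0, T);
    Output(2*p,1)   = integral(@(t) sin((2*p-1)*omega*t).*mem_my_fun(t), 0, T);
end
Note that a cache hit still requires integral to request exactly the same vector of t values in both calls; if the adaptive subdivision differs between the cosine and sine integrands, the second call is a miss and the memoization overhead is pure cost.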
pluton schmidt on 2 Jun 2023
Moved: John D'Errico on 2 Jun 2023
With your example, the version without the memoize command is also faster by about 50%:
clearAllMemoizedCaches
memSin = memoize(@sin);
results = zeros(1, 5);
tic
for k = 1:5
results(k) = integral(@(x) memSin(x).^k, 0, pi);
%memSin.stats.Cache % Show cache usage
end
toc
clearAllMemoizedCaches
results = zeros(1, 5);
tic
for k = 1:5
results(k) = integral(@(x) sin(x).^k, 0, pi);
%memSin.stats.Cache % Show cache usage
end
toc
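Presumably this is because sin is so cheap that the cache lookup (hashing the input vector) costs more than simply recomputing. A hedged sketch with a made-up, deliberately expensive stand-in (slow_fun below returns the same values as sin) should show the opposite regime, with the memoized version ahead as long as integral reuses the same quadrature points:
clearAllMemoizedCaches
slow_fun = @(x) arrayfun(@(xi) sin(xi) + 0*sum(sin(xi*(1:1e5))), x);  % expensive, but equals sin(x)
memSlow = memoize(slow_fun);
results = zeros(1, 5);
tic
for k = 1:5
    results(k) = integral(@(x) memSlow(x).^k, 0, pi);    % cached after the first call
end
toc
results2 = zeros(1, 5);
tic
for k = 1:5
    results2(k) = integral(@(x) slow_fun(x).^k, 0, pi);  % re-evaluated every iteration
end
toc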



Torsten on 2 Jun 2023
Edited: Torsten on 2 Jun 2023
You will have to reorder the "Output" array, but I think the code should be faster.
my_fun = @(t,param) t.^2;
T = 2;
omega = 5;
N = 2;
param = 2;
fun = @(t) [cos((2*(1:N)-1).*t),sin((2*(1:N)-1).*omega.*t)].*my_fun(t,param);
Output = integral(fun,0,T,'ArrayValued',true)
Output = 1×4
0.1540 0.0749 0.5548 -0.0592
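The reordering mentioned above is a simple interleave. A hedged sketch, assuming the same N and the row vector Output returned by the array-valued call: the first N entries are the cosine integrals and the last N are the sine integrals, so they map back onto the original odd/even rows like this:
Out = zeros(2*N, 1);
Out(1:2:end) = Output(1:N);       % cosine integrals, p = 1..N
Out(2:2:end) = Output(N+1:2*N);   % sine integrals,  p = 1..N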
  3 Comments
Torsten on 2 Jun 2023
Edited: Torsten on 2 Jun 2023
"Very interesting indeed, both for the parallelization and the fact that my_fun does not have to be computed twice."
Twice? It only has to be computed once instead of 2*N times. If you don't see a gain in computation time, your function may not be as time-consuming as you think it is.
pluton schmidt on 6 Jun 2023
Yes, sorry, not twice but 2N times. OK regarding your comment on the computation time. Thanks!
