Fast Elementwise Matrix-Multiplications

26 views (last 30 days)
thengineer
thengineer on 11 Jun 2019
Commented: Jan on 12 Jun 2019
Hello,
I'm trying to optimize some code for speed, and it currently has a few bottlenecks in lines where a lot of elementwise multiplications of multidimensional arrays are performed.
A simplified example:
M1=rand(1e2,2e2,1e4);
M2=rand(1e2,2e2,1e4);
M3=rand(1e2,2e2,1e4);
% .. and more
M = M1.*M2 + M1.*M3 + M2.*M3; % ... actually more multiplications
% example lines:
% detJdr = dxdrr.*dyds.*dzdt - dxdrr.*dydt.*dzds - dxdrs.*dydr.*dzdt + dxdrs.*dydt.*dzdr ...
%        + dxdrt.*dydr.*dzds - dxdrt.*dyds.*dzdr - dydrr.*dxds.*dzdt + dydrr.*dxdt.*dzds ...
%        + dydrs.*dxdr.*dzdt - dydrs.*dxdt.*dzdr - dydrt.*dxdr.*dzds + dydrt.*dxds.*dzdr ...
%        + dzdrr.*dxds.*dydt - dzdrr.*dxdt.*dyds - dzdrs.*dxdr.*dydt + dzdrs.*dxdt.*dydr ...
%        + dzdrt.*dxdr.*dyds - dzdrt.*dxds.*dydr;
% drdxJdt = dydst.*dzdt - dydtt.*dzds - dzdst.*dydt + dzdtt.*dyds;
The matrix sizes in the example are typical of the actual code.
I've already tried MATLAB Coder to auto-generate MEX files, but the resulting code was much longer and a little slower.
Can anyone comment on whether my code would benefit from writing these routines in C/Fortran, where I could use v?Mul for fast elementwise multiplications? Or does MATLAB already use these routines for the .* operation?
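For reference, here is a rough check (an illustrative sketch, not my production code) of whether .* at these sizes is already limited by memory bandwidth rather than by the multiplications themselves:
M1 = rand(1e2,2e2,1e4);
M2 = rand(1e2,2e2,1e4);
tic; M = M1.*M2; t = toc;
bytesMoved = 3*numel(M)*8;   % read M1, read M2, write M (8 bytes per double)
fprintf('effective bandwidth: %.1f GB/s\n', bytesMoved/t/1e9)
If this number is close to the machine's RAM bandwidth, a hand-coded C/Fortran kernel (e.g. MKL v?Mul) cannot be much faster.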
  2 Comments
Jan
Jan on 11 Jun 2019
It would be useful if you mentioned what exactly "actually more multiplications" is. It is easier to optimize code when it is known exactly which code is meant.
I guess that specific C code can be faster. The current example looks like there is a pattern in the matrices to be multiplied, perhaps the sum of all ordered pairs of inputs. If you define this pattern, it can be exploited.
thengineer
thengineer on 11 Jun 2019
@Jan: Thanks for looking into this. I've updated my question with some typical lines. As you can see, the pattern is covered by the minimal example above. I am actually looking for a fast implementation of the times() operator rather than a C-coded version of my whole function; that would be more flexible.


Accepted Answer

Jan
Jan on 11 Jun 2019
Edited: Jan on 11 Jun 2019
Of course times is implemented efficiently already, and most likely it does use the MKL, but this is not documented, and reverse engineering of MATLAB is prohibited by the license conditions. There is no magically faster version of times for users who need more speed.
If the matrices appear multiple times in the list of multiplications, this can be exploited to gain more speed. Unfortunately, this example does not shed light on the underlying pattern:
dxdrr.*dyds.*dzdt-dxdrr.*dydt.*dzds-dxdrs.*dydr.*dzdt+ ...
dxdrs.*dydt.*dzdr+dxdrt.*dydr.*dzds-dxdrt.*dyds.*dzdr-...
dydrr.*dxds.*dzdt+dydrr.*dxdt.*dzds+dydrs.*dxdr.*dzdt-...
dydrs.*dxdt.*dzdr-dydrt.*dxdr.*dzds+dydrt.*dxds.*dzdr+...
dzdrr.*dxds.*dydt-dzdrr.*dxdt.*dyds-dzdrs.*dxdr.*dydt+...
dzdrs.*dxdt.*dydr+dzdrt.*dxdr.*dyds-dzdrt.*dxds.*dydr
If you know which pattern is applied here, please be so kind as to explain it. That would be more efficient than letting me guess what the pattern is. If the pattern is known, you can apply arithmetic simplifications to reduce the number of multiplications, e.g.:
dxdrr .* dyds .* dzdt - dxdrr .* dydt .* dzds
% ==>
dxdrr .* (dyds .* dzdt - dydt .* dzds)
% 3 instead of 4 multiplications!
Try it:
M1=rand(1e2,2e2,1e4);
M2=rand(1e2,2e2,1e4);
M3=rand(1e2,2e2,1e4);
tic;
for k = 1:100
y2 = M1 .* M2 + M2 .* M3 + M1 .* M3;   % direct form: 3 elementwise multiplications
end
toc
tic;
for k = 1:100
y1 = M1 .* (M2 + M3) + M2 .* M3;       % factored form: 2 elementwise multiplications
end
toc
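If the full expression really is the sum over all unordered pairs of inputs (this is only a guess based on the minimal example), the pattern can be exploited even further with the identity: the sum of Mi.*Mj over all pairs i<j equals (S.^2 - sum of Mi.^2)/2, where S is the sum of all inputs. A sketch, reusing M1, M2, M3 from above:
Ms = {M1, M2, M3};           % works for any number of inputs
S = zeros(size(Ms{1}));
Q = zeros(size(Ms{1}));
for i = 1:numel(Ms)
S = S + Ms{i};               % running sum of the inputs
Q = Q + Ms{i}.^2;            % running sum of their squares
end
yPairs = (S.^2 - Q) / 2;     % equals M1.*M2 + M1.*M3 + M2.*M3 here
For only three inputs the factored form above is already cheaper, but this version needs a number of elementwise products that grows linearly with the number of inputs instead of quadratically.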
  2 Comments
thengineer
thengineer on 12 Jun 2019
Well, then the good news is that the code is already as fast as it can be :) (except for potential algebraic optimizations, of course).
Just out of curiosity: do you think a complete implementation of my routine, which has almost 200 lines of such multiplications, would give a performance gain?
Jan
Jan on 12 Jun 2019
@thengineer: A "complete implementation"? A performance gain compared to what?
Exploiting the underlying pattern to reduce the number of arithmetic operations is the first step in optimizing code. By reordering the terms you can save 33% of the multiplications in my example. It is hard to beat that by using highly optimized libraries. First reduce the work, then try to do the remaining work more efficiently.
The next step is to use efficient memory access: RAM is accessed in blocks of 64 bytes, which are transferred to the CPU cache. Therefore neighboring elements are faster to process than elements far away from each other.
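As a rough illustration (smaller arrays than above, so it runs quickly), processing contiguous pages M(:,:,k) is typically clearly faster than processing strided slices M(k,:,:), because MATLAB stores arrays in column-major order:
A = rand(1e2, 2e2, 1e3);
B = rand(1e2, 2e2, 1e3);
C = zeros(size(A));
tic
for k = 1:size(A,3)   % pages are contiguous in memory
C(:,:,k) = A(:,:,k) .* B(:,:,k);
end
toc
tic
for i = 1:size(A,1)   % slices along the first dimension are strided
C(i,:,:) = A(i,:,:) .* B(i,:,:);
end
toc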
If the array sizes exceed the available RAM, splitting the work into blocks is useful to avoid using the slow disk as virtual memory.
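A sketch of such a blocked loop over the third dimension (the block size is only a guess and has to be tuned; in a real out-of-core setting the input blocks would be read from disk, e.g. via matfile, instead of being generated):
nPages = 1e4;
blockSize = 500;                      % tune to the available RAM
M = zeros(1e2, 2e2, nPages);          % result
for p = 1:blockSize:nPages
idx = p:min(p+blockSize-1, nPages);
B1 = rand(1e2, 2e2, numel(idx));      % stands in for loading a block of M1
B2 = rand(1e2, 2e2, numel(idx));      % ... a block of M2
B3 = rand(1e2, 2e2, numel(idx));      % ... a block of M3
M(:,:,idx) = B1 .* (B2 + B3) + B2 .* B3;
end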
Code optimization is a serious job. Speculation helps with the motivation to try it, but it remains a trial-and-error procedure.


More Answers (1)

James Tursa
James Tursa on 11 Jun 2019
Edited: James Tursa on 11 Jun 2019
The element-wise times operation in MATLAB is already multi-threaded. You are not going to beat it by writing your own low-level code.
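A quick way to see the multi-threading at work (a sketch; the gain may be modest because the operation is largely memory-bound at these sizes):
A = rand(1e2, 2e2, 1e4);
B = rand(1e2, 2e2, 1e4);
nOld = maxNumCompThreads(1);   % restrict MATLAB to one computational thread
tic; C = A .* B; t1 = toc;
maxNumCompThreads(nOld);       % restore the previous thread count
tic; C = A .* B; tN = toc;
fprintf('1 thread: %.3f s, %d threads: %.3f s\n', t1, nOld, tN)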

Version

R2018b
