Arrayfun/gpuArray CUDA kernel needs to be able to remember previous steps
Background
- The problem can be separated into a large number of independent sub-problems.
- All sub-problems share the same matrix parameters.
- Each sub-problem needs to remember which indices it has visited so far.
- The goal is to process the sub-problems in parallel on the GPU.
Array indexing and memory allocation are not supported in this context (see the sketch of the disallowed pattern below). Is this functionality achievable?
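For concreteness, here is a hypothetical per-sub-problem worker of the kind the question seems to describe; the names visitAndSum, visited and nSteps are illustrative, not from the original post. The two marked lines are the operations that gpuArray/arrayfun does not support inside the kernel function:

function y = visitAndSum(k, nSteps)
    % Hypothetical worker for one sub-problem, shown only to illustrate
    % the limitation described above.
    visited = zeros(1, nSteps);      % array allocation: not supported inside GPU arrayfun
    idx = k;
    y   = 0;
    for step = 1:nSteps
        if ~any(visited == idx)      % look back at which indices were already visited
            y = y + idx;
        end
        visited(step) = idx;         % indexed assignment: not supported inside GPU arrayfun
        idx = mod(idx * 3, 7) + 1;   % move to the next index of this walk
    end
end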
Answers (1)
Joss Knight
on 29 Mar 2024
This is a bit too vague to answer. Without indexing, how can each sub-problem retrieve its subset of the data? If you just mean that indexed assignment is not allowed, then sure, you could perhaps write an arrayfun that solves some independent problem for a subset of an array, as long as all the operations are scalar and the output is scalar. Not if the sub-problems are completely different algorithms, though.
Anyway, sorry, but not enough information to help.
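If the per-sub-problem state can be reduced to a fixed number of scalars, the pattern described in this answer might look roughly like the sketch below. This is a minimal sketch under that assumption; the names solveAllOnGPU, solveOne, A and nSteps are illustrative and not from the original thread. It relies on gpuArray/arrayfun allowing read-only indexed access to variables captured from the parent workspace when the kernel is written as a nested function:

function out = solveAllOnGPU(A, nSteps)
    % A is the shared parameter matrix; each column defines one sub-problem.
    A     = gpuArray(A);
    nRows = size(A, 1);
    nCols = size(A, 2);
    ids   = gpuArray.colon(1, nCols);   % one element per sub-problem
    out   = arrayfun(@solveOne, ids);   % scalar in, scalar out per thread

    function y = solveOne(k)
        % Per-thread "memory" lives in a fixed number of scalar variables,
        % because array allocation and indexed assignment are not allowed
        % inside functions run by gpuArray/arrayfun.
        current = k;
        y = 0;
        for step = 1:min(nSteps, nRows)
            y = y + A(step, current);   % read-only indexing into the shared
                                        % matrix works from a nested function
            % Remember only the scalar state needed to take the next step.
            current = mod(current + step - 1, nCols) + 1;
        end
    end
end

A call such as out = solveAllOnGPU(rand(100, 1e5), 50) would then run 1e5 independent walks in parallel, each returning one scalar; if a sub-problem genuinely needs to store a growing list of visited indices, this pattern does not apply.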