
Arrayfun/gpuArray CUDA kernel needs to be able to remember previous steps

Anders on 25 Mar 2024
Answered: Joss Knight on 29 Mar 2024
Background
  1. The problem can be separated into a large number of independent sub-problems.
  2. All sub-problems share the same matrix parameters.
  3. Each sub-problem needs to remember the indices it has visited so far.
  4. The goal is to process the sub-problems in parallel on the GPU.
Array indexing and memory allocation are not supported in this context. Is it possible to achieve this?

Answers (1)

Joss Knight on 29 Mar 2024
This is a bit too vague to answer. Without indexing, how can each sub-problem retrieve its subset of the data? If you just mean that indexed assignment is not allowed, then sure, you could perhaps write an arrayfun call that solves some independent problem for a subset of an array, as long as all the operations are scalar and the output is scalar. Not if the sub-problems are completely different algorithms, though.
Anyway, sorry, but there is not enough information here to help.
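
For illustration, a minimal sketch of the kind of arrayfun kernel described above, assuming each sub-problem can be expressed as a fixed number of scalar steps over a shared parameter vector. The names runSubproblems, walk, params and starts, the vector sizes, and the index-update rule are invented for the example; the point is that the "memory" of where each sub-problem currently is lives in a scalar variable, the shared data is only read (by indexing into a variable captured from the enclosing function, which GPU arrayfun supports for nested functions in recent releases), and no array is allocated or assigned into inside the kernel.

function results = runSubproblems()
    % Shared parameters: read-only inside the kernel function below.
    params = gpuArray.rand(1, 64);
    % One starting value per independent sub-problem; one GPU thread each.
    starts = gpuArray(1:1000);

    % arrayfun compiles the nested function into a GPU kernel and applies it
    % element-wise to starts, returning one scalar result per element.
    results = arrayfun(@walk, starts);

    function r = walk(s)
        % The position this sub-problem has reached is "remembered" in the
        % scalar idx; no array allocation or indexed assignment is needed.
        idx = mod(s - 1, 64) + 1;   % map the start value into 1..numel(params)
        r = 0;
        for step = 1:10
            r = r + params(idx);                % read-only indexing into shared data
            idx = mod(idx + step - 1, 64) + 1;  % next index, kept inside 1..64
        end
    end
end

Calling gather(runSubproblems()) brings the per-sub-problem results back to the host. If each sub-problem had to store the full list of indices it visits, rather than a scalar summary, writable per-thread storage would be needed, which GPU arrayfun does not provide; that would point towards a hand-written kernel (for example via parallel.gpu.CUDAKernel) instead.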

Version

R2023b
