faster lean bilinear imresize / improved gpuArray/imresize
Hi,
I'm currently processing lots of images in a convolutional neural network, and the imresize.m function is currently the major bottleneck (a quick search shows a few other people complaining about imresize as well). Digging into the code, roughly 60% of the function's runtime is overhead from argument checking and extra function calls. So I made a leaner version, but this requires access to the private imresizemex function. Would it be possible in a future release to, for example:
- create a lean imresize_bilinear function (as attached here; a rough sketch of the idea follows below)?
- move imresizemex out of the private directory so it can be called directly? (I work on several different servers, often with different MATLAB versions, so copying imresizemex around does not work well for me.)
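For reference, a rough sketch of the idea (this is not the attached imresize_bilinear, which calls the private imresizemex directly; it only approximates the same argument-stripping approach with the documented interp2, and it skips the antialiasing that imresize applies when downscaling):

function out = imresize_bilinear_sketch(im, outSize)
% Minimal bilinear resize without argument checking (illustration only).
[inRows, inCols, nChan] = size(im);
rowScale = inRows / outSize(1);
colScale = inCols / outSize(2);
% Map output pixel centres back to input coordinates (imresize convention),
% clamping to the image borders (edge replication).
rq = min(max(((1:outSize(1)) - 0.5) * rowScale + 0.5, 1), inRows);
cq = min(max(((1:outSize(2)) - 0.5) * colScale + 0.5, 1), inCols);
[Cq, Rq] = meshgrid(cq, rq);
out = zeros(outSize(1), outSize(2), nChan, 'like', im);
for k = 1:nChan
    out(:,:,k) = interp2(double(im(:,:,k)), Cq, Rq, 'linear');
end
end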
Related:
- Are you working on making gpuArray/imresize accept the out = imresize(im, [numRows numCols]) syntax?
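A possible stop-gap until then (sketch only; the gather/gpuArray round trip pays for host-device transfers, which defeats much of the point of staying on the GPU):

imGpu  = gpuArray(rand(480, 640, 'single'));          % example input on the GPU
target = [240 320];                                   % desired [numRows numCols]
outCpu = imresize(gather(imGpu), target, 'bilinear'); % resize on the CPU
outGpu = gpuArray(outCpu);                            % push the result back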
Example code is attached. Output:
>> testImresize
original bilinear resize: 2.364755
lean bilinear resize: 0.922745
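For orientation, a timing comparison of that shape might look roughly like the following (image size, iteration count and the exact calls are placeholders here, not the attached testImresize script):

im     = rand(512, 512, 'single');   % assumed test image
target = [256 256];
nIter  = 1000;
tic
for i = 1:nIter
    a = imresize(im, target, 'bilinear');   % stock imresize
end
fprintf('original bilinear resize: %f\n', toc);
tic
for i = 1:nIter
    b = imresize_bilinear(im, target);      % lean version (attached)
end
fprintf('lean bilinear resize: %f\n', toc);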
Thanks, Jasper