creating an image convolution code

76 views (last 30 days)
Blob on 5 Nov 2022
Edited: DGM on 13 Nov 2022
So I have coded this image convolution script, but I get this error:
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles
on the line imF = A0.*double(H). I am stuck; can someone please help?
H = [1 0 1;0 1 0;1 0 1];
for i = 1:x-1
    for j = 1:y-1
        A0 = A(i:i+1, j:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)

Accepted Answer

DGM on 5 Nov 2022
Edited: DGM on 5 Nov 2022
Here's a start.
% this image is class 'uint8'
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);
H = [1 0 1;0 1 0;1 0 1];
% uncomment to sum-normalize the filter if the goal is to find the local mean
%H = H/sum(H(:));
% preallocate output
S = zeros(y,x);
% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
imshow(S)
Why is it blown out? That's because the filter kernel is not sum-normalized, so the brightness of the image is increased in proportion to the sum of H. If you do want the sum, then you're set. So long as we stay in class 'double', the supramaximal image content is still there, but it can't be rendered as anything brighter than white. If we cast back to an integer class, that information will be lost.
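If you just want to see what's in S without changing the math, you can rescale it for display. A minimal sketch, assuming S is the class-'double' result from the loop above:
% let imshow() scale the display to the data range
imshow(S,[])
% or rescale the data itself into [0 1] first
imshow(mat2gray(S))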
If you want an averaging filter instead, normalizing the kernel is cheaper than dividing the result.
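To make that concrete, here is the difference; normalizing the 3x3 kernel costs nine divisions up front instead of one division per output pixel:
% normalize the kernel once, before the loop
H = H/sum(H(:));
% ... run the loop as above ...
% versus dividing the accumulated result afterwards
% S = S/sum(H(:));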
Why are the edges black? Because they aren't processed. There are various ways of handling the edges. One common way is to simply pad the edges.
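For example, here is a minimal sketch of padding by one pixel with padarray() from the Image Processing Toolbox; replicate padding is just one choice (zero or symmetric padding also work), and A, H, x, y are assumed to be set up as above:
% pad the image by 1 px on each side so every output pixel can be computed
Ap = padarray(A,[1 1],'replicate');
S = zeros(y,x);
for i = 2:y+1 % indices into the padded image
    for j = 2:x+1
        A0 = Ap(i-1:i+1, j-1:j+1);
        S(i-1,j-1) = sum(sum(A0.*H));
    end
end
imshow(S)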
There are numerous examples of 2D filters. The following answer is one and links to several others.
  8 Comments
Blob on 13 Nov 2022
Edited: Blob on 13 Nov 2022
I am trying to convolve the image 'Lena' with the filter Hy = [0 -1 0;0 0 0;0 1 0];
The difference is in the eyes, not the edges.
DGM on 13 Nov 2022
Edited: DGM on 13 Nov 2022
This may be what you're talking about.
A = imread('cameraman.tif');
% for the math to work, you need it to be floating-point class
A = im2double(A);
[y x] = size(A);
H = [0 -1 0; 0 0 0; 0 1 0];
% preallocate output
S = zeros(y,x);
% image geometry is [y x], not [x y]
% treating edges by avoidance requires indexing to be offset
% A0 needs to be the same size as H
for i = 2:y-1
    for j = 2:x-1
        A0 = A(i-1:i+1, j-1:j+1);
        imF = A0.*H;
        S(i,j) = sum(sum(imF));
    end
end
% use conv2()
Sc2 = conv2(A,H,'same');
% use imfilter()
Sif = imfilter(A,H);
% compare the results
% since these are all in the range [-1 1]
% rescale for viewing
outpict = [S; Sc2; Sif];
outpict = (1+outpict)/2;
imshow(outpict)
Unless you've changed something, the prior example behaves generally like imfilter() (except at the edges). Note that the example with conv2() gives the opposite slope: in the conv2() result, transitions from dark to light come out with a negative slope.
In order to get the same behavior out of conv2(), rotate the filter by 180 degrees.
H = rot90(H,2); % correlation vs convolution
Note that imfilter() supports both, but defaults to correlation.
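For example, using the documented 'conv' option of imfilter():
% imfilter() defaults to correlation, but can be told to convolve
Sif_corr = imfilter(A,H);        % correlation (default)
Sif_conv = imfilter(A,H,'conv'); % convolution; should match conv2(A,H,'same') since both zero-pad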
There are other differences in behavior between the three that may influence how the results are displayed by image()/imshow(), but knowing if that's the case would require an example of how exactly you're creating the two images.


More Answers (0)
