How can I correct inhomogeneous intensity in an image?

17 views (last 30 days)
Rahul shetty on 21 Oct 2016
Commented: Image Analyst on 31 Aug 2021
Dear all,
I am new to MATLAB and need some help from you. The image below has an inhomogeneous intensity distribution: the intensity is highest at the top and decreases roughly linearly towards the bottom.
  • My idea was to take a small patch from the top corner of the image and one from the bottom corner, loop through those pixels, measure the intensity difference, and raise the darker pixels to the maximum intensity value (a rough sketch of this idea follows the list).
  • For example, if the intensity drops in steps of 5 past the center of the image, measure that difference and correct it by adding the difference back.
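In code, what I have in mind is roughly something like this (only a sketch; the file name, the patch sizes, and the assumption that the fall-off is linear in the row index are all placeholders):
I = im2double(imread('blob.bmp'));
topMean    = mean2(I(1:20, 1:100));         % mean intensity of a small patch near the top
bottomMean = mean2(I(end-19:end, 1:100));   % mean intensity of a small patch near the bottom
nRows = size(I, 1);
drop  = linspace(0, topMean - bottomMean, nRows)';   % per-row deficit, assuming a linear fall-off
Ic    = I + repmat(drop, 1, size(I, 2));             % add the deficit back, row by row
imshow(Ic)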
Thank you in advance
The image:

Accepted Answer

Thorsten on 21 Oct 2016
Edited: Thorsten on 21 Oct 2016
% Read the image and work in double precision
I = im2double(imread('blob.bmp'));
% Estimate the vertical intensity profile from the mean of the first 100 columns
y = mean(I(:, 1:100), 2);
sigma = 30; % chosen by visual inspection
G = fspecial('gaussian', 3*sigma+1, sigma);
% Smooth the profile to suppress noise and local structure
yb = imfilter(y, G, 'replicate');
% Add back, row by row, the amount the smoothed profile has dropped below its value at the top
Ic = bsxfun(@plus, I, yb(1) - yb);
subplot(1,2,1), imshow(I)
subplot(1,2,2), imshow(Ic)
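For example, to judge whether the sigma chosen by visual inspection smooths the profile enough, one can plot the raw column mean y against the smoothed profile yb from the code above:
figure
plot(y, 'b'), hold on
plot(yb, 'r', 'LineWidth', 2), hold off
legend('raw column mean', 'smoothed profile')
xlabel('row'), ylabel('mean intensity')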
  11 Comments
Qing Wang on 31 Aug 2021
Hi Thorsten,
What is the principle of your method?
Thanks,
Qiang
Image Analyst on 31 Aug 2021
@Qing Wang, click the link "Show 10 older comments" and read them all. It's discussed there.


More Answers (1)

Image Analyst on 21 Oct 2016
The BEST way to do it, and I do it all the time, is to take a "blank shot" and then divide your image by it.
In other words, take a picture of that white stuff (assuming it's truly flat and uniform, like a sheet of paper), without the yellow stuff. This will give you the pattern of exposure due to the combination of lens shading and lighting non-uniformity. So let's say some pixel in the corner had 90% as much light reaching the sensor as at the middle. Well, you'd want to divide by 90% to bring the value up to what it's supposed to be, wouldn't you? (Note, radiographs and fluorescence images might also need a subtraction.)
OK, now you need to get rid of small defects in the image like scratches, specks, video noise, etc. So you can either blur the image or fit it to a 2-D polynomial to get the perfect, noise-free background (I do the latter). Now you normalize your noise-free image so that it goes between 0 and 1. It will be 1 at the brightest point, and the minimum, in the corners, will be whatever it is, like 0.6 or 0.8 or whatever, depending on how severe your shading is.
The severity of lens shading varies greatly with manufacturers. We've tried stock lenses from manufacturers like Nikon and Canon and they show severe shading - they're junk. You'll get much flatter fields from specialty (non-mass-market) lens makers like Schneider (my favorite) and Linos. But their lenses cost twice as much. Obviously it's better to start with a better image than a crummy image if you're going to try to fix it.
OK, now you just divide your actual image by that normalized background image to get the flat-field image:
correctedImage = double(testImage) ./ backgroundImage;
There are ways to improve upon that and be even more accurate, but I don't want to deliver my whole class on it here. They're more complicated and I think this may be enough for you.
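For concreteness, a minimal sketch of that blank-shot workflow might look like this (the file names and the blur width are placeholders, and a heavy blur is used here in place of the 2-D polynomial fit):
testImage  = im2double(imread('sample.bmp'));   % image to be corrected (placeholder name)
blankImage = im2double(imread('blank.bmp'));    % blank shot of the uniform background only
% Remove specks, scratches, and video noise from the blank shot with a heavy blur
backgroundImage = imgaussfilt(blankImage, 20);
% Normalize so the brightest point is 1; the corners end up at 0.6-0.9 or whatever
backgroundImage = backgroundImage / max(backgroundImage(:));
% Divide to flatten the field
correctedImage = testImage ./ backgroundImage;
imshowpair(testImage, correctedImage, 'montage')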
If you can't get a blank shot, there are ways to estimate the background with that yellow spot still in there. You just have to remove it as best you can and fit the background to what's left. A simple way to do that is attached in the demo, but I'd actually do it a somewhat different way.
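A rough sketch of that no-blank-shot idea, assuming the object comes out darker than the white background in grayscale (this is only an illustration, not the attached demo, and the mask and smoothing settings are guesses that would need tuning):
I = im2double(imread('blob.bmp'));
% Mask the object (assumes it is darker than the white background after thresholding)
objectMask = ~imbinarize(I);
objectMask = imdilate(objectMask, strel('disk', 15));   % grow the mask so no object pixels leak in
% Fill the masked region from the surrounding background, then smooth heavily
backgroundImage = regionfill(I, objectMask);
backgroundImage = imgaussfilt(backgroundImage, 30);
backgroundImage = backgroundImage / max(backgroundImage(:));  % normalize so the peak is 1
correctedImage = I ./ backgroundImage;
imshowpair(I, correctedImage, 'montage')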
