Contrast enhancement techniques in HSV or LAB

My first question is whether it is in general better to apply contrast enhancement techniques like imadjust() and histeq() in HSV (on V) or in LAB (on L).
Besides this, I have a question about the italic part of the definition of imadjust(). Can someone explain this to me with a visualization or something like that?
  • imadjust increases the contrast of the image by mapping the values of the input intensity image to new values such that, by default, 1% of the data is saturated at low and high intensities of the input data.
Besides imadjust(), histeq(), and adapthisteq(), are there other contrast enhancement techniques I could try? imadjust() seems to have the best effect; however, I would expect histeq() to have the best effect, because it ensures a uniform histogram instead of the peaks that are still visible with imadjust().

Accepted Answer

Image Analyst
Image Analyst on 3 Dec 2021

0 votes

I think doing it in any of those color spaces will produce approximately similar results. Note that increasing contrast is almost never necessary prior to doing image analysis. It may make the image easier to see, but something like binarizing the image will not be affected. It will still choose a threshold, just a different one than if you had not done contrast adjustment, but the binary image will be the same.
Histogram equalization is something beginners learn because it seems like a neat trick, but it is rarely needed. I can say that in over 40 years of image analysis I've never needed to do histogram equalization. Now adaptive histogram equalization, like adapthisteq() where the contrast adjustment varies as you move around the image, can be useful to flatten the background and allow for global thresholding. But global histogram equalization is almost never necessary or desired. As you found out, it often does a non-linear histogram stretch that usually gives an unnatural appearance as compared to the linear stretch that imadjust() gives.
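A quick way to see that invariance for yourself (a sketch using cameraman.tif, a demo image shipped with the Image Processing Toolbox; only the few pixels that imadjust() clips at the histogram tails can differ):

```matlab
% Otsu picks a different threshold after a linear stretch, but the
% resulting binary image is (almost) unchanged -- only pixels that
% imadjust saturates at the extremes can flip.
gray = imread('cameraman.tif');            % demo image shipped with the toolbox
bw1  = imbinarize(gray, graythresh(gray)); % threshold on the original
adj  = imadjust(gray);                     % linear contrast stretch
bw2  = imbinarize(adj, graythresh(adj));   % threshold on the stretched image
nnz(bw1 ~= bw2)                            % typically a very small count
```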

14 comments

S.
S. on 9 Dec 2021
Thanks for your response (and sorry for the late reply, I was off for a few days). If I understand correctly, image contrast enhancement just makes visualization easier for the naked eye (as I also initially thought), and doesn't influence separating the fruit (the interesting part) from the background (leaves, sky, trunk, etc.) based on the hue value? I have added an example image with, from left to right: original image, imadjust(), histeq(), adapthisteq() (see attachment).
My reasoning was that imadjust() is the best, because it makes the leaves and the trunk darker and therefore (to the naked eye) separates the fruit more from the background. I thought this was the proper reasoning for choosing this contrast enhancement technique (and that's the goal of changing the brightness).
Two other questions:
Does ''1% of the data is saturated at low and high intensities of the input data'' mean that a part of the intensity values is spread out to the ends of the intensity range, i.e. 0-0.1 and 0.9-1?
Do you know any other contrast enhancement algorithms I could maybe try out?
Image Analyst
Image Analyst on 9 Dec 2021
Edited: Image Analyst on 9 Dec 2021
Correct that global histogram equalization is probably never needed and it just gives a crappy looking image, as you can see.
imadjust() gives a much more normal and pleasing appearance because it's a linear stretch. The percentage is how many pixels in the image will be saturated at each end -- at 0 and 255 -- not the intensity range like you said. So basically it finds the tiny tip of the tail of the histogram and slides those outward to the ends (0 and 255). Where this tiny tip's gray levels are could be anywhere (it depends on the histogram). It's not at 0.1 and 0.9 (or 25.5 and 229.5 for images with gray levels in the 0-255 range). For example, if your histogram goes from around 100 to 150, then it would send the pixels at 100 down to 0 and the pixels at 150 up to 255, with everything else in between linearly scaled.
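As a sketch, stretchlim() makes that saturation explicit (pout.tif is a low-contrast demo image; the exact limits it returns depend on the image's histogram):

```matlab
% stretchlim returns the input range that clips 1% of the pixels at each
% tail; imadjust then maps that range linearly onto the full output range.
gray = imread('pout.tif');                 % low-contrast demo image
lims = stretchlim(gray, [0.01 0.99]);      % same tolerance imadjust uses by default
adj  = imadjust(gray, lims, [0 1]);        % tails go to 0 and 255, rest scales linearly
```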
You could use a bilinear stretch, which Photoshop calls "Shadows and Highlights". This is good for images that have two predominant ranges - dark regions and light regions. First it identifies the dark regions by some algorithm like thresholding. Then it computes the histogram in those regions. Same for the bright regions. Then it identifies, from the bright end of the dark histogram and the dark end of the bright histogram, a gray level to divide the two regions. Then it does a linear stretch between 0 and that gray level for the dark regions and that gray level and 255 for the bright regions. You can probably find the exact algorithm online somewhere.
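A minimal sketch of such a bilinear stretch (the Otsu split via graythresh() and the per-region min/max used here are assumptions; the published algorithm may pick the dividing gray level differently):

```matlab
g = im2double(imread('coins.png'));        % demo image with dark and bright regions
T = graythresh(g);                         % assumed way to pick the dividing level
dark   = g <= T;
bright = g >  T;
out = g;
out(dark)   = (g(dark) - min(g(dark))) ./ (T - min(g(dark))) .* T;    % [minDark, T] -> [0, T]
out(bright) = T + (g(bright) - T) ./ (max(g(bright)) - T) .* (1 - T); % [T, maxBright] -> [T, 1]
```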
An example of someone doing it in a published paper is here:
They increase the contrast but they would get the same result (binary image), with just a different threshold, if they didn't do histogram equalization.
S.
S. on 14 Dec 2021
Thanks for the answer again! I have some last questions.
So 1% of the total number of pixels in the original image is saturated at each end -- at 0 and 1 (in the value/intensity histogram) -- if I understand you correctly (please correct me if I'm wrong)? And the rest of the pixels are linearly stretched across the range [0,1]. Why exactly does the number of pixels increase after applying the imadjust() command? Is it because of the linear stretching of the values between the ends after saturation at 0 and 1? So if you choose 0.1% instead of 1%, will the number of pixels be lower? Besides this, you speak about gray values between 0 and 255, while my image is a color image. What exactly is the reason for that?
Moreover, about the adapthisteq() technique, you mention that it is useful for flattening the background. What exactly does this mean? Is it to ensure that the background of the image has the same intensity/brightness value? Furthermore, the graph is non-linear because not all intensity bins are stretched by the same factor, right?
Lastly, I will try the above mentioned contrast enhancement technique you proposed. Thanks for sharing the algorithm.
The first part is correct. But the number of pixels does not increase. Not sure where you got that idea. The number of pixels is the number of pixels in the image and nothing you do to change their intensity values will change that.
adapthisteq() scans the image and for each tile makes the min 0 and the max 255, so all tiles have the same range. So a tile that was dark overall (say in the range 0-40), and one that was bright (say in the range 210-255) will both end up looking similar with mins of 0 and maxes of 255 and (perhaps) means close to mid-gray, like 128. So all tiles will look similar and the image will look "flattened".
Histogram equalization is a non-linear process since different bins move by different ratios. Some gray levels are not a constant factor times their original gray level.
If you do contrast stretching on a color image, each channel independently, you could get color artifacts, especially around edges. For this reason contrast stretching on a color image is done by converting to a color space with a separate intensity channel, like with rgb2hsv(), then adjusting the contrast of only the intensity (V) channel, then converting back to RGB color space.
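For example, the usual HSV round trip looks like this (peppers.png is a demo color image):

```matlab
rgb = imread('peppers.png');
hsv = rgb2hsv(rgb);                        % H, S, V channels, each in [0, 1]
hsv(:,:,3) = imadjust(hsv(:,:,3));         % stretch only the V (intensity) channel
rgbAdj = hsv2rgb(hsv);                     % back to RGB; hue and saturation untouched
```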
I got this idea because, if you look at the histograms in ''contrast.png'' I sent before, the y-axis gives (I assume) the number of occurrences/pixels with a specific intensity value. As you can see, for the original RGB image the area under the histogram is much lower compared to the histogram after applying imadjust(). How is this possible then?
About adapthisteq(), clear! With ''tile'' you refer to a specific (square) region in the image, right?
About the last point, indeed you change the contrast of only the V channel, but this ranges between 0 and 1 (i.e. 0% and 100%) and not between 0 and 255.
I have also read the article about bilinear stretch.
I have two questions about it:
  1. L is the maximum gray level intensity in the image; however, as the range for the intensity is between 0 and 1, how do I choose this value? Otherwise you namely get 0 or a negative value.
  2. CDF(i) is the CDF of the i'th gray level? Here I have a similar question: I have intensity values and not gray levels, so how should I do it?
Below is the code I have so far:
[rows, columns, numberOfColorChannels] = size(RGB_Image_Re); % assumes a single (gray or V) channel
L = 256; % number of gray levels
hnorm = imhist(RGB_Image_Re, L) ./ numel(RGB_Image_Re);
CDF = cumsum(hnorm); % CDF(g+1) = fraction of pixels <= gray level g
for x = 1:rows
    for y = 1:columns
        g = double(RGB_Image_Re(x,y)); % gray level 0..L-1
        RGB_Image_Re(x,y) = ((CDF(g+1) - min(CDF)) / (1 - min(CDF))) * (L - 1);
    end
end
After you call imadjust() some of the pixels will be put into the lowest bin and highest bin. So if you display that, those bins may be higher than they were before and thus the middle of the histogram may appear lower. This would happen only if at least one of the outside bins were already taller than any of the middle bins.
If you integrate the number of pixels (counts) in between the first and last bin, there may be less than before IF some of the pixels needed to be shifted into one of the two outer bins.
For your equation, I'm not sure where the 1 is coming from or why it's there. Maybe L is the bin number rather than the value the bin represents. Also not sure why the denominator is M*N-1. Maybe it's not including the pixel being transformed or something.
With adapthisteq() I think it divides the image up into fixed tiles, like a grid of 8 by 8 tiles. Then it equalizes each tile. It interpolates values across the tile boundaries to avoid noticeable and distracting lines/discontinuities at the tile boundaries. Here is the official description:
CLAHE operates on small regions in the image, called tiles, rather than the entire image. adapthisteq calculates the contrast transform function for each tile individually. Each tile's contrast is enhanced, so that the histogram of the output region approximately matches the histogram specified by the 'Distribution' value. The neighboring tiles are then combined using bilinear interpolation to eliminate artificially induced boundaries.
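In code, the tiling and the bilinear interpolation are handled internally; you only choose the grid and the clip limit (the values below are the documented defaults):

```matlab
hsv = rgb2hsv(imread('peppers.png'));      % work on the V channel of a color image
hsv(:,:,3) = adapthisteq(hsv(:,:,3), ...
    'NumTiles', [8 8], ...                 % 8x8 grid of tiles (the default)
    'ClipLimit', 0.01);                    % limits per-tile contrast enhancement (the default)
flattened = hsv2rgb(hsv);
```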
S.
S. on 16 Dec 2021
Edited: S. on 16 Dec 2021
However, if you compare the area under the histogram before (i.e. for the original RGB image) and after imadjust(), these are not the same. You would expect them to be the same, but they are 3072 and 9216 respectively. You can also see that the y-axis of the imadjust() histogram has much higher values. So that's very weird, as the number of pixels can't change, so I'm still wondering why that is.
The equation is from the article you sent. I have also found a similar one, but there it is - cdf_min instead of -1:
Do you know the answer to the questions about this formula (about cdf(v) and L) that I posted before?
The number of pixels does not change. Why do you think that just because the heights of the bins change that the number of total pixels in the image changes? It does not. If you add up the counts in all the bins you'll find out that they sum to the number of pixels in the image.
S.
S. on 17 Dec 2021
I found my mistake. I accidentally plotted the wrong histogram, namely the one of the total image (so of the hue, saturation, and value channels together). Now when I plot only the value channel, it makes sense and the total equals the number of pixels in the image!
What remains is the bilinear stretch algorithm you sent me, as it is for grayscale images and I am using the value channel of an HSV image. Do you have an answer to the questions I sent you about it before? That would be nice.
There is a lot in this lengthy discussion. Could you recap the unanswered questions in a numbered list?
S.
S. on 17 Dec 2021
Firstly, linear stretching means linear stretching of the intensity values, right? So if the highest pixel value in an image is 0.8 (the upper edge), it becomes 1 due to linear stretching (a constant factor of 1.25, applied to all intensity values, so a pixel value of 0.6 is scaled to 0.75, etc.), as now the whole possible range between 0 and 1 is used.
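The arithmetic in that first paragraph can be checked directly (mat2gray() does the same full-range stretch):

```matlab
v  = [0 0.4 0.6 0.8];                      % image values; minimum 0, maximum 0.8
vs = (v - min(v)) ./ (max(v) - min(v));    % linear stretch to [0, 1]
% vs is [0 0.5 0.75 1] -- i.e. every value scaled by 1/0.8 = 1.25
```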
Secondly, you proposed an algorithm, ''bilinear stretching'', which I could apply as another contrast enhancement technique. However, the article you sent me assumes a grayscale image (and its algorithm is different from the formula I found; see the second formula):
So the questions:
  1. Which formula is correct?
  2. L is the maximum gray level intensity in the image; however, as the intensity values (HSV color space) range between 0 and 1, how do I choose this value? Otherwise you namely get 0 or a negative value for O/P Img(x,y).
  3. CDF(i) is the CDF of the i'th gray level? Here I have a similar question: I have intensity values and not gray levels, so how should I translate it?
@S. first paragraph is correct.
Second paragraph uses the CDF to construct a redistributed histogram, essentially a simple (and bad) histogram equalization. Don't do it. Histogram equalized images almost always look harsh and bad and are usually not necessary.
The reason for the -1 when using the CDF is that bin 1 holds the count for gray level 0, bin 2 holds the counts for gray levels 0-1, bin 3 holds the counts for gray levels 0-2, etc. So the bin number is always one more than the gray level, since indexes start at 1 while gray levels start at 0.
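A small sketch of that off-by-one bookkeeping in MATLAB:

```matlab
g = imread('cameraman.tif');
h = imhist(g, 256);                        % h(1) is the count for gray level 0
CDF = cumsum(h) ./ numel(g);               % CDF(k) covers gray levels 0..k-1
p = CDF(double(g(1,1)) + 1);               % fraction of pixels <= the gray level at (1,1)
```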
S.
S. on 21 Dec 2021
So you propose not to try the contrast enhancement technique you proposed?
Sure, you can try it, just to drive home that fact that it's probably neither necessary nor does it produce a pleasing image as compared to a linear stretch. Once you try it you might understand better. In over 40 years of image processing I don't think I've ever run across a situation where histogram equalization was really required or even helpful, not to mention the fact that it makes the images look unnatural.


More Answers (1)

yanqi liu
yanqi liu on 4 Dec 2021

0 votes

Yes sir, maybe use imtool() to interactively adjust the relevant parameters and analyze the visualization effect.
