MATLAB Answers


How to read each pixel size contribution in an image?

Asked by Nimit Jain on 28 Jun 2016
Latest activity: commented on by Image Analyst on 1 Jul 2016
I tried to create logic in which I can read a 256x256 image pixel by pixel and print how much each pixel contributes to the size of the image.
Please help

  8 Comments

Muhammad Usman Saleem comments, in response to my indication that he commented "Need explanation":
instead of giving personal attacks, please let's cooperate together for mankind....
Muhammad, most people cannot read what you comment when you "flag" a message, or do not know how to read the content of the flag. Flagging a message is a request for response from the moderators of the forum, but is not generally readable.
When I (acting as moderator) go through the recently flagged messages, if I see something that should have been public instead of readable only to the moderators, then I post what was written, indicating who wrote it. Thus when I posted that you wrote "Need explanation", it was not a personal attack of any kind: I just took what you had written when you flagged IA's message "The contribution of one pixel [etc]" and made it public.
I'd help if I could, but honestly, I'm not sure what is wanted, so I don't know how to answer.


2 Answers

Answer by Muhammad Usman Saleem on 28 Jun 2016

This function reads your whole image into a matrix, with one element per pixel per color channel:
myimage = imread('yourimage.jpg');

  2 Comments

I have written some code and I know these basic things. What I am trying to get is the RGB concentration, or the size taken by each pixel.
Please explain further. What is an "RGB concentration"? How does it take up size? You gave the example of 65000 pixels and 500 kb before, but that does not seem to have a relationship to RGB?



Answer by Walter Roberson on 28 Jun 2016

If you use imfinfo() on the image file, the returned structure might have a field named DigitalCamera. That will be the EXIF information, if it is present at all. If it is, then it might have a field indicating the distance at which the camera's autofocus judged the target object to be, perhaps named 'SubjectDistance'; it might have a field indicating the camera aperture; and it might have information about the focal length, possibly named 'FocalLength'. With those in hand, perhaps together with information from the camera manufacturer about the sensor size, you can use formulae similar to those shown at http://photo.stackexchange.com/questions/12434/how-do-i-calculate-the-distance-of-an-object-in-a-photo to calculate the real-world height of the target object... if that is what you meant by "size".
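A minimal sketch of pulling those EXIF fields out, assuming the file name is hypothetical and remembering that field names vary by camera, so every field is checked before it is used:

```matlab
% Read file metadata; EXIF data, when present, sits in DigitalCamera.
info = imfinfo('photo.jpg');          % 'photo.jpg' is a hypothetical name
if isfield(info, 'DigitalCamera')
    exif = info.DigitalCamera;
    if isfield(exif, 'SubjectDistance')
        fprintf('Subject distance: %g\n', exif.SubjectDistance);
    end
    if isfield(exif, 'FocalLength')
        fprintf('Focal length: %g\n', exif.FocalLength);
    end
else
    disp('No EXIF (DigitalCamera) information in this file.');
end
```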

  2 Comments

Every pixel contributes equal size to the stored image unless you are using a compression algorithm to store the image. If you are using a compression algorithm, then the amount of storage required to encode any particular pixel depends a lot on the compression algorithm and on the surrounding pixels.
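To see the uncompressed case concretely, here is a sketch (file name hypothetical) comparing the raw pixel data size against the size on disk, which also gives the average storage cost per pixel:

```matlab
% Compare raw (uncompressed) pixel data with the compressed file on disk.
fname = 'yourimage.jpg';            % hypothetical file name
img   = imread(fname);
rawBytes = numel(img);              % for uint8 data: one byte per channel sample
f = dir(fname);
fprintf('Raw pixel data: %d bytes\n', rawBytes);
fprintf('File on disk:   %d bytes\n', f.bytes);
fprintf('Average cost per pixel: %.3f bytes\n', ...
        f.bytes / (size(img,1) * size(img,2)));
```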
With some algorithms and some pixel values, the "cost" of a given pixel might turn out to be 0. For example, if you are doing run-length encoding with a fixed 8 bits for the count, then any count from the minimum to the maximum takes the same space, so every pixel in a run past the first is "free" in one sense. But perhaps you would want to amortize the cost of the run-length header (typically one fixed byte to mark the header, then a variable byte to hold the count) over the number of bytes in the run -- that would be a design choice in your counting algorithm.
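A toy run-length encoder illustrates the point: every pixel in a run after the first adds nothing to the output. This is a sketch with a fixed 8-bit count (runs longer than 255 are split), operating on one made-up row of grayscale values:

```matlab
% Run-length encode one row: output is a list of [run length, pixel value] pairs.
row  = uint8([5 5 5 5 9 9 2 2 2 2 2 2]);
runs = [];                          % each row of 'runs' is [count, value]
i = 1;
while i <= numel(row)
    j = i;
    % Extend the run while the value repeats, capping the count at 255.
    while j < numel(row) && row(j+1) == row(i) && (j - i) < 254
        j = j + 1;
    end
    runs = [runs; j - i + 1, double(row(i))]; %#ok<AGROW>
    i = j + 1;
end
disp(runs)    % 12 pixels collapse to 3 [count, value] pairs
```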
The answer can be very complicated. For example, one compression method involves first doing run length encoding and then doing arithmetic encoding of what results. You might have used 3 bytes to represent a particular run, but those 3 bytes would form part of a pattern, and the arithmetic storage required would depend upon the probability of the pattern given what was encoded before it in the file.
Or it is common in some image compression methods to use "pulse" encoding together with zig-zag encoding, where a "pulse" is about the difference relative to the previous pixel, so pixels that are similar have low deltas. And then you might take that result and run-length encode it, and then you might arithmetic encode it. Or perhaps you will take the zig-zag and do DCT on it, and then you might choose to throw away some of the DCT coefficients to meet a lossy compression "quality" target.
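The "pulse" (delta) step can be sketched on its own: each pixel is stored as its difference from the previous one, so smooth regions produce small values that later stages (run-length or arithmetic coding) can exploit. The row of values here is made up:

```matlab
% Delta-encode a row: store the first value, then differences from the
% previous pixel. Similar neighbors give small deltas.
row    = uint8([100 101 101 103 103 102 200 200]);
deltas = [int16(row(1)), diff(int16(row))];   % int16 so deltas can be negative
disp(deltas)
% Reconstruction: a cumulative sum recovers the original row exactly.
recon = uint8(cumsum(deltas));
isequal(recon, row)
```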
Remember too that if you are dealing with some kind of image file format, the file format probably has overhead bytes and header bytes; it might, for example, carry EXIF information, or the compression software might decide to autograph the file with the software name and version. Those bytes are not productive for encoding the image itself, but they count towards the space used on disk.
Are you trying to calculate the relative proportions (or total number of pixels) of each color? If so then are you looking for exact matches, or is there a way of deciding if two colors are close enough that they should be counted as a single color?
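If exact matches are what you are after, a minimal sketch (file name hypothetical, and assuming an RGB image) that counts the pixels of each distinct color:

```matlab
% Count how many pixels have each exact RGB color.
img    = imread('yourimage.jpg');           % hypothetical file; assumed RGB
pixels = reshape(img, [], 3);               % one row per pixel
[colors, ~, idx] = unique(pixels, 'rows');  % distinct colors, exact match only
counts = accumarray(idx, 1);                % pixel count for each color
fprintf('%d distinct colors\n', size(colors, 1));
% For "close enough" matching, quantize first, e.g. to 32 levels per channel:
%   pixels = uint8(floor(double(pixels) / 8) * 8);
```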
