Replace the image background

Marceline on 7 Dec 2022
Commented: Image Analyst on 9 Dec 2022
  3 Comments
DGM on 8 Dec 2022
Edited: DGM on 8 Dec 2022
I don't mind helping. That's what we're here for.
That said, most people are going to expect a show of effort in cases of homework. Showing your work isn't just a token obligation to labor. It's a means of communicating your thought process and the scope of your current understanding. Dumping the assignment in whole without explanation tends to rub people the wrong way.
If you want something more specific than what's given, you're still free to add more description about where you are in the task, or what specifically you want help with. You're free to attach copies of your code in progress and the source images (if they exist).
As I mentioned in the answer, this isn't an easy composition to do with basic techniques. It's hard for me to know if the other parts of the assignment suggest a specific approach to the task. It's also hard to know how perfect the results need to be. You have the opportunity to clear that up as well.
For what it's worth, I gravitate toward these sorts of image composition/adjustment questions, and I tend to be open about my criticism of the assignments and presentation. I expect that you understand that I know you weren't the one who decided to put a midget giraffe in Tennessee.


Accepted Answer

DGM on 8 Dec 2022
Edited: DGM on 9 Dec 2022
I don't know what you expect. Handing off your assignment in whole with zero effort shown on your part shouldn't be expected to be a successful strategy for persuading others to help. To make sure that nobody does help, you've neglected to actually provide the images required.
Providing tiny, heavily-compressed JPG thumbnails is not equivalent. The methods used to process clean images are going to be fairly impractical on the copies that are provided, and vice-versa. Anything we could demonstrate won't be useful to you in practice.
Since there is now less value in solving your homework than there is in providing an example to future readers, I'll continue without the need to make concessions. These methods probably won't be particularly ideal with the clean images. I'm also using methods and purpose-built tools that are probably contrary to those intended by the instructor.
The commonly-recommended method will be to simply do color-based thresholding (you can use the Color Thresholder App as Image Analyst suggests). Pick a color model, select your thresholds, and export. Now you have a hard-edged binary mask. The problem is that a binary mask cannot cleanly select a soft-edged object. It can only overselect (leaving blue jagged fringes) or underselect (leaving the object undersized and jagged).
This is made worse by two things. First, the aggressive JPG compression ruins the spatial resolution in the color channels that are being used for mask generation. Second, the object has hair, so its edges are covered in effectively semitransparent regions with visually-important detail. If you crop the hair off a person's head, it's conspicuously noticeable, because humans don't look like that. Neither do giraffes. Content consideration is important.
% read images
BG = imread('girbg.png'); % RGB
FG = imread('girfg.png'); % RGB
% create a mask using HSV
FGhsv = rgb2hsv(FG);
hlim = [0.476 0.668];
slim = [0.758 1];
vlim = [0.364 1];
mask = (FGhsv(:,:,1) >= hlim(1)) & (FGhsv(:,:,1) <= hlim(2)) ...
& (FGhsv(:,:,2) >= slim(1)) & (FGhsv(:,:,2) <= slim(2)) ...
& (FGhsv(:,:,3) >= vlim(1)) & (FGhsv(:,:,3) <= vlim(2));
% composite images
outpict = replacepixels(FG,BG,~mask);
% display
imshow(outpict)
You can try to tighten up the mask by adjusting the thresholds, but you're not going to get a binary mask that includes the giraffe and excludes the sky, because the image is a linear combination of both.
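For completeness, the usual morphological touch-ups look something like this (a sketch only; it assumes mask selects the sky as in the code above, and it mostly just trades one artifact for another):
```matlab
% grow the sky selection slightly so it eats into the blue fringe
mask = imdilate(mask, strel('disk', 1));
% keep only the largest non-sky region, discarding stray speckles
mask = ~bwareafilt(~mask, 1);
% recomposite with the adjusted (still hard-edged) mask
outpict = replacepixels(FG, BG, ~mask);
imshow(outpict)
```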
Alternatively, you might try something that's not a binary process. Since the object colors are generally separated from the sky, and the sky is fairly uniform, maybe a simple color distance map would suffice as an alpha map?
% read images
BG = imread('girbg.png'); % RGB
FG = imread('girfg.png'); % RGB
% construct mask
targetblue = [57.27 4.975 -57.96];
D = rgb2lab(FG); % convert
D = sqrt(sum((D-ctflop(targetblue)).^2,3)); % color distance in LAB
FGA = imadjust(simnorm(D),[0.15 0.5]); % normalize, adjust
% composite images
outpict = replacepixels(FG,BG,FGA);
% display
imshow(outpict)
That's a lot better, but note that we're now both underselecting and overselecting. There are still blue fringes, but there are now also soft holes in the FG due to diffuse reflections of the sky on the object. In fact, there's a blue cast over the entire giraffe. Even if the mask were perfect, it would still look out of place, since the cast is wrong. Note also that the light is coming from the wrong direction.
Let's just throw a bunch of stuff at it. It's not like it really matters.
% read images
BG = imread('girbg.png'); % RGB
FG = imread('girfg.png'); % RGB
% find color distance
targetblue = [57.27 4.975 -57.96];
D = rgb2lab(FG); % convert
D = sqrt(sum((D-ctflop(targetblue)).^2,3)); % color distance in LAB
% construct FG alpha from the distance map
FGAsoft = imadjust(simnorm(D),[0.15 0.5]); % normalize, adjust
FGAblur = imgaussfilt(FGAsoft,2); % just feather the edges
FGA = FGAsoft.*FGAblur;
% fill interior minima in FGA
FGAfill = imadjust(simnorm(D),[0.2 0.35]); % normalize, adjust
FGAfill = bwareafilt(FGAfill>0.01,1);
FGAfill = mat2gray(bwdist(~FGAfill),[0 4]);
FGA = max(FGA,FGAfill);
% adjust FG color cast
Tr = [0 0 0; 0 0 0; 0 0.05 0.05];
Tg = [0 0 0; 0 0 0; 0 0.05 0.1];
Tb = [0 0 0; 0 0 0; 0 -0.1 -0.2];
Tsat = [0.5 0.7 1 1 1];
FG = tonergb(FG,Tr,Tg,Tb,Tsat,'preserve');
% flip BG so that the lighting looks slightly less ridiculous
BG = fliplr(BG);
% add highlight overlay to BG for emphasis
gradpict = lingrad(size(BG),[0.5 0; 0.25 1],[0 0 0; 68 102 104],'linear');
BG = imblend(gradpict,BG,1,'colordodge',0.8);
% composite images
outpict = replacepixels(FG,BG,FGA);
% display
imshow(outpict)
Ehh. That's better, but it's still pretty questionable. Ultimately, color adjustment can't fix the fact that the giraffe must be standing in a hole in order for this perspective to make any sense. It's conspicuous things like that which guarantee that the viewer will notice any imperfections in other parts of the composition.
All of these examples use tools from MIMT (replacepixels(), lingrad(), tonergb(), imblend(), simnorm(), ctflop()). I'm not going to do it the hard way for this. Otherwise, there are many examples of doing basic image composition on the forum. Here are a couple, and there are many more for the simplified case of a binary mask.
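If MIMT isn't available, the final compositing step can be approximated in base MATLAB. This is a rough stand-in for what replacepixels() does here, assuming FG and BG are same-size uint8 RGB images and FGA is a single-channel double alpha map in [0 1]:
```matlab
FGd = im2double(FG);
BGd = im2double(BG);
% per-pixel linear blend; implicit expansion applies FGA to all three channels
outpict = im2uint8(FGd.*FGA + BGd.*(1 - FGA));
imshow(outpict)
```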

More Answers (1)

Image Analyst on 7 Dec 2022
Edited: Image Analyst on 8 Dec 2022
This looks like a homework problem. If you have any questions, ask your instructor, or read the link below to get started:
Obviously we can't give you the full solution because you're not allowed to turn in our code as your own.
Here are the basic steps of one way to do it, which might be the easiest for you to understand, but by no means the most compact or only way to do it.
  1. Use the Color Thresholder on the Apps tab of the tool ribbon to find the blue background. Export the masking function.
  2. Get the mask and scan it pixel by pixel with a double for loop. If the pixel is black (meaning it's in the giraffe), take that pixel and paste it into the underlying target forest image.
[EDIT] See my generic attached demo. Adapt as needed, if you want.
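The loop in step 2 could be sketched like this (hypothetical variable names; it assumes FG and BG are the same size and mask is true where the blue background was found):
```matlab
[rows, cols, ~] = size(FG);
outpict = BG; % start from the forest image
for r = 1 : rows
    for c = 1 : cols
        if ~mask(r, c) % black in the mask means a giraffe pixel
            outpict(r, c, :) = FG(r, c, :); % copy it over the background
        end
    end
end
imshow(outpict)
```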
  4 Comments
Image Analyst on 9 Dec 2022
I made the separate foreground and background images, from the original MATLAB demo image, in Adobe Photoshop. It was quicker to do that way than to write code to do it in MATLAB.

