Image Processing Made Easy
Learn how MATLAB makes it easy to get started with image processing.
Image processing is the foundation for building vision-based systems with cameras. You might have a new idea for using your camera in an engineering or scientific application but have no idea where to start. While image processing can seem like a black art, there are a few key workflows to learn that will get you started.
In this webinar, using real-world examples, we will demonstrate how MATLAB makes it easy to:
- Pre-process images using enhancement and filtering techniques
- Separate objects of interest using segmentation techniques
- Test your algorithm on large sets of images
Previous knowledge of MATLAB is not required.
- MATLAB help documentation and examples help with getting started quickly
- Interactive apps and live scripts enable exploration of different techniques
- Extensive library of built-in image processing algorithms
- Automated testing of algorithm on large datasets
About the Presenter
Sandeep Hiremath is a product manager in the image processing and computer vision area at MathWorks. During his 13 years at MathWorks, he has been in various customer-facing roles supporting MATLAB and Simulink users. In his previous role at MathWorks, he was a technical evangelist supporting academic users of MATLAB. He holds an M.S. degree in Mechanical Engineering from Clemson University, USA, and a B.E. in Electrical Engineering from University of Madras, India.
Recorded: 21 Oct 2020
Welcome to the "Image Processing Made Easy" webinar. My name is Sandeep, and I'm on the product marketing team here at MathWorks. Image processing is a very popular field that we see a lot of our customers use extensively for designing vision-based systems across a variety of application spaces like automated driving, robotics, machine vision, and medical imaging, just to name a few. Image processing is primarily used in these applications as a pre-processing step. But often, it requires more than just image processing techniques to design a complete vision-based solution.
So what is image processing? Image processing, or digital image processing, involves performing certain operations on an image in order to get an enhanced image or to extract some useful information from it before performing further analysis on it. Like in this simple example here, where the main goal is to detect stamps in the video and perhaps count them, image processing has been used for noise removal, edge detection, and filtering operations to pre-process the frames in the video.
These processed frames can then be used for detecting the stamps using computer vision and perhaps deep learning techniques. In this video, we will see some vision-based examples where we will focus primarily on the image processing part.
We see many engineers and scientists who would like to use image processing in their own projects. However, for someone new to this field, there are numerous challenges with getting started. Some of these typical challenges are-- how do I get started? Or how do I learn about different techniques? Or how do I explore options to solve my problem? Or maybe how do I test my ideas on large data sets of images?
In this webinar, our goal is to help you get started with applying image processing in your projects through some real-world examples and to show you how getting started with image processing can be made easy and quick using MATLAB. So, in the next 30 minutes or so, we are going to see two vision-related problems in real-world examples, and how image processing techniques will be used in each one of them.
First, you will see how to improve visibility of objects in underwater images, and second, to identify colored cones or objects in a robot's view. So let's get started with solving some problems.
Looking for objects underwater to help with navigation is a common task for submarines. The image on the right is of a small-scale autonomous submarine designed by some engineering students to perform autonomous navigation tasks. One of the tasks is to locate the submerged gate underwater, like in the image on the right, and autonomously navigate through it.
However, there is a problem here: as you can see, the gate in this underwater image is not clearly visible. Typically, the visibility of objects underwater is affected by various factors like water movement, debris, or poor lighting conditions. So the primary objective here is to adjust the image to compensate for poor visibility before using more advanced techniques to analyze and detect the gate in this image.
The solution for this problem is image enhancement. Image enhancement involves applying image processing techniques to adjust image properties, thus preparing the images for further analysis. For example, correcting low-light conditions in images to improve visibility of objects, or increasing sharpness to improve visibility of certain features like edges in an image.
Let's now bring up MATLAB and see how to go about solving this problem. So this is my MATLAB environment. I'm going to start off with my MATLAB live script here, where the first thing that I'm going to do is to read in the image of the underwater scene with the submerged gate that we saw earlier.
So to do this, I'm going to use the imread function from the Image Processing Toolbox, which takes the name of the image file as input and returns the image data in a variable. Next, I would like to view this image in MATLAB. To do this, I'm using the imshow function, again from the Image Processing Toolbox.
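As a rough sketch of the two calls just described (the file name here is a placeholder, not the one used in the webinar):

```matlab
% Read the underwater image into a workspace variable.
% 'underwater_gate.png' is a placeholder file name for this sketch.
I = imread('underwater_gate.png');

% Display the image in a figure window.
imshow(I)
title('Original underwater image')
```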
So let's run this function and look at the output. So here is my output embedded in the live script, which makes it really easy to follow as we execute this live script section by section. I can also dock out this figure window to take a closer look at it.
Now, my goal here is to detect the location of the submerged gate structure in the image. For this, I have a very simple custom function that I've written called find gates. Before we look at this function, let's go ahead and run this next section where I pass the image as input to the find gates function.
This should return the underwater image annotated with the location of the gate structure. The output result should look something like this. However, as you can see, it fails to find the gates.
Now, let's look quickly at how the find gates function has been implemented. Without going too much into the details here, the input to this function is an image. First, it converts the image to a binary image. Then, it refines the binary image to remove any negligible pixel regions. And finally, using region property analysis to detect the remaining large segments, it should give us the vertical structure of the gates. So the output should be two annotations on the vertical structure of the gate.
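The webinar doesn't show the function's code in full, but based on the steps just described (binarize, drop small regions, region property analysis), a minimal sketch might look like this. The threshold values, the tall-blob heuristic, and the use of insertShape (which requires Computer Vision Toolbox) are illustrative assumptions, not the presenter's implementation:

```matlab
function annotated = findGates(I)
% FINDGATES Sketch of a gate detector: binarize, clean up, analyze regions.
% Sizes and the aspect-ratio heuristic below are illustrative assumptions.

    gray = rgb2gray(I);              % work on intensity values
    BW   = imbinarize(gray);         % convert to a binary image
    BW   = bwareaopen(BW, 200);      % remove negligible pixel regions

    % Region property analysis: keep tall, narrow blobs (gate posts).
    stats = regionprops(BW, 'BoundingBox');
    annotated = I;
    for k = 1:numel(stats)
        bb = stats(k).BoundingBox;   % [x y width height]
        if bb(4) > 3*bb(3)           % vertical structures only
            % insertShape is from Computer Vision Toolbox.
            annotated = insertShape(annotated, 'Rectangle', bb, ...
                'Color', 'yellow', 'LineWidth', 3);
        end
    end
end
```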
Now, given that the water appears to be murkier than we expected in the image, the find gates algorithm failed to work on this underwater image. This is a common scenario where such failures occur when we design algorithms like find gates even before knowing the actual conditions in the images that we are working with. However, instead of changing the algorithm to work with this image, we could use some simple enhancement techniques available in MATLAB to pre-process the image to see if that would help with the detection of the gate structure using the same find gates function before going ahead and changing the underlying algorithm.
OK. So we want to pre-process the image, but what pre-processing or enhancement techniques do we use here for the murky underwater image? Well, what I can do here is get some help from the MATLAB documentation. To do this, I can go to this Help search bar and search for Enhance Image. This should open up the MATLAB Help browser with the search results for this term.
The very first thing here is Image Processing Toolbox. And further down, I see a search result on low-light image enhancement. I can refine my search results to say I want to only see examples. So here is my refined search.
Now, in examples, let's look at the one that say Low-Light Image Enhancement. If you go through this example, you will see a few different techniques that have been used to enhance images with low-light conditions. MATLAB gives you the ability to open the script and explore it in an editor window, make modifications, and run it to see the outputs without having to implement from scratch.
Also here, I can go to the relevant help topic, which is Image Filtering and Enhancement and learn more about the extensive set of techniques that are available to pre-process your images using the Image Processing Toolbox. OK. So based on my quick research of these pages and some relevant examples, I was able to pick a few image enhancement algorithms that I would like to try with the underwater image to see if I can make it work with the find gates function.
So let's look at this. I have three techniques that I will show here one by one. First, based on the example that we just saw on low-light image enhancement, I'm using the dehazing algorithm. For this, I'm using imreducehaze function from the Image Processing Toolbox. And this should return an output image that has been processed to remove haziness in the image.
Typically, dehazing or haze reduction techniques help with adjusting an image for atmospheric haze, like images taken from a security camera in foggy conditions, to improve the overall visibility of objects in the image. So let's go ahead and try haze reduction to see if this would help remove some of the murkiness in the underwater image. So let's run the section and see the output next to the original image.
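With the default syntax, this step amounts to a single call; imshowpair is one convenient way to compare the result side by side with the original:

```matlab
% Dehaze the underwater image. imreducehaze also accepts an amount
% argument, e.g. imreducehaze(I, 0.8), to control how much haze is removed.
dehazed = imreducehaze(I);

% Show the original and processed images side by side for comparison.
imshowpair(I, dehazed, 'montage')
title('Original (left) vs. dehazed (right)')
```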
As you can see, there are some improvements to certain regions in the image. But is this good enough for our gate detection algorithm? Well, let's go ahead and test this image with the find gates function. So here in the section where I have find gates, I will pick dehazed image from the drop down menu. Once I pick that, the section is executed automatically. And here is the gate detection result for the dehazed image. So this technique did not really work as we expected.
Now, I have an option to go back to the imreducehaze function and tweak a few input parameters to see if that gives me a result that would work with the find gates function. I can find out more about these input parameters to imreducehaze by looking at the Help reference page for the function. But for now, in my case, based on the default syntax here, the resulting image hasn't been successful with the gate detection algorithm.
Well, let's move on to the second enhancement technique here, which is image sharpening. Image sharpening should help with sharpening the edges of the gate structure, which appears a little blurry in the image, mostly caused by water movement. For this, I'm going to use the imsharpen function from the Image Processing Toolbox.
Let's run the section and look at the sharpened image. It might be hard to notice in this video, but there is some improvement in the sharpness of the edges of the gate structure, but also across the entire image. Now, I want to test the sharpened image with the find gates function. So let's head to the section again and pick Sharpened Image from the dropdown menu, and the section executes automatically.
As you can see, find gates has failed again to detect the gate structure. So the image sharpening technique hasn't worked either. So far, I've tried two pre-processing techniques, and I have been able to quickly test them with my find gates function, and neither one of them has worked.
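For reference, the sharpening step could be sketched as follows; the Radius and Amount values are illustrative, not the webinar's settings:

```matlab
% Sharpen edges with unsharp masking. Radius and Amount are optional
% name-value arguments (the values here are illustrative).
sharpened = imsharpen(I, 'Radius', 2, 'Amount', 1.5);

imshowpair(I, sharpened, 'montage')
title('Original (left) vs. sharpened (right)')
```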
Next, I'm going to use contrast adjustment to pre-process the image. Adjusting the contrast of the pixels in the image helps make lighter pixels stand out in comparison to the darker pixels. And as you can see, since the gate structure has lighter pixels, this technique might help. So let's try it out.
Now, to apply contrast adjustment, there are numerous functions available in the Image Processing Toolbox. I've chosen the imadjust function here. imadjust works with both grayscale and color images. OK. So in this section here, I pass the original image as input, and this should return the contrast-adjusted image.
Let's run the section and see the output. Well, as you can see, it has thrown an error: the syntax of imadjust that we have here is only supported for grayscale images. For a color image, there are additional input arguments needed. So if you look at the help info for imadjust, you'll find that for color images, we need to provide the low and high range for each color channel: r, g, and b.
I can type in these numbers as parameters to the function. However, the live script provides us with interactive controls that we can use to tune the parameters and see the results update live. I can tweak these values until I'm happy with the final result.
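The per-channel syntax just mentioned looks like this; the limit values below are illustrative, not the ones tuned interactively in the webinar:

```matlab
% For a truecolor image, imadjust takes a 2-by-3 matrix: the first row
% holds the low input limits and the second row the high input limits
% for the R, G, and B channels. These limits are illustrative values.
lowHigh  = [0.1 0.2 0.0;    % low in:  R G B
            0.7 0.8 1.0];   % high in: R G B
adjusted = imadjust(I, lowHigh, []);   % [] = default output range [0 1]

imshowpair(I, adjusted, 'montage')
title('Original (left) vs. contrast adjusted (right)')
```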
Let's go ahead and see if this pre-processed image will work with the find gates function. So let's go back to the code section where we have find gates. And here I pick Contrast Adjusted Image in the dropdown menu, and this should go ahead and run the section.
And here is the output for the detection result. As you can see, it has successfully worked with this image, as both gate structures have been annotated. And this is the desired result from our gate detection algorithm.
So what we are seeing here is that we were able to explore and try different pre-processing techniques in MATLAB, without knowing a whole lot about these techniques, before using contrast adjustment to successfully pre-process the image and make it work with the find gates function. Now note that we had to do some amount of manual tweaking of the color levels in imadjust to get the desired result. And this was just for one image.
Now, imagine if we had a large sequence of, say, 500 to 1,000 images, or frames in a video, containing the gate structure in changing lighting and underwater conditions. Here, it would be very hard to expect the same pre-processing technique and settings to work successfully in detecting the gate structures across all these images. In this case, we would need to run and verify the pre-processing steps across all the images and then possibly go back and tune the pre-processing algorithm, and also the find gates function, to make them robust enough to work across all these images.
The Image Batch Processor app makes this whole workflow very quick and easy within MATLAB. Let's go ahead and look at this app and use it for automating the pre-processing of a sequence of images of the underwater scene. To open this app, let's go to the Apps tab and click to expand the dropdown list.
Here, look for the Image Batch Processor app under the Image Processing and Computer Vision section. Notice that there are many other apps available under this section. Apps are a great way to get started with exploring and trying different techniques or algorithms, especially if you're new to image processing or computer vision.
Let's open the Image Batch Processor app for now. Now, I would like to read the sequence of images of the underwater scene that I have and then apply the same contrast adjustment algorithm to pre-process all these images. To do that, I'm going to load my image set. You can see all the loaded images as thumbnails in the left panel here.
And here is a preview of the selected image. Next, I'm going to select the algorithm that I want to use on these images. I have my script containing the contrast adjustment part saved as a new function called preprocessImage.m. If you wish to learn more in detail about the steps to create this function, please click on the question mark button to refer to the documentation.
Next, I'm going to select Process All to run the function on all the images. Once it is done executing, you can view the results in the right panel here. You can expand and inspect them further. Also note that the app allows accelerating batch processing with parallel computing.
This is specifically useful when you want to accelerate working with a very large image data set. You can then export these results to MATLAB for testing with the find gates function. And this as well could be automated in a script. So it is important to note here that the Image Batch Processor app enables us to easily automate processing large sets of images and then use those results to go back and fine-tune our algorithm to make it work across all the images.
As we have seen, processing multiple images is easy and can be accelerated using the Image Batch Processor app. Please also note that MATLAB provides an imageDatastore object that makes it easy to access and manage large collections of images. Now, what we have seen in our demo were just a couple of simple enhancement techniques. But there are many other techniques available in MATLAB to use depending on your problem.
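Programmatically, the same batch idea can be sketched with an imageDatastore; the folder name and the imadjust limits are placeholders for this sketch:

```matlab
% Manage a folder of images with an imageDatastore, then apply the same
% pre-processing step to each one. 'underwaterFrames' is a placeholder.
imds = imageDatastore('underwaterFrames', 'FileExtensions', {'.jpg', '.png'});

results = cell(numel(imds.Files), 1);
i = 1;
while hasdata(imds)
    img = read(imds);                                      % next image
    results{i} = imadjust(img, [0.1 0.2 0.0; 0.7 0.8 1.0], []);
    i = i + 1;
end
```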
For example, deblurring, which is very useful when you're trying to remove blurring from your images, or image filtering techniques to remove noise in images. You can learn more about filtering and enhancement techniques by referring to the link in the description below.
That brings us to the end of this demo. In summary, we saw how easy it is to get started with MATLAB using the help documentation, which provides you with a very good set of ready-to-use examples and comprehensive function reference pages. We saw that MATLAB live scripts are interactive and make it easy to explore and iterate on different techniques and their parameters and to view intermediate results.
We also saw that MATLAB provides you with an Image Batch Processor app through the Image Processing Toolbox that lets you automate testing your algorithm on large sets of images. In the previous demo, we saw how to enhance images. Now let's look at the next demo on how to process images to extract useful information from them.
One of the primary goals of cameras on robots or UAVs is to detect objects in front of them, especially in robots performing tasks like pick and place, obstacle avoidance, or target detection. Here is a simple example of a pick-and-place robot in the image on the right, whose goal is to move to a specific colored cone, pick it up, and move it to a bin. On the left is an image of what the robot sees from its front-facing camera.
Now, this might seem an easy task to find the cones to the naked eye, but not necessarily for the robot. So why could this be a challenge? If you notice, there are other objects around the target objects, that is, the cones. Also, the position, orientation, and scale of the cones make it hard to find them. This is a common challenge in object detection problems, like when designing traffic monitoring systems, vehicle or pedestrian detectors for self-driving cars, or pick-and-place robots in warehouses.
So the main objective here is to separate or segment out the regions of interest from the rest of the clutter in the image before detecting their actual position in the scene. Image segmentation will help us solve this problem. Segmentation techniques enable extracting meaningful information from an image by assigning all the pixels in a region with similar properties the same value. This is called masking.
For example, a binary mask would involve assigning all the pixels in an image either a black or a white value. This way, we can separate out the rice grains in the foreground from the background. Now, you can analyze this binary image further to, for instance, count the number of grains in the image.
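This grain-counting idea resembles the rice.png sample image that ships with the Image Processing Toolbox; a sketch of that workflow, with illustrative structuring-element and area values, might look like:

```matlab
% Count grains in the rice.png sample image shipped with the toolbox.
I = imread('rice.png');

% Estimate and remove the uneven background before thresholding.
background = imopen(I, strel('disk', 15));
BW = imbinarize(I - background);   % binary mask: grains white, rest black
BW = bwareaopen(BW, 50);           % discard small specks

% Each connected white region is one grain.
cc = bwconncomp(BW);
fprintf('Number of grains: %d\n', cc.NumObjects);
```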
Now, image segmentation techniques are used in a variety of different applications: in optical character recognition, where sometimes the text in the image needs to be extracted before actually recognizing it; in visual inspection, where you segment the image based on intensity before finding the defective object; or in medical imaging, where you perform color-based segmentation on tissue images to extract the regions of interest for further analysis.
For our problem, you might wonder, how do we know which technique to use? Well, like in the first demo, by doing a simple search in MATLAB Help on segmentation and looking through some examples, I was able to quickly figure out that color thresholding would work well in our case. Color thresholding will enable separating out the individual colored cones and then using the results to detect the exact location of the cones in the image.
Now, let's go to MATLAB and see how to perform color thresholding. So how do I get started with color thresholding in MATLAB? For this, I'm going to first navigate to the Help browser. And here, I'll search color thresholding. And here are my search results for color thresholding.
The first result here says color thresholder. Let me go ahead and click on this. And here, I can see that there is a color thresholder app that I can actually use to perform color thresholding in MATLAB. I also see that there is an example that I can open up and learn more about how the app works.
Now, let me go back to MATLAB and open up Color Thresholder app. Like what we did before, I'll go to the Apps tab and scroll down to the Image Processing and Computer Vision section. And here, I can look for the Color Thresholder app, which is right here. So this is our Color Thresholder app.
Next, I'm going to import my image that I'm going to be working with. To do that, I'll click on Load Image and select Load Image from File. Here are my image options. I'm going to select the first one, which is cones1.jpeg.
So the app now loads the image and displays it along with different color spaces, with point clouds representing the image in each of these four color spaces. And these color spaces are RGB, HSV, YCbCr, and L*a*b*. Here, I can select the color space that provides the best color separation for my image. I can learn more about these different color spaces and how to work with them from the MATLAB help documentation.
For now, I'm going to go ahead and pick the HSV color space by clicking on the HSV button here. So this should open up a new tab called HSV and display the image. You can also see on the right here we have some interactive controls. These controls let you adjust the three values that represent this color space.
H is a pinwheel with all the hue values, which basically represent the different shades of color. S is the saturation value, which represents the darkness or lightness of that color. And V is the intensity value of the color in each pixel of the image.
You can also see that there is a 3D point cloud control at the bottom. Now, I will go ahead and interact with the three HSV controls on the top to adjust these values. As I change these values, you can see that the image on the left is updated live by filtering out the pixels based on the HSV values that I select. I could also use the point cloud control to perform similar filtering operations.
Let's go ahead and select all the hue colors that match yellow and shades of yellow. So what we see now based on our filtering operation on the left is a masked image which contains only the pixels that match the HSV value selected. I can move the pinwheel value around to select a different color.
Say, let's choose red and shades of red. Now you can see it actually filters out all the colors except for red and shades of red. I can then fine-tune my mask by adjusting the saturation and intensity values.
Next, you can also try color thresholding in a different color space by selecting new color space to start a new session. This way, you could explore thresholds in different color spaces and pick the most suitable result for further analysis. In MATLAB, I could have performed color thresholding programmatically as well. But as you can see, the color thresholder app makes this workflow much more interactive and easier to explore the different options.
We could get a decent result really quickly without knowing a whole lot about the color spaces and the values representing them. Now that I'm happy with my results, I can go ahead and export the masked image to MATLAB workspace. I can do that by going to the Export option and clicking on Export Images.
As you can see, I can also choose to export the binary mask and the original input RGB image. This way, I can use those results to further analyze and detect the cones in the image using a few other techniques. Now, what if I wanted to apply these steps to a large set of images? To do this, I can use, again, the Export option and click on Export Function. This way, I can export the color thresholder app session as a MATLAB function.
And now, I can use this function in the future to programmatically threshold more images with the same algorithm and the same settings. So let's go ahead and look at this function that got created. It's called createMask, which takes in an RGB input image and gives out a binary mask and a masked RGB image as the output, very similar to the Color Thresholder app.
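The exported function typically has a shape along these lines; the HSV threshold values below are example numbers, not the ones chosen in this session:

```matlab
function [BW, maskedRGBImage] = createMask(RGB)
% Sketch of a function exported by the Color Thresholder app.
% The HSV limits below are example values, not the actual session's.

    I = rgb2hsv(RGB);

    % Threshold each channel: hue (red wraps around 0), saturation, value.
    BW = (I(:,:,1) >= 0.95 | I(:,:,1) <= 0.05) & ...
         (I(:,:,2) >= 0.40) & ...
         (I(:,:,3) >= 0.30);

    % Zero out background pixels in all three color channels.
    maskedRGBImage = RGB;
    maskedRGBImage(repmat(~BW, [1 1 3])) = 0;
end
```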
Let's go ahead and save this function to our current folder with the name createMask. Now, I have a live script here that I've created that uses this function and a few other analysis and detection techniques to detect the cones in the image. So let's go ahead and take a look at the script.
In the first section here, I read a new input image, which is cones4.jpg. The second line here is the createMask function. Let's go ahead and run the section. So here is the output: three images. The first is the input image, then the masked RGB image, and then the binary image.
In the next section here, I'm going to use a segmentation technique called morphology to further refine the binary mask. In this case, I'm going to filter out smaller pixel regions in the image. And the way I do that is using the imopen function. And then I use the bwconncomp function to find all the connected components in the image.
Let's go ahead and run the section. And here is the output of the imopen function. As you can see, the binary mask has been further refined.
Finally, in this section, I'm going to use the regionprops command to find the position of these connected components, which are the cones in this image, and then annotate the original image with the information that I have from regionprops, which is the position of these connected components. So let's go ahead and run this section.
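Put together, the refine-and-annotate steps described above can be sketched as follows; the structuring-element size and the variable names are illustrative assumptions:

```matlab
% Refine the binary mask from createMask and locate the cones.
BW = imopen(BW, strel('disk', 5));       % remove small pixel regions

cc    = bwconncomp(BW);                  % find connected components
stats = regionprops(cc, 'BoundingBox');  % position of each component

% Annotate the original image with one rectangle per detected cone.
imshow(RGB); hold on
for k = 1:numel(stats)
    rectangle('Position', stats(k).BoundingBox, ...
              'EdgeColor', 'y', 'LineWidth', 2);
end
hold off
```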
And here is my final result, which is the original image annotated with the position of the red cones. So this way, I can perform this operation on multiple input images and use the information to go back and fine-tune my threshold values and the detection algorithm. Now, color thresholding is just one of the many segmentation techniques available that you could consider for solving your problem.
MATLAB actually provides you with a comprehensive list of many other segmentation and analysis techniques. There are also functions to perform object and region property analysis similar to what we did with detecting the cones using the regionprops command. You'll also find that there are numerous well-explained examples that show you how to use these different segmentation and analysis techniques based on real-world problems.
So we saw there are many other segmentation techniques available in MATLAB as ready-to-use functions. MATLAB also provides an app through the Image Processing Toolbox to make segmentation fast and interactive. You can explore different techniques on your image and refine your results until you get your desired results. You can export these results to MATLAB for further analysis. You can even choose to generate a MATLAB function for your session and use it to automate testing the same algorithm on a large set of images. This is especially helpful if you're applying segmentation to sequential frames in a video.
So in summary, in this demo, we have seen how the Color Thresholder app was very effective in quickly exploring the different color spaces and then tuning the values to get the desired segmentation results. We saw that MATLAB provides a variety of segmentation algorithms to use in our work, with solid documentation and examples to get started with applying them quickly. We also learned that the Image Segmenter app is available through the Image Processing Toolbox, which enables exploring and trying out some of the segmentation techniques interactively on our images.
So that brings us to the end of this talk. In summary, we have seen how image processing can be made easy to get started with using MATLAB. We specifically saw through the demos that MATLAB's comprehensive help documentation makes it easy and quick to get started with discovery and learning. The ready-to-use examples provide a quick starting point to try out different algorithms with your own data. Interactive apps and live scripts make it convenient and fast to explore and iterate on different techniques and their parameters before deciding which ones work best in your case.
Finally, testing an algorithm on a large data set of images or a sequence of images is so much easier in MATLAB. So where do you go from here? Well, here are some next steps. The Image Processing Toolbox can do more than what we covered in this video. To learn more about this toolbox, go to the product page on mathworks.com.
If you're ready to explore and evaluate the Image Processing Toolbox, get a trial license of the product today. You can also get a more detailed and hands-on experience with the toolbox by signing up for an image processing with MATLAB training course. This is also available as an online course.
If you're interested in solving computer vision problems specifically, then MATLAB's Computer Vision Toolbox can help you with that. Many of our customers are interested in using deep learning techniques in their vision-based applications. If you didn't know already, we have a Deep Learning Toolbox that can enable you to use deep learning in MATLAB. Please refer to the description below for links to these resources. Thank you for your attention.