Letter reconstruction and filling for OCR
Hi
I am trying to detect text on tires, which is black engraved text on a black background. I am using several pre-processing stages including edge detection and erosion/dilation. However, the characters appear with discontinuous/broken edges.
I am looking for a way to fill the gaps between the strokes of the letters and reconstruct the edges. A sample image is attached.
Looking forward to your feedback.
Best Regards
Wajahat
Answers (2)
Ghada Saleh
on 20 Jul 2015
Hi Wajahat,
I understand you want to fill the gaps in the text. One possible way to accomplish this is to dilate the image with a structuring element created by strel. You can find an example of doing this in http://www.mathworks.com/help/images/ref/imdilate.html#examples. In your case, dilating the text in the image should fill the gaps between the points and the white edges. You can try different structuring elements with strel and choose the one that best fits your case.
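A minimal sketch of that idea, assuming bw is a binary image of the broken letter strokes (the variable names and the disk size are only placeholders to experiment with):

  se     = strel('disk', 3);    % try different shapes and sizes of structuring element
  filled = imdilate(bw, se);    % dilation bridges small gaps between strokes
  % imclose(bw, se) dilates and then erodes, which closes gaps while keeping
  % the stroke width closer to the original.
  imshow(filled)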
I hope this helps,
Ghada
Image Analyst
on 21 Jul 2015
I could be wrong, but I don't think you need to do that. There is an easier approach if you just think outside the box. You don't want to turn what you got into perfect letters. No - what you need to do is create a new alphabet and recognize what you got, that is, decide which letter in your new alphabet best matches your unknown/test/mystery letter. So you don't need a perfect binary mask of the letter D, for example. If you can assume that your lighting is the same for all images (illuminated at a glancing angle from the lower right), then you can define a D as that gray-scale pattern. It doesn't matter what the pattern looks like; it only matters that you have defined it as a D. So whenever the program sees that same pattern of bright, dark, and gray pixels, it will say it's a D.
So you make up a library of all letters and numbers with those actual patterns and associate each one with the character that makes that pattern. For example, you cut out the bounding box of that shadow-cast D letter and call it "D.png". Do the same for all the other letters and numbers. OK, now you have your library.
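A rough sketch of that library-building step, assuming grayImage is a cropped gray-scale image of the tire text (the variable names, threshold, and file names are only placeholders):

  bw    = im2bw(grayImage, graythresh(grayImage));      % segment the letters
  bw    = bwareaopen(bw, 50);                           % drop small noise blobs
  stats = regionprops(bw, 'BoundingBox');
  for k = 1:numel(stats)
      letterImg = imcrop(grayImage, stats(k).BoundingBox);  % keep the gray-scale pattern
      imwrite(letterImg, sprintf('letter_%02d.png', k));    % rename by hand, e.g. 'D.png'
  end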
Next what you want to do is compute the Hu's moments for each letter's image. Then you isolate a blob that represents a letter by any reasonable technique and compute its Hu's moments. Then you compare that letter's Hu's moments to the Hu's moments of each of your library letters and see which letter in your library is the closest match to your unknown letter.
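A sketch of that matching step, assuming the templates sit in a hypothetical folder called letterLibrary with each file named after the character it shows; MATLAB has no built-in Hu-moment function, so the seven invariants are computed from the normalized central moments directly (save this function as huMoments.m):

  function phi = huMoments(img)
      % Hu's seven moment invariants of a gray-scale (or binary) image.
      img = double(img);
      [x, y] = meshgrid(1:size(img, 2), 1:size(img, 1));
      m00  = sum(img(:));
      xbar = sum(sum(x .* img)) / m00;
      ybar = sum(sum(y .* img)) / m00;
      % normalized central moment of order (p, q)
      eta  = @(p, q) sum(sum((x - xbar).^p .* (y - ybar).^q .* img)) / m00^(1 + (p + q)/2);
      n20 = eta(2,0); n02 = eta(0,2); n11 = eta(1,1);
      n30 = eta(3,0); n03 = eta(0,3); n21 = eta(2,1); n12 = eta(1,2);
      phi = zeros(7, 1);
      phi(1) = n20 + n02;
      phi(2) = (n20 - n02)^2 + 4*n11^2;
      phi(3) = (n30 - 3*n12)^2 + (3*n21 - n03)^2;
      phi(4) = (n30 + n12)^2 + (n21 + n03)^2;
      phi(5) = (n30 - 3*n12)*(n30 + n12)*((n30 + n12)^2 - 3*(n21 + n03)^2) + ...
               (3*n21 - n03)*(n21 + n03)*(3*(n30 + n12)^2 - (n21 + n03)^2);
      phi(6) = (n20 - n02)*((n30 + n12)^2 - (n21 + n03)^2) + ...
               4*n11*(n30 + n12)*(n21 + n03);
      phi(7) = (3*n21 - n03)*(n30 + n12)*((n30 + n12)^2 - 3*(n21 + n03)^2) - ...
               (n30 - 3*n12)*(n21 + n03)*(3*(n30 + n12)^2 - (n21 + n03)^2);
  end

Then the unknown letter can be matched to its nearest library template (file names are placeholders; the images are assumed to be gray-scale):

  unknownPhi = huMoments(imread('unknownLetter.png'));
  templates  = dir(fullfile('letterLibrary', '*.png'));
  bestDist = inf;  bestLabel = '';
  for k = 1:numel(templates)
      tPhi = huMoments(imread(fullfile('letterLibrary', templates(k).name)));
      d = norm(log(abs(unknownPhi)) - log(abs(tPhi)));    % compare on a log scale
      if d < bestDist
          bestDist = d;
          [~, bestLabel] = fileparts(templates(k).name);  % e.g. 'D' from 'D.png'
      end
  end
  fprintf('Closest library letter: %s (distance %.3g)\n', bestLabel, bestDist);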
See https://www.youtube.com/watch?v=Nc06tlZAv_Q for a nice example of how you can use Hu's moments to recognize patterns in an image, regardless of scaling, rotation, and location.
Please give it a try - it should not be too difficult.
Image Analyst
on 22 Jul 2015
For what it's worth, an alternative method, though probably more complicated than my suggestion, is to use your Canny edges with the Hausdorff distance. See the jet-finding example on this page: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/98/normand/main.html
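A small sketch of that comparison, assuming template and unknown are gray-scale crops of a library letter and the mystery letter, and that pdist2 from the Statistics and Machine Learning Toolbox is available:

  % Symmetric Hausdorff distance between two Canny edge maps.
  edgesT = edge(template, 'canny');
  edgesU = edge(unknown,  'canny');
  [yt, xt] = find(edgesT);  Pt = [xt yt];   % edge pixel coordinates
  [yu, xu] = find(edgesU);  Pu = [xu yu];
  D = pdist2(Pt, Pu);                       % all pairwise point distances
  hausdorffDist = max(max(min(D, [], 2)), max(min(D, [], 1)));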