I. Template Image Processing
(1) Grayscale and binary conversion
template = cv2.imread('C:/Users/bwy/Desktop/')
template_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
cv_show('template_gray', template_gray)
# Form a binary image, since contour detection works on binary input
ret, template_thresh = cv2.threshold(template_gray, 127, 255, cv2.THRESH_BINARY_INV)
cv_show('template_thresh', template_thresh)
The results are shown in Fig:
(2) Perform contour extraction. cv2.findContours accepts a binary image as input and returns the contours of the digits; cv2.RETR_EXTERNAL means only the outer contours are needed, and cv2.CHAIN_APPROX_SIMPLE keeps only the endpoint coordinates of each contour segment.
template_contours, hierarchy = cv2.findContours(template_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(template, template_contours, -1, (0, 0, 255), 2)
cv_show('template', template)
-1 means draw all of the contours we have; here 10 outlines are drawn, one per digit. (You can verify this with code:)
print(len(template_contours))
Results: 10
The results are shown in Fig:
(3) We need to sort the contours by position (the digit templates we extracted are not necessarily in 0-9 order as shown earlier), so we sort them and then resize each one.
def contours_sort(contours, method=0):
    # Sort by the x-coordinate of each contour's bounding rectangle
    if method == 0:
        contours = sorted(contours, key=lambda x: cv2.boundingRect(x)[0])
    else:
        contours = sorted(contours, key=lambda x: cv2.boundingRect(x)[0], reverse=True)
    return contours
We call this function to sort the contours: ordered from smallest to largest x, each position corresponds to one digit. Then we traverse the template, use cv2.boundingRect to get each contour's position, extract the image patch at that position, and pair it with its digit to build a template dictionary; dsize = (55, 88) unifies the size.
dsize = (55, 88)
template_contours = contours_sort(template_contours)
dict_template = {}
for i, contour in enumerate(template_contours):
    # Take the bounding rectangle to get the contour's position information
    x, y, w, h = cv2.boundingRect(contour)
    template_img = template_thresh[y:y + h, x:x + w]
    # Resize every template to the same size
    template_img = cv2.resize(template_img, dsize)
    cv_show('template_img{}'.format(i), template_img)
    dict_template[i] = template_img
The results are shown in Fig:
II. Credit Card Image Pre-processing
(1) Convert to grayscale
card_gray = cv2.cvtColor(card, cv2.COLOR_BGR2GRAY)
cv_show('card_gray', card_gray)
(2) Form a binary image, again because contour detection needs one. Note the parameters: cv2.THRESH_OTSU automatically finds a suitable threshold, which works well for bimodal histograms, and it requires the threshold argument to be set to zero.
card_thresh = cv2.threshold(card_gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
cv_show('card_thresh', card_thresh)
The results are shown in Fig:
(3) Looking at the picture, we can recognize the digit regions, but there is also interference in the yellow and red boxes. Here we can use the morphological operations learned earlier: top-hat, closing, and so on.
Start with a top-hat operation to highlight the brighter areas:
kernel = np.ones((9, 3), np.uint8)
card_tophat = cv2.morphologyEx(card_gray, cv2.MORPH_TOPHAT, kernel)
cv_show('card_tophat', card_tophat)
The results are shown in the figure:
(4) We perform contour detection on the image, again taking only the outer contours. The figure contains several different regions; to tell the digit region apart from the rest we can filter on the bounding-box position and size. These cutoff values depend on the project.
bankcard_contours, hierarchy = cv2.findContours(card_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
banck_card_cnts = []
draw_img = card.copy()
card_h = card.shape[0]
for i, contour in enumerate(bankcard_contours):
    x, y, w, h = cv2.boundingRect(contour)
    # The y-coordinate of the digits lies in a known band of the card
    if 0.5 * card_h < y < 0.6 * card_h:
        banck_card_cnts.append((x, y, w, h))
        # cv2.rectangle draws in place on draw_img
        draw_img = cv2.rectangle(draw_img, pt1=(x, y), pt2=(x + w, y + h),
                                 color=(0, 0, 255), thickness=2)
cv_show_image('rectangle_contours_img', draw_img)
The results are shown in the figure:
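The filtering idea above — keep only the bounding boxes whose y falls in the band where the digits sit — can be isolated from OpenCV entirely (the card height and boxes below are hypothetical):

```python
card_h = 100  # hypothetical card height in pixels
# Hypothetical bounding boxes (x, y, w, h) found on the card
boxes = [(5, 10, 8, 12),   # too high: some logo or text
         (5, 55, 8, 12),   # in the digit band
         (40, 56, 8, 12),  # in the digit band
         (70, 90, 8, 12)]  # too low: cardholder name

# Keep only boxes whose y lies in the band where the digits sit
digit_boxes = [b for b in boxes if 0.5 * card_h < b[1] < 0.6 * card_h]
print(digit_boxes)  # → [(5, 55, 8, 12), (40, 56, 8, 12)]
```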
(5) Template matching and reading off the digits.
for i, locs in enumerate(banck_card_cnts):
    x, y, w, h = locs[:]  # position information from the original image is preserved
    dst_img = card_thresh[y:y + h, x:x + w]  # crop the current digit region
    dst_img = cv2.resize(dst_img, dsize)
    cv_show('rectangle_contours_img', dst_img)
    tm_vals = {}
    for number, temp_img in dict_template.items():
        # Template matching with the normalized correlation coefficient:
        # larger values mean a better match
        res = cv2.matchTemplate(dst_img, temp_img, cv2.TM_CCOEFF_NORMED)
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
        tm_vals[number] = max_val
    number_tm = max(tm_vals, key=tm_vals.get)
    # Draw the result on the image
    draw_img = cv2.rectangle(draw_img, pt1=(x, y), pt2=(x + w, y + h),
                             color=(0, 0, 255), thickness=2)
    cv2.putText(draw_img, str(number_tm), (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX,
                0.65, color=(0, 0, 255), thickness=2)
cv_show_image('final_result', draw_img)
The results are shown in Fig:
Only part of the output is shown here (the digits are output in reverse order).
This concludes this article on recognizing credit card numbers with Python and OpenCV. For more on Python OpenCV credit card number recognition, search my previous articles or continue browsing the related articles below. I hope you will continue to support me!