
python + opencv edge extraction and analysis of each function's parameters

Background: as a newcomer to machine vision, I found the syntax hard to understand when I took my first machine vision class.

Looking at many other people's write-ups, I found they are all the same: the functions are not analyzed and the parameters are not explained, just a block of code. So here I have collected the explanations and the example together as a summary.

I. opencv+python environment building

In fact, any tool that can write Python can write OpenCV, but the tool matters a lot, and so do code hints. I tried lighter editors but personally did not find them intelligent enough, and Visual Studio did not fit the direction I planned to study after the new year, so I did not use it.

I recommend PyCharm: install opencv-python through the Project Interpreter in the project settings. The Python environment is also very easy to set up this way.
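If you are not using PyCharm's interpreter settings, installing the package from the command line with pip install opencv-python works too. The tiny check below is my own sketch, not part of the original article; it just confirms the import works:

 import cv2

 print(cv2.__version__)  # prints the installed OpenCV version if opencv-python was installed correctly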

II. Edge Extraction Cases

import cv2


def edge_demo(image):
  # GaussianBlur: Gaussian smoothing of the image
  blurred = cv2.GaussianBlur(image, (3, 3), 0)
  # (3, 3) means the length and width of the Gaussian kernel are both 3, i.e. each pixel is averaged over its 3*3 neighborhood; the standard deviation is set to 0
  gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
  # Convert the color image to a grayscale image using the cv2.COLOR_BGR2GRAY mode
  edge_output = cv2.Canny(gray, 50, 150)
  # Extract the edges of the image processed in the previous step; 50 and 150 are the low and high thresholds:
  # the high threshold distinguishes objects from the background, the low threshold connects the fragments produced by the high threshold into a whole
  cv2.imshow("canny edge", edge_output)  # Output the grayscale edge image
  # Bitwise-AND the original image with itself, using the edge image as a mask
  dst = cv2.bitwise_and(image, image, mask=edge_output)
  cv2.imshow("color edge", dst)  # Output the image with colored edges


if __name__ == '__main__':
  img = cv2.imread("")
  # cv2.namedWindow("input image", cv2.WINDOW_AUTOSIZE)
  cv2.imshow("input image", img)
  edge_demo(img)

  cv2.waitKey(0)  # Wait for keyboard input; with no input, wait indefinitely
  cv2.destroyAllWindows()  # Close all windows

III. Interpretation of functional functions

Actually, the code above is reused by many other people, but most of them do not explain it, which is not very friendly to newcomers like me.

Gaussian processing

The commonly used filtering algorithms in image processing are mean filtering, median filtering, and Gaussian filtering.

Comparison of the three filters:

- Mean filter: replaces the gray value of the template's center pixel with the average of all pixels in the template. It is easily disturbed by noise and cannot remove noise completely, only attenuate it.
- Median filter: replaces the gray value of the template's center pixel with the median of all pixels in the template. It is much less sensitive to noise and removes salt-and-pepper noise well, but it tends to introduce discontinuities in the image.
- Gaussian filter: when smoothing the pixels in a neighborhood, pixels at different positions are given different weights. It smooths the image while preserving more of its overall gray-level distribution.
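To make the comparison concrete, here is a minimal sketch (the file name test.jpg is just a placeholder, not from the original article) applying all three filters with OpenCV:

 import cv2

 img = cv2.imread("test.jpg")               # placeholder path, replace with your own image
 mean = cv2.blur(img, (3, 3))               # mean filter with a 3*3 template
 median = cv2.medianBlur(img, 3)            # median filter with a 3*3 template (kernel size must be odd)
 gauss = cv2.GaussianBlur(img, (3, 3), 0)   # Gaussian filter, sigma derived from the kernel size

 cv2.imshow("mean", mean)
 cv2.imshow("median", median)
 cv2.imshow("gaussian", gauss)
 cv2.waitKey(0)
 cv2.destroyAllWindows()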

The idea is to make the gray levels of the image more evenly distributed: each pixel is averaged over the surrounding 3*3 matrix, and the standard deviation is set to 0.

 blurred = cv2.GaussianBlur(image, (3, 3), 0)
 # GaussianBlur: Gaussian smoothing of the image
 # (3, 3) means the length and width of the Gaussian kernel are both 3, i.e. each pixel is averaged over its 3*3 neighborhood; the standard deviation is set to 0
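Passing 0 for the standard deviation tells OpenCV to derive sigma from the kernel size. The small sketch below is my own illustration, not from the original article; it prints the 3*3 kernel weights that result:

 import cv2

 k = cv2.getGaussianKernel(3, 0)  # 1-D Gaussian kernel of size 3, sigma computed from the size
 print(k @ k.T)                   # the separable product gives the 3*3 weights used by GaussianBlur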

Grayscale conversion

As the name suggests, this converts the image to a grayscale image. The second parameter, cv2.COLOR_BGR2GRAY, is the color-conversion mode, which is why the function is called cvtColor (color space conversion).

cvtColor() converts an image from one color space to another (all commonly used color spaces are supported). The data type is preserved during the conversion, that is, the data type and bit depth of the converted image are the same as those of the source image.

 gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
 # Convert the color image to a grayscale image using the cv2.COLOR_BGR2GRAY mode
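A quick sketch (the file name test.jpg is again just a placeholder of mine) to verify that cvtColor keeps the data type while merging the color channels:

 import cv2

 img = cv2.imread("test.jpg")                  # placeholder path
 gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 print(img.dtype, gray.dtype)                  # both uint8: the data type is unchanged
 print(img.shape, gray.shape)                  # (H, W, 3) becomes (H, W): three channels become one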

Edge recognition and extraction

This step extracts edges from the grayscale image. 50 and 150 are the low and high thresholds respectively: the high threshold is used to distinguish objects from the background, and the low threshold is used to connect the fragments produced by the high threshold so that the edges form a whole.

In short, the high threshold handles the big picture and the low threshold handles the details: the high threshold separates the background from the contours, and the low threshold stitches the small contour fragments together so they form a whole.

edge_output = cv2.Canny(gray, 50, 150)
# Extract the edges of the image processed in the previous step; 50 and 150 are the low and high thresholds: the high threshold distinguishes objects from the background, and the low threshold smoothly connects the fragments produced by the high threshold into a whole
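To see the effect of the thresholds, the sketch below (threshold values and the file name chosen by me purely for illustration) runs Canny with a stricter and a looser pair and shows both results:

 import cv2

 img = cv2.imread("test.jpg")  # placeholder path
 gray = cv2.cvtColor(cv2.GaussianBlur(img, (3, 3), 0), cv2.COLOR_BGR2GRAY)

 edges_strict = cv2.Canny(gray, 100, 200)  # higher thresholds keep only strong edges
 edges_loose = cv2.Canny(gray, 20, 60)     # lower thresholds keep weaker edges and more detail/noise

 cv2.imshow("strict", edges_strict)
 cv2.imshow("loose", edges_loose)
 cv2.waitKey(0)
 cv2.destroyAllWindows()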

At this point the grayscale edge image can already be output; the last small function is only there for comparison and learning, and you can leave it out:

 dst = cv2.bitwise_and(image, image, mask=edge_output)
 cv2.imshow("color edge", dst)  # Output the image with colored edges
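How the mask works can be seen on a tiny array (a toy example of my own, not from the original article): pixels where the mask is 0 become 0, and pixels where the mask is non-zero keep their original color.

 import cv2
 import numpy as np

 img = np.full((2, 2, 3), 200, dtype=np.uint8)  # a 2*2 "image" where every pixel is (200, 200, 200)
 mask = np.array([[255, 0],
                  [0, 255]], dtype=np.uint8)    # keep the diagonal, zero out the rest
 out = cv2.bitwise_and(img, img, mask=mask)
 print(out[:, :, 0])                            # [[200   0]
                                                #  [  0 200]]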

This concludes this article on python + opencv edge extraction and the analysis of each function's parameters. For more related content on python opencv edge extraction, please search my previous articles or continue to browse the related articles below. I hope you will support me in the future!