0. Preface
When I first downloaded the example code from the dlib website I didn't know how to use it, but after some fiddling I figured it out.
This post shares how to use face_detector.py and face_landmark_detection.py.
1. Introduction
Python: 3.6.3
dlib: 19.7
dlib's frontal face detector is used to extract the rectangular bounding box of each face:
detector = dlib.get_frontal_face_detector()
dets = detector(img)
The 68 facial landmark points are extracted with dlib's 68-point shape predictor:
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
shape = predictor(img, dets[0])
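Putting the two calls together, here is a minimal sketch of how they are typically combined (the image filename below is a placeholder, not from the original post); shape.part(i) gives the i-th of the 68 landmark points:

import dlib
from skimage import io

img = io.imread("face.jpg")  # placeholder image path -- substitute your own
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

dets = detector(img, 1)              # upsample once so smaller faces are found
if len(dets) > 0:
    shape = predictor(img, dets[0])  # 68 landmarks of the first detected face
    # Each landmark is a dlib.point with pixel coordinates x and y
    for i in range(shape.num_parts):
        p = shape.part(i)
        print(i, p.x, p.y)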
Effect:
(a) face_detector.py
(b) face_landmark_detection.py
2. Introduction to File Functions
face_detector.py:
Recognizes one or more faces in an image file and marks each detected face with a rectangular box;
link: /cnn_face_detector.
face_landmark_detection.py: Building on the face detection done in face_detector.py, it locates specific facial features (chin outline, eyebrows, eyes, mouth) and marks them on the image;
link: /face_landmark_detection.
2.1. face_detector.py
The official face_detector.py from the dlib website:
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
#   This example program shows how to find frontal human faces in an image.  In
#   particular, it shows how you can take a list of images from the command
#   line and display each on the screen with red boxes overlaid on each human
#   face.
#
#   The examples/faces folder contains some jpg images of people.  You can run
#   this program on them and see the detections by executing the
#   following command:
#       ./face_detector.py ../examples/faces/*.jpg
#
#   This face detector is made using the now classic Histogram of Oriented
#   Gradients (HOG) feature combined with a linear classifier, an image
#   pyramid, and sliding window detection scheme.  This type of object detector
#   is fairly general and capable of detecting many types of semi-rigid objects
#   in addition to human faces.  Therefore, if you are interested in making
#   your own object detectors then read the train_object_detector.py example
#   program.
#
#
# COMPILING/INSTALLING THE DLIB PYTHON INTERFACE
#   You can install dlib using the command:
#       pip install dlib
#
#   Alternatively, if you want to compile dlib yourself then go into the dlib
#   root folder and run:
#       python setup.py install
#   or
#       python setup.py install --yes USE_AVX_INSTRUCTIONS
#   if you have a CPU that supports AVX instructions, since this makes some
#   things run faster.
#
#   Compiling dlib should work on any operating system so long as you have
#   CMake and boost-python installed.  On Ubuntu, this can be done easily by
#   running the command:
#       sudo apt-get install libboost-python-dev cmake
#
#   Also note that this example requires scikit-image which can be installed
#   via the command:
#       pip install scikit-image
#   Or downloaded from http://scikit-image.org/.

import sys

import dlib
from skimage import io


detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

for f in sys.argv[1:]:
    print("Processing file: {}".format(f))
    img = io.imread(f)
    # The 1 in the second argument indicates that we should upsample the image
    # 1 time.  This will make everything bigger and allow us to detect more
    # faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    for i, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            i, d.left(), d.top(), d.right(), d.bottom()))

    win.clear_overlay()
    win.set_image(img)
    win.add_overlay(dets)
    dlib.hit_enter_to_continue()


# Finally, if you really want to you can ask the detector to tell you the score
# for each detection.  The score is bigger for more confident detections.
# The third argument to run is an optional adjustment to the detection threshold,
# where a negative value will return more detections and a positive value fewer.
# Also, the idx tells you which of the face sub-detectors matched.  This can be
# used to broadly identify faces in different orientations.
if (len(sys.argv[1:]) > 0):
    img = io.imread(sys.argv[1])
    dets, scores, idx = detector.run(img, 1, -1)
    for i, d in enumerate(dets):
        print("Detection {}, score: {}, face_type:{}".format(
            d, scores[i], idx[i]))
Modification:

import dlib
from skimage import io

# Use the frontal face detector as the feature extractor
detector = dlib.get_frontal_face_detector()

# path is the path to the image folder
path = "F:/code/python/P_dlib_face/pic/"
img = io.imread(path + "1.jpg")  # the image filename was omitted in the original post; substitute your own

# Run the detector on the image
dets = detector(img)
print("Number of faces:", len(dets))

# Output the four coordinates of each face rectangle
for i, d in enumerate(dets):
    print("No.", i, "face coordinates:",
          "left:", d.left(),
          "right:", d.right(),
          "top:", d.top(),
          "bottom:", d.bottom())

# Draw the picture
win = dlib.image_window()
# Clear the overlay
# win.clear_overlay()
win.set_image(img)
# Overlay the detected rectangles
win.add_overlay(dets)
# Hold the image window open
dlib.hit_enter_to_continue()
Face detection is then performed on a local test image.
Results:
Image window results:
Output results:
Number of faces: 1
No. 0 face coordinates: left: 79 right: 154 top: 47 bottom: 121
Hit enter to continue
Detection results for multiple faces:
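In addition, the tail of the official face_detector.py shows that detector.run() can also return a confidence score and a sub-detector index for each detection; a minimal sketch along those lines (the image path is a placeholder):

import dlib
from skimage import io

img = io.imread("F:/code/python/P_dlib_face/pic/1.jpg")  # placeholder path
detector = dlib.get_frontal_face_detector()

# Arguments: image, number of upsampling passes, threshold adjustment.
# A negative threshold returns more (less confident) detections, a positive one fewer.
dets, scores, idx = detector.run(img, 1, -1)
for i, d in enumerate(dets):
    print("Detection {}, score: {}, face_type: {}".format(d, scores[i], idx[i]))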
2.2 face_landmark_detection.py
The official face_landmark_detection.py from the dlib website:
#!/usr/bin/python
# The contents of this file are in the public domain. See LICENSE_FOR_EXAMPLE_PROGRAMS.txt
#
#   This example program shows how to find frontal human faces in an image and
#   estimate their pose.  The pose takes the form of 68 landmarks.  These are
#   points on the face such as the corners of the mouth, along the eyebrows, on
#   the eyes, and so forth.
#
#   The face detector we use is made using the classic Histogram of Oriented
#   Gradients (HOG) feature combined with a linear classifier, an image pyramid,
#   and sliding window detection scheme.  The pose estimator was created by
#   using dlib's implementation of the paper:
#      One Millisecond Face Alignment with an Ensemble of Regression Trees by
#      Vahid Kazemi and Josephine Sullivan, CVPR 2014
#   and was trained on the iBUG 300-W face landmark dataset (see
#   https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/):
#      C. Sagonas, E. Antonakos, G, Tzimiropoulos, S. Zafeiriou, M. Pantic.
#      300 faces In-the-wild challenge: Database and results.
#      Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
#   You can get the trained model file from:
#   http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2.
#   Note that the license for the iBUG 300-W dataset excludes commercial use.
#   So you should contact Imperial College London to find out if it's OK for
#   you to use this model file in a commercial product.
#
#
#   Also, note that you can train your own models using dlib's machine learning
#   tools. See train_shape_predictor.py to see an example.
#
#
# COMPILING/INSTALLING THE DLIB PYTHON INTERFACE
#   You can install dlib using the command:
#       pip install dlib
#
#   Alternatively, if you want to compile dlib yourself then go into the dlib
#   root folder and run:
#       python setup.py install
#   or
#       python setup.py install --yes USE_AVX_INSTRUCTIONS
#   if you have a CPU that supports AVX instructions, since this makes some
#   things run faster.
#
#   Compiling dlib should work on any operating system so long as you have
#   CMake and boost-python installed.  On Ubuntu, this can be done easily by
#   running the command:
#       sudo apt-get install libboost-python-dev cmake
#
#   Also note that this example requires scikit-image which can be installed
#   via the command:
#       pip install scikit-image
#   Or downloaded from http://scikit-image.org/.

import sys
import os
import dlib
import glob
from skimage import io

if len(sys.argv) != 3:
    print(
        "Give the path to the trained shape predictor model as the first "
        "argument and then the directory containing the facial images.\n"
        "For example, if you are in the python_examples folder then "
        "execute this program by running:\n"
        "    ./face_landmark_detection.py shape_predictor_68_face_landmarks.dat ../examples/faces\n"
        "You can download a trained facial shape predictor from:\n"
        "    http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2")
    exit()

predictor_path = sys.argv[1]
faces_folder_path = sys.argv[2]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(predictor_path)
win = dlib.image_window()

for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")):
    print("Processing file: {}".format(f))
    img = io.imread(f)

    win.clear_overlay()
    win.set_image(img)

    # Ask the detector to find the bounding boxes of each face.  The 1 in the
    # second argument indicates that we should upsample the image 1 time.  This
    # will make everything bigger and allow us to detect more faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))
    for k, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            k, d.left(), d.top(), d.right(), d.bottom()))
        # Get the landmarks/parts for the face in box d.
        shape = predictor(img, d)
        print("Part 0: {}, Part 1: {} ...".format(shape.part(0),
                                                  shape.part(1)))
        # Draw the face landmarks on the screen.
        win.add_overlay(shape)

    win.add_overlay(dets)
    dlib.hit_enter_to_continue()
Modification:
Draw two overlays: the face rectangle and the facial landmark lines.
import dlib
from skimage import io

# Use the frontal face detector as the feature extractor
detector = dlib.get_frontal_face_detector()

# Path to dlib's 68-point model
path_pre = "F:/code/python/P_dlib_face/"
predictor = dlib.shape_predictor(path_pre + "shape_predictor_68_face_landmarks.dat")

# Path to the image folder
path_pic = "F:/code/python/P_dlib_face/pic/"
img = io.imread(path_pic + "1.jpg")  # the image filename was omitted in the original post; substitute your own

# Create dlib's image window
win = dlib.image_window()
win.clear_overlay()
win.set_image(img)

# Run the detector on the image
dets = detector(img, 1)
print("Number of faces:", len(dets))
for k, d in enumerate(dets):
    print("No.", k, "face coordinates:",
          "left:", d.left(),
          "right:", d.right(),
          "top:", d.top(),
          "bottom:", d.bottom())
    # Predict the 68 landmarks for face d
    shape = predictor(img, d)
    # Draw the facial landmark outlines
    win.add_overlay(shape)

# Draw the rectangle outlines
win.add_overlay(dets)
# Hold the image window open
dlib.hit_enter_to_continue()
Results:
Number of faces: 1
No. 0 face coordinates: left: 79 right: 154 top: 47 bottom: 121
Image window results:
The blue lines are drawn by win.add_overlay(shape); the red box is drawn by win.add_overlay(dets).
Detection results for multiple faces:
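If you want the landmark coordinates themselves rather than just the drawn overlay, each point can be read with shape.part(i); in the standard iBUG 300-W 68-point numbering, points 0-16 trace the jaw line, 17-26 the eyebrows, 27-35 the nose, 36-47 the eyes, and 48-67 the mouth. A minimal sketch (paths are placeholders) that collects the points into a plain Python list:

import dlib
from skimage import io

# Placeholder paths -- substitute your own model and image files
predictor = dlib.shape_predictor("F:/code/python/P_dlib_face/shape_predictor_68_face_landmarks.dat")
detector = dlib.get_frontal_face_detector()
img = io.imread("F:/code/python/P_dlib_face/pic/1.jpg")

for d in detector(img, 1):
    shape = predictor(img, d)
    # Collect all 68 (x, y) pairs
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    jaw      = points[0:17]    # chin / jaw outline
    eyebrows = points[17:27]
    nose     = points[27:36]
    eyes     = points[36:48]
    mouth    = points[48:68]
    print("jaw outline points:", jaw)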
The official examples use sys.argv[] to read input from the command line; for convenience I hard-coded the file paths instead. If sys.argv[] is unfamiliar, the following summary may help:
* About the use of sys.argv[]
(Read this if the use of sys.argv[] in the code above is unclear.)
sys.argv is used to get the command-line arguments of a script, e.g. the XXXXX part of the command "python XXXXX"; this is how a script can read a file path typed by the user on the command line.
If you prefer, you can read the image directly in the Python code with img = io.imread("F:/*****/") instead of img = io.imread(sys.argv[1]).
The following code examples should help clarify how sys.argv behaves:
1. sys.argv[0]: the path of the script file itself (suppose the script below is saved as test.py)
Code:
import sys
a = sys.argv[0]
print(a)
cmd input:
python test.py
cmd output:
test.py
2. sys.argv[1]: the first element of the argument list passed on the command line
Code:
import sys
a = sys.argv[1]
print(a)
cmd input:
python test.py what is your name
cmd output:
what
sys.argv[1:]: all elements of the argument list from the first one to the end
Code:
import sys
a = sys.argv[1:]
print(a)
cmd input:
python test.py what is your name
cmd output:
['what', 'is', 'your', 'name']
3. sys.argv[2]: the second element of the argument list passed on the command line
Code:
import sys
a = sys.argv[2]
print(a)
cmd input:
python test.py what is your name
cmd output:
"is"