SoFunction
Updated on 2024-11-19

Face++ API Implementation of Gesture Recognition System Design

Recognizing gestures from photos taken with an ordinary camera is quite difficult, but where there are difficulties there are usually ways around them. A rough search on Baidu turned up very few sign-language recognition examples. Face++ has just released a gesture recognition API (while writing this article I also saw that Baidu released one), so I decided to try it out.

This time we use the Face++ API. I discovered Face++ a while ago; it is quite powerful, but there is no offline version: you have to upload your data and then parse the returned JSON to get the result.

This is a demo from the official website, and the recognition rate is quite good. The final output is a probability distribution over 20 kinds of gestures. Next we call the API ourselves to analyze our own gestures.
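Since the API reports a confidence score for each gesture class, "recognizing" a gesture just means taking the class with the highest score. A minimal sketch, using a made-up probability dict in that shape (the values below are invented for illustration):

```python
# A made-up gesture probability dict in the shape the API returns
# (class name -> confidence); the values are illustrative only.
gesture_probs = {
    "thumb_up": 96.2,
    "victory": 2.1,
    "fist": 1.0,
    "hand_open": 0.7,
}

# The recognized gesture is simply the class with the highest confidence
best_gesture = max(gesture_probs, key=gesture_probs.get)
print(best_gesture)  # -> thumb_up
```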

1. Check the official API documentation: find the Gesture API and read through it first.

Call parameters:

The official documentation also describes the returned parameters and the error messages; if you are interested, check the official website.

An example of calling the API from the command line is also given:

As the example shows, the required parameters for a request to /humanbodypp/beta/gesture are api_key, api_secret, and image_file. The api_key and api_secret can be generated from the console.
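The original command-line example did not survive extraction. A typical multipart POST with curl looks roughly like the sketch below; the host is the standard Face++ API host, and the key, secret, and image filename are placeholders you must replace with your own:

```shell
# Sketch of the request; substitute your own credentials and image file
curl -X POST "https://api-cn.faceplusplus.com/humanbodypp/beta/gesture" \
  -F "api_key=YOUR_API_KEY" \
  -F "api_secret=YOUR_API_SECRET" \
  -F "image_file=@hand.jpg"
```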

Next we start writing the calling code. This is the Python version; other languages are similar.

We encapsulate the API into a class Gesture:

Replace the key and secret with your own and it will work:

'''
# -*- coding:utf-8 -*-
@author: TulLing
'''
import requests
from json import JSONDecoder

gesture_englist = ['big_v', 'fist', 'double_finger_up', 'hand_open', 'heart_d',
                   'index_finger_up', 'ok', 'phonecall', 'palm_up', 'rock',
                   'thumb_down', 'thumb_up', 'victory']
gesture_chinese = ["Big V (I'm the best)",
                   "Fist (stop)",
                   "Double fingers up (I swear)",
                   "Open hand (the number five)",
                   "Heart sign",
                   "Index finger up (number one)",
                   "OK",
                   "Phone call",
                   "Palm up",
                   "Rock (love you, 520)",
                   "Thumb down (bad review)",
                   "Thumb up (good review)",
                   "Victory (happy)"]

# Sort a dictionary by value (ascending)
def sort_dict(adict):
    return sorted(adict.items(), key=lambda item: item[1])

class Gesture(object):
    def __init__(self):
        # Only the path appeared in the original post; the host below is
        # the standard Face++ API host
        self.http_url = 'https://api-cn.faceplusplus.com/humanbodypp/beta/gesture'
        self.key = '*****'       # replace with your own api_key
        self.secret = '******'   # replace with your own api_secret
        self.data = {"api_key": self.key, "api_secret": self.secret}

    # Get gesture information from the API
    def get_info(self, files):
        response = requests.post(self.http_url, data=self.data, files=files)
        req_con = response.content.decode('utf-8')
        req_dict = JSONDecoder().decode(req_con)
        # print(req_dict)
        if ('error_message' not in req_dict.keys()) and len(req_dict['hands']):
            hands_dict = req_dict['hands']
            # Bounding rectangle of the first detected hand
            gesture_rectangle_dict = hands_dict[0]['hand_rectangle']
            # Probability distribution over the gesture classes
            gesture_dict = hands_dict[0]['gesture']
            return gesture_dict, gesture_rectangle_dict
        else:
            return [], []

    # Get the text label for a gesture
    def get_text(self, index):
        return gesture_chinese[index]

    # Get the probability of the gesture at the given index
    def get_pro(self, gesture_dict, index):
        if gesture_dict is None or gesture_dict == []:
            return 0
        return gesture_dict[gesture_englist[index]]

    # Get the position of the gesture
    def get_rectangle(self, gesture_rectangle_dict):
        if gesture_rectangle_dict is None or gesture_rectangle_dict == []:
            return (0, 0, 0, 0)
        x = gesture_rectangle_dict['top']
        y = gesture_rectangle_dict['left']
        width = gesture_rectangle_dict['width']
        height = gesture_rectangle_dict['height']
        return (x, y, width, height)
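Before wiring up the camera, the parsing logic in get_info can be sanity-checked offline against a mock response in the documented shape (keys 'hands', 'hand_rectangle', and 'gesture'). The values below are invented for illustration:

```python
import json

# A mock response string in the shape the gesture API returns (values invented)
mock_response = json.dumps({
    "hands": [{
        "hand_rectangle": {"top": 50, "left": 80, "width": 120, "height": 130},
        "gesture": {"thumb_up": 95.0, "victory": 3.0, "fist": 2.0}
    }]
})

req_dict = json.loads(mock_response)
if 'error_message' not in req_dict and len(req_dict['hands']):
    hand = req_dict['hands'][0]
    rect = hand['hand_rectangle']
    probs = hand['gesture']
    # Same extraction get_info performs on a real response
    print((rect['top'], rect['left'], rect['width'], rect['height']))  # (50, 80, 120, 130)
    print(max(probs, key=probs.get))  # thumb_up
```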

After encapsulating the Gesture class, the next step is to call it. First, save the gesture images given on the official site (keeping only single-handed gestures for simplicity). Then generate a random number to pick a gesture image, imitate the gesture in front of the camera, and the console displays the probability of the correct gesture along with its specific location. If there is no gesture in the image, the probability is 0 and the location is (0,0,0,0).

'''
# -*- coding:utf-8 -*-
@author: TulLing
'''
import sys
sys.path.append("../gesture/")

import os
import random
import cv2 as cv
import time
import LearnGesture

def gestureLearning():
    os.system("cls")
    print("Entering gesture-learning mode!")
    print("There are 13 gestures -- come learn them with me! (Press Q/q to capture; you can quit after each round.)")
    while True:
        pic_num = random.randint(0, 12)  # Number of the image to display (random: 0 - 12)
        print(pic_num)
        pic_path = '../gesture/pic/gesture' + str(pic_num) + ".jpg"  # Build the image path

        pic = cv.imread(pic_path)       # Load the image
        pic = cv.resize(pic, (120, 120))
        cv.imshow("PIC", pic)           # Show the gesture to be learned

        print("The camera will turn on soon; you have 5 seconds to prepare your gesture and 5 seconds to hold it!")
        # The original only gave the directory; a filename is added here so imwrite works
        write_path = "../gesture/pic/gesture_capture.jpg"
        cap = cv.VideoCapture(1)        # camera index; use 0 for the default camera
        while True:
            _, frame = cap.read()
            cv.imshow("Frame", frame)
            key = cv.waitKey(10)
            if key == ord('Q') or key == ord('q'):
                cv.imwrite(write_path, frame)
                cv.waitKey(200)
                cap.release()
                cv.destroyAllWindows()
                break

        # Gesture recognition happens here
        files = {"image_file": open(write_path, 'rb')}
        gesture = LearnGesture.Gesture()

        # Get the gesture label
        ge_text = gesture.get_text(pic_num)
        # Get the gesture information
        gesture_dict, gesture_rectangle_dict = gesture.get_info(files)
        # Get the probability of the gesture
        ge_pro = gesture.get_pro(gesture_dict, pic_num)
        # Get the coordinates of the gesture
        ge_rect = gesture.get_rectangle(gesture_rectangle_dict)
        print("The gesture you are learning is:", ge_text)
        print("Similarity reached:", ge_pro)
        print("Specific location:", ge_rect)

        # Exit the program or continue
        commend = input("End of this round. Continue learning? (Y/N): ")
        print(commend)

        if commend == 'N' or commend == 'n':
            break

gestureLearning()

The path where the captured image is saved: ./pic/

Run results:

Randomized gestures displayed

Imitated gesture (I was typing at the time, so mainly look at the hand)

After pressing Q:

The gesture wasn't performed quite up to standard, but that's okay; the system works.

This concludes the article on calling the Face++ API. The code will be uploaded once it is packaged, and the link will be added later.

This is the whole content of this article.