SoFunction
Updated on 2024-11-10

OpenCV implementation of camera calibration

In this article we share example OpenCV code that implements camera calibration, for your reference. The details are as follows.

I. Camera and pinhole camera models

1. Camera model

With modern technology, cameras have become sophisticated consumer products, and their optics are far more complex than when they first appeared.
Typical SLR camera optical structure.

Among the many camera models, the pinhole camera model (also called the projection camera model) is relatively simple and widely used. In short, it reduces the camera to simple pinhole imaging; as one might expect, this simplification is not suitable for high-accuracy applications or for cameras with special lenses.
Principle of pinhole imaging:
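The pinhole geometry can be sketched in a few lines of Python; the focal length and the 3D point below are made-up values (a point at camera coordinates (X, Y, Z) projects onto the image plane at x = f·X/Z, y = f·Y/Z):

```python
import numpy as np

# Ideal pinhole projection: a 3D point (X, Y, Z) in camera coordinates
# lands on the image plane at (f*X/Z, f*Y/Z). All values are illustrative.
def pinhole_project(point_3d, f):
    X, Y, Z = point_3d
    return np.array([f * X / Z, f * Y / Z])

print(pinhole_project((2.0, 1.0, 4.0), f=1.0))  # x = 0.5, y = 0.25
```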

2. Introduction of lenses

The pinhole imaging model alone does not account for lenses. Under real-world conditions, a lens assembly made up of one or more elements is needed so that a camera based on the pinhole imaging principle produces a sharp image while maintaining the brightness of the picture. So we need to introduce lenses into the model.
Principle of lens imaging:
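The lens imaging mentioned above is commonly described by the thin-lens equation 1/f = 1/d_o + 1/d_i. A minimal sketch, assuming a made-up 50 mm lens and object distance:

```python
# Thin-lens equation: 1/f = 1/d_o + 1/d_i, where f is the focal length,
# d_o the object distance, and d_i the image distance.
# The values below (50 mm lens, object at 200 mm) are made up for illustration.
def image_distance(f, d_o):
    """Solve the thin-lens equation for the image distance d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

print(image_distance(f=50.0, d_o=200.0))  # ~66.67 mm behind the lens
```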

But lenses introduce new problems: defocus and distortion.
The most common form is radial distortion, which means that light rays bend more the farther from the center of the lens they pass. Radial distortion comes in two varieties: at short focal lengths it typically appears as barrel distortion, and at long focal lengths as pincushion distortion.
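A hedged sketch of the polynomial radial-distortion model (the same k1, k2, k3 coefficients that OpenCV estimates later in this article; the coefficient values below are made up):

```python
# Radial distortion model on normalized image coordinates:
# a point (x, y) at radius r from the center is scaled by
# (1 + k1*r^2 + k2*r^4 + k3*r^6). Coefficient values are illustrative.
def radial_distort(x, y, k1, k2, k3=0.0):
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * factor, y * factor

# Barrel distortion (k1 < 0) pulls points toward the image center;
# pincushion distortion (k1 > 0) pushes them outward.
print(radial_distort(0.5, 0.0, k1=-0.2, k2=0.0))  # x shrinks toward the center
```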

II. Camera parameters

1. Coordinate system conventions

Let us agree on the following coordinate systems and notation:

1. World coordinate system: X

2. Camera coordinate system: Xc

3. Image (pixel) coordinate system: x

4. Camera matrix: P

2. Projection from the image plane to the pixel plane

Take a point in three-dimensional space and a plane through that point parallel to the pixel plane; this plane is the image plane. Let this 3D point be P, with homogeneous coordinates X, and let its projection be the image point P', with image coordinates x.
Pinhole camera model:

The relationship between pixel coordinates and image coordinates in the pinhole camera model:
λx = PX
where λ is the inverse depth of the 3D point and P is the camera matrix, which can be decomposed as:
P = K[R|t]
R is a rotation matrix describing the orientation of the camera, and t is a 3D translation vector describing the position of the camera center. The intrinsic calibration matrix K describes the projective properties of the camera. The calibration matrix depends only on the camera itself and can usually be written as:

K = [ αf  s  c_x ]
    [  0  f  c_y ]
    [  0  0   1  ]

The focal length f is the distance from the image plane to the camera center, (c_x, c_y) is the principal point, s is the skew parameter, and α is the aspect-ratio parameter.
When the pixel array on the sensor is not skewed and the pixels are square, one can set s = 0 and α = 1, and the calibration matrix simplifies to:

K = [ f  0  c_x ]
    [ 0  f  c_y ]
    [ 0  0   1  ]
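The relation λx = PX with P = K[R|t] can be checked numerically. A minimal sketch; the focal length, principal point, and the 3D point below are made-up values, and the camera is placed at the world origin (R = I, t = 0):

```python
import numpy as np

# Build P = K[R|t] from made-up intrinsics and a trivial pose.
f, cx, cy = 800.0, 320.0, 240.0
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0,  1]])
R = np.eye(3)                       # camera orientation
t = np.zeros((3, 1))                # camera center at the world origin
P = K @ np.hstack([R, t])           # 3x4 camera matrix

X = np.array([1.0, 0.5, 4.0, 1.0])  # homogeneous coordinates of a 3D point
x = P @ X
x = x / x[2]                        # divide out lambda (the depth)
print(x[:2])                        # pixel coordinates: [520. 340.]
```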
III. Camera calibration

Experimental pictures are shown below:

The code is as follows:

import cv2
import numpy as np
import glob

# Find the corners of the checkerboard
# Termination criteria for the sub-pixel corner refinement
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Checkerboard pattern size
w = 7   # number of interior corner points, i.e. points shared with neighboring squares
h = 7

# Checkerboard corner points in the world coordinate system, e.g. (0,0,0), (1,0,0), (2,0,0) ..., (6,6,0);
# the Z coordinate is always zero, so only the first two columns are filled in
objp = np.zeros((w*h, 3), np.float32)
objp[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2)
# Store pairs of world and image coordinates of the checkerboard corners
objpoints = []  # 3D points in the world coordinate system
imgpoints = []  # 2D points in the image plane

images = glob.glob('picture/*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the checkerboard corners
    # Arguments: board image (8-bit grayscale or color), board size, output array for the corners
    ret, corners = cv2.findChessboardCorners(gray, (w, h), None)
    # If enough point pairs are found, store them
    if ret == True:
        # Refine the corner locations to sub-pixel accuracy
        # Arguments: input image, initial corner coordinates, search window of 2*winSize+1, dead zone, termination criteria
        cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        objpoints.append(objp)
        imgpoints.append(corners)
        # Display the corners on the image
        cv2.drawChessboardCorners(img, (w, h), corners, ret)
        cv2.imshow('findCorners', img)
        cv2.waitKey(1000)
cv2.destroyAllWindows()
# Calibrate and undistort
# Input: positions in the world coordinate system, pixel coordinates, pixel size of the image,
#        initial intrinsic matrix and distortion coefficients (None lets OpenCV estimate them)
# Output: RMS reprojection error, intrinsic matrix, distortion coefficients, rotation vectors, translation vectors
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
# mtx: intrinsic parameter matrix
# dist: distortion coefficients
# rvecs: rotation vectors (extrinsic parameters)
# tvecs: translation vectors (extrinsic parameters)
print("ret:", ret)
print("mtx:\n", mtx)      # intrinsic parameter matrix
print("dist:\n", dist)    # distortion coefficients = (k_1, k_2, p_1, p_2, k_3)
print("rvecs:\n", rvecs)  # rotation vectors (extrinsic parameters)
print("tvecs:\n", tvecs)  # translation vectors (extrinsic parameters)
# Undistortion
img2 = cv2.imread('picture/')
h, w = img2.shape[:2]
# Now that we have the camera intrinsics and distortion coefficients, before undistorting
# we can refine the intrinsic matrix with cv2.getOptimalNewCameraMatrix()
# by setting the free scaling parameter alpha. With alpha = 0 it returns an
# intrinsic matrix that crops away the unwanted pixels left after undistortion;
# with alpha = 1 it returns an intrinsic matrix that keeps all pixels (with extra
# black regions), together with an ROI that can be used to crop them away
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 0, (w, h))  # free scaling parameter

dst = cv2.undistort(img2, mtx, dist, None, newcameramtx)
# Crop the image according to the ROI from above
x, y, w, h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('', dst)

# Reprojection error
# The reprojection error lets us evaluate how good the calibration is; the closer to 0, the better.
# Using cv2.projectPoints(), project the 3D points onto the 2D image with the intrinsic
# matrix, distortion coefficients, rotation and translation vectors computed above.
# Then compute the error between the reprojected points and the points detected in the image,
# and finally average over all calibration images; this value is the reprojection error.
total_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    total_error += error
print("total error: ", total_error / len(objpoints))
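The per-image error computed in the loop above can be illustrated with a NumPy-only sketch; the detected and reprojected corner coordinates below are made-up values standing in for the real checkerboard corners:

```python
import numpy as np

# Made-up detected vs. reprojected corner coordinates for one image.
detected    = np.array([[100.0, 200.0], [150.0, 210.0], [205.0, 220.0]])
reprojected = np.array([[100.5, 199.5], [150.0, 210.0], [204.0, 220.0]])

# L2 norm over all corners of the image, divided by the number of corners,
# mirroring cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
error = np.linalg.norm(detected - reprojected) / len(detected)
print(error)  # small value -> good agreement between detection and reprojection
```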

This is the whole content of this article.