Calibrate Monocular Vision Cameras With Narrow-Angle Lens

Basic Camera Calibration

Canonical theories and methods of pinhole camera calibration have been investigated in depth for well over a century, and camera calibration is already widely used in photogrammetry, surveying and mapping, etc. Basically, the pinhole camera model of imaging from the 3D world frame to the 2D image plane is often first denoted by the following simplified equation:

\[\begin{split}\begin{pmatrix} \vec{u} \\ \vec{v} \end{pmatrix} =P_{2*3} \begin{pmatrix} \vec{X} \\ \vec{Y} \\ \vec{Z} \end{pmatrix}\end{split}\]

, where

\[\begin{split}\begin{pmatrix} \vec{u} \\ \vec{v} \end{pmatrix} = \begin{pmatrix} u_1 & u_2 & \cdots & u_n \\ v_1 & v_2 & \cdots & v_n \end{pmatrix}\end{split}\]

represents the projected 2D points’ coordinates on the image plane,

\[\begin{split}\begin{pmatrix} \vec{X} \\ \vec{Y} \\ \vec{Z} \end{pmatrix} = \begin{pmatrix} X_1 & X_2 & \cdots & X_n \\ Y_1 & Y_2 & \cdots & Y_n \\ Z_1 & Z_2 & \cdots & Z_n \end{pmatrix}\end{split}\]

represents the 3D points’ coordinates in the world coordinate system. \(P_{2*3}\) is a \(2*3\) projection matrix, and \(n\) is the number of points under concern. Normally, \(n \geq 4\).

By using homogeneous coordinates for both the 2D image points and the 3D world points, the above equation can be rewritten as:

\[\begin{split}\begin{pmatrix} \vec{u} \\ \vec{v} \\ \vec{w} \end{pmatrix} =P_{3*4} \begin{pmatrix} \vec{X} \\ \vec{Y} \\ \vec{Z} \\ \vec{1} \end{pmatrix}\end{split}\]
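Converting to and from homogeneous coordinates takes only a line or two of numpy. A minimal sketch (the point values below are arbitrary, chosen purely for illustration):

```python
import numpy as np

# n x 3 Cartesian 3D points (arbitrary example values)
pts3d = np.array([[0.5, 0.25, 2.0],
                  [1.0, -1.0, 4.0]])

# Append a column of ones to obtain n x 4 homogeneous coordinates,
# ready to be multiplied by the 3x4 projection matrix P.
pts3d_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])

def from_homogeneous(uvw):
    # Divide by the last component to recover 2D image coordinates.
    return uvw[:, :2] / uvw[:, 2:3]
```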

The matrix \(P_{3*4}\) in the above pinhole camera model can be decomposed as:

\[\begin{split}P_{3*4}&=A_{3*3}[R_{3*3}|t_{3*1}] \\ &=\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix}\end{split}\]

, where

\[\begin{split}A_{3*3}=\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}\end{split}\]

is the camera matrix, which is also called the camera’s intrinsic parameters;

\[\begin{split}R_{3*3}=\begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}\end{split}\]

is the rotation matrix;

\[\begin{split}t_{3*1}=\begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix}\end{split}\]

is the translation vector.

The combination of the rotation matrix and the translation vector

\[\begin{split}[R_{3*3}|t_{3*1}]= \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix}\end{split}\]

is called the camera’s extrinsic parameters, which define the camera’s pose relative to the world coordinate system.
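Putting the decomposition together: a quick numpy sketch that composes \(P_{3*4}=A_{3*3}[R_{3*3}|t_{3*1}]\) and projects one homogeneous world point. All numeric values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical intrinsics A: focal lengths and image center, in pixels
A = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsics: identity rotation and zero translation
R = np.eye(3)
t = np.zeros((3, 1))

# P = A [R|t] is the full 3x4 projection matrix
P = A @ np.hstack([R, t])

# Project one homogeneous world point (X, Y, Z, 1)
uvw = P @ np.array([0.5, 0.25, 2.0, 1.0])
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective divide
# u = 890.0, v = 485.0
```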

Camera Matrix

In practice, there are two obvious sources of systematic error:

  • the camera assembly process cannot guarantee that the principal optic axis passes exactly through the center of the CMOS (complementary metal oxide semiconductor) sensor. As a result, the point we expect to be at the image center will not be exactly the real scene center.

  • the manufacturing of a camera lens cannot guarantee a perfectly isotropic lens surface, so the focal lengths in different directions are not perfectly equal. Moreover, the lens assembly process cannot guarantee the focal length to always be of the same length, and the focal length is sometimes even adjustable. For simplicity, the focal lengths in the two directions \(x\) and \(y\) are assumed to be fixed and are used to represent the camera’s focal lengths in all directions.

Therefore, the following 4 parameters \(f_x, f_y, c_x, c_y\) are to be calculated, where \((f_x, f_y)\) are the two focal lengths in the directions of \(x\) and \(y\), and \((c_x, c_y)\) represents the image center. Clearly, if we write these 4 parameters in the form of the following camera matrix

\[\begin{split}\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}\end{split}\]

, the above projection function will turn into:

\[\begin{split}\begin{pmatrix} \vec{x} \\ \vec{y} \\ \vec{z} \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \vec{X} \\ \vec{Y} \\ \vec{Z} \end{pmatrix}\end{split}\]

Distortion

Besides the above systematic errors, there are two other image imperfections caused by distortion: radial distortion and decentering distortion (namely, tangential distortion). Please refer to the relevant academic papers for the detailed derivations.

Both Camera calibration With OpenCV and Matlab Camera Calibration cope with radial and tangential distortion by using the following formulas:

  • Radial:

\[\begin{split}x_{distorted} = x(1+k_1r^2+k_2r^4+k_3r^6) \\ y_{distorted} = y(1+k_1r^2+k_2r^4+k_3r^6)\end{split}\]
  • Tangential:

\[\begin{split}x_{distorted} = x+[2p_1xy+p_2(r^2+2x^2)] \\ y_{distorted} = y+[p_1(r^2+2y^2)+2p_2xy]\end{split}\]

Thus, there are 5 distortion parameters to calculate:

  • 3 parameters \(k_1, k_2, k_3\) for radial distortion;

  • 2 parameters \(p_1, p_2\) for tangential distortion.

Distortion parameters matter most for monocular vision cameras with wide-angle or fisheye lenses.
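The two distortion formulas above can be applied together. A minimal sketch operating on normalized image coordinates:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1, k2, k3) and tangential (p1, p2) distortion
    to a normalized image point (x, y)."""
    r2 = x * x + y * y                               # r^2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all five coefficients set to zero, the point is returned unchanged, which makes a handy sanity check.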

Calibration Pattern

As shown above, \(n\) points are used to compute the 4 parameters of the camera matrix in the projection function. That means at least 4 independent point correspondences are required. If the 5 additional distortion parameters are also to be computed, at least 4+5=9 independent correspondences are required. What’s more, in order to avoid manual annotation, a well-designed calibration pattern is used so that such \(n\) points can be detected automatically. In fact, in each adopted frame, a number of key points will be localized. Supposing 10 frames are captured, and in each frame 35 key points are accurately localized, the projection function is constrained by 10*35=350 point correspondences, which is surely adequate.

Camera calibration With OpenCV discussed three popular calibration patterns, including:

Classical Black-white Chessboard

Classical black-white chessboard


Symmetrical Circle Pattern

Clearly, since a circle itself is symmetrical, if the circle pattern is also designed to be symmetrical, ambiguity arises when the calibration board is rotated by 180 degrees. That is possibly why a symmetrical circle pattern is NOT provided in OpenCV, and it is NOT recommended by us either.

Asymmetrical Circle Pattern

Asymmetrical circle pattern


The BEST low-cost way to obtain such a pattern is NOT to have it printed on a hardboard, but to have it displayed directly on an extra monitor. In this way,

  • you not only ensure the perfect flatness of the calibration board,

  • but can also move a portable camera in front of the monitor, rather than holding the calibration board and moving it around a fixed camera.

Demonstrations

Preparation

Now, let’s calibrate a popular monocular vision camera, the Logitech C930e, with its default narrow-angle lens. We’ll use a classical black-white chessboard in Demo 1 and an asymmetrical circle pattern in Demo 2, respectively. Before running the demo code, make sure the Logitech C930e is correctly connected.

  1. Plug in Logitech C930e to your host computer via USB.

  2. Open up a terminal, and type in lsusb. You’ll see device info about Logitech C930e.

➜  ~ lsusb
......
Bus 001 Device 006: ID 046d:0843 Logitech, Inc. Webcam C930e
......
  3. Make sure the Python packages numpy and cv2 have been installed on your system.

➜  ~ pip show numpy
Name: numpy
Version: 1.18.3
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
Author-email: None
License: BSD
Location: /home/longervision/.local/lib/python3.6/site-packages
Requires:
Required-by: ......
➜  ~ python
Python 3.6.9 (default, Nov  7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'master-dev'
>>> cv2.__file__
'/usr/local/lib/python3.6/dist-packages/cv2/python-3.6/cv2.cpython-36m-x86_64-linux-gnu.so'
>>> exit()
➜  ~

Demo 1: Calibration Based On Classical Black-white Chessboard

Code Snippet: chessboard.py

################################################################################
#                                                                              #
#                                                                              #
#           IMPORTANT: READ BEFORE DOWNLOADING, COPYING AND USING.             #
#                                                                              #
#                                                                              #
#      Copyright [2017] [ShenZhen Longer Vision Technology], Licensed under    #
#      ******** GNU General Public License, version 3.0 (GPL-3.0) ********     #
#      You are allowed to use this file, modify it, redistribute it, etc.      #
#      You are NOT allowed to use this file WITHOUT keeping the License.       #
#                                                                              #
#      Longer Vision Technology is a startup located in Chinese Silicon Valley #
#      NanShan, ShenZhen, China, (http://www.longervision.cn), which provides  #
#      the total solution to the area of Machine Vision & Computer Vision.     #
#      The founder Mr. Pei JIA has been advocating Open Source Software (OSS)  #
#      for over 12 years ever since he started his PhD's research in England.  #
#                                                                              #
#      Longer Vision Blog is Longer Vision Technology's blog hosted on github  #
#      (http://longervision.github.io). Besides the published articles, a lot  #
#      more source code can be found at the organization's source code pool:   #
#      (https://github.com/LongerVision/OpenCV_Examples).                      #
#                                                                              #
#      For those who are interested in our blogs and source code, please do    #
#      NOT hesitate to comment on our blogs. Whenever you find any issue,      #
#      please do NOT hesitate to fire an issue on github. We'll try to reply   #
#      promptly.                                                               #
#                                                                              #
#                                                                              #
# Version:          0.0.1                                                      #
# Author:           JIA Pei                                                    #
# Contact:          jiapei@longervision.com                                    #
# URL:              http://www.longervision.cn                                 #
# Create Date:      2017-03-19                                                 #
# Modified Date:    2020-01-18                                                 #
# Modified Date:    2020-04-21                                                 #
################################################################################

import numpy as np
import cv2


# termination criteria
# 30: maximum specified number of iterations
# 0.001: specified/desired accuracy, epsilon
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

cap = cv2.VideoCapture(2)	# 2 is the camera's device index; adjust it to suit your system
num = 20
found = 0
while(found < num):			# Here, 20 can be changed to whatever number you like to choose
    ret, img = cap.read()	# Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)	# BGR to GRAY

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6), None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)	# Certainly, every loop objp is the same, in 3D.

        corners2 = cv2.cornerSubPix(gray,corners,(5,5),(-1,-1),criteria)	# 2D Projection
        imgpoints.append(corners2)

        # Draw and display the corners
        img = cv2.drawChessboardCorners(img, (7,6), corners2, ret)

        # The following 2 lines save the calibration images; comment them out if unwanted.
        filename = str(found).zfill(2) +".jpg"
        cv2.imwrite(filename, img)

        found += 1

    cv2.imshow('img', img)
    cv2.waitKey(10)


# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

# Calibration
# objpoints - 3D points in real world coordinates.
# imgpoints - objpoints' 2D projections on gray, with further refinement.
# gray.shape[::-1] - image size.
# ret - the RMS re-projection error returned by the function.
# mtx - camera matrix.
# dist - distortion coefficients.
# rvecs - rotation vectors.
# tvecs - translation vectors.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)


#  Python code to write the calibration results to a YAML file (OpenCV 4.3)
fs = cv2.FileStorage('calibration.yml', cv2.FILE_STORAGE_WRITE)
fs.write('camera_matrix', mtx)
fs.write('dist_coeff', dist)
fs.release()

Intermediate Images: Chessboard

00.jpg 01.jpg 02.jpg 03.jpg
04.jpg 05.jpg 06.jpg 07.jpg
08.jpg 09.jpg 10.jpg 11.jpg
12.jpg 13.jpg 14.jpg 15.jpg
16.jpg 17.jpg 18.jpg 19.jpg

Results: calibration_chessboard.yml

%YAML:1.0
---
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.3791275848728299e+03, 0., 1.1424618419580618e+03, 0.,
       1.3814642193483867e+03, 7.5694652936749662e+02, 0., 0., 1. ]
dist_coeff: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 8.0913920339187262e-02, -2.8105574608293149e-01,
       1.7830770264422226e-03, 4.2064577323684092e-03,
       2.9991508240001869e-02 ]

Clearly,

\[\begin{split}&\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} = \\ &\begin{pmatrix} 1.3791275848728299e+03 & 0. & 1.1424618419580618e+03 \\ 0. & 1.3814642193483867e+03 & 7.5694652936749662e+02 \\ 0. & 0. & 1. \end{pmatrix}\end{split}\]
\[\begin{split}\begin{pmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{pmatrix} = \begin{pmatrix} 8.0913920339187262e-02 \\ -2.8105574608293149e-01 \\ 1.7830770264422226e-03 \\ 4.2064577323684092e-03 \\ 2.9991508240001869e-02 \end{pmatrix}^T\end{split}\]

Demo 2: Calibration Based On Asymmetrical Circle Pattern

Code Snippet: circle_grid.py

################################################################################
#                                                                              #
#                                                                              #
#           IMPORTANT: READ BEFORE DOWNLOADING, COPYING AND USING.             #
#                                                                              #
#                                                                              #
#      Copyright [2017] [ShenZhen Longer Vision Technology], Licensed under    #
#      ******** GNU General Public License, version 3.0 (GPL-3.0) ********     #
#      You are allowed to use this file, modify it, redistribute it, etc.      #
#      You are NOT allowed to use this file WITHOUT keeping the License.       #
#                                                                              #
#      Longer Vision Technology is a startup located in Chinese Silicon Valley #
#      NanShan, ShenZhen, China, (http://www.longervision.cn), which provides  #
#      the total solution to the area of Machine Vision & Computer Vision.     #
#      The founder Mr. Pei JIA has been advocating Open Source Software (OSS)  #
#      for over 12 years ever since he started his PhD's research in England.  #
#                                                                              #
#      Longer Vision Blog is Longer Vision Technology's blog hosted on github  #
#      (http://longervision.github.io). Besides the published articles, a lot  #
#      more source code can be found at the organization's source code pool:   #
#      (https://github.com/LongerVision/OpenCV_Examples).                      #
#                                                                              #
#      For those who are interested in our blogs and source code, please do    #
#      NOT hesitate to comment on our blogs. Whenever you find any issue,      #
#      please do NOT hesitate to fire an issue on github. We'll try to reply   #
#      promptly.                                                               #
#                                                                              #
#                                                                              #
# Version:          0.0.1                                                      #
# Author:           JIA Pei                                                    #
# Contact:          jiapei@longervision.com                                    #
# URL:              http://www.longervision.cn                                 #
# Create Date:      2017-03-20                                                 #
# Modified Date:    2020-01-18                                                 #
# Modified Date:    2020-04-21                                                 #
################################################################################

# Standard imports
import numpy as np
import cv2


# termination criteria
# 30: maximum specified number of iterations
# 0.001: specified/desired accuracy, epsilon
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

########################################Blob Detector##############################################
# Setup SimpleBlobDetector parameters.
blobParams = cv2.SimpleBlobDetector_Params()

# Change thresholds
blobParams.minThreshold = 8
blobParams.maxThreshold = 255

# Filter by Area.
blobParams.filterByArea = True
blobParams.minArea = 64     # minArea may be adjusted to suit for your experiment
blobParams.maxArea = 2500   # maxArea may be adjusted to suit for your experiment

# Filter by Circularity
blobParams.filterByCircularity = True
blobParams.minCircularity = 0.1

# Filter by Convexity
blobParams.filterByConvexity = True
blobParams.minConvexity = 0.87

# Filter by Inertia
blobParams.filterByInertia = True
blobParams.minInertiaRatio = 0.01

# Create a detector with the parameters
blobDetector = cv2.SimpleBlobDetector_create(blobParams)
###################################################################################################


###################################################################################################
# Original blob coordinates, supposing all blobs have z-coordinate 0.
# The grid spacing used below is 72 units, with alternate rows offset by 36.
# In fact, any positive number can be used to replace 72: the pattern's absolute
# scale does not affect the estimated intrinsic camera calibration parameters.
objp = np.zeros((44, 3), np.float32)
objp[0]  = (0  , 0  , 0)
objp[1]  = (0  , 72 , 0)
objp[2]  = (0  , 144, 0)
objp[3]  = (0  , 216, 0)
objp[4]  = (36 , 36 , 0)
objp[5]  = (36 , 108, 0)
objp[6]  = (36 , 180, 0)
objp[7]  = (36 , 252, 0)
objp[8]  = (72 , 0  , 0)
objp[9]  = (72 , 72 , 0)
objp[10] = (72 , 144, 0)
objp[11] = (72 , 216, 0)
objp[12] = (108, 36,  0)
objp[13] = (108, 108, 0)
objp[14] = (108, 180, 0)
objp[15] = (108, 252, 0)
objp[16] = (144, 0  , 0)
objp[17] = (144, 72 , 0)
objp[18] = (144, 144, 0)
objp[19] = (144, 216, 0)
objp[20] = (180, 36 , 0)
objp[21] = (180, 108, 0)
objp[22] = (180, 180, 0)
objp[23] = (180, 252, 0)
objp[24] = (216, 0  , 0)
objp[25] = (216, 72 , 0)
objp[26] = (216, 144, 0)
objp[27] = (216, 216, 0)
objp[28] = (252, 36 , 0)
objp[29] = (252, 108, 0)
objp[30] = (252, 180, 0)
objp[31] = (252, 252, 0)
objp[32] = (288, 0  , 0)
objp[33] = (288, 72 , 0)
objp[34] = (288, 144, 0)
objp[35] = (288, 216, 0)
objp[36] = (324, 36 , 0)
objp[37] = (324, 108, 0)
objp[38] = (324, 180, 0)
objp[39] = (324, 252, 0)
objp[40] = (360, 0  , 0)
objp[41] = (360, 72 , 0)
objp[42] = (360, 144, 0)
objp[43] = (360, 216, 0)
###################################################################################################


# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.


cap = cv2.VideoCapture(2)	# 2 is the camera's device index; adjust it to suit your system
num = 20
found = 0
while(found < num):  		# Here, 20 can be changed to whatever number you like to choose
    ret, img = cap.read()	# Capture frame-by-frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)	# BGR to GRAY

    keypoints = blobDetector.detect(gray) # Detect blobs.

    # Draw detected blobs as green circles. This helps cv2.findCirclesGrid().
    im_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0,255,0), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    im_with_keypoints_gray = cv2.cvtColor(im_with_keypoints, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findCirclesGrid(im_with_keypoints, (4,11), None, flags = cv2.CALIB_CB_ASYMMETRIC_GRID)   # Find the circle grid

    if ret == True:
        objpoints.append(objp)  # Certainly, every loop objp is the same, in 3D.

        corners2 = cv2.cornerSubPix(im_with_keypoints_gray, corners, (5,5), (-1,-1), criteria)    # Refines the corner locations.
        imgpoints.append(corners2)

        # Draw and display the corners.
        im_with_keypoints = cv2.drawChessboardCorners(img, (4,11), corners2, ret)

        # The following 2 lines save the calibration images; comment them out if unwanted.
        filename = str(found).zfill(2) +".jpg"
        cv2.imwrite(filename, im_with_keypoints)

        found += 1


    cv2.imshow("img", im_with_keypoints) # display
    cv2.waitKey(2)


# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

# Calibration
# objpoints - 3D points in real world coordinates.
# imgpoints - objpoints' 2D projections on gray, with further refinement.
# gray.shape[::-1] - image size.
# ret - the RMS re-projection error returned by the function.
# mtx - camera matrix.
# dist - distortion coefficients.
# rvecs - rotation vectors.
# tvecs - translation vectors.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)


#  Python code to write the calibration results to a YAML file (OpenCV 4.3)
fs = cv2.FileStorage('calibration.yml', cv2.FILE_STORAGE_WRITE)
fs.write('camera_matrix', mtx)
fs.write('dist_coeff', dist)
fs.release()
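The 44 hard-coded object points in the script above follow a regular layout and can equivalently be generated in a loop, which is less error-prone. A sketch reproducing exactly the same table:

```python
import numpy as np

objp = np.zeros((44, 3), np.float32)
for i in range(44):
    row, col = divmod(i, 4)   # 11 rows of 4 circles each
    # columns are 72 units apart; every other row is offset by 36
    objp[i] = (36 * row, 72 * col + 36 * (row % 2), 0)
```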

Intermediate Images: Circle Grid

00.jpg 01.jpg 02.jpg 03.jpg
04.jpg 05.jpg 06.jpg 07.jpg
08.jpg 09.jpg 10.jpg 11.jpg
12.jpg 13.jpg 14.jpg 15.jpg
16.jpg 17.jpg 18.jpg 19.jpg

Results: calibration_circle_grid.yml

%YAML:1.0
---
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.4167295372250414e+03, 0., 1.0987014294575711e+03, 0.,
       1.4143477979365002e+03, 7.8219583855213511e+02, 0., 0., 1. ]
dist_coeff: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 6.9131941945763054e-02, -5.8538182599906861e-02,
       1.6830698943192594e-03, -9.4239470722430434e-03,
       -2.6261143127238118e-01 ]

Clearly,

\[\begin{split}&\begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} =\\ &\begin{pmatrix} 1.4167295372250414e+03 & 0. & 1.0987014294575711e+03 \\ 0. & 1.4143477979365002e+03 & 7.8219583855213511e+02 \\ 0. & 0. & 1. \end{pmatrix}\end{split}\]
\[\begin{split}\begin{pmatrix} k_1 & k_2 & p_1 & p_2 & k_3 \end{pmatrix} = \begin{pmatrix} 6.9131941945763054e-02 \\ -5.8538182599906861e-02 \\ 1.6830698943192594e-03 \\ -9.4239470722430434e-03 \\ -2.6261143127238118e-01 \end{pmatrix}^T\end{split}\]

Clearly, the result calibration_chessboard.yml from Demo 1 and the result calibration_circle_grid.yml from Demo 2 differ somewhat.

Assignments

There are many more calibration patterns to try. Depending on the application, calibration boards vary in factors including:

  • pattern sizes, such as:
    • How many rows and columns of squares or circles?

    • How big is a single square or circle? For example, a telephoto lens requires more precise calibration when the target view is far away. In such cases, the calibration board needs to be big enough to be detected at a very long distance from the camera.

  • pattern types:
    • Besides traditional chessboard and circle grids, OpenCV also provides ChArUco.

There are a number of online calibration pattern generators free to use; Calib.io is one recommendation. In this activity, please generate any calibration pattern you wish, and calibrate your camera with it. Different calibration patterns perform differently in different applications, so please compare the calibration performance of various calibration patterns for various applications with various camera lenses.