1. Introduction
This tutorial focuses on the estimation of the homogeneous transformation between the robot reference frame and the camera frame in the case of a fixed camera observing the robot end-effector. This configuration is also called eye-to-hand.
As a use case, we will consider in this tutorial either:
- a Panda robot in its research version from Franka Emika and a D435 fixed camera mounted on a tripod observing the robot end-effector,
- or a robot from Universal Robots and a D435 fixed camera mounted on a tripod observing the robot end-effector.
The principle of extrinsic eye-to-hand calibration applies readily to any other robot observed by any other fixed camera.
Let us consider:
- rMe, the homogeneous transformation between the robot reference frame and the robot end-effector frame,
- cMo, the homogeneous transformation between the camera frame and an AprilTag frame (also called object frame) attached to the robot end-effector,
- rMc, the homogeneous transformation between the robot reference frame and the camera frame. This is the transformation corresponding to the extrinsic eye-to-hand transformation that we have to estimate,
- eMo, the homogeneous transformation between the robot end-effector frame and the object frame attached to the end-effector. This transformation is also estimated.
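Since the tag is rigidly attached to the end-effector and the camera is fixed with respect to the robot reference frame, these transformations are linked, for each robot position i, by the closure relation
rMo = rMe(i) * eMo = rMc * cMo(i)
where rMc and eMo are constant. The calibration estimates the rMc and eMo that best satisfy this relation over all the acquired couples of poses.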
The calibration process described in this tutorial consists of 3 steps:
- acquiring data consisting of couples of rMe poses, images of the AprilTag and camera intrinsic parameters
- computing the corresponding cMo pose of the AprilTag from the images
- from the basket of corresponding (rMe, cMo) couples of poses, the last step is to estimate the rMc extrinsic transformation.
Note that all the material (source code) described in this tutorial is part of ViSP source code (in the apps/calibration/hand-eye folder) and can be found at https://github.com/lagadic/visp/tree/master/apps/calibration/hand-eye.
2. Prerequisites
2.1. Get intrinsic parameters
In order to compute the cMo pose from the AprilTag image, we need the camera intrinsic parameters. Depending on the device, these parameters are part of the device SDK or firmware. This is for example the case for the Intel RealSense D435 camera considered in this tutorial. These intrinsic factory parameters can be retrieved using vpRealSense2::getCameraParameters().
If you have a camera other than a RealSense, or if you want a better estimation than the factory parameters, you may follow Tutorial: Camera intrinsic calibration. Otherwise you can skip this section.
- Note
- In any case, calibrating your camera following Tutorial: Camera intrinsic calibration will allow you to estimate intrinsic parameters, including image distortion (see vpCameraParameters class). The intrinsic factory parameters correspond to a model without image distortion. When a model with distortion is used, the results of hand-eye calibration are more accurate.
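As an illustration, here is a minimal sketch of retrieving the factory parameters from a RealSense device and saving them for later use; the stream resolution, camera name and output file name below are illustrative:
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/sensor/vpRealSense2.h>

int main()
{
  vpRealSense2 rs;
  rs2::config config;
  config.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGBA8, 30);
  rs.open(config);
  // Factory intrinsics from the firmware, thus without distortion coefficients
  vpCameraParameters cam = rs.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion);
  vpXmlParserCamera xml;
  xml.save(cam, "camera.xml", "Camera", 640, 480);
}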
2.2. Print a 36h11 Apriltag
We provide a ready-to-print 36h11 tag that is a 9.5 by 9.5 cm square [download].
If you prefer, you can also directly download pre-generated 36h11 tags from the AprilTag website.
In the archive you will find a PNG image of each tag, a PNG mosaic containing every tag, and a ready-to-print postscript file with one tag per page. If you want to print an individual tag, you can manually scale the corresponding PNG image using one of two methods:
- on Unix with ImageMagick, e.g.:
$ convert tag36_11_00000.png -scale 5000% tag36_11_00000_big.png
- or open the image with Gimp:
- then from the pulldown menu, select Image > Scale Image
- set the unit and the size
- set the Interpolation mode to None
- click on the Scale button
- From the pulldown menu, select File > Export As
- Save the image as a new PNG image, e.g., /tmp/tag36_11_00000-rescaled.png
- Send the PNG file to your printer
Glue the tag on the robot end-effector and mount your camera on a fixed tripod.
3. Extrinsic calibration overview
3.1. Acquire data
The objective here is to complete step 1 by acquiring couples of rMe poses and the corresponding images of the tag. To this end, move the robot end-effector to different positions. At least 8 to 10 positions are required. To define a good position, imagine a half sphere over the camera X-Y plane and select positions that discretize as much as possible the surface of this half sphere in front of the camera. For each position, the whole tag should be visible and as large as possible in the image.
To acquire images of the tag, depending on your device you can follow Tutorial: Image frame grabbing. Images can be saved in JPEG or PNG format, or any other format supported by ViSP.
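For instance, assuming the color image was acquired in a vpImage<vpRGBa> from your grabber, saving it boils down to one call (the file name below is illustrative):
#include <visp3/io/vpImageIo.h>

vpImage<vpRGBa> I; // image acquired from your grabber, e.g. with vpRealSense2::acquire(I)
vpImageIo::write(I, "image-1.png"); // format deduced from the file extension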
To get the corresponding rMe poses, you need to use one of our robot interfaces like vpRobotFranka::getPosition(vpRobot::END_EFFECTOR_FRAME) or vpRobotUniversalRobots::getPosition(vpRobot::END_EFFECTOR_FRAME). It returns the homogeneous transformation between the robot reference frame and the robot end-effector. The following code snippet shows how to save the pose in YAML format:
vpPoseVector rPe;
robot.getPosition(vpRobot::END_EFFECTOR_FRAME, rPe); // rMe returned as a pose vector
int cpt = 1; // index of the current couple of data
std::stringstream ss;
ss << "pose_rPe_" << cpt << ".yaml";
vpPoseVector::saveYAML(ss.str(), rPe); // see vpArray2D::saveYAML()
To complete this step, you also need to get or calibrate your camera intrinsic parameters. These parameters need to be saved in an XML file. If you have an Intel RealSense device you can directly get the factory parameters using vpRealSense2::getCameraParameters() and then save them in an XML file using vpXmlParserCamera::save(). An example is given in visp-acquire-franka-calib-data.cpp or in visp-acquire-universal-robots-calib-data.cpp.
- Note
- With vpRealSense2::getCameraParameters() you can only get the factory parameters without distortion coefficients. If you want parameters with distortion, you need to perform a calibration as described in Tutorial: Camera intrinsic calibration.
As an example, in the ViSP build folder you will find a dataset in the "data-eye-to-hand" folder corresponding to data acquired with a real Panda robot that has a 36h11 AprilTag attached to its end-effector. These data were acquired with the visp-acquire-franka-calib-data binary described in section 4.1. Panda robot + Realsense.
$ cd $VISP_WS/visp-build/apps/calibration/hand-eye
$ ls data-eye-to-hand
apriltag-data.txt franka_image-2.png franka_image-6.png franka_pose_rPe_2.yaml franka_pose_rPe_6.yaml
franka_camera_visp.xml franka_image-3.png franka_image-7.png franka_pose_rPe_3.yaml franka_pose_rPe_7.yaml
franka_camera.xml franka_image-4.png franka_image-8.png franka_pose_rPe_4.yaml franka_pose_rPe_8.yaml
franka_image-1.png franka_image-5.png franka_pose_rPe_1.yaml franka_pose_rPe_5.yaml
In this dataset, you will find:
- 8 images of the 36h11 Apriltag in franka_image-*.png files
- the corresponding pose of the end-effector in the robot reference frame, noted rMe, saved as a pose vector in YAML format in franka_pose_rPe_*.yaml files
- camera intrinsic factory parameters in franka_camera.xml
- camera intrinsic parameters calibrated using ViSP in franka_camera_visp.xml. These parameters were obtained following Tutorial: Camera intrinsic calibration
- and the size of the tag, which is 0.048 m, in apriltag-data.txt.
3.2. Compute tag poses
Here we will complete step 2 by computing for each image the corresponding cMo pose of the tag, using the camera intrinsic parameters recorded in the XML file.
To this end you can use the visp-compute-apriltag-poses binary to compute the different poses of the tag with respect to the camera frame.
Considering the dataset presented in the previous section, and knowing that the size of the tag is 0.048 m (modify the --tag-size option according to your printed tag), you may run:
$ cd $VISP_WS/visp-build/apps/calibration/hand-eye
$ ./visp-compute-apriltag-poses \
--tag-size 0.048 \
--input data-eye-to-hand/franka_image-%d.png \
--intrinsic data-eye-to-hand/franka_camera.xml \
--output data-eye-to-hand/franka_pose_cPo_%d.yaml
It will produce the following results:
Parameters:
Apriltag
Size [m] : 0.048
Z aligned : false
Input images location : data-eye-to-hand/franka_image-%d.png
First frame : 1
Last frame : 8
Camera intrinsics
Param file name [.xml]: data-eye-to-hand/franka_camera.xml
Camera name : Camera
Output camera poses : data-eye-to-hand/franka_pose_cPo_%d.yaml
Interactive mode : yes
Found camera with name: "Camera"
Camera parameters used to compute the pose:
Camera parameters for perspective projection with distortion:
px = 601.0670294 py = 602.4186662
u0 = 321.3921431 v0 = 236.8025954
kud = 0.05663697965
kdu = -0.05584121348
Process image: data-eye-to-hand/franka_image-1.png
Save data-eye-to-hand/franka_pose_cPo_1.yaml
Process image: data-eye-to-hand/franka_image-2.png
Save data-eye-to-hand/franka_pose_cPo_2.yaml
Process image: data-eye-to-hand/franka_image-3.png
Save data-eye-to-hand/franka_pose_cPo_3.yaml
Process image: data-eye-to-hand/franka_image-4.png
Save data-eye-to-hand/franka_pose_cPo_4.yaml
Process image: data-eye-to-hand/franka_image-5.png
Save data-eye-to-hand/franka_pose_cPo_5.yaml
Process image: data-eye-to-hand/franka_image-6.png
Save data-eye-to-hand/franka_pose_cPo_6.yaml
Process image: data-eye-to-hand/franka_image-7.png
Save data-eye-to-hand/franka_pose_cPo_7.yaml
Process image: data-eye-to-hand/franka_image-8.png
Save data-eye-to-hand/franka_pose_cPo_8.yaml
- Note
- At this point, you may notice that the camera parameters read from the data-eye-to-hand/franka_camera_visp.xml file take image distortion into account, since the kud and kdu parameters are set to non-zero values.
Camera parameters used to compute the pose:
Camera parameters for perspective projection with distortion:
px = 601.0670294 py = 602.4186662
u0 = 321.3921431 v0 = 236.8025954
kud = 0.05663697965
kdu = -0.05584121348
- As mentioned above, the results of eye-to-hand calibration gain in accuracy when using intrinsics that also model image distortion, as is the case here.
The source code corresponding to the binary is available in visp-compute-apriltag-poses.cpp.
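If you prefer to compute the tag pose in your own code, the underlying class is vpDetectorAprilTag. Below is a minimal sketch, assuming a grayscale image and intrinsics loaded from the XML file; paths and the camera name are taken from the dataset above:
#include <visp3/core/vpXmlParserCamera.h>
#include <visp3/detection/vpDetectorAprilTag.h>
#include <visp3/io/vpImageIo.h>

int main()
{
  vpImage<unsigned char> I;
  vpImageIo::read(I, "data-eye-to-hand/franka_image-1.png");
  vpCameraParameters cam;
  vpXmlParserCamera xml;
  xml.parse(cam, "data-eye-to-hand/franka_camera_visp.xml", "Camera",
            vpCameraParameters::perspectiveProjWithDistortion, I.getWidth(), I.getHeight());
  double tag_size = 0.048; // tag size in [m], same as --tag-size
  vpDetectorAprilTag detector(vpDetectorAprilTag::TAG_36h11);
  std::vector<vpHomogeneousMatrix> cMo_vec; // one cMo per detected tag
  detector.detect(I, tag_size, cam, cMo_vec);
}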
It produces as output the corresponding franka_pose_cPo_%d.yaml files that are saved in the "data-eye-to-hand" folder. They correspond to the camera-to-tag transformations:
$ ls data-eye-to-hand/franka_pose_cPo_*
data-eye-to-hand/franka_pose_cPo_1.yaml data-eye-to-hand/franka_pose_cPo_4.yaml data-eye-to-hand/franka_pose_cPo_7.yaml
data-eye-to-hand/franka_pose_cPo_2.yaml data-eye-to-hand/franka_pose_cPo_5.yaml data-eye-to-hand/franka_pose_cPo_8.yaml
data-eye-to-hand/franka_pose_cPo_3.yaml data-eye-to-hand/franka_pose_cPo_6.yaml
3.3. Estimate extrinsic transformation
The final step consists now in estimating the rMc transformation, from the robot reference frame to the camera frame, using the couples of rMe and cMo poses.
Complete the calibration running the visp-compute-eye-to-hand-calibration binary. It will get the data from the pairs of files franka_pose_rPe_%d.yaml and franka_pose_cPo_%d.yaml located in the "data-eye-to-hand" folder.
$ cd $VISP_WS/visp-build/apps/calibration/hand-eye
$ ./visp-compute-eye-to-hand-calibration \
--data-path data-eye-to-hand/ \
--rPe franka_pose_rPe_%d.yaml \
--cPo franka_pose_cPo_%d.yaml \
--output-ePo data-eye-to-hand/franka_ePo.yaml \
--output-rPc data-eye-to-hand/franka_rPc.yaml
The source code corresponding to the binary is available in visp-compute-eye-to-hand-calibration.cpp.
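For reference, the solver behind this binary is exposed through the vpHandEyeCalibration class. The sketch below is an assumption on how eye-to-hand can be mapped onto ViSP's eye-in-hand solver: inverting the robot poses (rMe becomes eMr) turns the relation rMe * eMo = rMc * cMo into the eye-in-hand form, so that rMc plays the role of the unknown hand-eye transformation (reading of the yaml files is omitted):
#include <visp3/vision/vpHandEyeCalibration.h>

std::vector<vpHomogeneousMatrix> cMo_vec; // tag poses from franka_pose_cPo_%d.yaml
std::vector<vpHomogeneousMatrix> eMr_vec; // inverted robot poses rMe.inverse() from franka_pose_rPe_%d.yaml
vpHomogeneousMatrix rMc;
// Eye-in-hand solver applied to the inverted poses returns rMc (0 on success)
int ret = vpHandEyeCalibration::calibrate(cMo_vec, eMr_vec, rMc);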
It will produce the following results:
Use data from data-eye-to-hand/franka_pose_rPe_1.yaml and from data-eye-to-hand/franka_pose_cPo_1.yaml
Use data from data-eye-to-hand/franka_pose_rPe_2.yaml and from data-eye-to-hand/franka_pose_cPo_2.yaml
Use data from data-eye-to-hand/franka_pose_rPe_3.yaml and from data-eye-to-hand/franka_pose_cPo_3.yaml
Use data from data-eye-to-hand/franka_pose_rPe_4.yaml and from data-eye-to-hand/franka_pose_cPo_4.yaml
Use data from data-eye-to-hand/franka_pose_rPe_5.yaml and from data-eye-to-hand/franka_pose_cPo_5.yaml
Use data from data-eye-to-hand/franka_pose_rPe_6.yaml and from data-eye-to-hand/franka_pose_cPo_6.yaml
Use data from data-eye-to-hand/franka_pose_rPe_7.yaml and from data-eye-to-hand/franka_pose_cPo_7.yaml
Use data from data-eye-to-hand/franka_pose_rPe_8.yaml and from data-eye-to-hand/franka_pose_cPo_8.yaml
Distance theta between rMo/rMc(0) and mean (deg) = 0.373169
Distance theta between rMo/rMc(1) and mean (deg) = 0.614997
Distance theta between rMo/rMc(2) and mean (deg) = 0.357255
Distance theta between rMo/rMc(3) and mean (deg) = 0.264302
Distance theta between rMo/rMc(4) and mean (deg) = 0.52074
Distance theta between rMo/rMc(5) and mean (deg) = 0.30428
Distance theta between rMo/rMc(6) and mean (deg) = 0.776229
Distance theta between rMo/rMc(7) and mean (deg) = 0.643039
Mean residual rMo/rMc(8) - rotation (deg) = 0.511707
Distance d between rMo/rMc(0) and mean (m) = 0.00138341
Distance d between rMo/rMc(1) and mean (m) = 0.00147386
Distance d between rMo/rMc(2) and mean (m) = 0.00190847
Distance d between rMo/rMc(3) and mean (m) = 0.00247277
Distance d between rMo/rMc(4) and mean (m) = 0.00188432
Distance d between rMo/rMc(5) and mean (m) = 0.00157478
Distance d between rMo/rMc(6) and mean (m) = 0.00224356
Distance d between rMo/rMc(7) and mean (m) = 0.00439174
Mean residual rMo/rMc(8) - translation (m) = 0.00235038
Mean residual rMo/rMc(8) - global = 0.00653018
Eye-to-hand calibration succeed
Estimated hand-object (eMo) transformation:
-------------------------------------------
eMo[4,4]=
0.00622146 0.0219727 0.999739 0.0220351
0.999978 0.00193654 -0.00626551 -0.0051383
-0.0020737 0.999757 -0.0219602 -0.0556314
0.0 0.0 0.0 1.0
- Corresponding pose vector [tx ty tz tux tuy tuz] in [m] and [rad]: 0.02203508116 -0.005138296199 -0.05563139449 1.226817252 1.221684136 1.19265196
- Translation [m]: 0.02203508116 -0.005138296199 -0.05563139449
- Rotation (theta-u representation) [rad]: 1.226817252 1.221684136 1.19265196
- Rotation (theta-u representation) [deg]: 70.29145076 69.9973449 68.33392374
- Rotation (quaternion representation) [rad]: 0.5065187868 0.5043994659 0.4924128862 0.4965374577
- Rotation (r-x-y-z representation) [rad]: 2.863665054 1.547957809 -1.294874109
- Rotation (r-x-y-z representation) [deg]: 164.0759216 88.69144934 -74.19082146
Estimated robot reference to camera frames (rMc) transformation:
----------------------------------------------------------------
rMc[4,4]=
-0.0161049 -0.139226 -0.99013 0.954036
0.999844 -0.00939845 -0.0149414 -0.0512357
-0.00722546 -0.990216 0.139356 0.47622
0.0 0.0 0.0 1.0
- Corresponding pose vector [tx ty tz tux tuy tuz] in [m] and [rad]: 0.9540358034 -0.05123574465 0.4762201018 -1.104108108 -1.112745553 1.289540899
- Translation [m]: 0.9540358034 -0.05123574465 0.4762201018
- Rotation (theta-u representation) [rad]: -1.104108108 -1.112745553 1.289540899
- Rotation (theta-u representation) [deg]: -63.26073472 -63.75562386 73.88525101
- Rotation (quaternion representation) [rad]: -0.4620438111 -0.4656583828 0.5396431627 0.527695977
- Rotation (r-x-y-z representation) [rad]: 0.106809808 -1.430179098 1.685959411
- Rotation (r-x-y-z representation) [deg]: 6.119751 -81.943226 96.598359
Save transformation matrix eMo as an homogeneous matrix in: data-eye-to-hand/franka_ePo.txt
Save transformation matrix eMo as a vpPoseVector in : data-eye-to-hand/franka_ePo.yaml
Save transformation matrix rMc as an homogeneous matrix in: data-eye-to-hand/franka_rPc.txt
Save transformation matrix rMc as a vpPoseVector in : data-eye-to-hand/franka_rPc.yaml
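The different rotation representations printed above correspond to ViSP rotation classes. As a minimal sketch, the estimated rMc (assumed already available) can be converted between them as follows:
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpPoseVector.h>
#include <visp3/core/vpQuaternionVector.h>
#include <visp3/core/vpRxyzVector.h>
#include <visp3/core/vpThetaUVector.h>

vpHomogeneousMatrix rMc;                        // estimated transformation
vpPoseVector pose(rMc);                         // [tx ty tz tux tuy tuz]
vpThetaUVector tu(rMc);                         // theta-u rotation, in [rad]
vpQuaternionVector q(rMc.getRotationMatrix());  // quaternion
vpRxyzVector rxyz(rMc.getRotationMatrix());     // r-x-y-z Euler angles, in [rad]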
Results are saved in the data-eye-to-hand/ folder in which we can find:
- the extrinsic transformation rMc in the franka_rPc.yaml file as a vpPoseVector, with translation in meters and rotation as an axis-angle vector with values in radians:
$ more data-eye-to-hand/franka_rPc.yaml
Robot reference to camera frames transformation (rMc)
rows: 6
cols: 1
data:
- [0.954036]
- [-0.0512357]
- [0.47622]
- [-1.10411]
- [-1.11275]
- [1.28954]
- the extrinsic transformation rMc in the franka_rPc.txt file that contains the corresponding homogeneous matrix transformation:
$ more data-eye-to-hand/franka_rPc.txt
-0.01610494501 -0.1392259042 -0.9901296775 0.9540358034
0.9998441996 -0.009398452743 -0.01494140425 -0.05123574465
-0.007225456468 -0.9902160455 0.1393555743 0.4762201018
0 0 0 1
It will also save these two other files:
- franka_ePo.yaml file that gives the transformation eMo between the end-effector and the AprilTag as a pose vector
$ more data-eye-to-hand/franka_ePo.yaml
Robot end-effector to object frames transformation (eMo)
rows: 6
cols: 1
data:
- [0.0220351]
- [-0.0051383]
- [-0.0556314]
- [1.22682]
- [1.22168]
- [1.19265]
- franka_ePo.txt file that gives the transformation eMo between the end-effector and the AprilTag as a homogeneous matrix.
$ more data-eye-to-hand/franka_ePo.txt
0.006221456592 0.02197272564 0.9997392124 0.02203508116
0.9999784964 0.001936536168 -0.006265507753 -0.005138296199
-0.002073701426 0.999756695 -0.02196020507 -0.05563139449
0 0 0 1
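To reuse the calibration results in your own program, the saved pose vector can be read back and converted into a homogeneous matrix. A minimal sketch (file name taken from above):
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpPoseVector.h>

vpPoseVector rPc;
vpPoseVector::loadYAML("data-eye-to-hand/franka_rPc.yaml", rPc);
vpHomogeneousMatrix rMc(rPc); // robot reference to camera transformation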
- Warning
- We recall that a good eye-to-hand calibration is obtained when the end-effector positions cover the surface of a half sphere in front of the camera and when the AprilTag is as large as possible in the images.
- Note
- Looking at the results, we can see the residual values:
Mean residual rMo/rMc(8) - rotation (deg) = 0.511707
Mean residual rMo/rMc(8) - translation (m) = 0.00235038
Mean residual rMo/rMc(8) - global = 0.00653018
It shows that we have an error of 0.51 degrees for rotation and 2.35 millimeters for translation. If, instead of using the intrinsic parameters with distortion, we had used the factory parameters without distortion, the residual would have been larger and the eye-to-hand calibration less accurate.
4. Use cases
4.1. Panda robot + Realsense
In this section we suppose that you have a Panda robot from Franka Emika with an AprilTag glued to its end-effector and a RealSense camera mounted on a tripod observing the robot end-effector.
Panda robot in eye-to-hand configuration.
If not already done, follow 2.8. Configure Ethernet and 2.9. Connect to Franka desk instructions to power on the Panda robot. Then, if not already done, follow 2.3. Install Franka library and 2.7. Configure and build ViSP.
If not already done, you also need to install 4.1.2. librealsense 2.x and build ViSP to enable vpRealSense2 class usage.
4.1.1 Acquire data
Connect the RealSense D435 camera to the computer, attach the tag to the robot end-effector, enter the apps/calibration/hand-eye folder and run the visp-acquire-franka-calib-data binary to acquire images of the tag for different end-effector positions:
$ cd apps/calibration/hand-eye
$ ./visp-acquire-franka-calib-data
By default the robot controller IP is 192.168.1.1. If your Franka has another IP (let's say 192.168.30.10), use the --ip option like:
$ ./visp-acquire-franka-calib-data --ip 192.168.30.10 --output-folder data-franka
Click with the left mouse button to acquire data. It records the following outputs in data-franka/ folder:
- franka_camera.xml : XML file that contains the intrinsic camera parameters extracted from camera firmware
- couples of franka_image-<number>.png + franka_pose_rPe_<number>.yaml with number starting from 1. franka_pose_rPe_<number>.yaml is the pose of the end-effector expressed in the robot reference frame (rMe), while franka_image-<number>.png is the image captured at the corresponding robot position.
Move the robot to another position such that the tag remains in the image and repeat the data acquisition by a left mouse click. We recommend acquiring data at 8 to 10 different robot positions.
A right mouse click ends this step and exits the binary.
This is the output when 8 different positions are considered:
$ ./visp-acquire-franka-calib-data --ip 192.168.30.10 --output-folder ./data-franka
Image size: 640 x 480
Save: ./data-franka/franka_image-1.png and ./data-franka/franka_pose_rPe_1.yaml
Save: ./data-franka/franka_image-2.png and ./data-franka/franka_pose_rPe_2.yaml
Save: ./data-franka/franka_image-3.png and ./data-franka/franka_pose_rPe_3.yaml
Save: ./data-franka/franka_image-4.png and ./data-franka/franka_pose_rPe_4.yaml
Save: ./data-franka/franka_image-5.png and ./data-franka/franka_pose_rPe_5.yaml
Save: ./data-franka/franka_image-6.png and ./data-franka/franka_pose_rPe_6.yaml
Save: ./data-franka/franka_image-7.png and ./data-franka/franka_pose_rPe_7.yaml
Save: ./data-franka/franka_image-8.png and ./data-franka/franka_pose_rPe_8.yaml
The source code corresponding to the binary is available in visp-acquire-franka-calib-data.cpp. If your setup is different, it can easily be adapted to your robot or camera.
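When adapting, the robot-specific part boils down to connecting to the controller and reading the end-effector pose. A minimal sketch for the Franka case (the IP below is illustrative):
#include <visp3/core/vpPoseVector.h>
#include <visp3/robot/vpRobotFranka.h>

int main()
{
  vpRobotFranka robot;
  robot.connect("192.168.1.1"); // your robot controller IP
  vpPoseVector rPe;
  robot.getPosition(vpRobot::END_EFFECTOR_FRAME, rPe); // rMe as a pose vector
}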
4.1.2 Compute tag poses
Given the camera intrinsic parameters and the set of images, you can compute the tag poses running (adapt the tag size parameter to your use case):
$ ./visp-compute-apriltag-poses \
--tag-size 0.048 \
--input ./data-franka/franka_image-%d.png \
--intrinsic ./data-franka/franka_camera.xml \
--output ./data-franka/franka_pose_cPo_%d.yaml
4.1.3 Estimate extrinsic transformation
Finally you can estimate the extrinsic transformation between the robot reference frame and your camera, running:
$ ./visp-compute-eye-to-hand-calibration \
--data-path ./data-franka \
--rPe franka_pose_rPe_%d.yaml \
--cPo franka_pose_cPo_%d.yaml \
--output-rPc ./data-franka/franka_rPc.yaml \
--output-ePo ./data-franka/franka_ePo.yaml
In the data-franka/ folder it will save:
- franka_rPc.yaml file that contains the pose as a vpPoseVector
$ more ./data-franka/franka_rPc.yaml
Robot reference to camera frames transformation (rMc)
rows: 6
cols: 1
data:
- [0.954036]
- [-0.0512357]
- [0.47622]
- [-1.10411]
- [-1.11275]
- [1.28954]
- franka_rPc.txt that contains the corresponding homogeneous matrix transformation:
$ more ./data-franka/franka_rPc.txt
-0.01610494501 -0.1392259042 -0.9901296775 0.9540358034
0.9998441996 -0.009398452743 -0.01494140425 -0.05123574465
-0.007225456468 -0.9902160455 0.1393555743 0.4762201018
0 0 0 1
It will also save these two other files:
- franka_ePo.yaml file that gives the transformation eMo between the end-effector and the AprilTag as a pose vector
- franka_ePo.txt file that gives the transformation eMo between the end-effector and the AprilTag as a homogeneous matrix.
4.2. UR robot + Realsense
In this section we suppose that you have a robot from Universal Robots with a 36h11 AprilTag attached to its end-effector and a RealSense camera mounted on a tripod observing the robot end-effector.
If not already done, follow the Universal Robots visual-servoing Prerequisites instructions to install the ur_rtde 3rd party library and build ViSP with Universal Robots support to enable vpRobotUniversalRobots class usage.
If not already done, you also need to install 4.1.2. librealsense 2.x and build ViSP to enable vpRealSense2 class usage.
4.2.1 Acquire data
Connect the RealSense camera to the computer, attach the tag to the robot end-effector so that it is in the camera field of view, enter the apps/calibration/hand-eye folder and run the visp-acquire-universal-robots-calib-data binary to acquire the images and the corresponding robot end-effector positions:
$ cd apps/calibration/hand-eye
$ ./visp-acquire-universal-robots-calib-data
By default the robot controller IP is 192.168.0.100. If your robot from Universal Robots has another IP (let's say 192.168.30.10), use the --ip option like:
$ ./visp-acquire-universal-robots-calib-data --ip 192.168.30.10 --output-folder data-ur
Click with the left mouse button to acquire data. It records the following outputs in data-ur folder:
- ur_camera.xml : XML file that contains the intrinsic camera parameters extracted from camera firmware
- couples of ur_image-<number>.png + ur_pose_rPe_<number>.yaml with number starting from 1. ur_pose_rPe_<number>.yaml is the pose of the end-effector expressed in the robot reference frame (rMe), while ur_image-<number>.png is the image captured at the corresponding robot position.
With the PolyScope, move the robot to another position such that the tag remains in the image and repeat the data acquisition by a left mouse click. We recommend acquiring data at 8 to 10 different robot positions.
A right mouse click ends this step and exits the binary.
This is the output when 8 different positions are considered:
$ ./visp-acquire-universal-robots-calib-data --ip 192.168.30.10 --output-folder data-ur
Image size: 640 x 480
Found camera with name: "Camera"
Save: data-ur/ur_image-1.png and data-ur/ur_pose_rPe_1.yaml
Save: data-ur/ur_image-2.png and data-ur/ur_pose_rPe_2.yaml
Save: data-ur/ur_image-3.png and data-ur/ur_pose_rPe_3.yaml
Save: data-ur/ur_image-4.png and data-ur/ur_pose_rPe_4.yaml
Save: data-ur/ur_image-5.png and data-ur/ur_pose_rPe_5.yaml
Save: data-ur/ur_image-6.png and data-ur/ur_pose_rPe_6.yaml
Save: data-ur/ur_image-7.png and data-ur/ur_pose_rPe_7.yaml
Save: data-ur/ur_image-8.png and data-ur/ur_pose_rPe_8.yaml
The source code corresponding to the binary is available in visp-acquire-universal-robots-calib-data.cpp. If your setup is different, it can easily be adapted to your robot or camera.
4.2.2 Compute tag poses
Given the camera intrinsic parameters and the set of images, you can compute the tag poses running (adapt the tag size parameter to your use case):
$ ./visp-compute-apriltag-poses \
--tag-size 0.0248 \
--input data-ur/ur_image-%d.png \
--intrinsic data-ur/ur_camera.xml \
--output data-ur/ur_pose_cPo_%d.yaml
4.2.3 Estimate extrinsic transformation
Finally you can estimate the extrinsic transformation between the robot reference frame and your camera, running:
$ ./visp-compute-eye-to-hand-calibration \
--data-path . \
--rPe data-ur/ur_pose_rPe_%d.yaml \
--cPo data-ur/ur_pose_cPo_%d.yaml \
--output-rPc data-ur/ur_rPc.yaml \
--output-ePo data-ur/ur_ePo.yaml
In the data-ur folder, it will save:
- ur_rPc.yaml file that contains the pose as a vpPoseVector
- ur_rPc.txt file that contains the corresponding homogeneous matrix transformation.
It will also save these two other files:
- ur_ePo.yaml file that gives the transformation eMo between the end-effector and the AprilTag as a pose vector
- ur_ePo.txt file that gives the transformation eMo between the end-effector and the AprilTag as a homogeneous matrix.