Single-frame multi-camera to robot pose estimation

Cappelletto, Alberto
2023/2024

Abstract

Human-robot collaboration relies on multi-camera systems to robustly monitor human operators in a robotic workcell. In this scenario, precise localization of the person in the robot coordinate system is essential, making the calibration of the camera network critical. In this work, we propose a novel method to calibrate a network of cameras, using a robot present in the cameras' field of view as the calibration target and requiring only a single image per camera, and to estimate the pose between the cameras and the robot. This approach is innovative because previous methods have focused primarily on calibrating individual cameras. We demonstrate the effectiveness of our model by comparing it with the single-camera case, highlighting improvements in robustness and accuracy. We analyze how our method performs with 3, 4, and 5 cameras, and how the distance of the cameras from the robot affects the estimates. The experiments show that our method is more accurate and robust than the single-camera method, improving rotation estimation by about 38% and translation estimation by about 40% for some cameras. Our findings indicate that the model successfully handles variations in the number of cameras and is robust to changes in the setup configuration.
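The full method is detailed in the thesis PDF below. As a rough, hypothetical sketch of the core idea (the robot's own body serving as the calibration target, with one image per camera), a simplified per-camera baseline could match joint keypoints detected in the image to the 3D joint positions given by the robot's forward kinematics and solve a PnP problem. The function names below and the use of OpenCV's solvePnP are illustrative assumptions, not the thesis implementation, which jointly exploits all cameras.

    # Illustrative sketch (not the thesis method): estimate each camera's
    # pose relative to the robot base from a single image, using the
    # robot's joints as a calibration target, then chain the per-camera
    # poses to recover camera-to-camera extrinsics.
    import cv2
    import numpy as np

    def camera_to_robot_pose(joints_3d, joints_2d, K, dist=None):
        """PnP on robot joint keypoints from a single image.

        joints_3d : (N, 3) joint positions in the robot base frame,
                    e.g. from forward kinematics of the known joint state.
        joints_2d : (N, 2) detected joint keypoints in the image.
        K         : (3, 3) camera intrinsic matrix.
        Returns the 4x4 transform mapping robot-base coordinates to
        camera coordinates.
        """
        ok, rvec, tvec = cv2.solvePnP(
            joints_3d.astype(np.float64),
            joints_2d.astype(np.float64),
            K, dist)
        if not ok:
            raise RuntimeError("PnP failed")
        T = np.eye(4)
        T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation vector -> matrix
        T[:3, 3] = tvec.ravel()
        return T

    # With T1 = camera_to_robot_pose(...) for camera 1 and T2 for
    # camera 2, the camera-to-camera extrinsics follow by chaining
    # through the shared robot frame: T_cam2_cam1 = T2 @ np.linalg.inv(T1)

Chaining through the common robot frame is what lets a single robot observation calibrate the whole network at once, rather than calibrating each camera pair separately.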
Keywords: hand-eye calibration, multi-camera, pose estimation
File: Cappelletto_Alberto.pdf (open access, 5.02 MB, Adobe PDF)

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12608/66604