Creating a 3D scene using a ToF camera and VR tracker

CASTAGNA, ELISA
2023/2024

Abstract

A robotic arm can perform many useful movements, but attention must be paid to possible obstacles in its working environment. The aim of this thesis is to develop an application that maps the robot's work area using a time-of-flight (ToF) camera and a virtual reality (VR) tracking sensor. The ToF camera provides both two-dimensional images and point clouds representing those images in three-dimensional space. The collected images are combined into a panorama through image stitching, using algorithms that represent the current state of the art in computer vision. Once the panorama has been reconstructed, the transformations undergone by the 2D images are estimated and applied to the point clouds to obtain the same result in 3D. For the stitching algorithms to construct the panorama, the images must share overlapping areas; to increase the chance of success, the images are grouped by location, which is provided by the VR tracker. The tracker also makes the application portable. The camera is mounted as the robot's end-effector, but a real robotic arm is not always available, especially since the study of the robot's workspace is a process performed before the final assembly of the robot and its cell. To overcome this problem, the tracker can be connected to software that simulates the robotic arm and its movements, allowing developers to understand how best to use the manipulator without needing its physical counterpart.
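The abstract describes estimating the transformations undergone by the 2D images during stitching and applying them to the point clouds. A minimal sketch of that idea, under the simplifying assumption that the camera motion between two frames reduces to an in-plane rigid transform (rotation plus translation) recovered from matched keypoints, could look as follows; the function names and the synthetic data are illustrative, not taken from the thesis:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Estimate a 2D rotation R and translation t with dst ~ src @ R.T + t
    from matched keypoints, via the Kabsch (orthogonal Procrustes) method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def lift_to_3d(R2, t2):
    """Embed the planar transform in 3D as a rotation about the camera axis."""
    R3 = np.eye(3)
    R3[:2, :2] = R2
    t3 = np.array([t2[0], t2[1], 0.0])
    return R3, t3

# Synthetic matched keypoints: second image rotated 30 deg, shifted (5, -2).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
src = np.random.default_rng(0).uniform(-1.0, 1.0, (50, 2))
dst = src @ R_true.T + t_true

R2, t2 = estimate_rigid_2d(src, dst)
R3, t3 = lift_to_3d(R2, t2)

# Apply the lifted transform to the frame's point cloud (N x 3).
cloud = np.random.default_rng(1).uniform(-1.0, 1.0, (100, 3))
aligned = cloud @ R3.T + t3
```

In the thesis the 2D transforms would come from the stitching pipeline itself (feature matching and homography estimation), and a full homography generally does not lift to a single 3D rigid motion; this sketch only shows the rigid special case where the correspondence is exact.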
Keywords: 3D scene, ToF camera, VR tracker
Files in this item: Castagna_Elisa.pdf (open access, 7.77 MB, Adobe PDF)

The text of this website © Università degli studi di Padova. Full Text are published under a non-exclusive license. Metadata are under a CC0 License

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12608/64603