Visual-aid positioning using point cloud compression and RGB-D cameras for robotic manipulators
Date
2023
Language
en
Journal title
Ingeniare, Volume 31, 2023, Article number 16
CC License
Attribution 4.0 International (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
Abstract
Over the last decade, a massive set of industrial tasks has been optimized by taking advantage of the repeatability and precision of robotic arms. While the new era of robotic arms introduces high-tech tools to fix positioning and tracking issues, upgrading older units remains a significant challenge due to hardware incompatibilities, outdated mechanisms, and operating restrictions. This work introduces a new visual system to determine the position of a robotic arm using an array of two stereo cameras. The visual-positioning system estimates the position of the end-effector using an inverse model of the robot and the full point cloud acquired with the stereo cameras. An Iterative Closest Point (ICP) algorithm merges the partial point clouds of each depth sensor, and a yellow-color detector then extracts the Region of Interest (ROI). Experimental results show that the proposed device can estimate the relative position of the end-effector with respect to the robotic arm base with errors in the longitudinal, lateral, and vertical positions of around 19.6%, 15.7%, and 9.2%, respectively. © 2023, Universidad de Tarapaca. All rights reserved.
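The color-based ROI step described in the abstract can be illustrated with a minimal sketch: given a merged point cloud with per-point RGB colors, keep the points whose color falls in a "yellow" band (high red, high green, low blue) and take their centroid as a crude end-effector position estimate. This is not the paper's implementation; the thresholds, array layout, and synthetic test cloud below are assumptions made for illustration only.

```python
import numpy as np

def extract_yellow_roi(points, colors, r_min=0.6, g_min=0.6, b_max=0.4):
    """Return points whose normalized RGB color is 'yellow'
    (high red, high green, low blue). Thresholds are illustrative."""
    mask = (colors[:, 0] >= r_min) & (colors[:, 1] >= g_min) & (colors[:, 2] <= b_max)
    return points[mask]

def estimate_end_effector_position(points, colors):
    """Centroid of the yellow ROI as a crude 3D position estimate."""
    roi = extract_yellow_roi(points, colors)
    if roi.shape[0] == 0:
        return None  # no yellow points detected
    return roi.mean(axis=0)

# Synthetic merged cloud: gray background plus a small yellow
# cluster centered at (0.5, 0.2, 0.8), standing in for the marker.
rng = np.random.default_rng(0)
bg = rng.uniform(-1.0, 1.0, size=(500, 3))
bg_col = np.full((500, 3), 0.5)                       # neutral gray
ee = np.array([0.5, 0.2, 0.8]) + 0.01 * rng.standard_normal((50, 3))
ee_col = np.tile([0.9, 0.85, 0.1], (50, 1))           # yellow
points = np.vstack([bg, ee])
colors = np.vstack([bg_col, ee_col])

pos = estimate_end_effector_position(points, colors)
print(pos)  # close to [0.5, 0.2, 0.8]
```

In practice, a library such as Open3D provides ICP registration for the merging step the abstract mentions; the color filter above would then run on the registered, merged cloud rather than on synthetic data.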
Notes
Indexing: Scopus.
Keywords
Computer vision, machine learning, point cloud compression, robotic manipulators, stereo vision
Citation
DOI
10.4067/s0718-33052023000100216