Publication: Learning markerless robot-depth camera calibration and end-effector pose estimation
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Sefercik, Buğra Can | |
dc.contributor.kuauthor | Akgün, Barış | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.researchcenter | Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI)/ Koç University İş Bank Artificial Intelligence Center (KUIS AI) | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.date.accessioned | 2024-12-29T09:39:30Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Traditional approaches to extrinsic calibration use fiducial markers, and learning-based approaches rely heavily on simulation data. In this work, we present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data. We learn models for end-effector (EE) segmentation, single-frame rotation prediction, and keypoint detection from automatically generated real-world data. We use a transformation trick to obtain EE pose estimates from rotation predictions and a matching algorithm to obtain EE pose estimates from keypoint predictions. We further utilize the iterative closest point algorithm, multiple frames, filtering, and outlier detection to increase calibration robustness. Our evaluations with training data from multiple camera poses and test data from previously unseen poses give sub-centimeter and sub-deciradian average calibration and pose estimation errors. We also show that a carefully selected single training pose gives comparable results. © 2023 Proceedings of Machine Learning Research. All rights reserved. | |
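The abstract describes aggregating per-frame EE pose estimates across multiple frames with filtering and outlier detection to recover the camera-robot extrinsic. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation: it assumes you already have per-frame EE positions in the camera frame (from the learned models) and matching EE positions in the robot base frame (from forward kinematics), and it fits a rigid transform with a simple residual-based outlier gate. All names and thresholds are hypothetical.

```python
# Hypothetical sketch (not the paper's code): estimate a camera-to-base
# extrinsic from paired per-frame end-effector positions, then refit after
# rejecting high-residual frames.
import numpy as np

def rigid_align(p_cam: np.ndarray, p_base: np.ndarray):
    """Return R, t such that R @ p_cam + t ~= p_base (Kabsch / Procrustes)."""
    mu_c, mu_b = p_cam.mean(axis=0), p_base.mean(axis=0)
    H = (p_cam - mu_c).T @ (p_base - mu_b)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T              # closest proper rotation
    t = mu_b - R @ mu_c
    return R, t

def calibrate(p_cam: np.ndarray, p_base: np.ndarray, inlier_thresh: float = 0.01):
    """Two-pass fit: align all frames, drop residuals above a gate, refit.

    p_cam, p_base: (N, 3) arrays of corresponding EE positions.
    inlier_thresh: residual gate in meters (1 cm here, purely illustrative).
    """
    R, t = rigid_align(p_cam, p_base)
    resid = np.linalg.norm((p_cam @ R.T + t) - p_base, axis=1)
    keep = resid < inlier_thresh
    return rigid_align(p_cam[keep], p_base[keep])
```

In practice such a closed-form fit would only initialize the extrinsic; the paper additionally refines with ICP and uses rotation/keypoint predictions, which this sketch does not cover.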
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.publisherscope | International | |
dc.description.sponsors | This work was supported by KUIS AI Center computational resources. The authors would also like to thank Onur Berk Töre and Farzin Negahbani for their infrastructure support and work on an earlier version of the system. | |
dc.description.volume | 205 | |
dc.identifier.issn | 2640-3498 | |
dc.identifier.quartile | N/A | |
dc.identifier.scopus | 2-s2.0-85161024274 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/23015 | |
dc.identifier.wos | 1232393400134 | |
dc.keywords | Camera calibration | |
dc.keywords | Perception | |
dc.keywords | Pose estimation | |
dc.language | en | |
dc.publisher | ML Research Press | |
dc.source | Proceedings of Machine Learning Research | |
dc.subject | Artificial intelligence | |
dc.subject | Theory and methods | |
dc.subject | Robotics | |
dc.title | Learning markerless robot-depth camera calibration and end-effector pose estimation | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.kuauthor | Sefercik, Buğra Can | |
local.contributor.kuauthor | Akgün, Barış | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |