Research Outputs
Permanent URI for this community: https://hdl.handle.net/20.500.14288/2
14 results
Search Results
Publication (Metadata only): Co-exploring the design space of emotional AR visualizations (Springer International Publishing AG, 2021). Şemsioğlu, Sinem; Yantaç, Asım Evren (Department of Media and Visual Arts).
Abstract: Designing for emotional expression has become a popular topic of study in HCI due to advances in affective computing technologies. With the increasing use of video conferencing, and of video in social media for needs such as leisure, work, and social communication, video filters, AR effects, and holograms are also gaining popularity. In this paper, we propose a framework for emotion visualization in natural user interface technologies such as augmented and mixed reality.
The framework was developed from our analysis of the visualizations produced during a series of emotion visualization workshops.

Publication (Open Access): Development of a cognitive robotic system for simple surgical tasks (InTech, 2015). Muradore, Riccardo; Fiorini, Paolo; Barkana, Duygun Erol; Bonfe, Marcello; Boriero, Fabrizio; Caprara, Andrea; De Rossi, Giacomo; Dodi, Riccardo; Elle, Ole Jakob; Ferraguti, Federica; Gasperotti, Lorenza; Gassert, Roger; Mathiassen, Kim; Handini, Dilla; Lambercy, Olivier; Li, Lin; Kruusmaa, Maarja; Manurung, Auralius Oberman; Meruzzi, Giovanni; Ho Quoc Phuong Nguyen; Freda, Nicola; Riolfo, Gianluca; Ristolainen, Asko; Sanna, Alberto; Secchi, Cristian; Torsello, Marco; Yantaç, Asım Evren (Department of Media and Visual Arts).
Abstract: The introduction of robotic surgery into operating rooms has significantly improved the quality of many surgical procedures. Recent research on medical robotic systems has focused on increasing their level of autonomy, so that they can carry out simple surgical actions on their own. This paper reports on the development of technologies for introducing automation into the surgical workflow. The results were obtained during the ongoing FP7 European-funded project Intelligent Surgical Robotics (I-SUR), whose main goal is to demonstrate that autonomous robotic surgical systems can carry out simple surgical tasks effectively and without major intervention by surgeons.
To fulfil this goal, we developed innovative solutions (both technologies and algorithms) for the following aspects: fabrication of soft organ models starting from CT images; surgical planning and execution of robot-arm motion in contact with a deformable environment; design of a surgical interface that minimizes the cognitive load of the supervising surgeon; and intra-operative sensing and reasoning to detect normal transitions and unexpected events. All these technologies were integrated using a component-based software architecture to control a novel robot designed to perform the surgical actions under study. In this work we provide an overview of our system and report preliminary results on the automatic execution of needle insertion for the cryoablation of kidney tumours.

Publication (Metadata only): Droeye: introducing a social eye prototype for drones (Association for Computing Machinery (ACM), 2020). Obaid, Mohammad; Mubin, Omar; Brown, Scott Andrew; Otsuki, Mai; Kuzuoka, Hideaki; Yantaç, Asım Evren (Department of Media and Visual Arts).
Abstract: A drone agent can benefit from exhibiting social cues, as introducing behavioral cues in robotic agents can enhance users' trust and comfort in the interaction. In this work, we introduce the development and setup of a responsive eye prototype (DroEye) mounted on a drone to demonstrate prominent social cues in Human-Drone Interaction.
We describe possible attributes of the DroEye prototype and our future research directions for enhancing the overall experience with social drones in our environment.

Publication (Metadata only): Energetically autonomous soft robots: an embodied actuation strategy by liquid metal metabolism (Institute of Electrical and Electronics Engineers Inc., 2024). Liao, Jiahe; Bao, Xianqiang; Park, Minjo; Sitti, Metin (Department of Mechanical Engineering).
Abstract: The level of energy autonomy in untethered robots is often physically limited by their onboard or remote energy supply, which lacks the synergy and efficiency found in living organisms. Embodied energy design is emerging as a biologically inspired paradigm for energetically autonomous robots, in which the energy source and actuation mechanisms are integrated directly into the robot's materials and architecture for efficiency and functionality. In this work, we introduce an energy-embodied soft actuation strategy inspired by the metabolic processes of natural organisms: a self-powered, chemo-pneumatic actuation mechanism driven by a metabolic-like decomposition process of a gel-encapsulated liquid metal composite. We demonstrate the self-regulating locomotion capability of an energetically autonomous soft robotic crawler based on this energy-actuation coupling. © 2024 IEEE.

Publication (Open Access): Envisioning social drones in education (Frontiers, 2022). Johal, W.; Obaid, M.; Yantaç, Asım Evren; Gatos, Doğa Çorlu (Department of Media and Visual Arts).
Abstract: Education is one of the major application fields in social Human-Robot Interaction.
Several forms of social robots have been explored to engage and assist students in the classroom, from full-bodied humanoid robots to tabletop robot companions, but flying robots have remained unexplored in this context. In this paper, we present seven online remote workshops, conducted with 20 participants, that investigate education as an application area within the Human-Drone Interaction domain, focusing in particular on what roles a social drone could fulfill in a classroom, how it would interact with students, teachers, and its environment, what it could look like, and how it would specifically differ from other types of social robots used in education. In the workshops we used online collaboration tools, supported by a sketch artist, to help envision a social drone in a classroom. The results revealed several design implications for the roles and capabilities of a social drone, as well as promising research directions for development and design in the novel area of drones in education.

Publication (Metadata only): Generating robot/agent backchannels during a storytelling experiment (Institute of Electrical and Electronics Engineers (IEEE), 2009). Al Moubayed, S.; Baklouti, M.; Chetouani, M.; Dutoit, T.; Mahdhaoui, A.; Martin, J.-C.; Ondas, S.; Pelachaud, C.; Urbain, J.; Yılmaz, Mustafa Akın; Tekalp, Ahmet Murat (Department of Mechanical Engineering).
Abstract: This work presents the development of a real-time framework for research on multimodal feedback of robots/talking agents in the context of Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI). To evaluate the framework, a multimodal corpus was built (ENTERFACE_STEAD), and the important multimodal features were studied in order to build an active robot/agent listener for a storytelling experience with humans.
The experiments show that even when the same reactive behavior models are built for robots and talking agents, the interpretation and realization of the communicated behavior differ because of the different communicative channels they offer: physical but less human-like in robots, and virtual but more expressive and human-like in talking agents.

Publication (Metadata only): Learning markerless robot-depth camera calibration and end-effector pose estimation (ML Research Press, 2023). Sefercik, Buğra Can; Akgün, Barış (Department of Computer Engineering; KUIS AI Center).
Abstract: Traditional approaches to extrinsic calibration use fiducial markers, and learning-based approaches rely heavily on simulation data. In this work, we present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data. We learn models for end-effector (EE) segmentation, single-frame rotation prediction, and keypoint detection from automatically generated real-world data. We use a transformation trick to obtain EE pose estimates from rotation predictions, and a matching algorithm to obtain EE pose estimates from keypoint predictions. We further utilize the iterative closest point algorithm, multiple frames, filtering, and outlier detection to increase calibration robustness. Our evaluations, with training data from multiple camera poses and test data from previously unseen poses, give sub-centimeter and sub-deciradian average calibration and pose estimation errors. We also show that a carefully selected single training pose gives comparable results. © 2023 Proceedings of Machine Learning Research.
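The calibration entry above turns matched keypoints into an end-effector pose. The record does not spell out the matching algorithm itself, but the standard way to recover a rigid transform from matched 3-D keypoints is the Kabsch (SVD) method; a minimal sketch, with an illustrative function name not taken from the paper:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (rows are matched 3-D points), via SVD of the cross-covariance."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice such a single-frame estimate would then be refined and robustified with ICP, multi-frame filtering, and outlier rejection, as the abstract describes.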
Publication (Metadata only): Magnetically actuated gearbox for the wireless control of millimeter-scale robots (American Association for the Advancement of Science (AAAS), 2022). Hong, Chong; Ren, Ziyu; Wang, Che; Li, Mingtong; Wu, Yingdan; Tang, Dewei; Hu, Wenqi; Sitti, Metin (Department of Mechanical Engineering).
Abstract: The limited force and torque outputs of miniature magnetic actuators constrain the locomotion performance and functionality of magnetic millimeter-scale robots. Here, we present a magnetically actuated gearbox with a maximum size of 3 millimeters for driving wireless millirobots. The gearbox is assembled from microgears with reference diameters down to 270 micrometers, made of aluminum-filled epoxy resins through casting. With a magnetic disk attached to the input shaft, the gearbox can be driven by a rotating external magnetic field of no more than 6.8 millitesla to produce torques of up to 0.182 millinewton meters at 40 hertz. The corresponding torque and power densities are 12.15 micronewton meters per cubic millimeter and 8.93 microwatts per cubic millimeter, respectively. The transmission efficiency of the gearbox in air is between 25.1% and 29.2% at actuation frequencies from 1 to 40 hertz, and it is lower when the gearbox is actuated in viscous liquids. This miniature gearbox can be accessed wirelessly and integrated with various functional modules to repeatedly generate large actuation forces, strains, and speeds; store energy in elastic components; and lock up mechanical linkages.
These characteristics enable a peristaltic robot that can crawl on a flat substrate or inside a tube, a jumping robot with a tunable jumping height, a clamping robot that can sample solid objects by grasping, a needle-puncture robot that can take samples from inside a target, and a syringe robot that can collect or release liquids.

Publication (Metadata only): Robot-assisted drilling on curved surfaces with haptic guidance under adaptive admittance control (Institute of Electrical and Electronics Engineers (IEEE), 2022). Aydın, Yusuf; Madani, Alireza; Niaz, Pouya Pourakbarian; Güler, Berk; Başdoğan, Çağatay (Department of Mechanical Engineering; KUIS AI Center).
Abstract: Drilling a hole in a curved surface at a desired angle is prone to failure when done manually, because of the difficulty of aligning the drill and the inherent instabilities of the task, which can cause injury and fatigue to workers. On the other hand, fully automating such a task can be impractical in real manufacturing environments, because the parts arriving at an assembly line can have complex shapes in which drill-point locations are not easily accessible, making automated path planning difficult. In this work, an adaptive admittance controller with six degrees of freedom is developed and deployed on a KUKA LBR iiwa 7 cobot, so that the operator can comfortably manipulate a drill mounted on the robot with one hand and open holes in a curved surface, with haptic guidance from the cobot and visual guidance provided through an AR interface.
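As a back-of-the-envelope check on the figures in the gearbox entry above (assuming, purely for illustration, that the quoted densities are normalized by the gearbox's own volume):

```python
# Implied volume and total output power from the reported densities
# (normalization by the gearbox's own volume is an assumption here).
torque = 0.182e-3                  # peak torque, N*m
torque_density = 12.15e-6 / 1e-9   # 12.15 uN*m per mm^3, in N*m per m^3
volume_mm3 = torque / torque_density * 1e9       # implied volume, mm^3 (~15)
power_density = 8.93e-6 / 1e-9     # 8.93 uW per mm^3, in W per m^3
power_uW = power_density * (volume_mm3 * 1e-9) * 1e6  # implied power, uW (~134)
```

The implied volume of roughly 15 cubic millimeters is consistent with a device whose maximum dimension is 3 millimeters (a 3 mm cube would be 27 mm³).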
Real-time adaptation of the admittance damping provides greater transparency when driving the robot in free space while ensuring stability during drilling. After the user brings the drill sufficiently close to the target and roughly aligns it with the desired drilling angle, the haptic guidance module first fine-tunes the alignment and then constrains the user's movement to the drilling axis only, after which the operator simply pushes the drill into the workpiece with minimal effort. Two sets of experiments were conducted to investigate the potential benefits of the haptic guidance module quantitatively (Experiment I) and the practical value of the proposed pHRI system for real manufacturing settings based on the subjective opinions of the participants (Experiment II). The results of Experiment I, conducted with 3 naive participants, show that haptic guidance improves task completion time by 26% while decreasing human effort by 16% and muscle activation levels by 27% compared with the no-guidance condition.
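The admittance behavior described above can be illustrated with a 1-DOF discrete-time sketch. This is a simplification of the 6-DOF controller in the entry, and the gains and the velocity-based damping schedule below are illustrative assumptions, not the paper's design:

```python
def admittance_step(f_ext, v, m, b, dt):
    """One step of the 1-DOF admittance law  m*a + b*v = f_ext:
    a measured interaction force is mapped to a commanded velocity."""
    a = (f_ext - b * v) / m
    return v + a * dt

def adaptive_damping(v, b_free=5.0, b_contact=60.0, v_thresh=0.05):
    """Illustrative damping schedule (assumed values): low damping for
    transparent free-space motion, high damping at near-zero velocity
    (e.g., while pushing against the workpiece) for stability."""
    return b_contact if abs(v) < v_thresh else b_free
```

With constant damping b, a constant applied force f converges to the steady-state velocity f/b, which is what makes low damping feel transparent and high damping feel stable.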
The results of Experiment II, conducted with 3 experienced industrial workers, show that the proposed system is perceived as easy to use, safe, and helpful in carrying out the drilling task.

Publication (Open Access): Second language tutoring using social robots: a large-scale study (Institute of Electrical and Electronics Engineers (IEEE), 2019). Vogt, Paul; van den Berghe, Rianne; de Haas, Mirjam; Hoffman, Laura; Mamus, Ezgi; Montanier, Jean-Marc; Oudgenoeg-Paz, Ora; Garcia, Daniel Hernandez; Papadopoulos, Fotios; Schodde, Thorsten; Verhagen, Josje; Wallbridge, Christopher D.; Willemsen, Bram; de Wit, Jan; Belpaeme, Tony; Göksun, Tilbe; Kopp, Stefan; Krahmer, Emiel; Leseman, Paul; Pandey, Amit Kumar; Kanero, Junko; Oranç, Cansu; Küntay, Aylin C. (Department of Psychology).
Abstract: We present a large-scale study of a series of seven lessons designed to help young children learn English vocabulary as a foreign language using a social robot. The experiment was designed to investigate (1) the effectiveness of a social robot in teaching children new words over the course of multiple interactions (supported by a tablet), (2) the added benefit of a robot's iconic gestures for word learning and retention, and (3) the effect of learning from a robot tutor accompanied by a tablet versus learning from a tablet application alone. For transparency, the research questions, hypotheses, and methods were preregistered. With a sample size of 194 children, our study was statistically well powered. Our findings demonstrate that children are able to acquire and retain English vocabulary words taught by a robot tutor to a similar extent as when taught by a tablet application. In addition, we found no beneficial effect of a robot's iconic gestures on learning gains.
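As an aside on what "statistically well powered" means at this sample size, a normal-approximation power calculation can be sketched. The equal 97/97 split and the medium effect size d = 0.5 below are illustrative assumptions, not the study's preregistered values:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power_two_sample(d, n_per_group, z_crit=1.96):
    """Normal-approximation power of a two-sided, two-sample comparison
    (z_crit ~ 1.96 corresponds to alpha = .05)."""
    ncp = d * math.sqrt(n_per_group / 2.0)   # noncentrality parameter
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)
```

Under these assumptions, 97 children per condition yields power of roughly 0.94, comfortably above the conventional 0.80 target.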