Research Outputs

Permanent URI for this community: https://hdl.handle.net/20.500.14288/2

Search Results

Now showing 1 - 4 of 4
  • Publication (Open Access)
    3D microprinting of iron platinum nanoparticle-based magnetic mobile microrobots
    (Wiley, 2021) Giltinan, Joshua; Sridhar, Varun; Bozüyük, Uğur; Sheehan, Devin; Department of Mechanical Engineering; Sitti, Metin; Faculty Member; Department of Mechanical Engineering; School of Medicine; College of Engineering; 297104
    Wireless magnetic microrobots are envisioned to revolutionize minimally invasive medicine. While many promising medical magnetic microrobots have been proposed, those using hard magnetic materials are mostly not biocompatible, and those using biocompatible soft magnetic nanoparticles are magnetically very weak and therefore difficult to actuate. Thus, biocompatible hard magnetic micro/nanomaterials are essential for easy-to-actuate and clinically viable 3D medical microrobots. To fill this crucial gap, this study proposes ferromagnetic and biocompatible iron platinum (FePt) nanoparticle-based 3D microprinting of microrobots using the two-photon polymerization technique. A modified one-pot synthesis method is presented for producing FePt nanoparticles in large volumes, along with 3D printing of helical microswimmers made from biocompatible trimethylolpropane ethoxylate triacrylate (PETA) polymer with embedded FePt nanoparticles. The 30 µm-long helical magnetic microswimmers are able to swim at speeds of over five body lengths per second at 200 Hz, making them the fastest helical swimmers at the tens-of-micrometers length scale under the corresponding low-magnitude actuation fields of 5–10 mT. It is also verified experimentally in vitro that the synthesized FePt nanoparticles are biocompatible. Thus, such 3D-printed microrobots are biocompatible and easy to actuate, paving the way toward clinically viable future medical microrobots.
  • Publication (Open Access)
    Children's reliance on the non-verbal cues of a robot versus a human
    (Public Library of Science, 2019) Verhagen J.; Van Den Berghe R.; Oudgenoeg-Paz O.; Leseman P.; Department of Psychology; Küntay, Aylin C.; Faculty Member; Department of Psychology; College of Social Sciences and Humanities; 178879
    Robots are increasingly used for language tutoring and are commonly programmed to display non-verbal communicative cues, such as eye gaze and pointing, during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) when these cues conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or a familiar label; and (iii) relied differently on a robot's non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot than on those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label than with a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on the non-verbal cues of a robot versus those of a human, and they provide preliminary evidence that anthropomorphism may interact with children's reliance on a robot's non-verbal behaviors.
  • Publication (Open Access)
    Development of a cognitive robotic system for simple surgical tasks
    (InTech, 2015) Muradore, Riccardo; Fiorini, Paolo; Barkana, Duygun Erol; Bonfè, Marcello; Boriero, Fabrizio; Caprara, Andrea; De Rossi, Giacomo; Dodi, Riccardo; Elle, Ole Jakob; Ferraguti, Federica; Gasperotti, Lorenza; Gassert, Roger; Mathiassen, Kim; Handini, Dilla; Lambercy, Olivier; Li, Lin; Kruusmaa, Maarja; Manurung, Auralius Oberman; Meruzzi, Giovanni; Ho Quoc Phuong Nguyen; Freda, Nicola; Riolfo, Gianluca; Ristolainen, Asko; Sanna, Alberto; Secchi, Cristian; Torsello, Marco; Department of Media and Visual Arts; Yantaç, Asım Evren; Faculty Member; Department of Media and Visual Arts; College of Social Sciences and Humanities; 52621
    The introduction of robotic surgery within operating rooms has significantly improved the quality of many surgical procedures. Recently, research on medical robotic systems has focused on increasing their level of autonomy, enabling them to carry out simple surgical actions autonomously. This paper reports on the development of technologies for introducing automation into the surgical workflow. The results were obtained during the ongoing FP7 European-funded project Intelligent Surgical Robotics (I-SUR). The main goal of the project is to demonstrate that autonomous robotic surgical systems can carry out simple surgical tasks effectively and without major intervention by surgeons. To fulfil this goal, we have developed innovative solutions (both in terms of technologies and algorithms) for the following aspects: fabrication of soft organ models starting from CT images; surgical planning and execution of robot-arm movements in contact with a deformable environment; design of a surgical interface that minimizes the cognitive load of the supervising surgeon; and intra-operative sensing and reasoning to detect normal transitions and unexpected events. All these technologies have been integrated using a component-based software architecture to control a novel robot designed to perform the surgical actions under study. In this work we provide an overview of our system and report preliminary results on the automatic execution of needle insertion for the cryoablation of kidney tumours.
  • Publication (Open Access)
    Second language tutoring using social robots: a large-scale study
    (Institute of Electrical and Electronics Engineers (IEEE), 2019) Vogt, Paul; van den Berghe, Rianne; de Haas, Mirjam; Hoffman, Laura; Mamus, Ezgi; Montanier, Jean-Marc; Oudgenoeg-Paz, Ora; Garcia, Daniel Hernandez; Papadopoulos, Fotios; Schodde, Thorsten; Verhagen, Josje; Wallbridge, Christopher D.; Willemsen, Bram; de Wit, Jan; Belpaeme, Tony; Göksun, Tilbe; Kopp, Stefan; Krahmer, Emiel; Leseman, Paul; Pandey, Amit Kumar; Department of Psychology; Kanero, Junko; Oranç, Cansu; Küntay, Aylin C.; Faculty Member; Department of Psychology; Graduate School of Social Sciences and Humanities
    We present a large-scale study of a series of seven lessons designed to help young children learn English vocabulary as a foreign language using a social robot. The experiment was designed to investigate 1) the effectiveness of a social robot teaching children new words over the course of multiple interactions (supported by a tablet), 2) the added benefit of a robot's iconic gestures for word learning and retention, and 3) the effect of learning from a robot tutor accompanied by a tablet versus learning from a tablet application alone. For reasons of transparency, the research questions, hypotheses, and methods were preregistered. With a sample size of 194 children, our study was statistically well powered. Our findings demonstrate that children are able to acquire and retain English vocabulary words taught by a robot tutor to a similar extent as when they are taught by a tablet application. In addition, we found no beneficial effect of a robot's iconic gestures on learning gains.