Research Outputs

Permanent URI for this community: https://hdl.handle.net/20.500.14288/2

Search Results

Now showing 1–10 of 19
  • Publication (Open Access)
    3D microprinting of iron platinum nanoparticle-based magnetic mobile microrobots
    (Wiley, 2021) Giltinan, Joshua; Sridhar, Varun; Bozüyük, Uğur; Sheehan, Devin; Department of Mechanical Engineering; Sitti, Metin; Faculty Member; Department of Mechanical Engineering; School of Medicine; College of Engineering; 297104
    Wireless magnetic microrobots are envisioned to revolutionize minimally invasive medicine. While many promising medical magnetic microrobots have been proposed, those using hard magnetic materials are mostly not biocompatible, and those using biocompatible soft magnetic nanoparticles are magnetically very weak and therefore difficult to actuate. Thus, biocompatible hard magnetic micro/nanomaterials are essential for easy-to-actuate and clinically viable 3D medical microrobots. To fill this crucial gap, this study proposes ferromagnetic and biocompatible iron platinum (FePt) nanoparticle-based 3D microprinting of microrobots using the two-photon polymerization technique. A modified one-pot synthesis method is presented for producing FePt nanoparticles in large volumes, along with 3D printing of helical microswimmers made from biocompatible trimethylolpropane ethoxylate triacrylate (PETA) polymer with embedded FePt nanoparticles. The 30 µm long helical magnetic microswimmers are able to swim at speeds of over five body lengths per second at 200 Hz, making them the fastest helical swimmers at the tens-of-micrometers length scale under the corresponding low-magnitude actuation fields of 5–10 mT. It is also verified experimentally in vitro that the synthesized FePt nanoparticles are biocompatible. Thus, such 3D-printed microrobots are biocompatible and easy to actuate, paving the way toward clinically viable future medical microrobots.
  • Publication (Open Access)
    A computational multicriteria optimization approach to controller design for physical human-robot interaction
    (Institute of Electrical and Electronics Engineers (IEEE), 2020) Tokatlı, Ozan; Patoğlu, Volkan; Department of Mechanical Engineering; Aydın, Yusuf; Başdoğan, Çağatay; Faculty Member; Department of Mechanical Engineering; Graduate School of Sciences and Engineering; College of Engineering; N/A; 125489
    Physical human-robot interaction (pHRI) integrates the benefits of a human operator and a collaborative robot in tasks involving physical interaction, with the aim of increasing task performance. However, the design of interaction controllers that achieve safe and transparent operation is challenging, mainly due to the contradicting nature of these two objectives. Knowing that perfect transparency is practically unachievable, controllers that allow a better compromise between these objectives are desirable. In this article, we propose a multicriteria optimization framework that jointly optimizes the stability robustness and transparency of a closed-loop pHRI system for a given interaction controller. In particular, we propose a Pareto optimization framework that allows the designer to make informed decisions by thoroughly studying the tradeoff between stability robustness and transparency. The proposed framework involves a search over the discretized controller parameter space to compute the Pareto front curve, followed by a selection of controller parameters that yield the maximum attainable transparency and stability robustness by studying this tradeoff curve. The proposed framework not only leads to the design of an optimal controller but also enables a fair comparison among different interaction controllers. To demonstrate the practical use of the proposed approach, integer and fractional order admittance controllers are studied as a case study and compared both analytically and experimentally. The experimental results validate the proposed design framework and show that the achievable transparency under the fractional order admittance controller is higher than that of the integer order one when both controllers are designed to ensure the same level of stability robustness.
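The search over a discretized parameter space followed by non-dominated filtering described in this abstract can be sketched as follows. The two objective functions below are hypothetical stand-ins, not the paper's actual transparency and stability-robustness measures; they only illustrate how a Pareto front is extracted from a gain grid:

```python
import itertools

def dominates(a, b):
    """a strictly dominates b when a is at least as good in both
    objectives and strictly better in at least one (both maximized)."""
    return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])

def pareto_front(candidates):
    """candidates: list of (params, (transparency, robustness)) tuples.
    Returns the non-dominated subset, i.e. the Pareto front."""
    return [
        (p, obj) for p, obj in candidates
        if not any(dominates(other, obj) for _, other in candidates)
    ]

# Hypothetical surrogate objectives over an admittance-gain grid
# (virtual mass m, virtual damping b) -- illustrative only.
def transparency(m, b):   # lighter rendered dynamics -> more transparent
    return 1.0 / (m + b)

def robustness(m, b):     # heavier rendered dynamics -> more robust
    return m + 0.5 * b

grid = list(itertools.product([0.5, 1.0, 2.0], [1.0, 4.0, 8.0]))
candidates = [((m, b), (transparency(m, b), robustness(m, b))) for m, b in grid]
front = pareto_front(candidates)
```

A designer would then pick an operating point on `front` according to the minimum acceptable stability robustness, as the abstract describes.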
  • Publication (Open Access)
    A gated fusion network for dynamic saliency prediction
    (Institute of Electrical and Electronics Engineers (IEEE), 2022) Kocak, Aysun; Erdem, Erkut; Department of Computer Engineering; Erdem, Aykut; Faculty Member; Department of Computer Engineering; College of Engineering; 20331
    Predicting saliency in videos is a challenging problem due to the complex modeling of interactions between spatial and temporal information, especially when the ever-changing, dynamic nature of videos is considered. Recently, researchers have proposed large-scale data sets and models that take advantage of deep learning as a way to understand what is important for video saliency. These approaches, however, learn to combine spatial and temporal features in a static manner and do not adapt well to changes in the video content. In this article, we introduce the gated fusion network for dynamic saliency (GFSalNet), the first deep saliency model capable of making predictions in a dynamic way via a gated fusion mechanism. Moreover, our model exploits spatial and channelwise attention within a multiscale architecture that further allows for highly accurate predictions. We evaluate the proposed approach on a number of data sets, and our experimental analysis demonstrates that it outperforms or is highly competitive with the state of the art. Importantly, we show that it has good generalization ability and, moreover, exploits temporal information more effectively via its adaptive fusion scheme.
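The contrast the abstract draws between static combination and gated fusion can be illustrated with a toy per-dimension gate (a minimal sketch, not GFSalNet's architecture; the gate parameters here are arbitrary, whereas in the actual model they would be learned end to end):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(spatial, temporal, w_s, w_t, bias):
    """Content-dependent fusion: for each feature dimension, a gate in
    (0, 1) decides how much of the spatial vs. temporal stream to keep,
    so the mixing weights change with the input rather than being fixed.
    All arguments are equal-length lists of floats."""
    fused = []
    for s, t, ws, wt, b in zip(spatial, temporal, w_s, w_t, bias):
        g = sigmoid(ws * s + wt * t + b)   # gate computed from the content
        fused.append(g * s + (1.0 - g) * t)
    return fused

spatial  = [0.9, -0.2, 0.4]
temporal = [0.1,  0.8, -0.5]
# Arbitrary toy gate parameters for illustration.
fused = gated_fusion(spatial, temporal, [1.0] * 3, [1.0] * 3, [0.0] * 3)
```

Because the gate depends on the features themselves, the same network weights yield different spatial/temporal mixing ratios for different video content, which is the adaptivity the abstract refers to.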
  • Publication (Restricted)
    A new control architecture for physical human-robot interaction based on haptic communication
    (Koç University, 2013) Aydın, Yusuf; Başdoğan, Çağatay; 0000-0002-6382-7334; Koç University Graduate School of Sciences and Engineering; Mechanical Engineering; 125489
  • Publication (Restricted)
    An adaptive admittance controller for collaborative drilling with a robot based on subtask classification via deep learning
    (Koç University, 2022) Niaz, Pouya Pourakbarian; Başdoğan, Çağatay; 0000-0002-6382-7334; Koç University Graduate School of Sciences and Engineering; Mechanical Engineering; 125489
  • Publication (Open Access)
    Children's reliance on the non-verbal cues of a robot versus a human
    (Public Library of Science, 2019) Verhagen J.; Van Den Berghe R.; Oudgenoeg-Paz O.; Leseman P.; Department of Psychology; Küntay, Aylin C.; Faculty Member; Department of Psychology; College of Social Sciences and Humanities; 178879
    Robots are increasingly used for language tutoring and are commonly programmed to display non-verbal communicative cues, such as eye gaze and pointing, during robot-child interactions. With a human speaker, children rely more strongly on non-verbal cues (pointing) than on verbal cues (labeling) if these cues are in conflict. However, we do not know how children weigh the non-verbal cues of a robot. Here, we assessed whether four- to six-year-old children (i) differed in their weighing of non-verbal cues (pointing, eye gaze) and verbal cues provided by a robot versus a human; (ii) weighed non-verbal cues differently depending on whether these contrasted with a novel or familiar label; and (iii) relied differently on a robot's non-verbal cues depending on the degree to which they attributed human-like properties to the robot. The results showed that children generally followed pointing over labeling, in line with earlier research. Children did not rely more strongly on the non-verbal cues of a robot versus those of a human. Regarding pointing, children who perceived the robot as more human-like relied on pointing more strongly when it contrasted with a novel label versus a familiar label, but children who perceived the robot as less human-like did not show this difference. Regarding eye gaze, children relied more strongly on the gaze cue when it contrasted with a novel versus a familiar label, and no effect of anthropomorphism was found. Taken together, these results show no difference in the degree to which children rely on non-verbal cues of a robot versus those of a human and provide preliminary evidence that differences in anthropomorphism may interact with children's reliance on a robot's non-verbal behaviors.
  • Publication (Open Access)
    Control and transport of passive particles using self-organized spinning micro-disks
    (Institute of Electrical and Electronics Engineers (IEEE), 2022) Basualdo, Franco N. Pinan; Gardi, Gaurav; Wang, Wendong; Demir, Sinan O.; Bolopion, Aude; Gauthier, Michael; Lambert, Pierre; Department of Mechanical Engineering; Sitti, Metin; Faculty Member; Department of Mechanical Engineering; College of Engineering; School of Medicine; 297104
    Traditional robotic systems have proven to be instrumental in object manipulation tasks for automated manufacturing processes. Object manipulation in such cases typically involves transport, pick-and-place, and assembly of objects using automated conveyors and robotic arms. However, the forces at microscopic scales (e.g., surface tension, van der Waals, electrostatic) can be qualitatively and quantitatively different from those at macroscopic scales. These forces make the release of objects difficult, and hence, traditional systems cannot be directly transferred to small scales (below a few millimeters). Consequently, novel micro-robotic manipulation systems have to be designed to take these scaling effects into account. Such systems could be beneficial for micro-fabrication processes and for biological studies. Here, we show autonomous position control of passive particles floating at the air-water interface using a collective of self-organized spinning micro-disks, each 300 µm in diameter. First, we show that the spinning micro-disk collectives generate azimuthal flows that cause passive particles to orbit around them. We then develop a closed-loop controller to demonstrate autonomous position control of passive particles without physical contact. Finally, we showcase the capability of our system to split from an expanded collective into several circular collectives while holding the particle at a fixed target. Our system's contact-free object manipulation capability could be used for transporting delicate biological objects and for guiding the self-assembly of passive objects for micro-fabrication.
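The closed-loop, contact-free transport idea can be sketched with a toy 2D model (not the paper's controller or hydrodynamics): the collective induces a counterclockwise azimuthal flow around itself, and the controller places the collective at a fixed standoff distance from the particle so that the flow at the particle points toward the target. All parameters below (`r`, `k`, `dt`) are illustrative:

```python
import math

def control_step(particle, target, r=1.0, k=0.5, dt=0.1):
    """One step of a toy contact-free transport scheme: choose the
    collective position c so that the azimuthal flow it induces at the
    particle p, v = k * z_hat x (p - c) / |p - c|^2, points at the target."""
    px, py = particle
    tx, ty = target
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist              # desired direction of motion
    # z_hat x (p - c) is parallel to (ux, uy) when p - c = r * (uy, -ux):
    cx, cy = px - uy * r, py + ux * r
    # Advect the particle with the induced azimuthal flow.
    wx, wy = px - cx, py - cy
    w2 = wx * wx + wy * wy
    vx, vy = -k * wy / w2, k * wx / w2
    return (px + vx * dt, py + vy * dt), (cx, cy)

particle, target = (0.0, 0.0), (2.0, 1.0)
for _ in range(200):
    particle, collective = control_step(particle, target)
```

Under this model the particle closes in on the target at speed `k / r` and then hovers within one time step's travel of it, which mirrors the "holding the particle at a fixed target" behavior described above.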
  • Publication (Open Access)
    Deep learning-based 3D magnetic microrobot tracking using 2D MR images
    (Institute of Electrical and Electronics Engineers (IEEE), 2022) Tiryaki, Mehmet Efe; Demir, Sinan Özgün; Department of Mechanical Engineering; Sitti, Metin; Faculty Member; Department of Mechanical Engineering; College of Engineering; School of Medicine; 297104
    Magnetic resonance imaging (MRI)-guided robots have emerged as a promising tool for minimally invasive medical operations. Recently, MRI scanners have been proposed for actuating and localizing magnetic microrobots in the patient's body using two-dimensional (2D) MR images. However, three-dimensional (3D) tracking of magnetic microrobots during motion remains an open problem in MRI-powered microrobotics. Here, we present a deep learning-based 3D magnetic microrobot tracking method that uses 2D MR images acquired during microrobot motion. The proposed method comprises a convolutional neural network (CNN) and a complementary particle filter for 3D microrobot tracking. The CNN localizes the microrobot position relative to the 2D MRI slice and classifies the microrobot's visibility in the MR images. First, we create an ultrasound (US) imaging-mentored MRI-based microrobot imaging and actuation system to train the CNN. Then, we train the CNN using MRI data generated by automated experiments using US image-based visual servoing of a microrobot with a 500 µm diameter magnetic core. We show that the proposed CNN can localize the microrobot and classify its visibility in an in vitro environment with ±0.56 mm and 87.5% accuracy, respectively, in 2D MR images. Furthermore, we demonstrate ex vivo 3D microrobot tracking with ±1.43 mm accuracy, improving tracking accuracy by 60% compared to previous studies. The presented tracking strategy will enable MRI-powered microrobots to be used in high-precision targeted medical applications in the future.
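The role of the particle filter in such a pipeline, i.e., fusing noisy per-frame localizations into a smooth track, can be illustrated with a minimal 1D sketch (a toy model, not the paper's filter; the noise levels and motion model are invented for illustration):

```python
import math
import random

random.seed(0)

def particle_filter_track(measurements, n_particles=500,
                          motion_std=0.3, meas_std=0.5):
    """Toy 1D particle filter: fuse noisy per-frame position estimates
    (standing in for the CNN's slice-relative localizations) into a track."""
    particles = [random.gauss(measurements[0], 1.0) for _ in range(n_particles)]
    track = []
    for z in measurements:
        # Predict: diffuse particles with a random-walk motion model.
        particles = [p + random.gauss(0.0, motion_std) for p in particles]
        # Update: weight particles by a Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((p - z) / meas_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample proportionally to the weights.
        particles = random.choices(particles, weights=weights, k=n_particles)
        track.append(sum(particles) / n_particles)
    return track

true_path = [0.1 * t for t in range(50)]                  # drifting target
noisy = [x + random.gauss(0.0, 0.5) for x in true_path]   # CNN-like estimates
estimate = particle_filter_track(noisy)
```

The filtered track stays close to the underlying path even though each individual measurement is noisy, which is why pairing a per-frame localizer with a filter improves tracking accuracy.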
  • Publication (Open Access)
    Development of a cognitive robotic system for simple surgical tasks
    (InTech, 2015) Muradore, Riccardo; Fiorini, Paolo; Barkana, Duygun Erol; Bonfe, Marcello; Boriero, Fabrizio; Caprara, Andrea; De Rossi, Giacomo; Dodi, Riccardo; Elle, Ole Jakob; Ferraguti, Federica; Gasperotti, Lorenza; Gassert, Roger; Mathiassen, Kim; Handini, Dilla; Lambercy, Olivier; Li, Lin; Kruusmaa, Maarja; Manurung, Auralius Oberman; Meruzzi, Giovanni; Ho Quoc Phuong Nguyen; Freda, Nicola; Riolfo, Gianluca; Ristolainen, Asko; Sanna, Alberto; Secchi, Cristian; Torsello, Marco; Department of Media and Visual Arts; Yantaç, Asım Evren; Faculty Member; Department of Media and Visual Arts; College of Social Sciences and Humanities; 52621
    The introduction of robotic surgery within operating rooms has significantly improved the quality of many surgical procedures. Recently, research on medical robotic systems has focused on increasing their level of autonomy so that they can carry out simple surgical actions autonomously. This paper reports on the development of technologies for introducing automation within the surgical workflow. The results have been obtained during the ongoing FP7 European funded project Intelligent Surgical Robotics (I-SUR). The main goal of the project is to demonstrate that autonomous robotic surgical systems can carry out simple surgical tasks effectively and without major intervention by surgeons. To fulfil this goal, we have developed innovative solutions (both in terms of technologies and algorithms) for the following aspects: fabrication of soft organ models starting from CT images; surgical planning and execution of movement of robot arms in contact with a deformable environment; design of a surgical interface that minimizes the cognitive load of the surgeon supervising the actions; and intra-operative sensing and reasoning to detect normal transitions and unexpected events. All these technologies have been integrated using a component-based software architecture to control a novel robot designed to perform the surgical actions under study. In this work we provide an overview of our system and report on preliminary results of the automatic execution of needle insertion for the cryoablation of kidney tumours.
  • Publication (Open Access)
    Envisioning social drones in education
    (Frontiers, 2022) Johal, W.; Obaid, M.; Department of Media and Visual Arts; N/A; Yantaç, Asım Evren; Gatos, Doğa Çorlu; Faculty Member; Department of Media and Visual Arts; College of Social Sciences and Humanities; Graduate School of Social Sciences and Humanities; 52621; N/A
    Education is one of the major application fields in social Human-Robot Interaction. Several forms of social robots have been explored to engage and assist students in the classroom environment, from full-bodied humanoid robots to tabletop robot companions, but flying robots have been left unexplored in this context. In this paper, we present seven online remote workshops conducted with 20 participants to investigate the application area of Education in the Human-Drone Interaction domain; particularly focusing on what roles a social drone could fulfill in a classroom, how it would interact with students, teachers and its environment, what it could look like, and what would specifically differ from other types of social robots used in education. In the workshops we used online collaboration tools, supported by a sketch artist, to help envision a social drone in a classroom. The results revealed several design implications for the roles and capabilities of a social drone, in addition to promising research directions for the development and design in the novel area of drones in education.