Researcher:
Demir, Çiğdem Gündüz

Job Title

Faculty Member

First Name

Çiğdem Gündüz

Last Name

Demir

Name Variants

Demir, Çiğdem Gündüz

Search Results

Now showing 1 - 6 of 6
  • Publication
    Large language models as a rapid and objective tool for pathology report data extraction
    (Federation Turkish Pathology Soc., 2024) Department of Computer Engineering; Bolat, Beyza; Eren, Özgür Can; Dur Karasayar, Ayşe Hümeyra; Meriçöz, Çisel Aydın; Demir, Çiğdem Gündüz; Kulaç, İbrahim; Koç Üniversitesi İş Bankası Enfeksiyon Hastalıkları Uygulama ve Araştırma Merkezi (EHAM) / Koç University İşbank Center for Infectious Diseases (KU-IS CID); Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI)/ Koç University İş Bank Artificial Intelligence Center (KUIS AI); Koç University Research Center for Translational Medicine (KUTTAM) / Koç Üniversitesi Translasyonel Tıp Araştırma Merkezi (KUTTAM); School of Medicine; Graduate School of Health Sciences; College of Engineering
    Medical institutions continuously create a substantial amount of data that is used for scientific research. One of the departments with a great amount of archived data is the pathology department. Pathology archives hold the potential to create case series of valuable rare entities or large cohorts of common entities. The major problem in creating these databases is data extraction, which is still commonly done manually and is highly laborious and error prone. For these reasons, we propose using large language models to overcome these challenges. Ten pathology reports of selected resection specimens were retrieved from the electronic archives of Koç University Hospital for the initial set. These reports were de-identified and uploaded to ChatGPT and Google Bard. Both algorithms were asked to convert the reports into a synoptic report format that is easy to export to a data editor such as Microsoft Excel or Google Sheets. Both programs created tables, with Google Bard automatically facilitating the creation of a spreadsheet from the data. In conclusion, we propose the use of AI-assisted data extraction for academic research purposes, as it may enhance efficiency and precision compared to manual data entry.
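    The workflow described above is prompt-driven rather than tied to a specific codebase. As a rough illustration, the sketch below uses the OpenAI Python client (v1 API) as a stand-in for the chat interfaces used in the study; the model name, prompt wording, and column list are hypothetical placeholders, not the authors' protocol.

```python
# Illustrative sketch (not the study's exact protocol): ask an LLM to convert a
# de-identified pathology report into a spreadsheet-ready synoptic row.
# Assumes the OpenAI Python client (v1 API); model name, prompt wording, and
# column list are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "Convert the following de-identified pathology report into a synoptic report "
    "as tab-separated values with the columns: specimen, diagnosis, tumor_size_mm, "
    "grade, lymph_nodes_examined, lymph_nodes_positive. Output only the table.\n\n"
    "Report:\n{report}"
)

def report_to_synoptic_row(report_text: str) -> str:
    """Return a tab-separated synoptic row for one de-identified report."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(report=report_text)}],
        temperature=0,  # deterministic output is easier to paste into a cohort table
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    with open("deidentified_report.txt") as f:  # hypothetical input file
        print(report_to_synoptic_row(f.read()))
```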
  • Publication
    A simplified grid method of camera-captured images may be a practical alternative if validated AI-assisted counting is inaccessible
    (Elsevier Science Inc, 2023) Adsay, David; Eren, Ozgur; Basturk, Olca; Department of Computer Engineering; Esmer, Rohat; Armutlu, Ayşe; Taşkın, Orhun Çığ; Koç, Soner; Tezcan, Nuray; Aktaş, Berk Kaan; Kulaç, İbrahim; Kapran, Yersu; Demir, Çiğdem Gündüz; Saka, Burcu; School of Medicine; Graduate School of Sciences and Engineering; College of Engineering
    N/A
  • Publication
    FractalRG: advanced fractal region growing using Gaussian mixture models for left atrium segmentation
    (Academic Press Inc Elsevier Science, 2024) Firouznia, Marjan; Koupaei, Javad Alikhani; Faez, Karim; Jabdaragh, Aziza Saber; Department of Computer Engineering; Demir, Çiğdem Gündüz; Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI)/ Koç University İş Bank Artificial Intelligence Center (KUIS AI); College of Engineering
    This paper presents an advanced region growing method for precise left atrium (LA) segmentation and estimation of atrial wall thickness in CT/MRI scans. The method leverages a Gaussian mixture model (GMM) and fractal dimension (FD) analysis in a three-step procedure to enhance segmentation accuracy. The first step employs GMM for seed initialization based on the probability distribution of image intensities. The second step utilizes fractal-based texture analysis to capture image self-similarity and texture complexity. An enhanced approach for generating 3D fractal maps is proposed, providing valuable texture information for region growing. In the last step, fractal-guided 3D region growing is applied for segmentation. This process expands seed points iteratively by adding neighboring voxels meeting specific similarity criteria. GMM estimations and fractal maps are used to restrict the region growing process, reducing the search space for global segmentation and enhancing computational efficiency. Experiments on a dataset of 10 CT scans with 3,947 images resulted in a Dice score of 0.85, demonstrating superiority over traditional techniques. In a dataset of 30 MRI scans with 3,600 images, the proposed method achieved a competitive Dice score of 0.89 ± 0.02, comparable to deep learning-based models. These results highlight the effectiveness of our approach in accurately delineating the LA region across diverse imaging modalities.
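    The seed-initialization and growing steps summarized above can be outlined in a few lines of NumPy and scikit-learn. This is a minimal sketch under stated assumptions, not the paper's implementation: the fractal-map restriction is omitted, and the component count, probability threshold, and intensity tolerance are illustrative values.

```python
# Minimal sketch of GMM seed initialization followed by intensity-based 3D region
# growing; the fractal-guided restriction from the paper is omitted, and all
# thresholds are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_seeds(volume: np.ndarray, n_components: int = 3, prob_thresh: float = 0.99) -> np.ndarray:
    """Pick seed voxels assigned with high probability to the brightest GMM component."""
    intensities = volume.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(intensities)
    target = int(np.argmax(gmm.means_))  # component with the highest mean intensity
    probs = gmm.predict_proba(intensities)[:, target].reshape(volume.shape)
    return np.argwhere(probs > prob_thresh)  # (K, 3) array of voxel indices

def region_grow(volume: np.ndarray, seeds: np.ndarray, tol: float = 20.0) -> np.ndarray:
    """Grow a binary mask from the seeds by adding 6-connected neighbors whose
    intensity stays within tol of the mean seed intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    mean_seed = volume[tuple(seeds.T)].mean()
    stack = [tuple(s) for s in seeds]
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while stack:
        z, y, x = stack.pop()
        if mask[z, y, x]:
            continue
        mask[z, y, x] = True
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - mean_seed) <= tol):
                stack.append((nz, ny, nx))
    return mask
```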
  • Publication
    Histopathological classification of colon tissue images with self-supervised models
    (IEEE, 2023) Department of Computer Engineering; Erden, Mehmet Bahadır; Cansız, Selahattin; Demir, Çiğdem Gündüz; Graduate School of Sciences and Engineering; College of Engineering
    Deep learning techniques have demonstrated their ability to facilitate medical image diagnostics by offering more precise and accurate predictions. Convolutional neural network (CNN) architectures have been employed for a decade as the primary approach to enable automated diagnosis. On the other hand, recently proposed vision transformer (ViT) based architectures have shown success in various computer vision tasks. However, their efficacy in medical image classification tasks remains largely unexplored due to their requirement for large datasets. Nevertheless, significant performance gains can be achieved by leveraging self-supervised learning techniques through pretraining. This paper analyzes the performance of self-supervised pretrained networks in medical image classification tasks. Results on colon histopathology images revealed that CNN-based architectures are more effective when trained from scratch, while pretrained models could achieve similar levels of performance with limited data.
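    The two regimes compared above (training a CNN from scratch versus fine-tuning a pretrained ViT) look roughly like the sketch below, using torchvision models as stand-ins; the architectures, class count, and hyperparameters are assumptions, and ImageNet weights stand in for a self-supervised checkpoint.

```python
# Illustrative sketch, not the paper's configuration: (a) a CNN trained from
# scratch and (b) a pretrained ViT backbone fine-tuned with limited labeled data.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 9  # placeholder number of colon tissue categories

# (a) CNN trained from scratch on the histopathology data.
cnn_scratch = models.resnet18(weights=None, num_classes=NUM_CLASSES)

# (b) Pretrained ViT backbone (ImageNet weights as a stand-in for a
# self-supervised checkpoint); replace the classification head before fine-tuning.
vit_finetune = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit_finetune.heads.head = nn.Linear(vit_finetune.heads.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised training step, shared by both regimes."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: fine-tune the pretrained model, typically with a small learning rate.
optimizer = torch.optim.AdamW(vit_finetune.parameters(), lr=1e-4)
```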
  • Publication
    Henle fiber layer mapping with directional optical coherence tomography
    (Lippincott Williams & Wilkins, 2022) Department of Computer Engineering; Kesim, Cem; Bektaş, Şevval Nur; Kulalı, Zeynep Umut; Yıldız, Erdost; Ersöz, Mehmet Giray; Şahin, Afsun; Demir, Çiğdem Gündüz; Hasanreisoğlu, Murat; Koç University Research Center for Translational Medicine (KUTTAM) / Koç Üniversitesi Translasyonel Tıp Araştırma Merkezi (KUTTAM); School of Medicine; College of Engineering; Koç University Hospital
    Purpose: To perform a macular volumetric and topographic analysis of the Henle fiber layer (HFL) from retinal scans acquired by directional optical coherence tomography. Methods: Thirty healthy eyes of 17 subjects were imaged using Heidelberg spectral-domain optical coherence tomography (Spectralis, Heidelberg Engineering, Heidelberg, Germany) with varied horizontal and vertical pupil entry. Manual segmentation of the HFL was performed from retinal sections of horizontally and vertically tilted optical coherence tomography images acquired within a macular 20 x 20 degree area. Total HFL volume, mean HFL thickness, and HFL coverage area within the Early Treatment for Diabetic Retinopathy Study grid were calculated from the mapped images. Results: The HFL of 30 eyes was imaged, segmented, and mapped. The mean total HFL volume was 0.74 ± 0.08 mm³, with 0.16 ± 0.02 mm³, 0.18 ± 0.03 mm³, 0.17 ± 0.02 mm³, and 0.19 ± 0.03 mm³ for the superior, temporal, inferior, and nasal quadrants, respectively. The mean HFL thickness was 26.5 ± 2.9 µm. The central 1-mm macular zone had the highest mean HFL thickness, at 51.0 ± 7.6 µm. The HFL coverage with thickness equal to or above the mean value had a mean surface area of 10.771 ± 0.574 mm². Conclusion: Henle fiber layer mapping is a promising tool for structural analysis of the HFL. Establishing normative data on HFL morphology will allow further studies to investigate HFL involvement in various ocular and systemic disorders.
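    The map-derived metrics reported above (total volume, mean thickness, above-mean coverage area) follow directly from a per-pixel thickness map. The sketch below assumes a 2D HFL thickness map in micrometers with a known isotropic pixel spacing; both inputs and the spacing value are hypothetical.

```python
# Minimal sketch: volume, mean thickness, and above-mean coverage area from a
# 2D thickness map in micrometers; pixel spacing is an assumed placeholder.
import numpy as np

def hfl_metrics(thickness_um: np.ndarray, pixel_size_mm: float = 0.01):
    """Return (total volume in mm^3, mean thickness in um, above-mean coverage area in mm^2)."""
    pixel_area_mm2 = pixel_size_mm ** 2
    total_volume_mm3 = float(np.sum(thickness_um / 1000.0) * pixel_area_mm2)
    mean_thickness_um = float(np.mean(thickness_um))
    coverage_area_mm2 = float(np.count_nonzero(thickness_um >= mean_thickness_um) * pixel_area_mm2)
    return total_volume_mm3, mean_thickness_um, coverage_area_mm2
```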
  • Publication
    FourierNet: shape-preserving network for Henle's fiber layer segmentation in optical coherence tomography images
    (Institute of Electrical and Electronics Engineers (IEEE), 2023) Department of Computer Engineering; Cansız, Selahattin; Kesim, Cem; Bektaş, Şevval Nur; Kulalı, Zeynep Umut; Hasanreisoğlu, Murat; Demir, Çiğdem Gündüz; Koç Üniversitesi İş Bankası Yapay Zeka Uygulama ve Araştırma Merkezi (KUIS AI)/ Koç University İş Bank Artificial Intelligence Center (KUIS AI); Graduate School of Sciences and Engineering; School of Medicine; College of Engineering
    Henle's fiber layer (HFL), a retinal layer located in the outer retina between the outer nuclear and outer plexiform layers (ONL and OPL, respectively), is composed of uniformly linear photoreceptor axons and Müller cell processes. However, in standard optical coherence tomography (OCT) imaging, this layer is usually included in the ONL since it is difficult to perceive HFL contours on OCT images. Due to its variable reflectivity under an imaging beam, delineating the HFL contours necessitates directional OCT, which requires additional imaging. This paper addresses this issue by introducing a shape-preserving network, FourierNet, which achieves HFL segmentation in standard OCT scans with the target performance obtained when directional OCT is available. FourierNet is a new cascaded network design that puts forward the idea of exploiting the shape prior of the HFL in network training. This design proposes to represent the shape prior by extracting Fourier descriptors from the HFL contours and defining an additional regression task of learning these descriptors. FourierNet then formulates HFL segmentation as concurrent learning of regression and classification tasks, in which the estimated Fourier descriptors are used together with the input image to construct the HFL segmentation map. Our experiments on 1470 images of 30 OCT scans of healthy-looking macula reveal that quantifying the HFL shape with Fourier descriptors and concurrently learning them with the main segmentation task leads to significantly better results. These findings indicate the effectiveness of designing a shape-preserving network to facilitate HFL segmentation without performing directional OCT imaging.
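    The shape representation named above, Fourier descriptors of a closed contour, can be computed in a few lines of NumPy. This is a sketch under stated assumptions: the contour source, descriptor count, and normalization are illustrative and not FourierNet's exact recipe.

```python
# Sketch: Fourier descriptors of a closed boundary, used here only to illustrate
# the kind of shape prior the regression task learns; parameters are placeholders.
import numpy as np

def fourier_descriptors(contour_xy: np.ndarray, n_descriptors: int = 16) -> np.ndarray:
    """Return low-frequency Fourier descriptors of a closed (N, 2) contour."""
    # Treat each boundary point as a complex number x + iy and take its DFT.
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:]                            # drop the DC term (translation invariance)
    coeffs = coeffs / (np.abs(coeffs[0]) + 1e-12)  # scale by |c1| (scale invariance)
    return np.abs(coeffs[:n_descriptors])          # magnitudes (rotation/start-point invariance)
```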