Research Outputs

Permanent URI for this communityhttps://hdl.handle.net/20.500.14288/2

Search Results

Now showing 1 - 2 of 2
  • Publication
    Intelligent edge computing: state-of-the-art techniques and applications
    (Institute of Electrical and Electronics Engineers Inc., 2020) Gürsoy, Attila; Özkasap, Öznur; Gill, Waris; Department of Computer Engineering, College of Engineering; Graduate School of Sciences and Engineering
    To enable intelligent decisions at the network edge, recent research has widely applied supervised and unsupervised machine learning techniques and their variations. Representative applications include detecting manufacturing faults in smart factory settings, monitoring patient activity and health problems in smart health systems, detecting security attacks on Internet of Things (IoT) devices, and finding rare events in audio signals. In this paper, we present an extensive review of state-of-the-art techniques and applications of intelligent edge computing and provide a classification and discussion of the various approaches in this field.
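    As a concrete illustration of the kind of unsupervised technique the survey covers (this sketch is not from the paper itself; all names and thresholds are hypothetical), an edge node might flag anomalous sensor readings against a learned baseline without any labeled data:

    ```python
    # Illustrative sketch: a lightweight unsupervised anomaly detector of the
    # kind an edge device could run locally on sensor readings.
    # All function names and the threshold k are hypothetical choices.
    from statistics import mean, stdev

    def fit_baseline(readings):
        """Learn a simple mean/standard-deviation baseline from normal readings."""
        return mean(readings), stdev(readings)

    def is_anomalous(value, baseline, k=3.0):
        """Flag a reading more than k standard deviations from the baseline."""
        mu, sigma = baseline
        return abs(value - mu) > k * sigma

    # Readings collected during normal operation (e.g., machine temperature).
    normal = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
    baseline = fit_baseline(normal)
    print(is_anomalous(20.1, baseline))  # in-range reading -> False
    print(is_anomalous(35.0, baseline))  # far outside the baseline -> True
    ```

    The appeal of such statistical baselines at the edge is that fitting and inference are cheap enough to run on constrained devices, avoiding round trips to the cloud.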
  • Publication
    Use of affective visual information for summarization of human-centric videos
    (2022) Kopro, Berkay; Erzin, Engin; Department of Computer Engineering, College of Engineering
    The increasing volume of user-generated human-centric video content and its applications, such as video retrieval and browsing, require compact representations addressed by the video summarization literature. Current supervised studies formulate video summarization as a sequence-to-sequence learning problem, and existing solutions often neglect the growing share of human-centric video, which inherently contains affective content. In this study, we investigate the affective-information-enriched supervised video summarization task for human-centric videos. First, we train a visual input-driven state-of-the-art continuous emotion recognition model (CER-NET) on the RECOLA dataset to estimate activation and valence attributes. Then, we integrate the estimated emotional attributes and their high-level embeddings from the CER-NET with the visual information to define the proposed affective video summarization (AVSUM) architectures. In addition, we investigate the use of attention to improve the AVSUM architectures and propose two new architectures based on temporal attention (TA-AVSUM) and spatial attention (SA-AVSUM). We conduct video summarization experiments on the TVSum and COGNIMUSE datasets. The proposed temporal attention-based TA-AVSUM architecture attains competitive video summarization performance, with strong improvements for human-centric videos over the state of the art in terms of F-score, a self-defined face recall, and rank correlation metrics.
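    To make the temporal-attention idea concrete (this is a minimal sketch, not the paper's TA-AVSUM architecture; the function names, the dot-product similarity, and the toy feature vectors are all assumptions), frame importance can be modeled as softmax-normalized similarity between each frame's feature vector and a query vector:

    ```python
    # Illustrative sketch of dot-product temporal attention over frame features,
    # the basic mechanism that attention-based summarizers build on.
    # All names and the toy data below are hypothetical.
    import math

    def softmax(xs):
        """Numerically stable softmax over a list of scores."""
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def temporal_attention_scores(frames, query):
        """Weight each frame by its dot-product similarity to a query vector."""
        sims = [sum(f * q for f, q in zip(frame, query)) for frame in frames]
        return softmax(sims)

    # Four frames with 3-dim features; the query emphasizes the first dimension,
    # standing in for "frames containing the affective cue of interest".
    frames = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
    query = [1.0, 0.0, 0.0]
    scores = temporal_attention_scores(frames, query)
    # Frames aligned with the query receive the highest attention weight.
    top = max(range(len(scores)), key=scores.__getitem__)
    print(top)  # the first frame best matches the query
    ```

    In a summarizer, such per-frame weights are thresholded or ranked to select keyframes or key shots for the summary.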