Publication:
Integrated semantic-syntactic video modeling for search and browsing

dc.contributor.coauthor: Ekin, A
dc.contributor.coauthor: Mehrotra, R
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Tekalp, Ahmet Murat
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: 26207
dc.date.accessioned: 2024-11-10T00:05:18Z
dc.date.issued: 2004
dc.description.abstract: Video processing and computer vision communities usually employ shot-based or object-based structural video models and associate low-level (color, texture, shape, and motion) and semantic descriptions (textual annotations) with these structural (syntactic) elements. Database and information retrieval communities, on the other hand, employ entity-relation or object-oriented models to model the semantics of multimedia documents. This paper proposes a new generic integrated semantic-syntactic video model to include all of these elements within a single framework to enable structured video search and browsing combining textual and low-level descriptors. The proposed model includes semantic entities (video objects and events) and the relations between them. We introduce a new "actor" entity to enable grouping of object roles in specific events. This context-dependent classification of attributes of an object allows for more efficient browsing and retrieval. The model also allows for decomposition of events into elementary motion units and elementary reaction/interaction units in order to access mid-level semantics and low-level video features. The instantiations of the model are expressed as graphs. Users can formulate flexible queries that can be translated into such graphs. Alternatively, users can input query graphs by editing an abstract model (model template). Search and retrieval is accomplished by matching the query graph with those instantiated models in the database. Examples and experimental results are provided to demonstrate the effectiveness of the proposed integrated modeling and querying framework.
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.issue: 6
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.volume: 6
dc.identifier.doi: 10.1109/TMM.2004.837238
dc.identifier.eissn: 1941-0077
dc.identifier.issn: 1520-9210
dc.identifier.quartile: Q1
dc.identifier.scopus: 2-s2.0-10044296005
dc.identifier.uri: http://dx.doi.org/10.1109/TMM.2004.837238
dc.identifier.uri: https://hdl.handle.net/20.500.14288/16423
dc.identifier.wos: 225224200007
dc.keywords: Events
dc.keywords: Integrated video model
dc.keywords: Model-based query formation
dc.keywords: Object motion description
dc.keywords: Query resolution by graph matching
dc.keywords: Video objects
dc.keywords: Image retrieval
dc.keywords: Implementation
dc.keywords: System
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: IEEE Transactions on Multimedia
dc.subject: Computer Science
dc.subject: Information systems
dc.subject: Software engineering
dc.subject: Telecommunications
dc.title: Integrated semantic-syntactic video modeling for search and browsing
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: 0000-0003-1465-8121
local.contributor.kuauthor: Tekalp, Ahmet Murat
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
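
The abstract describes a graph-based model in which video objects participate in events through "actor" roles, events decompose into elementary motion/reaction units, and queries are resolved by matching a query graph against instantiated model graphs in the database. The following Python sketch is only a minimal illustration of that idea under stated assumptions: the class names, node labels, and the naive edge-subset matching routine are all introduced here for illustration and are not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

# Hypothetical sketch of an instantiated semantic-syntactic model graph:
# nodes stand for semantic entities (video objects, events, and "actor"
# roles that bind an object to a specific event); edges carry relation labels.

@dataclass
class ModelGraph:
    # edges[(source_node, target_node)] = relation label
    edges: Dict[Tuple[str, str], str] = field(default_factory=dict)

    def add_relation(self, source: str, relation: str, target: str) -> None:
        self.edges[(source, target)] = relation

    def contains(self, query: "ModelGraph") -> bool:
        # Naive query resolution: every labeled edge of the query graph must
        # also appear in this instantiated model graph. This only illustrates
        # the idea of query resolution by graph matching; the paper's actual
        # matching procedure is more general.
        return all(self.edges.get(pair) == label
                   for pair, label in query.edges.items())


if __name__ == "__main__":
    # Instantiated model for one video segment (all names are made up).
    segment = ModelGraph()
    segment.add_relation("object:player#7", "plays-role", "actor:scorer")
    segment.add_relation("actor:scorer", "participates-in", "event:goal")
    segment.add_relation("event:goal", "decomposes-into", "emu:kick")  # elementary motion unit

    # Query graph: "segments where a scorer actor participates in a goal event".
    query = ModelGraph()
    query.add_relation("actor:scorer", "participates-in", "event:goal")

    print(segment.contains(query))  # -> True
```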
