Publication:
Exploiting synchronization in the analysis of shared-memory asynchronous programs

dc.contributor.coauthorEmmi, Michael
dc.contributor.departmentN/A
dc.contributor.departmentDepartment of Computer Engineering
dc.contributor.kuauthorÖzkan, Burcu Külahcıoğlu
dc.contributor.kuauthorTaşıran, Serdar
dc.contributor.kuprofilePhD Student
dc.contributor.kuprofileFaculty Member
dc.contributor.otherDepartment of Computer Engineering
dc.contributor.schoolcollegeinstituteGraduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.contributor.yokidN/A
dc.contributor.yokidN/A
dc.date.accessioned2024-11-09T23:39:17Z
dc.date.issued2014
dc.description.abstractAs asynchronous programming becomes more mainstream, program analyses capable of automatically uncovering programming errors are increasingly in demand. Since asynchronous program analysis is computationally costly, current approaches sacrifice completeness and focus on limited sets of asynchronous task schedules that are likely to expose programming errors. These approaches are based on parameterized task schedulers, each of which admits schedules that are variations of a default deterministic schedule. By increasing the parameter value, a larger variety of schedules is explored, at a higher cost. The efficacy of these approaches depends largely on the default deterministic scheduler on which varying schedules are fashioned. We find that the limited exploration of asynchronous program behaviors can be made more efficient by designing parameterized schedulers which better match the inherent ordering of program events, e.g., arising from waiting for an asynchronous task to complete. We follow a reduction-based "sequentialization" approach to analyzing asynchronous programs, which leverages existing (sequential) program analysis tools by encoding asynchronous program executions, according to a particular scheduler, as the executions of a sequential program. Analysis based on our new scheduler comes at no greater computational cost, and provides strictly greater behavioral coverage than analysis based on existing parameterized schedulers; we validate these claims both conceptually, with complexity and behavioral-inclusion arguments, and empirically, by discovering actual reported bugs faster with smaller parameter values.
dc.description.indexedbyScopus
dc.description.openaccessYES
dc.description.publisherscopeInternational
dc.description.sponsorshipACM
dc.description.sponsorshipet al.
dc.description.sponsorshipMicrosoft
dc.description.sponsorshipNASA
dc.description.sponsorshipNVIDIA
dc.description.sponsorshipSIGSOFT
dc.identifier.doi10.1145/2632362.26332370
dc.identifier.isbn978-1-4503-2452-6
dc.identifier.linkhttps://www.scopus.com/inward/record.uri?eid=2-s2.0-84942362234&doi=10.1145%2f2632362.26332370&partnerID=40&md5=135ca0366a1e40920650033d0dc47915
dc.identifier.scopus2-s2.0-84942362234
dc.identifier.uriN/A
dc.identifier.urihttps://hdl.handle.net/20.500.14288/13072
dc.keywordsAsynchronous programs
dc.keywordsConcurrency
dc.keywordsSequentialization
dc.languageEnglish
dc.publisherAssociation for Computing Machinery
dc.source2014 International SPIN Symposium on Model Checking of Software, SPIN 2014 - Proceedings
dc.subjectComputer engineering
dc.titleExploiting synchronization in the analysis of shared-memory asynchronous programs
dc.typeConference proceeding
dspace.entity.typePublication
local.contributor.authorid0000-0002-7038-165X
local.contributor.authoridN/A
local.contributor.kuauthorÖzkan, Burcu Külahcıoğlu
local.contributor.kuauthorTaşıran, Serdar
relation.isOrgUnitOfPublication89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery89352e43-bf09-4ef4-82f6-6f9d0174ebae