Publication:
Interpretable embeddings from molecular simulations using Gaussian mixture variational autoencoders

dc.contributor.coauthor: Bereau, Tristan
dc.contributor.coauthor: Rudzinski, Joseph F.
dc.contributor.kuauthor: Bozkurt, Yasemin
dc.contributor.kuprofile: PhD Student
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.yokid: N/A
dc.date.accessioned: 2024-11-09T23:59:10Z
dc.date.issued: 2020
dc.description.abstract: Extracting insight from the enormous quantity of data generated from molecular simulations requires the identification of a small number of collective variables whose corresponding low-dimensional free-energy landscape retains the essential features of the underlying system. Data-driven techniques provide a systematic route to constructing this landscape, without the need for extensive a priori intuition into the relevant driving forces. In particular, autoencoders are powerful tools for dimensionality reduction, as they naturally force an information bottleneck and, thereby, a low-dimensional embedding of the essential features. While variational autoencoders ensure continuity of the embedding by assuming a unimodal Gaussian prior, this is at odds with the multi-basin free-energy landscapes that typically arise from the identification of meaningful collective variables. In this work, we incorporate this physical intuition into the prior by employing a Gaussian mixture variational autoencoder (GMVAE), which encourages the separation of metastable states within the embedding. The GMVAE performs dimensionality reduction and clustering within a single unified framework, and is capable of identifying the inherent dimensionality of the input data, in terms of the number of Gaussians required to categorize the data. We illustrate our approach on two toy models, alanine dipeptide, and a challenging disordered peptide ensemble, demonstrating the enhanced clustering effect of the GMVAE prior compared to standard VAEs. The resulting embeddings appear to be promising representations for constructing Markov state models, highlighting the transferability of the dimensionality reduction from static equilibrium properties to dynamics.
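The key idea in the abstract is replacing the vanilla VAE's single standard-normal prior with a Gaussian mixture prior, so that latent codes near different mixture components (metastable states) are penalized less than codes stranded between them. The sketch below is purely illustrative and is not the authors' implementation: it estimates the KL regularization term of such a loss by Monte Carlo, for a diagonal-Gaussian encoder posterior against an assumed K-component mixture prior (all function names and parameters are hypothetical).

```python
# Illustrative sketch (not the paper's code): Monte Carlo estimate of the
# KL term in a mixture-prior VAE loss. A posterior centered on one mixture
# component incurs a smaller penalty than one centered between components.
import numpy as np

def log_gaussian(z, mu, var):
    # Log-density of a diagonal Gaussian, summed over latent dimensions.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (z - mu) ** 2 / var, axis=-1)

def mixture_prior_logpdf(z, mus, variances, weights):
    # log p(z) = log sum_k w_k N(z | mu_k, var_k), via log-sum-exp for stability.
    comps = np.stack([np.log(w) + log_gaussian(z, m, v)
                      for w, m, v in zip(weights, mus, variances)], axis=0)
    m = comps.max(axis=0)
    return m + np.log(np.exp(comps - m).sum(axis=0))

def kl_mc(mu_q, var_q, mus, variances, weights, n_samples=2048, seed=0):
    # KL(q || p) ~ E_q[log q(z) - log p(z)], estimated with samples from q.
    rng = np.random.default_rng(seed)
    z = mu_q + np.sqrt(var_q) * rng.standard_normal((n_samples, mu_q.size))
    return float(np.mean(log_gaussian(z, mu_q, var_q)
                         - mixture_prior_logpdf(z, mus, variances, weights)))
```

With a two-component prior at mu = -2 and mu = +2, a posterior sitting on one component pays roughly log 2 (the weight penalty), whereas a posterior midway between the components pays several nats more, which is the clustering pressure the abstract attributes to the GMVAE prior.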
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.issue: 1
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsorship: Scientific and Technological Research Council of Turkey, TUBITAK-BIDEB, under the 2214-A programme
dc.description.sponsorship: Emmy Noether program of the Deutsche Forschungsgemeinschaft (DFG)
dc.description.sponsorship: long program Machine Learning for Physics and the Physics of Learning at the Institute for Pure and Applied Mathematics (IPAM). The authors thank Kiran H Kanekal and Omar Valsson for critical reading of the manuscript. JFR is grateful to the BiGmax consortium and participants of the BiGmax Big Data Summer School for insightful discussions. YBV acknowledges foreign collaborative research study support by The Scientific and Technological Research Council of Turkey, TUBITAK-BIDEB, under the 2214-A programme. TB acknowledges financial support by the Emmy Noether program of the Deutsche Forschungsgemeinschaft (DFG) and the long program Machine Learning for Physics and the Physics of Learning at the Institute for Pure and Applied Mathematics (IPAM).
dc.description.volume: 1
dc.identifier.doi: 10.1088/2632-2153/ab80b7
dc.identifier.eissn: 2632-2153
dc.identifier.issn: N/A
dc.identifier.quartile: Q1
dc.identifier.scopus: 2-s2.0-85087592698
dc.identifier.uri: http://dx.doi.org/10.1088/2632-2153/ab80b7
dc.identifier.uri: https://hdl.handle.net/20.500.14288/15571
dc.identifier.wos: 660848300001
dc.keywords: Variational autoencoders
dc.keywords: Dimensionality reduction
dc.keywords: Clustering
dc.keywords: Markov state models
dc.keywords: Molecular dynamics simulations
dc.language: English
dc.publisher: IOP Publishing Ltd
dc.source: Machine Learning: Science and Technology
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.title: Interpretable embeddings from molecular simulations using Gaussian mixture variational autoencoders
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.kuauthor: Bozkurt, Yasemin