Publication: Assessment of fine-tuned large language models for real-world chemistry and materials science applications
KU Authors
Keskin, Seda
Harman, Hilal Dağlar
Co-Authors
Van Herck, Joren
Gil, Maria Victoria
Jablonka, Kevin Maik
Abrudan, Alex
Anker, Andy S.
Asgari, Mehrdad
Blaiszik, Ben
Buffo, Antonio
Choudhury, Leander
Corminboeuf, Clemence
Abstract
The current generation of large language models (LLMs) has limited chemical knowledge. Recently, it has been shown that these LLMs can learn and predict chemical properties through fine-tuning. Using natural language to train machine learning models opens doors to a wider chemical audience, as field-specific featurization techniques can be omitted. In this work, we explore the potential and limitations of this approach. We study the performance of fine-tuning three open-source LLMs (GPT-J-6B, Llama-3.1-8B, and Mistral-7B) on a range of different chemical questions. We benchmark their performance against "traditional" machine learning models and find that, in most cases, the fine-tuning approach is superior for a simple classification problem. Depending on the size of the dataset and the type of question, we also successfully address more sophisticated problems. The most important conclusions of this work are that, for all datasets considered, their conversion into an LLM fine-tuning training set is straightforward, and that fine-tuning with even relatively small datasets leads to predictive models. These results suggest that the systematic use of LLMs to guide experiments and simulations will be a powerful technique in any research study, significantly reducing unnecessary experiments or computations.
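To make the conversion step mentioned in the abstract concrete, here is a minimal sketch of how a tabular property dataset can be rewritten into natural-language prompt/completion pairs in JSONL form, a format commonly used for LLM fine-tuning. This is not the authors' actual pipeline; the file paths, the column names "smiles" and "soluble", and the question wording are illustrative assumptions.

    # Minimal sketch: turn a tabular chemistry dataset into natural-language
    # prompt/completion pairs for fine-tuning. Column names ("smiles",
    # "soluble"), file paths, and question wording are assumptions for
    # illustration, not the paper's exact setup.
    import csv
    import json

    def row_to_example(row):
        # Phrase the classification target as a plain-language question,
        # so no chemistry-specific featurization is required.
        prompt = f"Is the molecule with SMILES {row['smiles']} water-soluble?"
        completion = " yes" if row["soluble"] == "1" else " no"
        return {"prompt": prompt, "completion": completion}

    with open("dataset.csv", newline="") as src, open("train.jsonl", "w") as dst:
        for row in csv.DictReader(src):
            dst.write(json.dumps(row_to_example(row)) + "\n")

Each resulting line pairs a question with its answer, which is the form in which even a relatively small dataset can be passed to a fine-tuning run.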
Source:
Chemical Science
Publisher:
The Royal Society of Chemistry
Subject
Chemistry