Publication:
Balanced and elastic end-to-end training of dynamic LLMs

Co-Authors

Wahib, Mohamed

Publication Date

2025

Language

English

Embargo Status

No

Abstract

To reduce the computational and memory overhead of Large Language Models, various approaches have been proposed. These include a) Mixture of Experts (MoEs), where token routing affects compute balance; b) gradual pruning of model parameters; c) dynamically freezing layers; d) dynamic sparse attention mechanisms; e) early exit of tokens as they pass through model layers; and f) Mixture of Depths (MoDs), where tokens bypass certain blocks. While these approaches are effective in reducing overall computation, they often introduce significant workload imbalance across workers. In many cases, this imbalance is severe enough to render the techniques impractical for large-scale distributed training, limiting their applicability to toy models due to poor efficiency. We propose an autonomous dynamic load balancing solution, DynMo, which provably achieves maximum reduction in workload imbalance and adaptively equalizes compute loads across workers in pipeline-parallel training. In addition, DynMo dynamically consolidates computation onto fewer workers without sacrificing training throughput, allowing idle workers to be released back to the job manager. DynMo supports both single-node multi-GPU systems and multi-node GPU clusters, and is suitable for practical deployments. Compared to static distributed training solutions such as Megatron-LM and DeepSpeed, DynMo accelerates the end-to-end training of dynamic GPT models by up to 1.23x for MoEs, 3.18x for parameter pruning, 2.23x for layer freezing, 4.02x for sparse attention, 4.52x for early exit, and 1.17x for MoDs. © 2025 Copyright held by the owner/author(s).
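
To make the load-balancing idea concrete, the sketch below shows one standard way a pipeline could be rebalanced when per-layer compute costs drift, for example after layers are frozen or pruned: re-partition contiguous layers across the stages so that the slowest stage is as fast as possible, using binary search on the bottleneck cost. This is a minimal illustration in Python, not DynMo's actual algorithm; the function names (balance_pipeline, stages_needed) and the example per-layer timings are hypothetical.

```python
# Illustrative sketch only (not DynMo's implementation): rebalance a pipeline by
# re-partitioning contiguous transformer layers across stages so that the
# slowest stage (the pipeline bottleneck) is as fast as possible.
# Per-layer costs are assumed to come from runtime profiling and to shrink as
# layers are frozen, pruned, or skipped by the dynamic techniques listed above.

from typing import List


def stages_needed(costs: List[float], budget: float) -> int:
    """Count contiguous stages required if no stage may exceed `budget`."""
    stages, load = 1, 0.0
    for c in costs:
        if load + c > budget:
            stages += 1
            load = c
        else:
            load += c
    return stages


def balance_pipeline(costs: List[float], num_stages: int) -> List[List[int]]:
    """Split layers (kept contiguous) into at most `num_stages` groups that
    minimize the maximum per-stage cost, via binary search on the bottleneck."""
    lo, hi = max(costs), sum(costs)
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if stages_needed(costs, mid) <= num_stages:
            hi = mid  # mid is a feasible bottleneck; try to shrink it
        else:
            lo = mid
    # Materialize the partition greedily under the feasible bottleneck `hi`.
    partition, current, load = [], [], 0.0
    for i, c in enumerate(costs):
        if current and load + c > hi:
            partition.append(current)
            current, load = [], 0.0
        current.append(i)
        load += c
    partition.append(current)
    return partition


if __name__ == "__main__":
    # Hypothetical per-layer times (ms) after the later layers became cheap
    # (e.g. pruned or frozen); an even 3-layers-per-stage split would leave
    # the first worker as an 18 ms bottleneck, while rebalancing yields 12 ms.
    layer_ms = [6.0, 6.0, 6.0, 6.0, 2.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 1.0]
    groups = balance_pipeline(layer_ms, num_stages=4)
    for stage, layers in enumerate(groups):
        total = sum(layer_ms[i] for i in layers)
        print(f"stage {stage}: layers {layers}, {total:.1f} ms")
    print(f"{4 - len(groups)} of 4 workers could be released back to the job manager")
```

A balanced partition may occupy fewer groups than there are workers, in which case the surplus workers could be handed back to the job manager, mirroring the consolidation behavior described in the abstract.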

Publisher

Association for Computing Machinery

Subject

Computer Science

Source

2025 International Conference for High Performance Computing, Networking, Storage, and Analysis, SC 2025

DOI

10.1145/3712285.3759775

Rights

CC BY (Attribution)

Copyrights Note

Creative Commons license

Except where otherwise noted, this item's license is described as CC BY (Attribution)
