Multi-objective Optimization: Applications in Modern Machine Learning Problems

This talk explores multi-objective optimization (MOO) as a principled framework for reconciling competing objectives in modern machine learning (ML). We present two case studies that demonstrate its effectiveness. First, we reformulate multi-modal learning (MML) as an MOO problem to address modality imbalance, proposing a gradient-based algorithm with theoretical convergence guarantees and up to 20× computational savings, while empirically outperforming existing MML methods. Second, we examine LLM post-training, showing that the conventional paradigm of sequential fine-tuning followed by preference optimization is theoretically suboptimal. We introduce a joint post-training framework that optimizes both objectives simultaneously, achieving up to 23% higher performance with minimal additional cost. Collectively, these studies illustrate how MOO principles promote greater balance, efficiency, and robustness in contemporary ML systems.
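To give a flavor of the gradient-based MOO machinery the abstract alludes to, below is a minimal sketch of a classic building block: the two-objective min-norm (MGDA-style) update, which combines the gradients of two losses into a single direction that decreases both objectives (or vanishes at a Pareto-stationary point). This is a generic textbook illustration, not the speaker's algorithm; the function name and use of NumPy are choices made here for the sketch.

```python
import numpy as np

def mgda_direction(g1, g2):
    """Min-norm convex combination of two loss gradients (two-objective MGDA sketch).

    Finds a in [0, 1] minimizing ||a*g1 + (1-a)*g2||^2 in closed form and
    returns d = a*g1 + (1-a)*g2. If d is nonzero, it is a common descent
    direction for both objectives; d == 0 indicates Pareto stationarity.
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        # Gradients coincide: either one is already the common direction.
        return g1.copy()
    # Closed-form minimizer of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    a = np.clip(((g2 - g1) @ g2) / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2
```

For example, with orthogonal gradients g1 = (1, 0) and g2 = (0, 1), the combined direction is (0.5, 0.5), which has positive inner product with both gradients; for exactly opposing gradients the result is the zero vector, signaling a Pareto-stationary point.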

Heshan Fernando is a 5th-year PhD student in the Department of Electrical, Computer, and Systems Engineering. His research interests lie in the theory and applications of multi-objective optimization (MOO).

Location: Amos Eaton 216
Speaker: Heshan Fernando from ECSE at RPI