Publication: Benefits of Online Tilted Empirical Risk Minimization: A Case Study of Outlier Detection and Robust Regression
| DC Field | Value | Language |
| --- | --- | --- |
| dc.conference.date | 2025-08-31 through 2025-09-03 | |
| dc.conference.location | Istanbul | |
| dc.contributor.coauthor | Yildirim, Yiğit E. (60197925500) | |
| dc.contributor.coauthor | Demir, Samet (58662508900) | |
| dc.contributor.coauthor | Doğan, Zafer (35101767100) | |
| dc.date.accessioned | 2025-12-31T08:19:17Z | |
| dc.date.available | 2025-12-31 | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Empirical Risk Minimization (ERM) is a foundational framework for supervised learning but primarily optimizes average-case performance, often neglecting fairness and robustness considerations. Tilted Empirical Risk Minimization (TERM) extends ERM by introducing an exponential tilt hyperparameter t to balance average-case accuracy with worst-case fairness and robustness. However, in online or streaming settings where data arrive one sample at a time, the classical TERM objective degenerates to standard ERM, losing tilt sensitivity. We address this limitation by proposing an online TERM formulation that removes the logarithm from the classical objective, preserving tilt effects without additional computational or memory overhead. This formulation enables a continuous trade-off controlled by t, smoothly interpolating between ERM (t → 0), fairness emphasis (t > 0), and robustness to outliers (t < 0). We empirically validate online TERM on two representative streaming tasks: robust linear regression with adversarial outliers and minority-class detection in binary classification. Our results demonstrate that negative tilting effectively suppresses outlier influence, while positive tilting improves recall with minimal impact on precision, all at per-sample computational cost equivalent to ERM. Online TERM thus recovers the full robustness-fairness spectrum of classical TERM in an efficient single-sample learning regime. © 2025 IEEE. | |
| dc.description.fulltext | Yes | |
| dc.description.harvestedfrom | Manual | |
| dc.description.indexedby | Scopus | |
| dc.description.publisherscope | International | |
| dc.description.readpublish | N/A | |
| dc.description.sponsoredbyTubitakEu | TÜBİTAK | |
| dc.description.sponsorship | Türkiye Bilimsel ve Teknolojik Araştırma Kurumu, TUBITAK (124E063) | |
| dc.identifier.doi | 10.1109/MLSP62443.2025.11204247 | |
| dc.identifier.embargo | No | |
| dc.identifier.isbn | 9798331570293 | |
| dc.identifier.isbn | 9781467374545 | |
| dc.identifier.isbn | 9781728166629 | |
| dc.identifier.isbn | 9781538654774 | |
| dc.identifier.isbn | 9781509063413 | |
| dc.identifier.isbn | 9781728163383 | |
| dc.identifier.isbn | 9781728108247 | |
| dc.identifier.isbn | 9781509007462 | |
| dc.identifier.isbn | 9781467310260 | |
| dc.identifier.isbn | 9781479936946 | |
| dc.identifier.issn | 2161-0363 | |
| dc.identifier.quartile | N/A | |
| dc.identifier.scopus | 2-s2.0-105022128098 | |
| dc.identifier.uri | https://doi.org/10.1109/MLSP62443.2025.11204247 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14288/31446 | |
| dc.keywords | fairness | |
| dc.keywords | online learning | |
| dc.keywords | outlier detection | |
| dc.keywords | robustness | |
| dc.keywords | Tilted empirical risk minimization | |
| dc.language.iso | eng | |
| dc.publisher | IEEE Computer Society | |
| dc.relation.affiliation | Koç University | |
| dc.relation.collection | Koç University Institutional Repository | |
| dc.relation.ispartof | IEEE International Workshop on Machine Learning for Signal Processing, MLSP | |
| dc.relation.openaccess | Yes | |
| dc.rights | CC BY-NC-ND 4.0 (Attribution-NonCommercial-NoDerivatives) | |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.title | Benefits of Online Tilted Empirical Risk Minimization: A Case Study of Outlier Detection and Robust Regression | |
| dc.type | Conference Proceeding | |
| dspace.entity.type | Publication |
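
The abstract above describes online TERM as the classical tilted objective with the logarithm removed, applied one sample at a time, so that the tilt survives in the streaming regime. The sketch below illustrates one plausible reading of that per-sample update for robust linear regression with a negative tilt; the squared loss, the step size `lr`, the tilt value `t = -0.5`, and all function and variable names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def tilted_sgd_step(theta, x, y, t, lr=0.01):
    """One online tilted update for linear regression with squared loss.

    Classical TERM minimizes (1/t) * log(mean(exp(t * loss_i))); with a single
    streaming sample the log cancels the exp and the tilt disappears. Dropping
    the log, the per-sample objective becomes (1/t) * exp(t * loss), whose
    gradient is exp(t * loss) * grad(loss): a tilt-weighted SGD step.
    """
    residual = float(np.dot(theta, x) - y)
    loss = 0.5 * residual ** 2
    grad = residual * x               # gradient of the squared loss w.r.t. theta
    weight = np.exp(t * loss)         # tilt weight: emphasizes hard samples for t > 0,
                                      # down-weights large-loss outliers for t < 0
    return theta - lr * weight * grad

# Toy usage: a negative tilt suppresses the influence of adversarial outliers.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(1000):
    x = rng.normal(size=2)
    y = x @ np.array([1.0, -2.0]) + 0.01 * rng.normal()
    if rng.random() < 0.1:            # 10% of samples are corrupted
        y += 20.0
    theta = tilted_sgd_step(theta, x, y, t=-0.5, lr=0.05)
print(theta)  # the estimate should drift toward [1, -2] as outliers are down-weighted
```

With t → 0 the weight tends to 1 and the step reduces to plain per-sample ERM, matching the trade-off described in the abstract.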
