🤖 AI Summary
Existing fairness research focuses on single-task classification and struggles to adapt to heterogeneous multi-task learning (MTL), where classification, detection, and regression coexist under partial label availability. Mainstream approaches are limited in three ways: they assume homogeneous task types; they impose fairness constraints only on shared representations, ignoring bias amplification by task-specific heads; and they treat fairness and utility as zero-sum objectives.
Method: We propose an asymmetric heterogeneous fairness constraint aggregation mechanism—the first to unify asymmetric fairness objectives across diverse task types. We design a primal-dual optimization framework with head-aware proxy strategies and differentiable fairness regularization to explicitly suppress anisotropic bias propagation induced by task heads.
Contribution/Results: Our method significantly improves group fairness (e.g., 30–50% reduction in ΔEO and ΔDP) across multiple homogeneous and heterogeneous benchmarks, while maintaining state-of-the-art task performance—demonstrating strong cross-modal and weakly supervised generalization.
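The group-fairness gaps reported above can be made concrete with a short sketch. Definitions vary across papers, so this assumes binary predictions and a binary sensitive attribute, and takes ΔDP as the positive-rate gap and ΔEO as the true-positive-rate gap (the equal-opportunity variant). Function names are illustrative and not from the FairMT codebase.

```python
# Illustrative group-fairness gap computation (not FairMT code).
# preds: binary predictions, labels: binary ground truth,
# groups: binary sensitive attribute per example.

def rate(preds, mask):
    """Mean prediction over the examples selected by mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def delta_dp(preds, groups):
    """Demographic-parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(rate(preds, [g == 0 for g in groups])
               - rate(preds, [g == 1 for g in groups]))

def delta_eo(preds, labels, groups):
    """Equal-opportunity gap: TPR difference between the two groups."""
    return abs(rate(preds, [g == 0 and y == 1 for g, y in zip(groups, labels)])
               - rate(preds, [g == 1 and y == 1 for g, y in zip(groups, labels)]))
```

For example, with `preds = [1, 0, 1, 1]` and `groups = [0, 0, 1, 1]`, group 0 has a positive rate of 0.5 and group 1 of 1.0, giving ΔDP = 0.5.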
📝 Abstract
Fairness in machine learning has been extensively studied in single-task settings, while fair multi-task learning (MTL), especially with heterogeneous tasks (classification, detection, regression) and partially missing labels, remains largely unexplored. Existing fairness methods are predominantly classification-oriented and fail to extend to continuous outputs, making a unified fairness objective difficult to formulate. Further, existing MTL optimization is structurally misaligned with fairness: it constrains only the shared representation, allowing task heads to absorb bias and leading to uncontrolled task-specific disparities. Finally, most work treats fairness as a zero-sum trade-off with utility, enforcing symmetric constraints that achieve parity by degrading well-served groups. We introduce FairMT, a unified fairness-aware MTL framework that accommodates all three task types under incomplete supervision. At its core is an Asymmetric Heterogeneous Fairness Constraint Aggregation mechanism, which consolidates task-dependent asymmetric violations into a unified fairness constraint. Utility and fairness are jointly optimized via a primal-dual formulation, while a head-aware multi-objective optimization proxy provides a tractable descent geometry that explicitly accounts for head-induced anisotropy. Across three homogeneous and heterogeneous MTL benchmarks spanning diverse modalities and supervision regimes, FairMT consistently achieves substantial fairness gains while maintaining superior task utility. Code will be released upon paper acceptance.
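The primal-dual formulation described in the abstract can be illustrated with a toy Lagrangian update: descend on the model parameters, ascend on a nonnegative multiplier that prices the fairness violation. This is a minimal sketch of generic constrained optimization on a scalar problem, not FairMT's actual objective; the paper's aggregated asymmetric constraint and head-aware proxy are omitted, and all names here are illustrative.

```python
# Toy primal-dual optimization: minimize a utility loss f(w) = (w - 2)^2
# subject to a "fairness" constraint g(w) = w - 1 <= 0, via the
# Lagrangian L(w, lam) = f(w) + lam * g(w).

def primal_dual(steps=2000, lr_w=0.05, lr_lam=0.05):
    w, lam = 0.0, 0.0
    f_grad = lambda w: 2.0 * (w - 2.0)   # gradient of the utility loss
    g = lambda w: w - 1.0                # constraint value; g'(w) = 1
    for _ in range(steps):
        w -= lr_w * (f_grad(w) + lam * 1.0)   # primal descent on w
        lam = max(0.0, lam + lr_lam * g(w))   # dual ascent, projected to lam >= 0
    return w, lam
```

At the KKT point the constraint is active (w = 1) and the multiplier settles at lam = 2, balancing the utility gradient; the iteration spirals into that point because the constraint is convex and binding.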