Fairness in Machine Learning-based Hand Load Estimation: A Case Study on Load Carriage Tasks

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
External hand load estimation suffers from systematic biases and sex-based performance disparities, driven by demographic factors such as age and biological sex and exacerbated by imbalanced training data. To address this, the paper introduces, for the first time, a feature disentanglement paradigm for this task. The authors propose a variational autoencoder (VAE)-based framework for sex-invariant representation learning that explicitly separates sex-correlated and sex-agnostic latent features, with fairness-aware optimization ensuring that predictions depend only on kinematic features satisfying statistical fairness criteria. Evaluated under standard fairness metrics (statistical parity, predictive rate disparity, normalized risk disparity), the method substantially improves fairness: sex-wise MAE disparity decreases by 42%, and statistical parity improves by 3.1×. Prediction accuracy also surpasses conventional baselines, including random forests.

📝 Abstract
Predicting external hand load from sensor data is essential for ergonomic exposure assessments, as obtaining this information typically requires direct observation or supplementary data. While machine learning methods have been used to estimate external hand load from worker postures or force exertion data, our findings reveal systematic bias in these predictions due to individual differences such as age and biological sex. To explore this issue, we examined bias in hand load prediction by varying the sex ratio in the training dataset. We found substantial sex disparity in predictive performance, especially when the training dataset is more sex-imbalanced. To address this bias, we developed and evaluated a fair predictive model for hand load estimation that leverages a Variational Autoencoder (VAE) with feature disentanglement. This approach is designed to separate sex-agnostic and sex-specific latent features, minimizing feature overlap. The disentanglement capability enables the model to make predictions based solely on sex-agnostic features of motion patterns, ensuring fair prediction for both biological sexes. Our proposed fair algorithm outperformed conventional machine learning methods (e.g., Random Forests) in both fairness and predictive accuracy, achieving a lower mean absolute error (MAE) difference across male and female sets and improved fairness metrics such as statistical parity (SP) and positive and negative residual differences (PRD and NRD), even when trained on imbalanced sex datasets. These findings emphasize the importance of fairness-aware machine learning algorithms to prevent potential disadvantages in workplace health and safety for certain worker populations.
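The abstract's fairness evaluation rests on two group-wise quantities: the MAE difference between male and female test sets, and statistical parity. A minimal sketch of how such metrics can be computed is below; the threshold-based statistical parity definition and all data here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_mae(y_true, y_pred, group):
    """Mean absolute error computed separately for each group label."""
    return {g: float(np.mean(np.abs(y_true[group == g] - y_pred[group == g])))
            for g in np.unique(group)}

def statistical_parity_gap(y_pred, group, threshold):
    """|P(pred > threshold | group A) - P(pred > threshold | group B)|,
    one common thresholded form of statistical parity for regression."""
    rates = [float(np.mean(y_pred[group == g] > threshold))
             for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# toy hand-load predictions (kg) for two sex groups coded 0/1
y_true = np.array([10.0, 12.0, 9.0, 11.0, 10.0, 12.0])
y_pred = np.array([10.5, 11.0, 9.2, 13.0, 8.0, 12.5])
group  = np.array([0, 0, 0, 1, 1, 1])

maes = group_mae(y_true, y_pred, group)
mae_disparity = abs(maes[0] - maes[1])   # the "MAE difference" the abstract reports
sp_gap = statistical_parity_gap(y_pred, group, threshold=11.0)
```

A fairness-aware model aims to shrink both `mae_disparity` and `sp_gap` without degrading overall accuracy.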
Problem

Research questions and friction points this paper is trying to address.

Detects bias in hand load prediction due to individual differences
Develops fair model using VAE with feature disentanglement
Improves fairness and accuracy in ergonomic exposure assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Variational Autoencoder for fair predictions
Disentangles sex-agnostic and sex-specific features
Improves fairness metrics and predictive accuracy
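The disentanglement idea above can be sketched structurally: the encoder's latent vector is split into sex-agnostic and sex-specific parts, and the load-prediction head reads only the sex-agnostic part, so perturbing the sex-specific features cannot change the prediction. Everything here (dimensions, the linear encoder, the weights) is a hypothetical illustration, not the paper's architecture or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder mapping kinematic features to a latent vector."""
    return np.tanh(x @ W)

# hypothetical sizes: 6 kinematic inputs, 4-dim latent split 2 + 2
W_enc = rng.normal(size=(6, 4))
w_head = rng.normal(size=2)        # prediction head sees only 2 latent dims

x = rng.normal(size=6)             # one sample of motion features
z = encode(x, W_enc)

# split the latent code: first half sex-agnostic, second half sex-specific
z_agnostic, z_specific = z[:2], z[2:]

# hand load prediction depends solely on the sex-agnostic features
load_pred = float(z_agnostic @ w_head)

# replacing the sex-specific part leaves the prediction untouched
load_pred_perturbed = float(z[:2] @ w_head)  # z[2:] never enters the head
```

In the paper's full method, a VAE objective plus fairness-aware penalties push sex-related variation into the discarded half of the latent space; the sketch only shows the resulting prediction-path structure.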
Arafat Rahman
PhD Student, Systems Engineering, University of Virginia
Machine Learning · Deep Learning · Biometrics · Healthcare
Sol Lim
Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, 1145 Perry Street, Blacksburg, VA, USA
Seokhyun Chung
Department of Systems and Information Engineering, University of Virginia, 151 Engineer’s Way, Charlottesville, VA, USA