🤖 AI Summary
This study addresses the challenge of prospectively predicting key foot–ground interaction parameters—center of pressure (COP) and time of impact (TOI)—prior to foot contact with unstructured terrain. Leveraging a wearable RGB-D camera to capture terrain ahead in real time, the authors propose a lightweight CNN-RNN model that continuously forecasts COP and TOI up to 250 ms before touchdown. This work presents the first vision-based, wearable approach to anticipatory foot–ground interaction prediction, designed for deployment on edge devices to enable real-time control of lower-limb assistive systems. The system achieves mean absolute COP errors of 29.42, 26.82, and 23.72 mm, and TOI errors of 21.14, 20.08, and 17.73 ms at prediction horizons of 150, 100, and 50 ms, respectively, while operating at 60 FPS on consumer-grade hardware.
📝 Abstract
Computer-vision (CV) has been used for environmental classification during gait and is often used to inform control in assistive systems; however, the ability to predict how the foot will contact a changing environment is underexplored. We evaluated the feasibility of forecasting the anterior-posterior (AP) foot center-of-pressure (COP) and time-of-impact (TOI) prior to foot-strike on a level-ground to stair-ascent transition. Eight subjects wore an RGB-D camera on their right shank and instrumented insoles while performing the task of stepping onto the stairs. We trained a CNN-RNN to forecast the COP and TOI continuously within a 250ms window prior to foot-strike, termed the forecast horizon (FH). The COP mean-absolute-error (MAE) at 150, 100, and 50ms FH was 29.42mm, 26.82, and 23.72mm respectively. The TOI MAE was 21.14, 20.08, and 17.73ms for 150, 100, and 50ms respectively. While torso velocity had no effect on the error in either task, faster toe-swing speeds prior to foot-strike were found to improve the prediction accuracy in the COP case, however, was insignificant in the TOI case. Further, more anterior foot-strikes were found to reduce COP prediction accuracy but did not affect the TOI prediction accuracy. We also found that our lightweight model was capable at running at 60 FPS on either a consumer grade laptop or an edge computing device. This study demonstrates that forecasting COP and TOI from visual data was feasible using a lightweight model, which may have important implications for anticipatory control in assistive systems.