Computer Vision Based Transfer Learning-Aided Transformer Model for Fall Detection and Prediction
Falls pose significant risks to individuals' well-being and independence and are a widespread public health concern. Swift detection, and even prediction, of fall risk is crucial for implementing effective measures that mitigate the adverse consequences of such incidents. This study presents a new framework for identifying and forecasting fall risks. Our approach uses a novel transformer model trained on 2D poses extracted with an off-the-shelf pose extractor, combined with transfer learning. The transformer is first trained on a large dataset of 2D poses of general actions; most of its layers are then frozen and only the last few are fine-tuned on relatively small fall detection and fall prediction datasets. Experimental results show that the proposed method outperforms traditional machine learning approaches (e.g., SVM and Decision Tree) and deep learning approaches (e.g., LSTM, CNN, ST-GCN, and PoseC3D) on both fall detection and prediction tasks across several datasets.
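As a rough illustration of the transfer-learning recipe described in the abstract (pretrain a pose transformer on general actions, then freeze most layers and fine-tune only the last few on a smaller fall dataset), the following PyTorch sketch is provided. All module names, dimensions, and hyperparameters are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn


class PoseTransformer(nn.Module):
    """Hypothetical transformer classifier over sequences of 2D poses."""

    def __init__(self, num_joints=17, seq_len=30, d_model=128,
                 num_layers=4, num_heads=4, num_classes=60):
        super().__init__()
        # Each frame is a flattened set of 2D joint coordinates (x, y per joint).
        self.embed = nn.Linear(num_joints * 2, d_model)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                 # x: (batch, seq_len, num_joints * 2)
        h = self.embed(x) + self.pos      # add learned positional encoding
        h = self.encoder(h)
        return self.head(h.mean(dim=1))   # temporal average pooling


# 1) Pretrain on a large general-action pose dataset (training loop omitted).
model = PoseTransformer(num_classes=60)

# 2) Freeze all parameters, then unfreeze only the last encoder layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.encoder.layers[-1].parameters():
    p.requires_grad = True

# 3) Replace the head for the smaller binary fall-detection task and fine-tune.
model.head = nn.Linear(128, 2)            # new head is trainable by default
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

Freezing is implemented here by disabling gradients everywhere except the final encoder layer, while the freshly created classification head remains trainable, which mirrors the freeze-then-fine-tune strategy the abstract describes.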
Funding
- European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie actions (Grant Number: 778602 ULTRACEPT); Funder ID: 10.13039/100010661
School affiliated with
- School of Computer Science (Research Outputs)
Publication Title
- IEEE Access
Volume
- 12
Pages/Article Number
- 28798 - 28809
Publisher
- IEEE
External DOI
ISSN
- 2169-3536
Date Accepted
- 2024-02-14
Date of Final Publication
- 2024-02-20
Open Access Status
- Open Access