Advanced Fall Analysis for Elderly Monitoring Using Feature Fusion and CNN-LSTM: A Multi-Camera Approach

Volume 9, Issue 6, Page No 12-20, 2024

Authors: Win Pa Pa San 1, a), Myo Khaing 2

1 Image and Signal Processing Lab, University of Computer Studies, Mandalay, Mandalay, 05071, Myanmar
2 Faculty of Computer Science, University of Computer Studies, Mandalay, Mandalay, 05071, Myanmar

a) Author to whom correspondence should be addressed. E-mail: winpapasan@ucsm.edu.mm

Adv. Sci. Technol. Eng. Syst. J. 9(6), 12-20 (2024); DOI: 10.25046/aj090602

Keywords: Feature Fusion, Human Silhouette Image (HSI), Silhouette History Images (SHI), Dense Optical Flow (DOF), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM)


As society ages, the imbalance between the number of family caregivers and the number of elderly individuals grows, leaving seniors in many regions without adequate support. This situation has fueled interest in automatic health monitoring systems, particularly fall detection, because falls pose significant health risks to older adults. This research presents a vision-based fall detection system that employs computer vision and deep learning to improve elderly care. Traditional systems often struggle to detect falls accurately from varying camera angles because they typically rely on static assessments of body posture. To tackle this challenge, we apply a feature fusion strategy within a deep learning framework to improve detection accuracy across diverse viewpoints. The process begins by generating a Human Silhouette Image (HSI) through background subtraction. By combining the silhouette images of two consecutive frames, we create a Silhouette History Image (SHI), which captures the shape features of the individual. In parallel, Dense Optical Flow (DOF) extracts motion features from the same frames, and these are merged with the SHI to form a single fused input image. This fused representation is processed by a pre-trained Convolutional Neural Network (CNN) to extract deep features, and a Long Short-Term Memory (LSTM) recurrent neural network is then trained on these feature sequences to recognize patterns indicative of fall events. The approach is validated on the UP-Fall detection dataset, which contains 1,122 action videos; the proposed system achieves 99% accuracy in fall detection.
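For readers who want a concrete picture of the pipeline, the sketch below illustrates the processing steps described in the abstract using Python with OpenCV and TensorFlow/Keras. It is a minimal illustration, not the authors' implementation: the MOG2 background subtractor, the Farneback optical flow, the union rule used to build the SHI, the channel layout of the fused image, the MobileNetV2 backbone, the 30-frame clip length, and all layer sizes are assumptions chosen only to make the example self-contained and runnable.

# Minimal sketch of the described pipeline (assumed details noted above;
# none of these specifics are taken from the paper itself).
import cv2
import numpy as np
import tensorflow as tf

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def human_silhouette(frame_bgr):
    # Human Silhouette Image (HSI): binary foreground mask from background subtraction.
    mask = bg_subtractor.apply(frame_bgr)
    _, hsi = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    return hsi

def silhouette_history(hsi_prev, hsi_curr):
    # Silhouette History Image (SHI): shape cue built from two consecutive
    # silhouettes (a simple union is assumed here).
    return cv2.bitwise_or(hsi_prev, hsi_curr)

def flow_magnitude(prev_gray, curr_gray):
    # Dense Optical Flow (DOF): motion cue from the same pair of frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def fuse(shi, dof):
    # Stack shape and motion cues into one 3-channel input for the CNN
    # (the channel layout is an illustrative choice).
    blend = cv2.addWeighted(shi, 0.5, dof, 0.5, 0)
    return np.dstack([shi, dof, blend])

# CNN feature extractor plus LSTM classifier over a sequence of fused images.
SEQ_LEN, H, W = 30, 224, 224             # assumed clip length and input size
backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                             input_shape=(H, W, 3))
backbone.trainable = False               # deep features from a pre-trained CNN

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, H, W, 3)),
    tf.keras.layers.TimeDistributed(backbone),       # per-frame deep features
    tf.keras.layers.LSTM(128),                        # temporal fall pattern
    tf.keras.layers.Dense(1, activation="sigmoid"),   # fall / no-fall
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

In practice, each video clip would be converted to a sequence of fused images with the helper functions above, the frames rescaled and preprocessed for the chosen backbone, and the LSTM head trained on labeled fall and non-fall clips such as those in the UP-Fall dataset.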

Received: 15 September 2024 | Revised: 30 September 2024 | Accepted: 15 October 2024 | Online: 30 November 2024

