Information Sciences, Volume 628, May 2023, Pages 542-557

Bimodal HAR-An efficient approach to human activity analysis and recognition using bimodal hybrid classifiers (Article)

  • aDepartment of Applied Cybernetics, Faculty of Science, University of Hradec Králové, Hradec Králové, 50003, Czech Republic
  • bCollege of Economics and Management, Beijing University of Technology, Beijing, China
  • cDepartment of Mathematics, Faculty of Science, University of Hradec Kralove, Hradec Kralove, Czech Republic
  • dSingidunum University, Belgrade, Serbia
  • eThe Bartlett School of Sustainable Construction, University College London, 1-19 Torrington Place, London, WC1E 7HB, United Kingdom
  • fDepartment of Industrial Engineering, Turkish Naval Academy, National Defence University, Istanbul, Tuzla, 34940, Turkey
  • gSchool of Information Science and Technology, Nantong University, Nantong, 226019, China

Abstract

Human activity recognition (HAR) is an emerging field that identifies human actions in different settings. Activities are recognized through sensors placed in the room or residence where human action is to be observed. Real-world applications and automation employ activity recognition to detect anomalous behavior; for example, patients who walk while advised to rest in bed, or elderly people who fall, must be monitored carefully in hospitals as well as in home-based monitoring systems. HAR is used in security, healthcare, human interaction, and computer vision, with activity monitored through sensors and cameras. There is no general, explicit approach for inferring human activities from sensor data, and both the sensor data and the heuristics applied to it present technological challenges. Several factors must be evaluated to build a reliable activity recognition system, including storage, connectivity, processing, energy efficiency, and system adaptability. Deep learning systems can recognize human activities from prior datasets more effectively. In this study, a hybrid one-dimensional Convolutional Neural Network with Long Short-Term Memory (LSTM) classifier is employed to improve the performance of HAR. It offers a method for automatically and data-adaptively extracting reliable features from raw data, and the model proposes a two-way classification for abstract and individual activity monitoring. Human activities such as walking, sitting, walking downstairs, walking upstairs, laying, and standing, along with mobile phone usage, are considered in this study. We also compare state-of-the-art algorithms such as Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN). The UCI-HAR dataset is used for recognizing human activity in the proposed work. Features such as mean, median, and autoregressive coefficients are derived from the raw data and processed with principal component analysis to make them more reliable. The LSTM model accepts a series of activities, whereas the CNN accepts a single input: the CNN takes the single input data, and each of its outputs is forwarded to the LSTM model, which classifies the activity. The hybrid model achieves 97.89% accuracy with the new feature selection methods, whereas the CNN and LSTM individually produce 92.77% and 92.80% accuracy. © 2023 Elsevier Inc.
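The abstract describes a pipeline in which windows of raw inertial data pass through a 1D CNN whose outputs are forwarded to an LSTM for classification. Below is a minimal Keras sketch of such a hybrid 1D CNN + LSTM classifier for the UCI-HAR raw-signal format (128 timesteps by 9 inertial channels, 6 activity classes). The layer widths, kernel sizes, and training settings are illustrative assumptions, not the configuration reported by the authors.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hybrid_cnn_lstm(timesteps=128, channels=9, n_classes=6):
    """Hybrid 1D CNN + LSTM: convolutional layers extract local motion
    features from raw inertial windows; the LSTM models their temporal
    order before the softmax classifier (hyperparameters are assumed)."""
    model = models.Sequential([
        layers.Input(shape=(timesteps, channels)),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(100),                      # CNN feature sequence -> single vector
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder batch with the UCI-HAR raw-signal shape; integer labels 0..5.
X = np.random.rand(32, 128, 9).astype("float32")
y = np.random.randint(0, 6, size=32)
model = build_hybrid_cnn_lstm()
model.fit(X, y, epochs=1, batch_size=16, verbose=0)

In this sketch the convolutions act on each window independently, and the pooled feature sequence is consumed by the LSTM, which mirrors the abstract's description of CNN outputs being forwarded to the LSTM for the final activity decision.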

Author keywords

1D CNN; Human activity recognition (HAR); Hybrid classifier; LSTM

Indexed keywords

Engineering controlled terms: Brain; Convolution; Convolutional neural networks; Digital storage; Energy efficiency; Human computer interaction; Learning systems; mHealth; Nearest neighbor search; Principal component analysis; Support vector machines
Engineering uncontrolled terms: 1D convolutional neural network; Activity recognition; Anomalous behavior; Convolutional neural network; Human actions; Human activities; Human activity recognition; Hybrid classifier; Sensors data
Engineering main heading: Long short-term memory
  • ISSN: 0020-0255
  • CODEN: ISIJB
  • Source Type: Journal
  • Original language: English
  • DOI: 10.1016/j.ins.2023.01.121
  • Document Type: Article
  • Publisher: Elsevier Inc.

  Correspondence address: Venkatachalam, K.; Department of Applied Cybernetics, Faculty of Science, University of Hradec Králové, Hradec Králové, Czech Republic
© Copyright 2023 Elsevier B.V., All rights reserved.

Cited by 16 documents

Zhao, Y., Shao, J., Lin, X.
CIR-DFENet: Incorporating cross-modal image representation and dual-stream feature enhanced network for activity recognition
(2025) Expert Systems with Applications
Miyoshi, T., Koshino, M., Nambo, H.
Applying MLP-Mixer and gMLP to Human Activity Recognition
(2025) Sensors
Luo, F., Li, A., Khan, S.
Bi-DeepViT: Binarized Transformer for Efficient Sensor-Based Human Activity Recognition
(2025) IEEE Transactions on Mobile Computing
{"topic":{"name":"Pattern Recognition; Human Activity Recognition; Machine Learning","id":291,"uri":"Topic/291","prominencePercentile":99.61754,"prominencePercentileString":"99.618","overallScholarlyOutput":0},"dig":"f53b00d7200e7e0317168e5a20aafc2ee50ff942a175eb23341dacb8c4d59d8f"}

SciVal Topic Prominence

Topic:
Prominence percentile: