Automated Detection of Emotional States from Speech using Hybrid Deep Learning Model

Authors

  • Balaji Venkateswaran Research scholar, Department of Computer Science & Engineering, Shri Venkateshwara University, Gajraula, UP, India
  • Jyotirmay Mishra Research scholar, Department of Computer Science & Engineering, Shri Venkateshwara University, Gajraula, UP, India
  • Amit Kumar Ahuja Department of Electronics and Communication Engineering, JSS Academy of Technical Education, Sector 62, Gautam Buddha Nagar, Noida, UP, India
  • Rahul Kumar Jain Project Lead, Nagarro, Gurgaon, Haryana, India
  • Sanjeev Kumar Assistant Professor, Department of Computer Science, Maharaja Agrasen Institute of Technology, Rohini Sector-22, New Delhi, India
  • Arshad Rafiq Khan Research scholar, Department of Computer Science and Engineering, Shri Venkateshwara University, Gajraula, UP, India

Keywords:

Deep Learning, LSTM, CNN, SVM, RAVDESS

Abstract

Speech Emotion Recognition (SER) is an essential yet challenging field with applications across domains such as psychology, speech therapy, and customer service. In this paper, we present a novel approach to SER using hybrid deep learning techniques, with a particular focus on recurrent neural networks. The proposed model is trained on carefully labeled datasets containing diverse speech samples that represent different emotional states. By analyzing critical audio features such as pitch, rhythm, and prosody, the system aims to improve emotion detection accuracy on unseen speech data. This work seeks to advance SER by enhancing both precision and reliability, while also providing deeper insight into the complex connection between emotions and speech patterns. Our approach uses Long Short-Term Memory (LSTM) neural networks, which capture the temporal dependencies crucial for recognizing emotions in speech. The LSTM model is trained on a comprehensive dataset covering a wide range of emotional states, and its performance is evaluated through extensive experimentation. The results show that our method outperforms conventional techniques, underscoring the effectiveness of LSTM for speech emotion tasks. This research contributes to the development of emotion recognition technology, with promising applications in human-computer interaction, mental health monitoring, and sentiment analysis.
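The abstract does not specify the implementation of the LSTM pipeline; the Python sketch below illustrates one plausible realization, assuming MFCC feature sequences extracted with librosa and a stacked-LSTM softmax classifier over the eight RAVDESS emotion classes. The feature settings, layer sizes, and helper names (extract_features, build_model) are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of an LSTM-based speech emotion classifier (assumed setup:
# MFCC sequences as input, the 8 RAVDESS emotion labels as output).
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["neutral", "calm", "happy", "sad",
            "angry", "fearful", "disgust", "surprised"]  # RAVDESS label set
N_MFCC = 40       # MFCC coefficients per frame (assumed)
MAX_FRAMES = 200  # sequences padded/truncated to this length (assumed)

def extract_features(wav_path: str) -> np.ndarray:
    """Load a clip and return a fixed-length (MAX_FRAMES, N_MFCC) MFCC matrix."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC).T  # (frames, N_MFCC)
    if mfcc.shape[0] < MAX_FRAMES:
        mfcc = np.pad(mfcc, ((0, MAX_FRAMES - mfcc.shape[0]), (0, 0)))
    return mfcc[:MAX_FRAMES]

def build_model() -> tf.keras.Model:
    """Stacked LSTM over the MFCC sequence, softmax over the emotion classes."""
    model = models.Sequential([
        layers.Input(shape=(MAX_FRAMES, N_MFCC)),
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(64),
        layers.Dropout(0.3),
        layers.Dense(len(EMOTIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (paths and labels are placeholders):
# X = np.stack([extract_features(p) for p in wav_paths])
# y = np.array(labels)  # integer indices into EMOTIONS
# build_model().fit(X, y, validation_split=0.2, epochs=30, batch_size=32)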

Published

2024-05-27

How to Cite

Balaji Venkateswaran, Jyotirmay Mishra, Amit Kumar Ahuja, Rahul Kumar Jain, Sanjeev Kumar, & Arshad Rafiq Khan. (2024). Automated Detection of Emotional States from Speech using Hybrid Deep Learning Model. Journal of Computational Analysis and Applications (JoCAAA), 33(06), 404–415. Retrieved from https://eudoxuspress.com/index.php/pub/article/view/796
