ISSN: 3048-6815

Hybrid Attention-Based Deep Learning Model for Enhanced Sentiment Analysis in Noisy Social Media Data: A Context-Aware Approach

Abstract

Social media platforms have become crucial sources of information for understanding public opinion. However, the inherent noisiness and contextual complexity of social media data pose significant challenges for accurate sentiment analysis. This paper presents a novel hybrid attention-based deep learning model designed to address these challenges. Our approach combines the strengths of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), augmented with multiple attention mechanisms, to effectively capture both local and global contextual information. The model also incorporates a pre-processing module for noise reduction and normalization. Extensive experiments on benchmark datasets demonstrate that our proposed model outperforms state-of-the-art sentiment analysis techniques, particularly in handling noisy and contextually ambiguous social media text. The results highlight the effectiveness of the hybrid architecture and attention mechanisms in capturing nuanced sentiment expressions, contributing to more accurate and robust sentiment analysis systems.
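The abstract does not specify layer sizes or the exact attention variants used, so the following minimal PyTorch sketch only illustrates the kind of architecture described: a noise-normalization step, parallel convolutional filters for local n-gram features, a bidirectional LSTM for global context, and an additive attention layer that pools the fused representations. Every concrete choice here (the names normalize_tweet and HybridAttentionSentiment, kernel sizes 2-4, hidden dimensions) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: the paper's exact layers, attention variants,
# and pre-processing rules are not given in this excerpt; all choices
# below are assumptions made for demonstration.
import re

import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize_tweet(text: str) -> str:
    """Hypothetical noise-reduction step: strip URLs and @-mentions,
    squeeze repeated characters ("soooo" -> "soo"), and lowercase."""
    text = re.sub(r"https?://\S+|@\w+", " ", text)
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)
    return text.lower().strip()


class HybridAttentionSentiment(nn.Module):
    """CNN branch for local n-gram features, BiLSTM branch for global
    context, fused token-wise and pooled by additive attention."""

    def __init__(self, vocab_size: int, emb_dim: int = 100,
                 n_filters: int = 64, hidden: int = 64, n_classes: int = 3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Parallel convolutions over 2-, 3-, and 4-gram windows (assumed).
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k, padding=k // 2)
            for k in (2, 3, 4)
        )
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        feat_dim = 3 * n_filters + 2 * hidden
        self.attn = nn.Linear(feat_dim, 1)   # additive attention scorer
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        e = self.embed(token_ids)                       # (B, T, E)
        conv_in = e.transpose(1, 2)                     # (B, E, T)
        # Truncate each conv output to T so the branches align token-wise.
        local = torch.cat(
            [F.relu(c(conv_in))[..., :e.size(1)] for c in self.convs], dim=1
        ).transpose(1, 2)                               # (B, T, 3*F)
        global_ctx, _ = self.bilstm(e)                  # (B, T, 2*H)
        fused = torch.cat([local, global_ctx], dim=-1)  # (B, T, D)
        weights = torch.softmax(self.attn(fused).squeeze(-1), dim=1)
        doc = (weights.unsqueeze(-1) * fused).sum(dim=1)  # attention pooling
        return self.out(doc)                            # (B, n_classes)
```

In a design like this, the attention weights assign each token an importance score, which is what lets the classifier down-weight residual noise tokens that survive pre-processing while still attending to sentiment-bearing words.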


How to Cite

Narendra Kumar. (2025). Hybrid Attention-Based Deep Learning Model for Enhanced Sentiment Analysis in Noisy Social Media Data: A Context-Aware Approach. JANOLI International Journal of Artificial Intelligence and its Applications, Issue 4.