Abstractive Text Summarization using Deep Learning

Authors

  • Hemanth H M, Dept. of Electronics and Communication, Sapthagiri College of Engineering, Bangalore, India
  • Akarsh B A, Dept. of Electronics and Communication, Sapthagiri College of Engineering, Bangalore, India
  • Deepak K, Dept. of Electronics and Communication, Sapthagiri College of Engineering, Bangalore, India
  • L Vishnu Vardhan Reddy, Dept. of Electronics and Communication, Sapthagiri College of Engineering, Bangalore, India
  • Padmavathi C, Dept. of Electronics and Communication, Sapthagiri College of Engineering, Bangalore, India

Keywords

LSTMs, Encoder-Decoder, Attention

Abstract

In the current era, organizing data has become a tedious process: the volume of data grows every day, and storing and managing it is increasingly difficult. One way to solve this problem is to summarize the data, which saves a great deal of storage and makes the data easier to manage. In this paper we examine various deep learning techniques and methods for summarizing text data and conclude that an encoder-decoder architecture built from LSTMs is efficient at producing optimal results. This is because LSTMs are good at remembering long dependencies: they retain the context of the preceding text in memory and use it to predict the next sequence. We first set up the model and train it to predict the target sequence. During training, the encoder processes an input and stores its context, while the decoder predicts the next word from the previous inputs. A new test input, for which the target is unknown, is then given to the model for evaluation. An attention mechanism is used in this model because it attends to the important words in a sentence, both to understand the context of that sentence and to generate new words from it.
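To make the described pipeline concrete, below is a minimal sketch of an encoder-decoder LSTM with dot-product (Luong-style) attention, written in PyTorch. The class names, vocabulary size, and dimensions are illustrative assumptions for exposition, not the exact configuration used in the paper.

# Minimal encoder-decoder LSTM with attention (a sketch; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                      # src: (batch, src_len) token ids
        outputs, state = self.lstm(self.embed(src))
        return outputs, state                    # outputs keep the context of every source step

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim * 2, vocab_size)

    def forward(self, tgt, enc_outputs, state):  # tgt: previous target words (teacher forcing)
        dec_outputs, state = self.lstm(self.embed(tgt), state)
        # Dot-product attention: score every source position against each decoder
        # step, so the decoder attends to the important words in the source
        # sentence when predicting the next word.
        scores = torch.bmm(dec_outputs, enc_outputs.transpose(1, 2))
        weights = F.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_outputs)
        logits = self.out(torch.cat([dec_outputs, context], dim=-1))
        return logits, state

# Toy usage: train with cross-entropy between logits and the target summary tokens.
enc, dec = Encoder(vocab_size=5000), AttnDecoder(vocab_size=5000)
src = torch.randint(0, 5000, (2, 30))            # a batch of two source articles
tgt = torch.randint(0, 5000, (2, 8))             # their (shifted) target summaries
enc_outputs, state = enc(src)                    # encoder stores the input context
logits, _ = dec(tgt, enc_outputs, state)         # (2, 8, 5000) next-word scores

At test time, when the target is unknown, the decoder is instead fed its own previous prediction one step at a time, starting from a start-of-sequence token.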

Published

2022-08-10

How to Cite

[1] H. H M, A. B A, D. K, L. V. V. Reddy, and P. C, “Abstractive Text Summarization using Deep Learning”, pices, pp. 78-83, Aug. 2022.

Section

Articles