Enhanced Neural Text Summarization With Syntactic And Headline Insights

Authors

  • Hadiyafatima, B.E. Student, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.
  • Iqraa Abdul Quddus, B.E. Student, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.
  • Juveria Hashmi, B.E. Student, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.
  • Dr. Syed Asadullah Hussaini, Professor, Department of Computer Science and Engineering, ISL Engineering College, Hyderabad, India.

DOI:

https://doi.org/10.63665/ndmha557

Keywords:

TextRank, Transformer models, DistilBART, syntax-augmented neural networks, CNN/DailyMail dataset

Abstract

Text summarization plays a vital role in condensing large volumes of information into concise, coherent summaries. Traditional extractive methods often fail to capture semantic richness, while abstractive models struggle with grammatical correctness and factual accuracy. The base paper introduced a syntax-augmented, headline-aware neural model that leverages syntactic features and headline guidance to improve summary generation. In this work, we extend that idea by evaluating both classical and modern approaches on the CNN/DailyMail dataset. First, we establish a baseline using the unsupervised TextRank algorithm, which produces extractive summaries but achieves only limited ROUGE scores. We then evaluate a state-of-the-art pretrained Transformer, DistilBART, which has already been fine-tuned on CNN/DailyMail. Without any additional training, DistilBART generates abstractive summaries with significantly higher accuracy, outperforming the base model on the ROUGE-1 and ROUGE-2 metrics. Our experimental results show that incorporating headline cues and syntactic awareness (as in the base paper) improves traditional LSTM-based models, but modern pretrained Transformers deliver superior performance and efficiency. The findings trace the evolution of summarization methods from handcrafted features to large-scale pretrained architectures, offering both practical insights for deployment and a strong foundation for future research.
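To make the baseline concrete, below is a minimal TextRank sketch: sentences are ranked by PageRank over a TF-IDF cosine-similarity graph and the top-k are kept as the extractive summary. The similarity measure, graph construction, and the value of k are illustrative assumptions, not configuration details taken from the paper.

```python
# Minimal TextRank sketch (illustrative choices, not the paper's
# exact setup): PageRank over a sentence-similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, k=3):
    # Represent each sentence as a TF-IDF vector.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    # Edge weights = pairwise cosine similarity between sentences.
    graph = nx.from_numpy_array(cosine_similarity(tfidf))
    scores = nx.pagerank(graph)
    # Keep the k highest-scoring sentences, restored to document order.
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:k])
    return " ".join(sentences[i] for i in top)
```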
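The Transformer comparison can likewise be reproduced with off-the-shelf tooling. A minimal sketch follows, assuming the widely used sshleifer/distilbart-cnn-12-6 checkpoint (a DistilBART already fine-tuned on CNN/DailyMail, as the abstract describes) and the rouge-score package for ROUGE-1/ROUGE-2; the paper may have used a different variant or generation settings.

```python
# Zero-shot DistilBART summarization plus ROUGE scoring. The checkpoint
# name and length limits are assumptions for illustration.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

def summarize_and_score(article, reference):
    # Generate an abstractive summary without any extra training.
    summary = summarizer(article, max_length=142, min_length=56,
                         truncation=True)[0]["summary_text"]
    # rouge_score expects (target, prediction) argument order.
    return summary, scorer.score(reference, summary)
```

No gradient updates are involved; the comparison is purely inference-time, which is what makes the pretrained model attractive for deployment.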



Published

2026-04-26

How to Cite

Enhanced Neural Text Summarization With Syntactic And Headline Insights. (2026). International Journal of Multidisciplinary Engineering In Current Research, 11(4s), 86-92. https://doi.org/10.63665/ndmha557