Natural Language Inference (NLI) is the task of determining whether a natural language hypothesis can reasonably be inferred from a given natural language text. Text summarization is the task of producing a condensed version of a text that retains its salient points. Evaluating the quality of generated summaries is challenging: most current methods for evaluating system-generated summaries require human-written reference summaries, making evaluation an expensive endeavour. This work proposes using NLI as an evaluation measure for system-generated summaries, an approach that does not need costly reference summaries. Our results show that NLI can be used with confidence to assess the correctness of summaries produced by abstractive summarizers.
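A minimal sketch of the idea, assuming an off-the-shelf NLI model (here roberta-large-mnli via HuggingFace Transformers, which is not named in the original): the source document is treated as the premise and each summary sentence as a hypothesis, and the summary is scored by its average entailment probability.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed model choice for illustration; any pretrained NLI model would do.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt",
                       truncation=True)  # long documents are truncated to 512 tokens
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()

def score_summary(source: str, summary: str) -> float:
    """Average entailment probability over summary sentences,
    using the source document as the premise."""
    # Naive period-based sentence split, for illustration only.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return sum(entailment_score(source, s) for s in sentences) / len(sentences)
```

A summary whose sentences are all entailed by the source scores close to 1.0, while a hallucinated summary scores lower; no reference summary is needed at any point.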
• Hardware: Processor: Intel i3/i5; RAM: 4 GB; Hard disk: 16 GB
• Software: Operating system: Windows 2000/XP/7/8/10; Tools: Anaconda, Jupyter, Spyder, Flask; Frontend: Python; Backend: MySQL
₹10000 (INR)
2020