
Quora Insincere Questions Classification Using Attention Based Model

Chakraborty, S, Wilson, M, Assi, S, Al-Hamid, AA, Alamran, M, Al-Nahari, A, Mustafina, J, Lunn, J and Al-Jumeily, D (2023) Quora Insincere Questions Classification Using Attention Based Model. In: Lecture Notes on Data Engineering and Communications Technologies, 165. (DaSET2022 conference, Virtual Event).

Quora Insincere Questions Classification Using Attention Based Model.pdf - Accepted Version

Abstract

Online platforms have evolved into an unparalleled storehouse of information. People use social question-and-answer websites such as Quora, Formspring, Stack Overflow, Twitter, and Beepl to ask questions, clarify doubts, and share ideas and expertise with others. A major issue with such Q&A websites is the rise in inappropriate and insincere posts from users without a genuine motive: individuals share harmful and toxic content intended to make a statement rather than to seek helpful answers. In natural language processing (NLP), Bidirectional Encoder Representations from Transformers (BERT) has been a game-changer: it has dominated performance benchmarks and pushed researchers to experiment with and produce similar models, leading to lighter language models that maintain efficiency and performance. This study used pre-trained state-of-the-art language models to classify posted questions as sincere or insincere under limited computation. To address the high computational cost of NLP, the BERT, XLNet, StructBERT, and DeBERTa models were trained on three samples of the data. The results show that even with limited resources, recent transformer-based models outperform previous studies by a notable margin. Among the four, DeBERTa stands out with the highest balanced accuracy (80%), macro F1-score (0.83), and weighted F1-score (0.96).
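As a minimal, self-contained illustration of the metrics reported in the abstract (balanced accuracy, macro F1, weighted F1), the sketch below computes them from scratch for a binary classifier. The toy label arrays are made up to mimic the sincere/insincere class imbalance and are not the paper's Quora data.

```python
from collections import Counter

def per_class_counts(y_true, y_pred, cls):
    # True positives, false positives, false negatives for one class.
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return tp, fp, fn

def f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def classification_metrics(y_true, y_pred):
    classes = sorted(set(y_true))
    support = Counter(y_true)
    recalls, f1s = [], {}
    for c in classes:
        tp, fp, fn = per_class_counts(y_true, y_pred, c)
        recalls.append(tp / support[c])            # recall = TP / (TP + FN)
        f1s[c] = f1(tp, fp, fn)
    balanced_acc = sum(recalls) / len(classes)     # mean of per-class recalls
    macro_f1 = sum(f1s.values()) / len(classes)    # unweighted mean of per-class F1
    weighted_f1 = sum(f1s[c] * support[c] for c in classes) / len(y_true)
    return balanced_acc, macro_f1, weighted_f1

# Toy imbalanced data: 0 = sincere (majority), 1 = insincere (minority).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
ba, macro, weighted = classification_metrics(y_true, y_pred)
print(ba, macro, weighted)  # 0.6875 0.6875 0.8
```

Balanced accuracy averages per-class recall, so the minority (insincere) class counts as much as the majority class; macro F1 likewise treats classes equally, while weighted F1 scales each class's F1 by its support, which is why the paper's weighted F1 (0.96) exceeds its macro F1 (0.83) on the heavily imbalanced Quora data.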

Item Type: Conference or Workshop Item (Paper)
Subjects: R Medicine > RM Therapeutics. Pharmacology
Divisions: Computer Science & Mathematics; Pharmacy & Biomolecular Sciences
Publisher: Springer
Date Deposited: 20 Feb 2023 10:09
Last Modified: 01 Apr 2024 00:50
DOI or ID number: 10.1007/978-981-99-0741-0_26
URI: https://researchonline.ljmu.ac.uk/id/eprint/18923