LJMU Research Online

Evaluating Few-Shot Prompting Approach Using GPT4 in Comparison to BERT-Variant Language Models in Biomedical Named Entity Recognition

Konduru, KK, Natalia, F, Sudirman, S and Al-Jumeily, D (2025) Evaluating Few-Shot Prompting Approach Using GPT4 in Comparison to BERT-Variant Language Models in Biomedical Named Entity Recognition. In: 2024 17th International Conference on Developments in eSystems Engineering (DeSE), 6th - 8th Nov 2024, Khorfakkan, United Arab Emirates, pp. 340-345.

Text (Accepted Version): Konduru Kranthi Kumar Final.pdf - Download (353kB)

Abstract

The wealth of information associated with the exponential increase in digital text, particularly within the biomedical field, has the potential to advance medical research, improve patient care, and enhance public health outcomes. However, the sheer volume and complexity of this data necessitate advanced computational tools for effective processing and analysis. We investigated the use of several pretrained transformer-based language models, namely BERT, PubMedBERT, SciBERT, ClinicalBERT, and DistilBERT, alongside prompt engineering with GPT-4, in the context of biomedical Named Entity Recognition. Our approach incorporates a comprehensive performance evaluation using standard NLP evaluation metrics as well as computational resource usage metrics such as training time, memory usage, and inference time. Through this multifaceted approach, we sought to determine how a few-shot prompting approach using GPT-4 performs in comparison to the BERT-variant language models, while also identifying models that not only excel in recognition performance but are also computationally affordable. Our experimental results show that even the most basic transformer-based language model outperforms the few-shot prompting approach of GPT-4, despite the popularity of the LLM in more general Natural Language Processing tasks.
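The abstract describes a few-shot prompting setup in which GPT-4 is shown a handful of labelled sentences before the query sentence. The following Python sketch illustrates that general pattern using the official OpenAI chat API; the prompt wording, entity types, and exemplars are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: the paper's actual prompt template, entity types,
# and exemplar selection are not reproduced here.
from openai import OpenAI  # assumes the official openai Python client (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical few-shot exemplars for biomedical NER (entity labels assumed)
few_shot_examples = [
    ("Treatment with metformin reduced HbA1c levels.",
     '[{"text": "metformin", "type": "CHEMICAL"}, {"text": "HbA1c", "type": "PROTEIN"}]'),
    ("Mutations in BRCA1 increase breast cancer risk.",
     '[{"text": "BRCA1", "type": "GENE"}, {"text": "breast cancer", "type": "DISEASE"}]'),
]

def build_messages(sentence: str) -> list[dict]:
    """Assemble a chat prompt: task instruction, labelled exemplars, then the query."""
    messages = [{"role": "system",
                 "content": "Extract biomedical named entities from the sentence and return "
                            "them as a JSON list of {text, type} objects."}]
    for text, labels in few_shot_examples:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": labels})
    messages.append({"role": "user", "content": sentence})
    return messages

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # deterministic output for evaluation
    messages=build_messages("Aspirin inhibits COX-1 and COX-2 activity."),
)
print(response.choices[0].message.content)
```

For the BERT-variant side of the comparison, a typical setup loads a fine-tuned token-classification model and records resource usage alongside the predictions. This sketch uses the Hugging Face transformers library with a placeholder model identifier; the paper's actual models, datasets, and measurement protocol are not reproduced here.

```python
# Illustrative sketch only: model choice and measurement details are placeholders.
import time
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "dslim/bert-base-NER"  # hypothetical stand-in for a BERT-variant NER model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
model.eval()

sentence = "Aspirin inhibits COX-1 and COX-2 activity."
inputs = tokenizer(sentence, return_tensors="pt")

# Measure inference time, one of the resource-usage metrics the abstract mentions.
start = time.perf_counter()
with torch.no_grad():
    logits = model(**inputs).logits
elapsed = time.perf_counter() - start

# Map predicted label ids back to tag names for each token.
predictions = logits.argmax(dim=-1)[0].tolist()
labels = [model.config.id2label[i] for i in predictions]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, labels)), f"inference: {elapsed * 1000:.1f} ms")
```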

Item Type: Conference or Workshop Item (Paper)
Additional Information: © 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computer Science and Mathematics
Publisher: IEEE
SWORD Depositor: A Symplectic
Date Deposited: 13 Mar 2025 13:41
Last Modified: 13 Mar 2025 13:41
DOI or ID number: 10.1109/dese63988.2024.10911889
URI: https://researchonline.ljmu.ac.uk/id/eprint/25872