UCSY's Research Repository

UTILIZING ROBERTA INTERMEDIATE LAYERS AND FINE-TUNING FOR SENTENCE CLASSIFICATION

dc.contributor.author Soe, Eaint Thet Hmu
dc.date.accessioned 2023-01-03T11:53:28Z
dc.date.available 2023-01-03T11:53:28Z
dc.date.issued 2022-12
dc.identifier.uri https://onlineresource.ucsy.edu.mm/handle/123456789/2772
dc.description.abstract Text classification is increasingly challenging due to the scarcity of standardized labeled data in the Myanmar NLP domain. Most existing Myanmar research has relied on deep learning models built on context-independent word embeddings, such as Word2Vec, GloVe, and fastText, in which each word has a fixed representation irrespective of its context. Meanwhile, context-based pre-trained language models such as BERT and RoBERTa have recently advanced the state of the art in natural language processing. In this paper, experiments are conducted to enhance sentiment classification performance by utilizing the transfer learning ability of RoBERTa. Existing work based on pretrained models uses only the last output layer of RoBERTa and ignores the semantic knowledge in the intermediate layers. This research explores the potential of utilizing RoBERTa's intermediate layers to enhance fine-tuning performance. To show generality, a Myanmar pretrained RoBERTa model (MyanBERTa) [1] and a multilingual pretrained model (XLM-RoBERTa) [3] are also compared. The effectiveness and generality of utilizing intermediate layers are demonstrated and discussed in the experimental results. en_US
dc.language.iso en en_US
dc.publisher University of Computer Studies, Yangon en_US
dc.subject ROBERTA INTERMEDIATE LAYERS AND FINE-TUNING en_US
dc.title UTILIZING ROBERTA INTERMEDIATE LAYERS AND FINE-TUNING FOR SENTENCE CLASSIFICATION en_US
dc.type Thesis en_US
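
The abstract describes combining RoBERTa's intermediate hidden states with fine-tuning instead of using only the final output layer. Below is a minimal sketch of one common way to realize this idea with the Hugging Face transformers library; the choice of the xlm-roberta-base checkpoint, the concatenation of the <s> (CLS) token representation from the last four layers, and the class name RobertaIntermediateLayerClassifier are illustrative assumptions, not the thesis's exact method.

# Sketch (not the thesis implementation): sentence classifier that pools
# hidden states from several RoBERTa layers rather than only the last one.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RobertaIntermediateLayerClassifier(nn.Module):
    def __init__(self, model_name="xlm-roberta-base", num_labels=2, num_last_layers=4):
        super().__init__()
        # output_hidden_states=True makes the encoder return every layer's output
        self.encoder = AutoModel.from_pretrained(model_name, output_hidden_states=True)
        self.num_last_layers = num_last_layers
        hidden_size = self.encoder.config.hidden_size
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(hidden_size * num_last_layers, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # hidden_states: tuple of (embedding output + one tensor per layer),
        # each of shape (batch, seq_len, hidden_size)
        hidden_states = outputs.hidden_states
        # Take the first-token (<s>/CLS) representation from the last N layers
        cls_per_layer = [h[:, 0, :] for h in hidden_states[-self.num_last_layers:]]
        pooled = torch.cat(cls_per_layer, dim=-1)
        return self.classifier(self.dropout(pooled))

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = RobertaIntermediateLayerClassifier()
batch = tokenizer(["This movie was great!"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, num_labels)

The same wrapper applies unchanged to a Myanmar checkpoint such as MyanBERTa, since only the model_name argument differs; how the pooled intermediate representations are actually combined in the thesis (concatenation, weighted sum, or per-layer classifiers) is not specified in the abstract.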

