Evaluation of IndoBERT and RoBERTa: Performance of Indonesian Language Transformer Models in Sentiment Classification
DOI: https://doi.org/10.33387/jiko.v8i2.9988