CLASSIFICATION OF DURIAN LEAF IMAGES USING CNN (CONVOLUTIONAL NEURAL NETWORK) ALGORITHM

Lely Mustikasari Mahardhika Fitriani, Yovi Litanianda

Abstract


This research investigates the classification of durian leaf images using Convolutional Neural Network (CNN) algorithms, focusing on the AlexNet, InceptionNetV3, and MobileNet architectures. The study begins with the collection of a dataset comprising 1604 training images, 201 validation images, and 201 test images spanning five classes of durian leaves: Bawor, Duri Hitam, Malica, Montong, and Musang King, chosen for their varied characteristics such as taste, texture, and aroma. Data preprocessing covered several steps to prepare the images for model training: data augmentation to increase variability, pixel normalization to standardize intensity values, and resizing to 150x150 pixels to match the input requirements of the CNN models. After preprocessing, the CNN models were implemented and trained using deep learning frameworks such as TensorFlow and PyTorch. Model performance was evaluated with a confusion matrix, from which classification accuracy, precision, sensitivity, specificity, and F-score were derived. The results show that InceptionNetV3 and AlexNet classified the test images without any misclassifications, demonstrating their robustness and precision in identifying durian leaf images. The training accuracy of both models approached 100% within the first few epochs and then stabilized, while the loss values decreased sharply, indicating effective learning without overfitting. MobileNet also reached near-100% training accuracy with low loss, but its confusion matrix revealed several misclassifications in every class, indicating that further tuning of the architecture or the preprocessing is needed. In conclusion, InceptionNetV3 and AlexNet proved to be highly efficient and accurate architectures for classifying durian leaf images, making them suitable for practical applications, whereas MobileNet requires further refinement to reach the same level of accuracy and reliability. This study highlights the importance of selecting an appropriate CNN architecture and of thorough preprocessing to optimize model performance in image classification tasks.
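
As a hedged illustration of the pipeline the abstract describes (augmentation, pixel normalization, 150x150 resizing, CNN training, and confusion-matrix evaluation), the sketch below uses TensorFlow/Keras with an InceptionV3 backbone (called InceptionNetV3 in the abstract). It is not the authors' code: the directory paths, augmentation settings, batch size, epoch count, and use of ImageNet weights are assumptions made for illustration only.

```python
# Minimal sketch, not the authors' implementation. It assumes a directory
# layout such as data/train, data/val, data/test with one sub-folder per
# class (Bawor, Duri Hitam, Malica, Montong, Musang King).
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report, confusion_matrix

IMG_SIZE = (150, 150)   # resizing step described in the abstract
BATCH_SIZE = 32         # assumed; not stated in the paper
NUM_CLASSES = 5

# Augmentation + pixel normalization; the exact augmentation parameters are
# assumptions, the paper only states that augmentation and normalization
# were applied.
train_gen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=20, zoom_range=0.1,
    horizontal_flip=True)
plain_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)

train_ds = train_gen.flow_from_directory(
    "data/train", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical")
val_ds = plain_gen.flow_from_directory(
    "data/val", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical")
test_ds = plain_gen.flow_from_directory(
    "data/test", target_size=IMG_SIZE, batch_size=BATCH_SIZE,
    class_mode="categorical", shuffle=False)

# One of the three evaluated backbones, here InceptionV3 with a 5-way
# softmax head on 150x150x3 inputs.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
model = tf.keras.Sequential(
    [base, tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)

# Confusion matrix and per-class precision, recall (sensitivity) and
# F-score on the test set, as reported in the study.
y_pred = np.argmax(model.predict(test_ds), axis=1)
y_true = test_ds.classes
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=list(test_ds.class_indices)))
```

The same skeleton applies to AlexNet and MobileNet by swapping the backbone; specificity is not reported by classification_report and would be computed per class from the confusion matrix.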




DOI: https://doi.org/10.33387/jiko.v7i2.8576
