Comparison of Bagging and AdaBoost Performance for Multi-Class Data Classification (Perbandingan Performa Bagging dan AdaBoost untuk Klasifikasi Data Multi-Class)

  • Samuel Lukas Universitas Pelita Harapan
  • Osvaldo Vigo Universitas Pelita Harapan
  • Dion Krisnadi
  • Petrus Widjaja Universitas Pelita Harapan


One technique for improving the performance of machine learning algorithms is Ensemble Learning, which combines several machine learning algorithms, commonly referred to as base learners. The purpose of this study is to compare the performance of two Ensemble Learning algorithms: Bootstrap Aggregating (Bagging) and Adaptive Boosting (AdaBoost). The study uses eleven multi-class classification datasets that differ in their characteristics (data proportion, number of records, and problem domain) and in the number of target classes. The results show that models built with the Bagging method tend to outperform those built with the AdaBoost method, with average accuracy and F1 scores of 72.21% and 61% for Bagging versus 66.25% and 53.7% for AdaBoost, respectively. However, hypothesis testing shows that this difference is not statistically significant. In addition, the computation times required to build the Bagging and AdaBoost models do not differ.
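The comparison described in the abstract can be sketched with scikit-learn. The paper does not specify its implementation, base learners, or hyperparameters, so the dataset (Iris), ensemble sizes, and train/test split below are illustrative assumptions only; both ensembles use scikit-learn's default decision-tree base learners.

```python
# Minimal sketch of a Bagging vs. AdaBoost comparison on a multi-class
# dataset, assuming scikit-learn. Dataset and hyperparameters are
# illustrative; the paper evaluated eleven datasets, not this one.
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X, y = load_iris(return_X_y=True)  # small 3-class dataset for illustration
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Both defaults use decision trees as base learners.
models = {
    "Bagging": BaggingClassifier(n_estimators=50, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=50, random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Macro-averaged F1 treats all classes equally, as is common
    # for multi-class evaluation.
    results[name] = (
        accuracy_score(y_test, pred),
        f1_score(y_test, pred, average="macro"),
    )
    print(name, "accuracy=%.3f macro-F1=%.3f" % results[name])
```

On a single small dataset like this, both ensembles typically score similarly; the paper's conclusion rests on averaging over eleven datasets and testing the difference for significance.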
Jul 28, 2022
How to Cite
LUKAS, Samuel et al. Perbandingan Performa Bagging dan AdaBoost untuk Klasifikasi Data Multi-Class. Journal Information System Development (ISD), [S.l.], v. 7, n. 2, p. 7-12, July 2022. ISSN 2528-5114. Available at: <>. Date accessed: 3 Oct. 2022. doi: