A tree-based boosting technique in which each subsequent weak learner is tweaked to focus on the points its predecessors predicted incorrectly.

To do so, the model gives wrongly predicted points more weight and correctly predicted points less weight.

How to assign weights

For example, the three plus (+) signs misclassified in Iteration 1 are given more weight in Iteration 2, pushing that learner to label them correctly (at the expense of other points). After 3 iterations, all learners are combined into an ensemble via a weighted vote.
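The reweighting loop above can be sketched by hand with depth-1 trees as the weak learners. The toy data, the fixed 3 iterations, and the exact update formula below are illustrative assumptions, not scikit-learn's internal implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data (made up for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

n = len(y)
w = np.full(n, 1 / n)            # start with uniform weights
learners, alphas = [], []

for t in range(3):
    # Train a weak learner on the current weights
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum() / w.sum()                 # weighted error
    alpha = 0.5 * np.log((1 - err) / (err + 1e-10))    # learner's vote weight
    # Up-weight misclassified points, down-weight correct ones
    w *= np.exp(alpha * np.where(pred != y, 1.0, -1.0))
    w /= w.sum()
    learners.append(stump)
    alphas.append(alpha)

# Final prediction: weighted vote over the 3 learners
votes = sum(a * np.where(m.predict(X) == 1, 1, -1)
            for a, m in zip(alphas, learners))
final = (votes > 0).astype(int)
```

Each round, the misclassified points carry more of the total weight, so the next stump is forced to attend to them, which is the mechanism the iterations in the figure illustrate.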

Usage

  • 👍 Can handle numeric and categorical features
  • Functions well even when there is collinearity among features
  • Robust against outliers (a property shared by all tree-based models)

Code

from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# estimator = the weak learner (named base_estimator in scikit-learn < 1.2)
# n_estimators = maximum number of weak learners used
model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                           n_estimators=4)

model.fit(x_train, y_train)
model.predict(x_test)