(Algotech) ML Specialization

Boosting is an ensemble method in which weak learners are trained sequentially; each model learns from its predecessors and tries to correct their errors.
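
A minimal from-scratch sketch of this idea for regression, assuming NumPy and scikit-learn are available (the toy data, `n_rounds`, and `learning_rate` are illustrative choices, not from these notes): each decision stump is fit to the residuals left by the ensemble built so far, which is the "learn from predecessors' errors" step in code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_rounds = 50
learning_rate = 0.1
prediction = np.full_like(y, y.mean())    # start from a constant model
stumps = []

for _ in range(n_rounds):
    residuals = y - prediction            # errors of the ensemble so far
    stump = DecisionTreeRegressor(max_depth=1)
    stump.fit(X, residuals)               # weak learner targets those errors
    prediction += learning_rate * stump.predict(X)
    stumps.append(stump)

print("final training MSE:", np.mean((y - prediction) ** 2))
```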

Usage

  • 👍 Reduces bias of the weak learners, often achieving high accuracy
  • 👍 Easy to understand
  • 👍 Requires minimal preprocessing (e.g., no feature scaling for tree-based learners); see the usage sketch after this list
  • 👍 Handles both numerical and categorical features (if tree-based)
  • 🔻 Less scalable than bagging: each weak learner depends on its predecessor, so the learners cannot be trained in parallel across different servers.
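
A usage sketch, assuming scikit-learn; the dataset and hyperparameters are illustrative. A tree-based boosting model is fit directly on raw, unscaled numeric features, showing the minimal-preprocessing point above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No feature scaling: tree-based boosting splits on raw feature thresholds.
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```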

Methods