Boosting is an ensemble method in which weak learners are trained sequentially: each model learns from its predecessors and tries to correct their errors.
Weak learner
An individual model in the boosting ensemble. It gets this name because it is only expected to perform slightly better than random guessing (e.g., a one-split decision tree, or "stump").
Usage
- 👍 Reduces overfitting
- 👍 Easy to understand
- 👍 Requires minimal preprocessing (e.g., no feature scaling)
- 👍 Handles both numerical and categorical features (if tree-based)
- 🔻 Less scalable than bagging: because each weak learner depends on its predecessor, the learners cannot be trained in parallel on different servers.
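The sequential loop described above can be sketched in pure Python as a minimal gradient-boosting regressor, where each weak learner is a one-split decision stump fit to the residuals left by the ensemble so far. All function names here are illustrative, not from any particular library; the sketch assumes a 1-D numeric feature and squared loss.

```python
def fit_stump(x, residuals):
    """Weak learner: find the one threshold split on x that best
    predicts the residuals (minimum squared error)."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, n_rounds=20, learning_rate=0.5):
    """Train weak learners sequentially: each new stump is fit to the
    errors (residuals) of the ensemble built so far."""
    stumps = []
    preds = [0.0] * len(y)
    for _ in range(n_rounds):
        residuals = [yi - pi for yi, pi in zip(y, preds)]
        stump = fit_stump(x, residuals)        # learn from predecessors' errors
        stumps.append(stump)
        preds = [pi + learning_rate * stump(xi)
                 for pi, xi in zip(preds, x)]  # update ensemble prediction
    return lambda xi: sum(learning_rate * s(xi) for s in stumps)

# Toy data: targets cluster around two levels, ~1.0 for x <= 3 and ~3.1 above.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 3.0, 3.2]
model = boost(x, y)
```

Note that the loop in `boost` is inherently serial, which is the scalability drawback above: stump `k` cannot be fit until the residuals after stump `k - 1` are known.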