precision
precision is the proportion of TRUE positives out of all PREDICTED positives
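In symbols:

$$\text{precision} = \frac{TP}{TP + FP}$$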
sensitivity
recall, or sensitivity, as a classification metric, is the proportion of observations in a class that we correctly predicted as belonging to that class: the ratio of TRUE positives out of all ACTUAL positives
Its formula mirrors that of specificity, which applies the same ratio to the negative class
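Written out, the two metrics are the same ratio computed on opposite classes:

$$\text{recall} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}$$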
recall is important when False Negatives are costlier than False Positives (e.g. cancer detection, where missing a sick patient is worse than a false alarm).
- Out of the passengers who survived, how many did we label correctly?
- Out of the sick patients, how many did we correctly diagnose as sick?
- If we can tolerate more FP (missing a positive is costly) → optimize recall
- If we can tolerate more FN (a false alarm is costly) → optimize precision
- F1 score, or more generally F-beta, takes both precision and recall into account (see the sketch below)
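A minimal sketch using scikit-learn; the labels and predictions here are made up for illustration:

```python
from sklearn.metrics import precision_score, recall_score, fbeta_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # actual labels (1 = positive class)
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]  # model predictions

print(precision_score(y_true, y_pred))      # TP=2, FP=1 -> 0.67
print(recall_score(y_true, y_pred))         # TP=2, FN=2 -> 0.50
print(fbeta_score(y_true, y_pred, beta=2))  # beta > 1 weights recall over precision
```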
Example
Cancer detection
- a model with high recall will identify most people who have cancer (true positives), saving their lives, but at the cost of misdiagnosing healthy individuals as having cancer (false positives), subjecting them to expensive and dangerous treatments
- a model optimized for precision yields confident diagnoses (i.e. someone predicted as having cancer very likely does have it), but at the cost of failing to identify everyone who has the disease (false negatives)
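A toy sketch of this trade-off, with hypothetical model probabilities: lowering the decision threshold of a probabilistic classifier raises recall at the expense of precision.

```python
# 1 = has cancer; scores are made-up predicted probabilities
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.6, 0.5]

for threshold in (0.75, 0.5, 0.35):
    y_pred = [int(s >= threshold) for s in scores]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    print(f"threshold={threshold}: "
          f"precision={tp / (tp + fp):.2f}, recall={tp / (tp + fn):.2f}")
```

Each lower threshold catches more of the actual cancer cases (recall climbs toward 1.0) while admitting more false alarms (precision drops).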