Tag: evaluation metrics

  • Fast optimization of classification thresholds

    Binary classification problems (target/non-target) are often modeled as a pair (f, t), where f is our model, which maps input vectors to scores, and t is our threshold, such that we predict x to be of the target class iff f(x) ≥ t. Otherwise, we predict it to be of the non-target class. The threshold is usually set to 0.5, but this need not […]
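    The (f, t) setup above can be sketched in a few lines. This is an illustrative toy, not the post's implementation; the names `predict` and `scores` are assumptions for the example.

    ```python
    import numpy as np

    def predict(scores, t=0.5):
        # Predict target class (1) iff score >= t, else non-target (0).
        # t plays the role of the threshold in the (f, t) pair above.
        return (np.asarray(scores) >= t).astype(int)

    scores = np.array([0.1, 0.5, 0.8, 0.3])  # outputs of some model f
    print(predict(scores))         # default threshold: [0 1 1 0]
    print(predict(scores, t=0.7))  # stricter threshold: [0 0 1 0]
    ```

    Varying t trades precision against recall, which is why optimizing the threshold is a problem in its own right.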

  • Average Precision is sensitive to class priors

    Average Precision (AP) is an evaluation metric for ranking systems that’s often recommended for use with imbalanced binary classification problems, especially when the classification threshold (i.e. the minimum score to be considered a positive) is variable, or not yet known. When you use AP for classification you’re essentially trying to figure out whether a classifier […]
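    The prior-sensitivity of AP can be demonstrated directly: keep the per-class score distributions fixed (so the ranker is unchanged) and only vary the class ratio. This is a minimal sketch, assuming scikit-learn's `average_precision_score` and Gaussian toy scores; the sampling setup is not from the post.

    ```python
    import numpy as np
    from sklearn.metrics import average_precision_score

    rng = np.random.default_rng(0)

    def sample(n_pos, n_neg):
        # Same score distribution for each class; only the prior changes.
        y = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])
        s = np.concatenate([rng.normal(1.0, 1.0, n_pos),
                            rng.normal(0.0, 1.0, n_neg)])
        return y, s

    y_bal, s_bal = sample(500, 500)      # positive prior = 0.5
    y_imb, s_imb = sample(500, 4500)     # positive prior = 0.1

    ap_bal = average_precision_score(y_bal, s_bal)
    ap_imb = average_precision_score(y_imb, s_imb)
    print(ap_bal, ap_imb)  # AP drops under the rarer-positive prior
    ```

    The scorer is identical in both runs, yet AP falls as positives become rarer, which is the sensitivity the post's title refers to.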
