-
Is your ML model worth putting in production?
You are developing a Machine Learning model for an application in the industry. How do you measure the model’s performance? How do you know whether the model is doing its job correctly? In my experience, there are two common approaches to this. Use a normal Data Science metric like F1-score, average precision, accuracy or whatever suits…
-
FBSim: football-playing AI agents in Rust
I took a two-week vacation in early November. Somehow I decided to spend it learning a bit more about Rust and Reinforcement Learning (RL), a sub-field of AI that I haven’t explored much before. We won’t be talking about RL in this post, though. That’s for a future blogpost. All of that led to me…
-
Average Precision is sensitive to class priors
Average Precision (AP) is an evaluation metric for ranking systems that’s often recommended for use with imbalanced binary classification problems, especially when the classification threshold (i.e. the minimum score to be considered a positive) is variable, or not yet known. When you use AP for classification you’re essentially trying to figure out whether a classifier…
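The sensitivity to class priors is easy to demonstrate with the standard AP definition (mean of precision at each positive's rank). The sketch below uses made-up scores of my own choosing: the per-class score distributions stay identical, only the ratio of negatives to positives changes, yet AP drops.

```python
def average_precision(labels, scores):
    """Standard AP: average of precision@k over the ranks k of the positives."""
    # Rank all examples by descending score
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            total += hits / rank
    return total / n_pos

# Hypothetical scores: same per-class score distributions in both setups
pos = [0.9, 0.6]   # scores assigned to positives
neg = [0.8, 0.3]   # scores assigned to negatives

balanced = average_precision([1] * 2 + [0] * 2, pos + neg)       # 1:1 prior
skewed = average_precision([1] * 2 + [0] * 4, pos + neg * 2)     # 1:2 prior

print(round(balanced, 3))  # 0.833
print(round(skewed, 3))    # 0.75
```

The classifier "behaves" identically in both cases; only the class prior changed, and AP fell from 0.833 to 0.75, because extra negatives push the precision at each positive's rank down.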