The Re:infer platform helps users train models by calculating a holistic Model Rating, which assesses the overall health and performance of a model across a number of key contributing factors.
This rating is a proprietary score created by the Re:infer team to help ensure that users build models that perform well in all of the most important areas.
The four main factors that the rating takes into account are:
- Balance - assesses whether the training data is a balanced, representative sample of the dataset as a whole
- Underperforming Labels - assesses the performance of the 10% of labels that have the most significant warnings
- Coverage - assesses how well the dataset as a whole is covered by predictions for informative labels
- All Labels - assesses the average performance of labels by looking at every label in the taxonomy
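To make the idea of combining factors concrete, here is a minimal sketch of how four per-factor scores could be weighted into a single overall rating. The actual Model Rating formula is proprietary; the factor names above are from the platform, but the equal weights, 0-100 score scale, and grade bands below are purely illustrative assumptions.

```python
# Hypothetical illustration only: the real Model Rating formula is
# proprietary. Weights, score scale, and grade bands are invented.
FACTOR_WEIGHTS = {
    "balance": 0.25,
    "underperforming_labels": 0.25,
    "coverage": 0.25,
    "all_labels": 0.25,
}

def model_rating(factor_scores: dict) -> tuple:
    """Combine per-factor scores (0-100) into a weighted overall
    score, then map it to an illustrative qualitative grade."""
    overall = sum(FACTOR_WEIGHTS[name] * factor_scores[name]
                  for name in FACTOR_WEIGHTS)
    if overall >= 90:
        grade = "Excellent"
    elif overall >= 70:
        grade = "Good"
    elif overall >= 50:
        grade = "Average"
    else:
        grade = "Poor"
    return overall, grade

# Example: a model that is strong overall but dragged down by its
# worst-performing labels.
scores = {"balance": 80, "underperforming_labels": 60,
          "coverage": 75, "all_labels": 85}
overall, grade = model_rating(scores)  # 75.0, "Good"
```

The key point the sketch captures is that a weakness in any one factor (here, underperforming labels) pulls down the overall rating even when the other factors are healthy.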
Example Model Rating in Validation