If Re:infer is struggling to predict a label accurately, and you're happy with the consistency of its already pinned examples (as discussed in the previous article), then you likely need to provide the model with more varied (and consistent) training examples. Where this would be beneficial, Re:infer will typically suggest it in the recommended actions for the label in Validation.
The best way to train Re:infer on the instances where it struggles to predict whether a label applies is to use 'Teach' on unreviewed verbatims.
This mode shows you predictions for a label with confidence scores ranging outwards from 50% (or 66% in the case of a sentiment-enabled dataset). Accepting or correcting these uncertain predictions sends a much stronger training signal to the model than accepting predictions with confidence scores of 90% or more. In this way, you can quickly improve a label's performance by providing varied training examples that Re:infer was previously unsure about.
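To illustrate why these near-boundary predictions carry so much signal, here is a minimal sketch of the ordering principle behind 'Teach'. It is not Re:infer's actual implementation or API; the verbatims, confidence values, and threshold constant are hypothetical, used only to show how sorting by distance from the decision threshold surfaces the most uncertain examples first:

```python
# Illustrative sketch only: Re:infer's internals are not public, and this
# is not its API. It demonstrates the ordering principle 'Teach' applies:
# surface the unreviewed predictions whose confidence is closest to the
# decision threshold (50%, or 66% on a sentiment-enabled dataset) first.

THRESHOLD = 0.50  # hypothetical; 0.66 for a sentiment-enabled dataset

# Hypothetical unreviewed verbatims, each paired with the model's
# predicted confidence that the label applies.
predictions = [
    ("Please cancel my order #1234", 0.93),    # model already confident
    ("Is it too late to change this?", 0.52),  # model unsure -> most useful
    ("Thanks, all sorted now", 0.08),          # confidently not the label
    ("Can I amend the delivery?", 0.61),       # fairly unsure -> useful
]

# Sort by distance from the threshold: the most uncertain examples come
# first, because accepting or correcting them teaches the model the most.
teach_order = sorted(predictions, key=lambda p: abs(p[1] - THRESHOLD))

for verbatim, confidence in teach_order:
    print(f"{confidence:.0%}  {verbatim}")
```

This is the classic active-learning idea of uncertainty sampling: each reviewed example near the boundary resolves more ambiguity for the model than confirming one it was already sure about.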
The process of labelling in this mode is discussed in the Explore phase, here.
Previous: Training using 'Check label' & 'Missed label' | Next: Check your model's coverage