User permissions required: 'View Sources' AND 'Review and label'
The second step in the Explore phase is called ‘Review predictions’. After the Discover phase and some training in shuffle mode, the model will have started making predictions for many of the labels in your taxonomy.
The purpose of this step is to review these predictions for each label, confirming them where they are correct and correcting them where they aren't, thereby providing many more training examples for the model.
There are therefore two key actions in this step when reviewing label predictions:
- Where the predictions are correct, you should confirm/accept them
- Where they are incorrect, you should either dismiss them or add the correct label(s) that do apply.
The images below show how predictions look in Re:infer for data with and without sentiment. Hovering your mouse over a label will also show the model's confidence that that specific label applies.
The transparency of the predicted label provides a visual indicator of Re:infer’s confidence. The darker the colour, the higher Re:infer’s confidence is in that label applying, and vice versa:
Verbatim with predictions without sentiment enabled
Verbatim with predictions with sentiment enabled
|Remember: telling a model that a label does not apply is just as important as telling it what does|
- Select the unreviewed filter in Explore (this shows verbatims not yet reviewed by a human) *
- Select the label you wish to train
- Re:infer will now present you with unreviewed verbatims with predictions for the label you have selected. These are presented in descending order of confidence that the selected label applies, i.e. most confident first and least confident at the bottom
- To confirm a label applies simply click on it - e.g. 'Facilities > Spa and pool' shown above
- To add a different or additional label to one already shown, click the ‘+’ button and type it in. This is how to correct wrong predictions: add the correct label and do not click on any incorrectly predicted labels
- Remember: always add any other labels that apply to the verbatim you're reviewing
* Please note: if you filter to unreviewed verbatims in Explore, the counts next to the labels update to show the predicted number of times each label occurs in the dataset. The more you train a label, the more accurately this predicted count should reflect the true number in the dataset.
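One way to picture where that predicted count comes from: each unreviewed verbatim carries a confidence for the label, and the count is an estimate built from those per-verbatim confidences. The sketch below is purely illustrative, using made-up confidence values and a simple threshold rule; it is not the Re:infer API or its exact counting method.

```python
# Illustrative only: estimating a label's predicted count from
# hypothetical per-verbatim confidences (not the Re:infer API).
def predicted_count(confidences, threshold=0.5):
    """Count verbatims whose predicted confidence meets the threshold."""
    return sum(1 for c in confidences if c >= threshold)

# Hypothetical confidences for 'Facilities > Spa and pool' across five verbatims
spa_pool_confidences = [0.97, 0.82, 0.55, 0.41, 0.12]
print(predicted_count(spa_pool_confidences))  # -> 3
```

As you review and label more examples, the model's confidences become better calibrated, which is why the on-screen predicted count converges towards the true count in the dataset.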
To delete a label you applied in error you can hover over it and an ‘X’ will appear. Click this to remove the label.
Which predictions should you review?
- From a training perspective, it's only useful to review predictions that are not high-confidence (90%+)
- This is because when the model is very confident (i.e. above 90%), confirming the prediction tells it little new information; it is already confident that the label applies, as in the example below
- If you select a label and see lots of high confidence predictions already, it's likely you should move straight on to using 'Teach' on the label (see here), which focuses on examples that Re:infer is unsure about
- If high-confidence predictions are wrong, however, then it's important to apply the correct label(s), thereby rejecting the incorrect prediction(s)
- Predictions are shown in descending order of confidence, with the highest confidence predictions for a label shown first
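The guidance above amounts to a simple filter: skip correct predictions at 90%+ confidence and spend your review time on the rest, working from most to least confident. A minimal sketch of that triage, using a hypothetical list of predictions rather than any real Re:infer data structure:

```python
# Illustrative only: picking which predictions to review for one label,
# using a hypothetical data structure (not the Re:infer API).
predictions = [
    {"verbatim_id": 1, "confidence": 0.96},  # 90%+: little training value to confirm
    {"verbatim_id": 2, "confidence": 0.88},
    {"verbatim_id": 3, "confidence": 0.47},
    {"verbatim_id": 4, "confidence": 0.91},  # 90%+: skip unless it is wrong
]

# Keep sub-90% predictions, highest confidence first (the order Explore uses)
to_review = sorted(
    (p for p in predictions if p["confidence"] < 0.9),
    key=lambda p: p["confidence"],
    reverse=True,
)
print([p["verbatim_id"] for p in to_review])  # -> [2, 3]
```

If almost everything for a label lands in the skipped 90%+ bucket, that is the signal to move on to 'Teach' for that label, which surfaces the uncertain examples directly.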
How many predictions should you review per label?
What are the red and amber warnings that appear next to some labels?