The 'Teach' (Reviewed) step is used to help find inconsistencies in labels you have already applied to reviewed comments. This is different to the 'Teach' (Unreviewed) step, which focuses on comments that have predictions made by Re:infer.
'Teach' filtered to 'reviewed' will show you verbatims that a user has already labeled, but where Re:infer thinks the labels may have been applied incorrectly. These verbatims fall into two kinds:
- Verbatims where the label in question has been applied but the model thinks it should not have been
- Verbatims where the label has not been applied but the model thinks it should have been
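The two kinds of potential inconsistency can be sketched as a simple filter. This is purely illustrative, as Re:infer's internal logic is not public: the data shape, field names, and the confidence threshold below are all hypothetical.

```python
THRESHOLD = 0.5  # hypothetical confidence cut-off, not Re:infer's actual value

def find_inconsistencies(reviewed_verbatims, label):
    """Split reviewed verbatims into the two potential-inconsistency types.

    Each verbatim is assumed to be a dict with the user's applied "labels"
    and the model's predicted "predictions" (label -> confidence).
    """
    applied_but_doubted = []    # label applied, but model confidence is low
    missing_but_suggested = []  # label absent, but model confidence is high
    for verbatim in reviewed_verbatims:
        applied = label in verbatim["labels"]
        confidence = verbatim["predictions"].get(label, 0.0)
        if applied and confidence < THRESHOLD:
            applied_but_doubted.append(verbatim)
        elif not applied and confidence >= THRESHOLD:
            missing_but_suggested.append(verbatim)
    return applied_but_doubted, missing_but_suggested
```

The first list corresponds to labels the model thinks should be removed; the second appears in the UI as suggestions to add.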
The diagram below shows how these two types of potential inconsistencies are displayed in this mode for the ‘Bedroom > Bathroom’ label:
- The first verbatim shows where the label has been applied by a user, and Re:infer thinks it should not be
- The second verbatim shows a reviewed comment where the ‘Bedroom > Bathroom’ label has not been applied but Re:infer thinks it should be. It appears as a suggestion
These suggestions from Re:infer are not necessarily correct; they are simply the instances where Re:infer is unsure, based on the training completed so far. You can choose to ignore them once you have reviewed them.
Using this method is a very effective way of finding instances where labels may not have been applied consistently. Correcting them improves the performance of the label.
When to use 'Teach' on reviewed verbatims?
A common way of spotting whether a label needs to be checked using Teach (Reviewed) is to review the precision versus recall charts on the Validation page.
When you select a label in Validation, you will be presented with some validation statistics and a precision versus recall chart. The diagrams below show a graph of precision versus recall for two different labels:
- The left-hand graph shows Precision dropping sharply in a straight line from 1 to just under 0.5 close to the Y-axis
- This differs from the right-hand graph, where the precision remains at 1 for a higher recall value and drops much further along the curve
- Whenever you see a chart with this sharp drop in precision close to the left-hand Y-axis (precision), it is highly likely that there is some inconsistency in the labelling. When you spot this, you should use 'Teach' on reviewed verbatims for that specific label to check for inconsistencies
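To see why inconsistent labelling produces that sharp early drop, it helps to compute a precision versus recall curve by hand. The sketch below uses the standard textbook definitions of precision and recall at each confidence cut-off, not Re:infer's actual validation code, and the example confidences are invented:

```python
def precision_recall_curve(scored):
    """Compute (precision, recall) points for one label.

    scored: list of (model_confidence, is_labelled) pairs, where
    is_labelled is 1 if the reviewer applied the label, else 0.
    """
    # Rank verbatims by model confidence, highest first
    scored = sorted(scored, key=lambda pair: pair[0], reverse=True)
    total_positives = sum(1 for _, labelled in scored if labelled)
    points, true_positives = [], 0
    for rank, (_, labelled) in enumerate(scored, start=1):
        true_positives += labelled
        points.append((true_positives / rank,            # precision
                       true_positives / total_positives))  # recall
    return points

# Inconsistent labelling: the model's second-highest-confidence verbatim was
# NOT labelled by the reviewer (perhaps missed), so precision collapses from
# 1.0 to 0.5 almost immediately, i.e. close to the Y-axis.
inconsistent = [(0.99, 1), (0.97, 0), (0.95, 1), (0.90, 1), (0.30, 0)]

# Consistent labelling: the top-ranked verbatims are all labelled, so
# precision stays at 1.0 until much higher recall.
consistent = [(0.99, 1), (0.97, 1), (0.95, 1), (0.90, 0), (0.30, 0)]
```

With the `inconsistent` data the second curve point has precision 0.5 at low recall, the signature described above; with the `consistent` data precision is still 1.0 when recall reaches 1.0.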
Precision versus recall charts for two different labels
For more detail on improving label performance and when you may need to use Teach (reviewed), see here.
How to use 'Teach' on reviewed verbatims:
- Select 'Teach' from the drop-down menu in the top right corner
- Select 'reviewed' from the filter bar
- Select the label you want to review from your taxonomy. Re:infer will show you 20 already reviewed verbatims where the model thinks there are potential inconsistencies
- Review each verbatim where the label has already been applied and decide whether it should still apply. If so, do nothing and move on to the next. If you think it was applied in error, delete it by hovering over the label and clicking the ‘x’, and make sure you apply the correct label instead
- For the examples where the label has not been applied but the model thinks it should be, click on the suggestion to add it if you think it is correct. If it doesn’t apply, you don’t need to do anything