
Training using 'Teach' on reviewed and unreviewed verbatims

User permissions required: 'View Sources' AND 'Review and label'

 

Introduction to using 'Teach' on reviewed verbatims


The 'Teach' (Reviewed) step is used to help find inconsistencies in labels you have already applied to reviewed comments. This is different to the 'Teach' (Unreviewed) step, which focuses on comments that only have predictions made by Re:infer.


'Teach' filtered to 'reviewed' will show you verbatims that a user has already labelled, but where Re:infer thinks the labels may have been applied incorrectly. Of these verbatims, there are two kinds (sketched in code after the list):


  1. Verbatims where the label in question has been applied but the model thinks it should not have been
  2. Verbatims where the label has not been applied but the model thinks it should have been
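
To make the distinction concrete, here is a minimal sketch of how these two disagreement types could be derived from a reviewer's label and a model confidence score. The names and the 0.5 threshold are illustrative assumptions, not Re:infer's actual implementation:

```python
# A minimal sketch of the two disagreement types, assuming a simple 0.5
# decision threshold. Names and structure are illustrative only, not
# Re:infer's actual implementation.
from dataclasses import dataclass

@dataclass
class ReviewedVerbatim:
    text: str
    label_applied: bool      # did the reviewer apply the label?
    model_confidence: float  # model's confidence that the label applies

def find_potential_inconsistencies(verbatims, threshold=0.5):
    """Split reviewed verbatims into the two disagreement types."""
    applied_but_doubted = []    # type 1: label applied, model disagrees
    suggested_but_missing = []  # type 2: label missing, model suggests it
    for v in verbatims:
        model_says_applies = v.model_confidence >= threshold
        if v.label_applied and not model_says_applies:
            applied_but_doubted.append(v)
        elif not v.label_applied and model_says_applies:
            suggested_but_missing.append(v)
    return applied_but_doubted, suggested_but_missing
```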


The diagram below shows how these two types of potential inconsistencies are displayed in this mode for the ‘Bedroom > Bathroom’ label:

  • The first verbatim shows a case where the label has been applied by a user, but Re:infer thinks it should not have been
  • The second verbatim shows a reviewed comment where the ‘Bedroom > Bathroom’ label has not been applied but Re:infer thinks it should have been. It appears as a suggestion


 

These suggestions from Re:infer are not necessarily correct; they are just the instances where Re:infer is unsure, based on the training completed so far. You can choose to ignore them once you have reviewed them.

 

Using this method is a very effective way of finding occasions where a user may not have been consistent in applying labels. Correcting these inconsistencies improves the performance of the label.


When to use 'Teach' on reviewed verbatims


A common way of spotting whether a label needs to be checked using Teach (Reviewed) is to review the precision versus recall charts on the Validation page.


When you select a label in Validation, you will be presented with some validation statistics and a precision versus recall chart. The charts below show precision versus recall for two different labels:

  • The left-hand graph shows precision dropping sharply, in a straight line, from 1 to just under 0.5 close to the Y-axis
  • This differs from the right-hand graph, where precision remains at 1 for a higher recall value and only drops much further along the curve
  • Whenever you see a chart with this sharp drop in precision close to the left-hand Y-axis (precision), it is highly likely that there is some inconsistency in the labelling. When you spot this, use 'Teach' on reviewed verbatims for that specific label to check for inconsistencies (the sketch after the chart caption below shows why this shape arises)


 

Precision versus recall charts for two different labels
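
Here is a hedged sketch of why inconsistent labelling produces that shape. It builds precision/recall points by taking predictions from most to least confident; the scores and labels are invented purely for illustration, and this is not how Re:infer computes its charts. A single high-confidence verbatim that a reviewer has (wrongly) left unlabelled is enough to pull precision down sharply at the low-recall end of the curve:

```python
# Illustrative only: invented scores and labels, not Re:infer output.

def pr_points(labels, scores):
    """Precision/recall as predictions are taken from most to least confident."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos, tp, points = sum(labels), 0, []
    for k, i in enumerate(order, start=1):
        tp += labels[i]
        points.append((round(tp / total_pos, 2), round(tp / k, 2)))  # (recall, precision)
    return points

scores = [0.98, 0.95, 0.90, 0.85, 0.80, 0.40, 0.35, 0.30, 0.25, 0.20]
consistent   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # reviews agree with the model
inconsistent = [0, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # top verbatim wrongly unlabelled

print("consistent:  ", pr_points(consistent, scores))
print("inconsistent:", pr_points(inconsistent, scores))
# The inconsistent curve starts at precision 0.0 instead of 1.0 at low recall:
# the sharp drop near the Y-axis described above.
```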


For more detail on improving label performance and when you may need to use Teach (reviewed), see here.


How to use 'Teach' on reviewed verbatims:


  1. Select 'Teach' from the drop-down menu in the top right corner
  2. Select 'reviewed' from the filter bar
  3. Select the label you want to review from your taxonomy. Re:infer will show you 20 already reviewed verbatims where the model thinks there are potential inconsistencies
  4. Review each one. Where the label has already been applied, decide whether it should still apply: if so, do nothing and move on to the next. However, if you think it was applied in error, delete it by hovering over the label and clicking the ‘x’. Make sure you apply the correct label as well
  5. For the examples where the label has not been applied but the model thinks it should be, click on the suggestion to add it if you think it is correct. If it doesn’t apply, you don’t need to do anything

 


You can review another 20 verbatims by clicking through to the next page at the bottom of the screen. Continue to review each verbatim using the same process.


Using 'Teach' on unreviewed verbatims


If you have a label that Re:infer is struggling to predict accurately, and you're happy with the consistency of the already pinned examples (as discussed above), then it is likely that you need to provide the model with more varied (and consistent) training examples.


The best method for training Re:infer on the instances where it struggles to predict whether a label applies is to use 'Teach' on unreviewed verbatims.

  

As this mode shows you predictions for a label with confidence scores ranging outwards from 50% (or 66% in the case of a sentiment-enabled dataset), accepting or correcting these predictions sends much more powerful training signals to the model than if you were to accept predictions with confidence scores of 90% or more.
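
As a rough illustration of why near-boundary predictions matter, the sketch below ranks unreviewed predictions by how close their confidence is to the decision boundary, which is the essence of this kind of uncertainty-based selection. The boundary value and all names are assumptions for illustration; Re:infer's internals may differ:

```python
# Illustrative uncertainty ranking; not Re:infer's implementation.
# For a sentiment-enabled dataset the boundary would sit at roughly 0.66.
def rank_by_uncertainty(predictions, boundary=0.5):
    """Order (verbatim_id, confidence) pairs so the least certain come first."""
    return sorted(predictions, key=lambda p: abs(p[1] - boundary))

preds = [("v1", 0.97), ("v2", 0.52), ("v3", 0.49), ("v4", 0.91), ("v5", 0.60)]
print(rank_by_uncertainty(preds))
# [('v3', 0.49), ('v2', 0.52), ('v5', 0.6), ('v4', 0.91), ('v1', 0.97)]
# Accepting or correcting v3 and v2 teaches the model far more than
# confirming v1, which it already predicts with 97% confidence.
```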


The actual process of labelling in this mode is discussed in the Explore phase here.


Previous: Precision and recall in Re:infer | Next: Training using 'Low Confidence'

