

Improving entity performance

User permissions required: 'Review and label'

 

Overview


Like label training, entity training is the process by which a user teaches the platform which entities apply to a given verbatim, using the various training modes.


As with labels, the ‘Teach’, ‘Check’, and ‘Missed’ modes are available to help train and improve the performance of entities. They can be accessed either 1) on the Explore page using the training dropdown, or 2) by following the recommended actions on the Entity tab of the Validation page.


 

 

A dropdown menu containing the entity training modes in 'Explore'





If a specific entity has a performance warning, the platform recommends the next best actions that it thinks will help address that warning, listed in order of priority. These are shown when you select a specific entity from the taxonomy or the 'All Entity' chart.

 

The next best action suggestions act as links: click one to go directly to the training view that the platform suggests for improving the entity's performance. The suggestions are ordered so that the highest-priority action is listed first.

 

This is the most important tool for understanding the performance of your entities, and should be used regularly as a guide when trying to improve entity performance.



 

Example entity card with recommended actions



Entity training modes


The following table summarises when the platform recommends each entity training mode:


Teach Entity

  • Shows predictions for an entity where the model is most unsure whether it applies or not
  • For training entities on unreviewed verbatims

Check Entity

  • Shows verbatims where the platform thinks the entity may have been misapplied
  • For training entities on reviewed verbatims, to try to find and correct any inconsistencies

Missed Entity

  • Shows verbatims that the platform thinks may be missing the selected entity
  • For training entities on reviewed verbatims, to try to find and correct any inconsistencies




Using Teach Entity 


Using Teach Entity boosts entity performance because the model is given new information on verbatims it is unsure about, rather than on ones for which it already has highly confident predictions.



 

The platform recommends 'Teach Entity' when:


  • There is a performance warning next to an entity (as seen below, when the minimum of 25 examples has not been provided) 
  • The F1 score for a given entity is low (see the illustrative sketch after this list) 
  • There is not always obvious context within the text for an entity, or there is a lot of variation in the entity values for a given type
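 

As a rough, non-authoritative sketch of why a low F1 score matters (the platform's exact validation calculations are described in the entity validation article linked below), F1 is the harmonic mean of precision and recall, so a weakness in either metric pulls the overall score down. The numbers here are invented purely for illustration:

```python
# Illustrative sketch only - not the platform's exact validation calculation.
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of predicted entity spans that were correct."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of annotated entity spans that the model actually found."""
    return true_positives / (true_positives + false_negatives)

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# Invented example numbers: 40 correct predictions, 10 incorrect predictions,
# and 30 annotated spans that the model missed.
p = precision(40, 10)      # 0.8
r = recall(40, 30)         # ~0.571
print(round(f1(p, r), 3))  # ~0.667 - dragged down mainly by the weaker recall
```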

 

 

 An example of training an entity in ‘Teach Entity’ mode

 


Using Check Entity


Using Check Entity helps identify inconsistencies in the reviewed set, while improving the model's understanding of the entity, by ensuring that the model has correct and consistent examples to make predictions from. This will improve the recall of the entity.





The platform recommends 'Check Entity' when:


  • There is low recall, but high precision (see the numeric sketch after this list)
  • The predictions the platform makes are very accurate, but it fails to catch many of the places where the entity has actually been applied 
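 

As a hedged numeric sketch (the figures below are invented for illustration, not taken from the platform), this is the pattern that low recall with high precision produces:

```python
# Invented figures illustrating the 'low recall, high precision' pattern
# that triggers the 'Check Entity' recommendation.
predicted_correct = 18   # entity spans the model predicted and got right
predicted_wrong   = 2    # entity spans the model predicted but should not have
annotated_missed  = 30   # annotated entity spans the model failed to predict

precision = predicted_correct / (predicted_correct + predicted_wrong)   # 0.90
recall    = predicted_correct / (predicted_correct + annotated_missed)  # 0.375
# Predictions are usually right (high precision), but most annotated spans are
# missed (low recall) - inconsistent examples in the reviewed set are a common
# cause, which is what 'Check Entity' helps you find and correct.
```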


 

An example of training an entity in ‘Check Entity’ mode


  

(For more details on calculations for entity validation, please see here)



Using Missed Entity 

 

Using Missed Entity helps find examples in the reviewed set that should have the selected entity applied but do not. It also helps identify partially labelled verbatims, which can be detrimental to the model's ability to predict an entity. This will improve the precision of the entity and ensure the model has correct and consistent examples to make predictions from. 





The platform recommends 'Missed Entity' when:


  • There is high recall, but low precision (see the numeric sketch after this list)
  • The platform incorrectly predicts the entity a lot, but when it does predict correctly, it catches most of the examples that should be there  
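 

Again as a hedged numeric sketch (invented figures, not platform output), this is the opposite pattern: high recall with low precision:

```python
# Invented figures illustrating the 'high recall, low precision' pattern
# that triggers the 'Missed Entity' recommendation.
predicted_correct = 45   # entity spans the model predicted and got right
predicted_wrong   = 30   # entity spans the model predicted but should not have
annotated_missed  = 5    # annotated entity spans the model failed to predict

precision = predicted_correct / (predicted_correct + predicted_wrong)   # 0.60
recall    = predicted_correct / (predicted_correct + annotated_missed)  # 0.90
# Most annotated spans are caught (high recall), but many predictions are wrong
# (low precision) - 'Missed Entity' surfaces reviewed verbatims that may be
# missing the entity, so the model stops learning from incomplete examples.
```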



An example of training an entity in ‘Missed Entity’ mode 


 

(For more details on calculations for entity validation, please see here)

 


Previous: Validation for entities    |    Next: Building custom regex entities
