

How does Validation work?

User permissions required: 'View Sources' AND 'View Labels'

 

Within Validation, the platform evaluates the performance of both the label and entity models associated with a dataset. 

 

For the label model specifically, it calculates an overall 'Model Rating' by testing a number of different performance factors, including:

 

  • How well it is able to predict each label in the taxonomy, using a subset of training data from within that dataset
  • How well covered the dataset as a whole is by informative label predictions
  • How balanced the training data is, in terms of how it has been assigned and how well it represents the dataset as a whole
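
Taken together, factor scores like these feed into the overall 'Model Rating'. The sketch below is purely illustrative: the weights and the simple weighted sum are assumptions for illustration, not the platform's actual formula.

    # Purely illustrative: combine per-factor scores into one overall rating.
    # The weights below are assumptions, not Communications Mining's actual formula.
    def overall_model_rating(label_performance: float, coverage: float, balance: float) -> float:
        """Combine per-factor scores (each between 0 and 1) into a single rating."""
        weights = {"label_performance": 0.5, "coverage": 0.25, "balance": 0.25}
        return (weights["label_performance"] * label_performance
                + weights["coverage"] * coverage
                + weights["balance"] * balance)

    # Example: strong label performance, weaker coverage and balance
    print(overall_model_rating(label_performance=0.9, coverage=0.6, balance=0.7))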

How does it assess label performance?


To assess how well it can predict each label, the platform first splits the reviewed (i.e. labelled) verbatims in the dataset into two groups: a majority set of training data and a minority set of test data.

 

In the image below, the coloured dots represent the labelled verbatims within a dataset. This split is determined by the verbatim ID when the verbatims are added to the dataset, and remains consistent throughout the life of the dataset.
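
Because the split is keyed on the verbatim ID, it behaves like a deterministic, hash-based split. The sketch below illustrates the idea only; the hash function, the 80/20 ratio and the example IDs are assumptions, not the platform's actual logic.

    import hashlib

    # Illustrative deterministic train/test split keyed on a verbatim's ID.
    # The hashing scheme and the 80/20 ratio are assumptions; the key property is
    # that the same ID always lands in the same set, so the split stays stable.
    def assign_split(verbatim_id: str, test_fraction: float = 0.2) -> str:
        digest = hashlib.sha256(verbatim_id.encode("utf-8")).hexdigest()
        bucket = int(digest, 16) % 100  # stable bucket in the range 0-99
        return "test" if bucket < test_fraction * 100 else "train"

    reviewed_ids = ["vrb-001", "vrb-002", "vrb-003", "vrb-004"]
    print({vid: assign_split(vid) for vid in reviewed_ids})  # same result on every run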



 

The platform then trains a model using only the training set.

 

Based on this training, it then tries to predict which labels should apply to the verbatims in the test set and evaluates the results for both precision and recall against the actual labels that were applied by a human user.
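
As a rough illustration of that evaluation, per-label precision and recall on the test set could be computed as in the sketch below; the verbatims, labels and data structures shown are hypothetical.

    from collections import defaultdict

    # Hypothetical human-applied labels vs. model predictions for three test-set verbatims
    actual = {
        "vrb-101": {"Request > Statement", "Urgent"},
        "vrb-102": {"Complaint"},
        "vrb-103": {"Request > Statement"},
    }
    predicted = {
        "vrb-101": {"Request > Statement"},
        "vrb-102": {"Complaint", "Urgent"},
        "vrb-103": {"Complaint"},
    }

    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for vid in actual:
        for label in predicted[vid] & actual[vid]:
            counts[label]["tp"] += 1  # predicted and actually applied
        for label in predicted[vid] - actual[vid]:
            counts[label]["fp"] += 1  # predicted but not applied by a human
        for label in actual[vid] - predicted[vid]:
            counts[label]["fn"] += 1  # applied by a human but missed by the model

    for label, c in counts.items():
        precision = c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else 0.0
        recall = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else 0.0
        print(f"{label}: precision={precision:.2f}, recall={recall:.2f}")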


On top of this process, the platform also takes into account how labels were assigned - i.e. which training modes were used when applying them - to understand whether they've been labelled in a biased or balanced way.

 

Validation then publishes live statistics on the performance of the labels for the latest model version, but you can also view historic performance statistics for previously pinned model versions.


How does it assess coverage?

 

To understand how well your model covers your data, the platform looks at all of the unreviewed data in the dataset and the predictions it has made for each of those unreviewed verbatims.

 

It then assesses the proportion of total verbatims that have at least one informative label predicted. 


'Informative labels' are those labels that the platform understands to be useful as standalone labels, by looking at how frequently they're assigned with other labels. Labels that are always assigned with another label, e.g. parent labels that are never assigned on their own or 'Urgent' if it's always assigned with another label, are down-weighted when the score is calculated.
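
The sketch below illustrates how a coverage score along these lines could be calculated; the example predictions and the 'informativeness' weights are assumptions standing in for the platform's own down-weighting.

    # Hypothetical predictions for unreviewed verbatims
    predictions = {
        "vrb-201": ["Request > Statement"],
        "vrb-202": ["Urgent"],               # always co-assigned elsewhere, so down-weighted
        "vrb-203": [],                       # no prediction at all
        "vrb-204": ["Complaint", "Urgent"],
    }

    # 1.0 = fully informative as a standalone label, 0.0 = never useful on its own
    informativeness = {"Request > Statement": 1.0, "Complaint": 1.0, "Urgent": 0.1}

    def verbatim_coverage(labels):
        # A verbatim is covered to the extent of its most informative predicted label
        return max((informativeness.get(label, 0.0) for label in labels), default=0.0)

    coverage = sum(verbatim_coverage(lbls) for lbls in predictions.values()) / len(predictions)
    print(f"Coverage score: {coverage:.3f}")  # (1.0 + 0.1 + 0.0 + 1.0) / 4 = 0.525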


How does it assess balance?

 

When the platform assesses how balanced your model is, it's essentially looking for labelling bias that can cause an imbalance between the training data and the dataset as a whole. 


To do this, it uses a labelling bias model that compares the reviewed and unreviewed data to ensure that the labelled data is representative of the whole dataset. If the data is not representative, model performance measures can be misleading and potentially unreliable.
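
One common way to test for this kind of bias (not necessarily the platform's own labelling bias model) is to train a simple discriminator to tell reviewed and unreviewed verbatims apart: if it separates them much better than chance, the reviewed data is unlikely to be representative. A minimal sketch, assuming scikit-learn and some hypothetical example texts:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical data: the reviewed set is heavily skewed towards one topic
    reviewed = ["please update my address", "change of address request", "update address on file"]
    unreviewed = ["complaint about a delayed payment", "query on an invoice", "please cancel my policy"]

    texts = reviewed + unreviewed
    is_reviewed = [1] * len(reviewed) + [0] * len(unreviewed)

    features = TfidfVectorizer().fit_transform(texts)

    # AUC near 0.5 -> reviewed data looks like the rest of the dataset (balanced);
    # AUC near 1.0 -> reviewed data is easy to tell apart (biased labelling).
    auc = cross_val_score(LogisticRegression(), features, is_reviewed, cv=3, scoring="roc_auc").mean()
    print(f"Discriminator AUC: {auc:.2f}")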


Labelling bias is typically the result of an imbalance of the training modes used to assign labels, particularly if too much 'text search' is used and not enough 'Shuffle'.


The 'Rebalance' training mode shows verbatims that are under-represented in the reviewed set. Labelling examples in this mode will help to quickly address any imbalances in the dataset.


When does the validation process happen?


Every time you complete some training within a dataset, the model updates and provides new predictions across every verbatim. In parallel, it also re-evaluates the performance of the model. This means that by the time the new predictions are ready, new validation statistics should also be available (though one process can sometimes take longer than the other), including the latest Model Rating.


Please Note: By default, the platform will always show you the latest validation statistics that have been calculated, and will tell you if new statistics are still being calculated.


