
Training chat and calls data

User permissions required: ‘View Sources’ AND ‘Review and label’

 

Please Note: Users can see chat and calls data if they have the ‘View Sources’ permission and can see labels if they have the ‘View labels’ permission, but they require the ‘Review and label’ permission to actually apply labels.




Overview


Chat/calls data are commonly trained for analytics and monitoring-based use cases to gain a detailed understanding of the processes, issues, and sentiments within a conversation. 


Some examples of questions you can answer for these communication types: 

 

  • How many conversations start with a customer asking about a particular topic, raising a complaint, etc.?
  • What are the top topics customers are contacting us about?
  • How long does it take to resolve a conversation about a given topic? 
  • What is the quality of service that agents are providing for our customers?   
  • What is the sentiment when a certain topic is mentioned? 
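
Once a model has been trained and conversations carry label predictions, questions like these typically reduce to simple aggregations over the labelled data. As a rough illustration only, the Python sketch below uses pandas on a hypothetical export; the column names and example values are assumptions made for this sketch, not the platform's actual export format.

import pandas as pd

# Hypothetical export of labelled conversations. The column names below are
# assumptions for illustration only, not the platform's export schema.
conversations = pd.DataFrame({
    "conversation_id": [1, 2, 3, 4],
    "top_label": ["Billing > Dispute", "Address Change",
                  "Billing > Dispute", "Cancellation"],
    "opened_at": pd.to_datetime(["2024-01-02 09:00", "2024-01-02 09:30",
                                 "2024-01-03 14:00", "2024-01-04 11:15"]),
    "resolved_at": pd.to_datetime(["2024-01-02 09:40", "2024-01-02 10:00",
                                   "2024-01-03 16:30", "2024-01-04 11:45"]),
})

# What are the top topics customers are contacting us about?
print(conversations["top_label"].value_counts())

# How long does it take to resolve a conversation about a given topic?
conversations["resolution_time"] = (
    conversations["resolved_at"] - conversations["opened_at"]
)
print(conversations.groupby("top_label")["resolution_time"].mean())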

Layout

 

A chat/call thread


Layout explained:

 

  1. This is used to indicate that a verbatim has been marked as uninformative 
  2. This indicates that a label has been added onto a verbatim 
  3. This allows a user to mark a verbatim as uninformative 
  4. This allows a user to add a label onto a verbatim 
  5. This allows a user to play back an audio recording, control the speed/volume, or download a call.  

Model training



Please Note: If you have sentiment analysis enabled on your chat/calls data, labelling works in the same way as labelling with sentiment for other communication channels (i.e. assigning a sentiment each time you assign a label, using neutral label names, etc.). See here for more details on labelling with sentiment analysis.


Training chat/calls data is very similar to training other verbatim types: a user works through the Discover, Explore, and Refine phases to train their model further.


The key distinctions are: 

  1. Thread layout - With chat/calls data, verbatims from all parties in a given conversation are automatically compiled into a single thread view, but labels are still assigned to individual verbatims (i.e. turns in the conversation).
  2. Uninformative verbatims - A verbatim in a chat/call can be marked as 'uninformative' if it does not add context or value to the given conversation. By marking a verbatim as uninformative, you are teaching the model that none of the labels are applicable, so the model will learn that similar verbatims should not be expected to have label predictions.
    • Please Note: When applying labels to a verbatim ('verbatim A'), the platform will automatically mark the verbatim above it ('verbatim B') as uninformative if no labels are applied to it. It is therefore important to read the verbatim above and apply labels to it if relevant. This feature helps to build up the necessary training data for 'Uninformative' without too much additional labelling.
  3. Coverage - Coverage for chat/calls data assesses not only the proportion of verbatims covered by informative (i.e. meaningful) label predictions, but also the proportion of verbatims that are predicted to be uninformative, as illustrated in the sketch after this list. For more information on how coverage is determined, click here.
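
To make these distinctions concrete, here is a minimal Python sketch of how a thread, per-verbatim labels, the uninformative flag, and the resulting coverage figure fit together. The data structure and field names are illustrative assumptions for this sketch, not the platform's export or API schema; the coverage calculation simply follows the description above (verbatims with informative label predictions plus verbatims predicted to be uninformative, divided by all verbatims).

from dataclasses import dataclass, field
from typing import List

@dataclass
class Verbatim:
    """One turn in a conversation. Field names are assumptions for this sketch."""
    speaker: str                                       # e.g. "customer" or "agent"
    text: str
    predicted_labels: List[str] = field(default_factory=list)
    predicted_uninformative: bool = False

# A chat/call thread is a sequence of verbatims (turns); labels apply per turn.
thread = [
    Verbatim("customer", "Hi, I'd like to change my payment date.",
             predicted_labels=["Payment > Change Date"]),
    Verbatim("agent", "Sure, let me look into that for you.",
             predicted_uninformative=True),
    Verbatim("agent", "Done - your payment date is now the 15th.",
             predicted_labels=["Payment > Change Date"]),
    Verbatim("customer", "Thanks!", predicted_uninformative=True),
]

def coverage(verbatims: List[Verbatim]) -> float:
    # A verbatim counts as covered if it has an informative label prediction
    # or is predicted to be uninformative.
    covered = sum(
        1 for v in verbatims if v.predicted_labels or v.predicted_uninformative
    )
    return covered / len(verbatims) if verbatims else 0.0

print(f"Coverage: {coverage(thread):.0%}")  # Coverage: 100%

In practice the predictions come from the model rather than being set by hand, but the relationship between informative labels, uninformative verbatims, and the coverage figure shown on the Validation factor card is the same.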

 

 

Validation factor card for coverage for a chat or calls dataset


