User permissions required: 'View Sources' AND 'View Labels'

Validation within Re:infer is the process by which the platform evaluates the performance of the model associated with a dataset by testing it against a held-out subset of the labelled data in that dataset.

To do this, Re:infer first splits the reviewed (i.e. labelled) verbatims in the dataset into two groups: a majority set of training data and a minority set of test data. The platform then trains itself on the training set and the labels that were applied to it.

Based on this training, the platform then predicts labels for the minority test set and compares its predictions to the labels that were actually applied by human users, measuring both precision and recall.
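
For a given label, precision is the fraction of the model's predictions that were correct, and recall is the fraction of the human-applied instances that the model found. A minimal sketch, assuming that predicted and actual label assignments are represented as sets of verbatim IDs (a hypothetical representation for illustration, not a Re:infer API):

```python
def precision_recall(predicted, actual):
    """Compute precision and recall for a single label.

    `predicted`: set of verbatim IDs the model predicted the label for.
    `actual`:    set of verbatim IDs human reviewers applied it to.
    """
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Example: the model predicts the label on 4 verbatims, 3 of which
# reviewers actually labelled; reviewers labelled 6 in total.
p, r = precision_recall({1, 2, 3, 4}, {1, 2, 3, 5, 6, 7})
# p == 0.75 (3 of 4 predictions were correct)
# r == 0.5  (3 of 6 actual instances were found)
```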

The validation page publishes live statistics on the performance of the model. If you have recently trained the model, it may take some time for the platform to retrain and recalculate the validation statistics.

By default, the platform will always show you the latest validation statistics that have been calculated, and will tell you if newer statistics are still being calculated.

Next: Understanding Model Performance