Coverage is a term frequently used in machine learning and relates to how well a model 'covers' the data it analyses. In Re:infer, this means how accurately your taxonomy of labels (and their training examples) represents your dataset as a whole. To ensure good coverage, you need enough labels to describe each of the key concepts in your dataset, as well as varied and consistently applied training examples for each of those labels.
Good coverage is particularly important if you are using your model to drive automated processes. For example, in a model designed to automatically route requests received in a shared mailbox, low coverage would mean that many requests are routed inaccurately, or are sent for manual review because the model cannot identify them.
Coverage can be broken down into two core concepts:
Concept coverage: how comprehensively your labels represent the concepts within the real-life data you are trying to model
Accuracy coverage: for each label concept, how well the model is able to predict where it applies
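One intuitive way to think about accuracy coverage is the share of messages for which the model predicts at least one label with reasonable confidence. The sketch below is purely illustrative: the `coverage` function, the data shapes, and the 0.5 confidence threshold are assumptions for this example, not Re:infer's actual calculation.

```python
def coverage(predictions, threshold=0.5):
    """Fraction of messages with at least one confident label prediction.

    predictions: a list with one entry per message, where each entry is a
    list of (label, confidence) pairs predicted for that message.
    """
    if not predictions:
        return 0.0
    covered = sum(
        1 for message_preds in predictions
        if any(conf >= threshold for _, conf in message_preds)
    )
    return covered / len(predictions)

# Hypothetical predictions for four mailbox messages
messages = [
    [("Payment Request", 0.92)],          # confident prediction -> covered
    [("Complaint", 0.31)],                # low confidence -> not covered
    [],                                   # no prediction -> not covered
    [("Chaser", 0.64), ("Urgent", 0.40)], # one confident label -> covered
]
print(coverage(messages))  # 0.5
```

A model with low coverage by this measure would leave a large share of messages unlabelled or labelled with low confidence, which is exactly what causes misrouting or excess manual review in the automation example above.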
For more detail on model coverage, and how to check your model's coverage, see here.