Understanding validation on extractions and extraction performance
The Extractions Validation page is in public preview.
The Validation page lets you drill down into the individual performance of each extraction. The All extractions performance chart plots the average precision of each label against the number of examples for that label in the training set.
- Select the Extractions tab from the top of the page.
- Check the Summary Stats. The summary stats are averages of the individual extraction scores: average precision, average recall, and average F1 score (see the sketch after these steps).
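A minimal sketch of how such macro-averages could be computed, assuming one precision/recall pair per extraction. The function and data structure are illustrative, not the platform's implementation:

```python
# Illustrative only: macro-averaged summary stats from hypothetical
# per-extraction (precision, recall) pairs. F1 is the harmonic mean
# of precision and recall.
def summary_stats(per_extraction: list[tuple[float, float]]) -> tuple[float, float, float]:
    n = len(per_extraction)
    avg_precision = sum(p for p, _ in per_extraction) / n
    avg_recall = sum(r for _, r in per_extraction) / n
    avg_f1 = sum(2 * p * r / (p + r) if p + r else 0.0 for p, r in per_extraction) / n
    return avg_precision, avg_recall, avg_f1

# Example: two extractions with different individual scores.
print(summary_stats([(0.9, 0.8), (0.7, 0.6)]))  # (0.8, 0.7, ~0.75)
```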
The main components that the model considers when assessing extractions include the following checks (see the sketch after this list):
- Did the model correctly predict the label?
- Did the model correctly predict all the fields associated with the label?
- Did the model correctly pick up how many times each extraction occurs?
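As a rough illustration of how these three checks combine, here is a minimal sketch. The dictionary shapes and key names are hypothetical, not the actual platform schema:

```python
# Illustrative only: a predicted extraction counts as fully correct when the
# label, all of its fields, and the occurrence count match the annotation.
def extraction_is_correct(predicted: dict, annotated: dict) -> bool:
    # 1. Did the model correctly predict the label?
    if predicted["label"] != annotated["label"]:
        return False
    # 2. Did the model correctly predict all fields associated with the label?
    if predicted["fields"] != annotated["fields"]:
        return False
    # 3. Did the model correctly pick up how many times the extraction occurs?
    return predicted["occurrence_count"] == annotated["occurrence_count"]
```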
How confidence levels work depends on the underlying LLM that you use.
The Preview LLM does not provide confidence levels on its predictions. It returns whether a label or field is a prediction (Yes = 1) or not (No = 0).
As a result, there is no concept of different confidence thresholds: precision and recall are the same at every threshold.
If you use the CommPath LLM, the model uses its Validation capabilities to predict which labels to apply to a communication. The model assigns each prediction a confidence score (%), which shows you how confident the model is that the label applies.
Use the adjustable slider to understand how different confidence thresholds affect the precision and recall scores.
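A minimal sketch of what moving the slider computes, assuming hypothetical (confidence, correct) pairs for one label's predictions. The names and sample data are illustrative:

```python
# Illustrative only: precision and recall when predictions below the
# confidence threshold are discarded.
def precision_recall_at(threshold: float,
                        predictions: list[tuple[float, bool]],
                        total_actual: int) -> tuple[float, float]:
    accepted = [correct for confidence, correct in predictions if confidence >= threshold]
    true_positives = sum(accepted)
    precision = true_positives / len(accepted) if accepted else 0.0
    recall = true_positives / total_actual if total_actual else 0.0
    return precision, recall

# Raising the threshold typically trades recall for precision.
preds = [(0.95, True), (0.80, True), (0.65, False), (0.40, True)]
print(precision_recall_at(0.5, preds, total_actual=3))  # (0.67, 0.67)
print(precision_recall_at(0.7, preds, total_actual=3))  # (1.0, 0.67)
```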
This section describes the outputs of the Get Stream Results activity. Check the Communications Mining dispatcher framework page for more details.
To automate with Generative extraction, it is important to understand the contents of the outputs of your extractions.
Occurrence confidence: Refers to how confident the model is about the number of times a request occurs in a message, that is, how many times an extraction might occur.
As an example: To process a statement of accounts into a downstream system, you always need an Account ID, PO number, the payment amount, and the due date.
The occurrence confidence example below shows how the model can confidently identify that there are two potential occurrences for which you need to facilitate this downstream process.
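As a loose illustration of what two detected occurrences could look like in an output, with hypothetical key names rather than the actual schema:

```python
# Hypothetical, simplified shape for illustration only.
occurrence_example = {
    "label": "Statement of Accounts",
    "occurrences": [
        {"occurrence_confidence": 0.97},  # first statement found in the message
        {"occurrence_confidence": 0.93},  # second statement found in the message
    ],
}
```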
Extraction confidence is the model's confidence about its predictions. This includes how accurate it thinks it was in predicting a label's instance and its related fields. It also includes the model's confidence in correctly predicting if a field is missing.
Consider the same example as before. To process a statement of accounts into a downstream system, you always need an Account ID, PO number, the payment amount, and the due date.
This time, however, the PO number is not present in the message, and neither is the due date (only the start date is).
The extraction confidence in this example is the model's confidence in identifying whether the values for each field associated with the label are present, including correctly predicting that a field is missing.
In this case, you don't have all the required fields, so you can't fully extract the information needed downstream.
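As a loose illustration of extraction confidences for this example, again with hypothetical key names:

```python
# Hypothetical, simplified shape for illustration only. The model reports a
# confidence for extracted values and for fields it predicts are missing.
extraction_example = {
    "label": "Statement of Accounts",
    "fields": {
        "Account ID":     {"value": "AC-1234",  "extraction_confidence": 0.96},
        "PO Number":      {"value": None,       "extraction_confidence": 0.91},  # predicted missing
        "Payment Amount": {"value": "1,250.00", "extraction_confidence": 0.94},
        "Due Date":       {"value": None,       "extraction_confidence": 0.88},  # only a start date found
    },
}
```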
Check below an example output of what the Get Stream Results activity returns.
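A rough, hypothetical sketch of the shape of such an output follows; the field names are illustrative, not the exact activity schema:

```python
# Hypothetical, simplified response shape for illustration only.
# "stream" is populated when the configured thresholds were met, empty otherwise.
stream_response = {
    "predictions": [
        {
            "label": "Statement of Accounts",
            "occurrence_confidence": 0.95,
            "stream": "statements-stream",  # thresholds met: stream is returned
        },
        {
            "label": "Invoice Query",
            "occurrence_confidence": 0.41,
            "stream": "",  # thresholds not met: value is empty
        },
    ],
}
```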
Stream refers to the threshold you set in Communications Mining, and whether the message meets this threshold.
Instead of filtering out predictions based on thresholds, this route returns which prediction confidences met the thresholds.
In other words, if your thresholds were met, the stream value is returned. If not, this value is empty.
Additionally, where there are multiple extractions, each extraction's confidence is conditioned on the extractions before it.
For labels without extraction fields, the occurrence confidence is equivalent to the label confidence that you can see in the UI.