Understanding data requirements
The following recommendations concern use cases with lower data volumes but high value and/or low complexity.
Generally, use cases should function as expected if their complexity aligns with the volume of message data. Very low-volume use cases should typically be very simple, while high-volume use cases can be more complex.
In some instances, synchronizing more than one year's worth of historical data can help in sourcing enough quality examples for training. This also provides the benefit of richer analytics, such as trends and alerts.
Use cases with fewer than 20,000 messages (in historical volume or annual throughput) should be assessed carefully for complexity, ROI, and the effort required to support and enable them. Some use cases may be disqualified on these grounds, but others can still deliver enough business value to proceed.
Every use case is unique, so no single guideline fits all complexity scenarios. The labels and fields themselves can range from very simple to complex to understand and extract.
The following table outlines rough guidelines for use case complexity.
| Complexity | Labels | Extraction Fields | General Fields |
|---|---|---|---|
| Very Low | ~2-5 | N/A | 1-2 |
| Low | ~5-15 | 1-2 for a few labels | 1-3 |
| Medium | 15-50 | 1-5 for multiple labels | 1-5 * |
| High | 50+ | 1-8+ for a high proportion of labels | 1-5 * |
\* Use cases with extraction fields should rely on those rather than on general fields. Use cases without extraction fields can be expected to use more general fields, though these may not add equivalent value.
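As a rough illustration, the complexity bands above can be encoded as a simple lookup. The sketch below is hypothetical (the function name and structure are not part of any product API); only the label-count thresholds are taken from the table:

```python
def estimate_complexity(num_labels: int) -> str:
    """Map a taxonomy's label count onto the rough complexity bands above.

    Illustrative only: real complexity also depends on extraction fields,
    general fields, and how difficult the labels are to distinguish.
    """
    if num_labels <= 5:
        return "Very Low"  # ~2-5 labels, no extraction fields
    if num_labels <= 15:
        return "Low"       # ~5-15 labels, 1-2 extraction fields for a few labels
    if num_labels <= 50:
        return "Medium"    # 15-50 labels, 1-5 extraction fields for multiple labels
    return "High"          # 50+ labels, 1-8+ extraction fields for many labels
```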
The following table outlines limitations and recommendations by message volume.

| # of Messages * | Limitations | Recommendation |
|---|---|---|
| Less than 2,048 | | Should only be: |
| 2,048 - 20,000 | | Should primarily be: |
| 20,000 - 50,000 | | Should primarily be: |
\* Of the historical data volumes from which training examples are sourced, typically only a small proportion is annotated. This proportion is usually higher for lower-volume and higher-complexity use cases.
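To make the volume bands concrete, here is a minimal sketch, assuming a plain message count as input. The band boundaries come from the table above, and the comments paraphrase the surrounding guidance rather than any product behavior:

```python
def volume_band(num_messages: int) -> str:
    """Classify a use case's historical message volume into the bands above."""
    if num_messages < 2048:
        # Very low volume use cases should typically be very simple.
        return "less than 2,048"
    if num_messages < 20_000:
        # Below 20,000 messages, carefully weigh complexity, ROI, and effort.
        return "2,048 - 20,000"
    if num_messages <= 50_000:
        return "20,000 - 50,000"
    # High volume use cases can be more complex.
    return "more than 50,000"

# Example: a use case with ~15,000 historical messages falls in the low-volume band.
print(volume_band(15_000))  # 2,048 - 20,000
```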