Overview
The following example packages can be deployed immediately and added to an RPA workflow; more are available in the product.
This is a model for image content moderation based on a deep learning architecture commonly referred to as Inception V3. Given an image, the model outputs one of four classes ('explicit', 'explicit-drawing', 'neutral', or 'pornographic'), together with a normalized confidence score for each class.
It is based on the paper 'Rethinking the Inception Architecture for Computer Vision' by Szegedy et al. and was open-sourced by Google.
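As a minimal sketch of consuming such a skill from code, the snippet below posts an image to a deployed skill endpoint and inspects the result. The endpoint URL and the request/response field names are assumptions for illustration; check the actual input and output schema of the skill in your tenant.

```python
import requests

# Hypothetical public ML Skill endpoint; use the URL shown for your deployment.
SKILL_URL = "https://example.aicenter.uipath.com/skills/image-moderation"

with open("photo.jpg", "rb") as f:
    response = requests.post(SKILL_URL, files={"file": f})  # field name assumed
response.raise_for_status()

result = response.json()
# Per the description above, the output is one of four classes plus a
# normalized confidence per class; the field names here are assumptions, e.g.:
# {"prediction": "neutral",
#  "confidences": {"explicit": 0.01, "explicit-drawing": 0.02,
#                  "neutral": 0.95, "pornographic": 0.02}}
if result["prediction"] != "neutral":
    print("Image flagged for review:", result["prediction"])
```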
This model predicts the sentiment of English-language text. It was open-sourced by Facebook Research. Possible predictions are "Very Negative", "Negative", "Neutral", "Positive", or "Very Positive". The model was trained on Amazon product review data, so its predictions may be unexpected on other data distributions. A common use case is to route unstructured language content (e.g. emails) based on the sentiment of the text.
It is based on the research paper "Bag of Tricks for Efficient Text Classification" by Joulin et al.
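To make the email-routing use case concrete, here is a minimal sketch of the workflow-side logic. Only the five sentiment labels come from the package description; the queue names and the mapping of labels to queues are illustrative assumptions.

```python
# Route a message based on the sentiment label returned by the skill.
# The five labels come from the package description; queue names are invented.
NEGATIVE = {"Very Negative", "Negative"}

def route_email(prediction: str) -> str:
    """Pick a downstream queue from the predicted sentiment label."""
    if prediction in NEGATIVE:
        return "escalations"   # unhappy customers get a human first
    if prediction == "Neutral":
        return "standard"
    return "upsell"            # "Positive" / "Very Positive"

print(route_email("Very Negative"))  # -> escalations
```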
This model predicts the answer to a question about an English-language text, given a paragraph of context. It was open-sourced by ONNX. A common use case is KYC or the processing of financial reports, where the same question can be applied to a standard set of semi-structured documents. It is based on the state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) model: it applies Transformers, a popular attention architecture, to language modeling to produce an encoding of the input, and is then trained on the task of question answering.
It is based on the research paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Devlin et al.
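A sketch of the KYC-style usage described above, where one standard question is applied to each document's text. The endpoint URL and the "question"/"context"/"answer" field names are assumptions for illustration.

```python
import requests

# Hypothetical public ML Skill endpoint for the question-answering package.
SKILL_URL = "https://example.aicenter.uipath.com/skills/question-answering"

question = "What is the reporting period?"
context = (
    "This quarterly report covers the period from January 1, 2021 "
    "to March 31, 2021 and was prepared by the finance department."
)

# Field names are assumed; adapt them to the skill's actual schema.
response = requests.post(SKILL_URL, json={"question": question, "context": context})
response.raise_for_status()
print(response.json().get("answer"))
```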
This model predicts the language of a text input. The prediction is one of the following 176 languages:
| Languages |
|---|
| af als am an ar arz as ast av az azb ba bar bcl be bg bh bn bo bpy br bs bxr ca cbk ce ceb ckb co cs cv cy da de diq dsb dty dv el eml en eo es et eu fa fi fr frr fy ga gd gl gn gom gu gv he hi hif hr hsb ht hu hy ia id ie ilo io is it ja jbo jv ka kk km kn ko krc ku kv kw ky la lb lez li lmo lo lrc lt lv mai mg mhr min mk ml mn mr mrj ms mt mwl my myv mzn nah nap nds ne new nl nn no oc or os pa pam pfl pl pms pnb ps pt qu rm ro ru rue sa sah sc scn sco sd sh si sk sl so sq sr su sv sw ta te tg th tk tl tr tt tyv ug uk ur uz vec vep vi vls vo wa war wuu xal xmf yi yo yue zh |
It was open-sourced by Facebook Research. The model was trained on data from Wikipedia, Tatoeba, and SETimes, used under the Creative Commons Attribution-Share-Alike License 3.0. A common use case is to route unstructured language content (e.g. emails) to an appropriate responder based on the language of the text.
It is based on the research paper "Bag of Tricks for Efficient Text Classification" by Joulin et al.
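The language-routing use case might look like the sketch below, which maps a predicted language code from the table above to a responder queue; the queue names and the fallback choice are illustrative assumptions.

```python
# Map the predicted language code (see the table above) to a responder queue.
# Queue names are hypothetical.
QUEUE_BY_LANGUAGE = {"en": "support-en", "fr": "support-fr", "de": "support-de"}

def route_by_language(code: str) -> str:
    # Fall back to the English queue for any of the other codes.
    return QUEUE_BY_LANGUAGE.get(code, "support-en")

print(route_by_language("fr"))   # -> support-fr
print(route_by_language("yue"))  # -> support-en (fallback)
```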
This is a Sequence-to-Sequence machine translation model that translates English to French. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Convolutional Sequence to Sequence Learning" by Gehring et al.
This is a Sequence-to-Sequence machine translation model that translates English to German. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng et al.
This is a Sequence-to-Sequence machine translation model that translates English to Russian. It was open-sourced by Facebook AI Research (FAIR).
It is based on the paper "Facebook FAIR's WMT19 News Translation Submission" by Ng et al.
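All of these translation packages can be consumed the same way; the sketch below calls a deployed skill once per string, using English to French as the example. The endpoint URL and the "text"/"translation" field names are assumptions for illustration.

```python
import requests

# Hypothetical public ML Skill endpoint for the English-to-French package;
# point it at whichever translation skill you deployed.
SKILL_URL = "https://example.aicenter.uipath.com/skills/english-to-french"

sentences = ["Hello, how are you?", "The invoice is attached."]
for text in sentences:
    # Request/response field names are assumptions for this sketch.
    response = requests.post(SKILL_URL, json={"text": text})
    response.raise_for_status()
    print(text, "->", response.json().get("translation"))
```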
This model returns a list of entities recognized in a text. The 18 types of named entities recognized use the same output classes as OntoNotes 5.0, which is commonly used for benchmarking this task in academia. The model is based on the paper 'Approaching nested named entity recognition with parallel LSTM-CRFs' by Borchmann et al., 2018.
The 18 classes are the following:
| Entity | Description |
|---|---|
| PERSON | People, including fictional. |
| NORP | Nationalities or religious or political groups. |
| FAC | Buildings, airports, highways, bridges, etc. |
| ORG | Companies, agencies, institutions, etc. |
| GPE | Countries, cities, states. |
| LOC | Non-GPE locations, mountain ranges, bodies of water. |
| PRODUCT | Objects, vehicles, foods, etc. (Not services.) |
| EVENT | Named hurricanes, battles, wars, sports events, etc. |
| WORK_OF_ART | Titles of books, songs, etc. |
| LAW | Named documents made into laws. |
| LANGUAGE | Any named language. |
| DATE | Absolute or relative dates or periods. |
| TIME | Times smaller than a day. |
| PERCENT | Percentage, including "%". |
| MONEY | Monetary values, including unit. |
| QUANTITY | Measurements, as of weight or distance. |
| ORDINAL | "first", "second", etc. |
| CARDINAL | Numerals that do not fall under another type. |
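As a sketch of consuming the output, suppose the skill returns the recognized entities as a list of objects, each carrying the matched text and one of the 18 classes above; the field names in this sample are assumptions.

```python
import json

# Assumed response shape: a list of recognized entities with their class.
sample = json.loads("""
{"entities": [
  {"text": "UiPath", "type": "ORG"},
  {"text": "March 31, 2021", "type": "DATE"},
  {"text": "$1,200", "type": "MONEY"}
]}
""")

# Group matched strings by entity class for downstream workflow steps.
by_type = {}
for entity in sample["entities"]:
    by_type.setdefault(entity["type"], []).append(entity["text"])

print(by_type)  # {'ORG': ['UiPath'], 'DATE': ['March 31, 2021'], 'MONEY': ['$1,200']}
```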