Invoices retrained with one additional field
The aim of this page is to help first-time users get familiar with Document Understanding™.
For scalable production deployments, we strongly recommend using the Document Understanding Process available in UiPath® Studio under the Templates section.
This quickstart shows you how to retrain the Invoices out-of-the-box ML model to extract one more field.
Let’s use the same workflow we used for the receipts in the previous quickstart and modify it so it can support invoices.
To do that, we need to perform the following steps in our workflow:
- Modify the taxonomy
- Add a classifier
- Add a Machine Learning Extractor
- Label the data
- Retrain the Invoices ML model
Now, let's go through each step in detail.
In this step, we need to modify the taxonomy to add the invoice document type.
To do so, open the Taxonomy Manager and create a group named Semi Structured Documents, a category named Finance, and a document type named Invoices. Then create the fields listed below, with user-friendly names and their respective data types (a schematic sketch of the resulting structure follows the list).
- name - Text
- vendor-addr - Address
- billing-name - Text
- billing-address - Address
- shipping-address - Address
- invoice-no - Text
- po-no - Text
- vendor-vat-no - Text
- date - Date
- tax - Number
- total - Number
- payment-terms - Text
- net-amount - Number
- due-date - Date
- discount - Number
- shipping-charges - Number
- payment-addr - Address
- description - Text
- items - Table, with the following columns:
  - description - Text
  - quantity - Number
  - unit-price - Number
  - line-amount - Number
  - item-po-no - Text
  - line-no - Text
  - part-no - Text
- billing-vat-no - Text
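To make the nesting explicit, here is the same taxonomy sketched as a plain Python structure. This is purely illustrative: Taxonomy Manager stores the taxonomy in its own format, and the shape below is just a convenient way to visualize the group, category, document type, and fields, including the items table and its columns.

```python
# Schematic sketch of the taxonomy defined above. Illustrative only;
# this is NOT the file format that Taxonomy Manager actually produces.

invoice_taxonomy = {
    "group": "Semi Structured Documents",
    "category": "Finance",
    "document_type": "Invoices",
    "fields": {
        "name": "Text",
        "vendor-addr": "Address",
        "billing-name": "Text",
        "billing-address": "Address",
        "shipping-address": "Address",
        "invoice-no": "Text",
        "po-no": "Text",
        "vendor-vat-no": "Text",
        "date": "Date",
        "tax": "Number",
        "total": "Number",
        "payment-terms": "Text",
        "net-amount": "Number",
        "due-date": "Date",
        "discount": "Number",
        "shipping-charges": "Number",
        "payment-addr": "Address",
        "description": "Text",
        "billing-vat-no": "Text",
        # "items" is a Table field; its columns are fields in their own right.
        "items": {
            "description": "Text",
            "quantity": "Number",
            "unit-price": "Number",
            "line-amount": "Number",
            "item-po-no": "Text",
            "line-no": "Text",
            "part-no": "Text",
        },
    },
}
```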
In this step, we need to add a classifier so we can process both receipts and invoices with our workflow.
Since our workflow now supports two document types, Receipts and Invoices, we need to add a classifier to differentiate between the document types coming in as input:
- Add a Classify Document Scope after the Digitize Document activity, provide the DocumentPath, DocumentText, DocumentObjectModel, and Taxonomy as input arguments, and capture the ClassificationResults in a new variable. We need this variable to check which document(s) we are processing (see the sketch after this list).
- We also need to specify one or more classifiers. In this example, we are using the Intelligent Keyword Classifier. Add it to the Classify Document Scope activity.
This page helps you make an informed decision on which classification method to use in different scenarios.
- Train the classifier as described here.
- Configure the classifier by enabling it for both document types.
- Depending on your use case, you might want to validate the classification. You can do that using the Present Classification Station activity or the Create Document Classification Action and Wait For Document Classification Action And Resume activities.
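To make the role of the ClassificationResults variable concrete, the sketch below shows, in Python rather than UiPath workflow code, the idea behind passing classification results to the Data Extraction Scope: each classified document type is routed to the extractor configured for it, instead of extraction running against one hard-coded DocumentTypeId. The ClassificationResult shape and the extractor functions here are illustrative assumptions, not UiPath's actual types.

```python
# Conceptual sketch (not UiPath code): classification results drive which
# extractor runs, rather than a fixed DocumentTypeId.

from dataclasses import dataclass

@dataclass
class ClassificationResult:
    document_type_id: str   # e.g. "Semi Structured Documents.Finance.Invoices"
    confidence: float

# One extractor registered per document type, as configured in the scope.
extractors = {
    "Semi Structured Documents.Finance.Receipts": lambda text: {"kind": "receipt fields"},
    "Semi Structured Documents.Finance.Invoices": lambda text: {"kind": "invoice fields"},
}

def extract(classifications: list[ClassificationResult], document_text: str) -> list:
    results = []
    for c in classifications:
        extractor = extractors.get(c.document_type_id)
        if extractor is None:
            continue  # no extractor configured for this document type
        results.append(extractor(document_text))
    return results

# Example: an incoming document classified as an invoice.
docs = [ClassificationResult("Semi Structured Documents.Finance.Invoices", 0.97)]
print(extract(docs, "sample document text"))
```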
In this step, we need to add a Machine Learning Extractor to the Data Extraction Scope activity and connect it to the Invoices public endpoint.
The procedure is the same as for the Receipts Machine Learning Extractor we added earlier:
- Add a Machine Learning Extractor activity alongside the Receipts Machine Learning Extractor.
- Provide the Invoices public endpoint, namely https://du.uipath.com/ie/invoices, and an API key to the extractor (a minimal sketch of such a call follows these steps).
- Configure the extractor to work with invoices by mapping the fields created in the Taxonomy Manager to the fields available in the ML model:
- Do not forget to use the ClassificationResults variable output by the Classify Document Scope as input to the Data Extraction Scope, instead of specifying a DocumentTypeId.
You should end up with something like this:
- Run the workflow to test that it works correctly with invoices.
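If you are curious what the extractor does under the hood, the sketch below shows one plausible way to call the public endpoint directly with an HTTP client. The header name and multipart payload shape are assumptions for illustration only; in the workflow, the Machine Learning Extractor activity makes this call for you once you provide the endpoint and API key, so you never need to write this yourself.

```python
# Hypothetical direct call to the Invoices public endpoint. The header name
# and file field below are assumed; consult the official API documentation
# for the real request shape.

import os
import requests

ENDPOINT = "https://du.uipath.com/ie/invoices"
API_KEY = os.environ["DU_API_KEY"]  # keep the API key out of source code

with open("sample-invoice.pdf", "rb") as document:
    response = requests.post(
        ENDPOINT,
        headers={"X-UiPath-License": API_KEY},  # assumed header name
        files={"file": document},
    )

response.raise_for_status()
print(response.json())  # extracted fields, per the model's schema
```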
We need to label the data before retraining the base Invoices ML model in order for it to support the new IBAN field.
- Collect the requirements and sample invoice documents in sufficient volume for the complexity of the use case you need to solve. Label 50 pages, as explained on this documentation page.
- Gain access to an instance of Document Manager, either on-premises or in AI Center in the Cloud. Make sure you have the permissions to use Document Manager.
- Create an AI Center project, go to Data Labeling > UiPath Document Understanding, and create a Data Labeling session.
- Configure an OCR engine as described here, try importing a diverse set of your production documents, and make sure that the OCR engine reads the text you need to extract. You can find more suggestions in this section. Only proceed to the next step after you have settled on an OCR engine.
- Create a fresh Document Manager session, and import a Training set and an Evaluation set, while making sure to check the Make this a Test set checkbox when importing the Evaluation set. More details about imports here.
- Create and configure the IBAN field as described here. More advanced guidelines are available in this section.
- Label a Training dataset and an Evaluation dataset as described here. The prelabeling feature of Document Manager described here can make the labeling work a lot easier.
- Export first the Evaluation set and then the Training set to AI Center by selecting them from the filter dropdown at the top of the Document Manager view. More details about exports here.
Next up, let's create our model, retrain it, and deploy it.
Now that our workflow supports processing invoices, we need to extract the IBAN from our invoices, a field that is not picked up by default by the out-of-the-box Invoices ML model. That means we need to retrain the model, starting from the base one.
- Create an ML Package as described here. If your document type is different from the ones available out-of-the-box, then choose the DocumentUnderstanding ML Package. Otherwise, use the package closest to the document type you need to extract.
- Create a Training Pipeline as described here using the Input dataset which you exported in the previous section from Document Manager.
- When the training is done and you have package minor version 1, run an Evaluation Pipeline on this minor version and inspect the evaluation.xlsx side-by-side comparison (a small sketch for inspecting it programmatically follows this list). Use the detailed guidelines here.
- If the evaluation results are satisfactory, go to the ML Skills view and create an ML Skill using the new minor version of the ML Package. If you want to use this skill to do prelabeling in Document Manager, click the Modify Current Deployment button at the top right of the ML Skill view and toggle on the Make ML Skill Public option.
- After creating the ML Skill, we need to consume it in Studio. The easiest way to do that is to make the ML Skill public, as described here. Then, simply replace the Invoices ML model public endpoint we initially added to the Machine Learning Extractor in our workflow with the public endpoint of the ML Skill.
- Run the workflow and you should see the newly added IBAN field being extracted alongside the default invoice fields.
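If you prefer skimming the evaluation output programmatically rather than in Excel, here is a minimal sketch, assuming you have downloaded evaluation.xlsx from the pipeline outputs and have pandas installed. Sheet and column names vary by pipeline version, so list them before relying on any.

```python
# Minimal sketch for browsing evaluation.xlsx with pandas. Requires the
# openpyxl engine for .xlsx files (pip install pandas openpyxl).

import pandas as pd

workbook = pd.ExcelFile("evaluation.xlsx")
print(workbook.sheet_names)  # discover the available sheets first

# Load the first sheet and skim the side-by-side comparison.
comparison = workbook.parse(workbook.sheet_names[0])
print(comparison.head(20))
```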
Download this sample project using this link. You need to change the Machine Learning Extractor for Invoices from Endpoint mode to your trained ML Skill.