Configuring DataBridgeAgent
This page describes how to configure DataBridgeAgent to load data for a process app in Process Mining.
Follow these steps to configure DataBridgeAgent.
- Download DataBridgeAgent. See Loading data using DataBridgeAgent.
- On the server, create a folder for DataBridgeAgent, for instance D:\processmining\P2P_data\. This folder is referred to as <EXTRACTORDIR> below.
- Place the installation package in the <EXTRACTORDIR> folder.
- Right-click on the installation package.
- Select Extract All….
- Right-click on the file <EXTRACTORDIR>\datarun.json and select Open.
- Enter a value for the following settings: azureURL, connectorWorkspace, connectorModuleCode, Input type, and Use credential store. See the parameter tables below for a description of each setting.
Below is an overview of the generic parameters for DataBridgeAgent.
| Parameter | Description |
|---|---|
| azureURL | The SAS URL of the Azure Blob storage to which the extracted data needs to be uploaded. Note: azureURL is not used when loading data for use in Process Mining on-premises (Automation Suite). Make sure this field is left blank. |
| endOfUploadApiUrl | The API that is called to start data processing in Process Mining once all data has been uploaded. |
| connectorWorkspace | The name of the workspace of the connector used to load the data and to create the dataset. |
| connectorModuleCode | The module code of the connector used to load the data and to create the dataset. |
| Input type | Can be SAP (see SAP parameters), CSV (see CSV parameters), or ODBC (see ODBC parameters). Note: depending on the input type, enter the settings in the corresponding section. |
| Use credential store | Indicates whether a credential store is used for password storage. Note: if set to true, specify the password identifier in the SAP Password or ODBC Password field. |
| Reporting currency | The currency in which price-related values are displayed. |
| Exchange rate type | The exchange rate type that is used for currency conversion. |
| Language | The language in which data is extracted from the source system. |
| Extraction start date | The start date of the extraction period. Note: if only a subset of the data is needed, it is recommended to limit the extraction period, as this may improve loading times. |
| Extraction end date | The end date of the extraction period. Note: if only a subset of the data is needed, it is recommended to limit the extraction period, as this may improve loading times. |
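As a purely illustrative sketch, a generic configuration for an on-premises (Automation Suite) setup using an ODBC input could look like the fragment below. The key names mirror the parameter names in the table above and are not guaranteed to match the exact structure of datarun.json; only change the values of keys that already exist in your extracted file. All values are placeholders, and azureURL is left blank as noted above.

```json
{
  "azureURL": "",
  "connectorWorkspace": "P2P_Workspace",
  "connectorModuleCode": "P2P",
  "Input type": "ODBC",
  "Use credential store": false,
  "Reporting currency": "USD",
  "Exchange rate type": "M",
  "Language": "EN",
  "Extraction start date": "2023-01-01",
  "Extraction end date": "2023-12-31"
}
```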
Below is an overview of the parameters that can be used for SAP datasources.
| Parameter | Description |
|---|---|
| SAP Host | The hostname or IP address of the SAP application server. |
| SAP SystemNumber | The two-digit number between 00 and 99 that identifies the designated instance. |
| SAP Username | The username of the account that is being used to log in to the SAP instance. |
| SAP Password | The password of the user above. Note: if you use a credential store, you must enter the password identifier from the credential store instead of the password. |
| SAP Client | The client that is being used. |
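For illustration only, a filled-in SAP section could look like the following. The key names mirror the table above, all values are placeholders, and the example assumes Use credential store is set to true, so SAP Password holds a credential-store identifier rather than the actual password.

```json
{
  "SAP Host": "sap-app01.example.com",
  "SAP SystemNumber": "00",
  "SAP Username": "PM_EXTRACT",
  "SAP Password": "sap_password_identifier",
  "SAP Client": "100"
}
```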
Below is an overview of the parameters that can be used for CSV datasources.
| Parameter | Description |
|---|---|
| CSV Data path | Data path in the Server Data that points to the place where the .csv files are stored. For example, P2P/ if all files can be found in the folder named P2P. |
| CSV Suffix | A regular expression containing the file extension of the files to read in. May contain a suffix of up to 2 digits that is added to the name of the table. |
| CSV Delimiter | The delimiter character that is used to separate the fields. |
| CSV Quotation character | The quote character that is used to identify fields that are wrapped in quotes. |
| CSV Has header | Indicates whether the first line of the .csv file is a header line. |
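As an illustrative sketch with placeholder values (key names mirror the table above and may not match the literal structure of datarun.json), a CSV section for comma-separated files in a P2P/ folder could look like this. The suffix is written as a regular expression matching the .csv extension.

```json
{
  "CSV Data path": "P2P/",
  "CSV Suffix": "\\.csv$",
  "CSV Delimiter": ",",
  "CSV Quotation character": "\"",
  "CSV Has header": true
}
```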
Below is an overview of the parameters that can be used for ODBC datasources.
| Parameter | Description |
|---|---|
| ODBC Driver | The name of the ODBC driver to use for this connection. |
| ODBC Username | Username to be used to connect to the external datasource. |
| ODBC Password | Password to be used to connect to the external datasource. Note: if you use a credential store, you must enter the password identifier from the credential store instead of the password. |
| ODBC Connection parameters | Any other parameters are passed as specified to the ODBC driver. Use the format param1=value1 (;param2=value2). |
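For illustration only, an ODBC section for a PostgreSQL source could look like the sketch below. Driver name, username, and connection parameters are placeholders, the password field holds a credential-store identifier (assuming Use credential store is true), and the key names mirror the table above rather than the literal datarun.json structure.

```json
{
  "ODBC Driver": "PostgreSQL Unicode(x64)",
  "ODBC Username": "pm_reader",
  "ODBC Password": "odbc_password_identifier",
  "ODBC Connection parameters": "Database=purchase2pay;Port=5432"
}
```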
The data run is started by running the <EXTRACTORDIR>\datarun.bat file. The time taken for this task will depend highly on the data volumes loaded.
The output is uploaded to the SQL Server database using CData Sync for use in Process Mining on-premises (Automation Suite). See Configuring CData Sync.
Follow this step to start the data run.
- Double-click on the <EXTRACTORDIR>\datarun.bat file to start the data run.
Instead of running the file manually, you can use Windows Task Scheduler to schedule a task that runs the batch script for automated data refreshes.
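For example, a daily refresh at 06:00 could be registered from an elevated command prompt with the built-in schtasks utility. The task name, folder path, and schedule below are placeholders; adjust them to your environment.

```
schtasks /Create /TN "DataBridgeAgent data refresh" /TR "D:\processmining\P2P_data\datarun.bat" /SC DAILY /ST 06:00
```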
The output is uploaded to the SQL Server database for use in Process Mining Automation Suite.
The file <EXTRACTORDIR>\datarun.txt contains the logs of the last data run.
Configuring CData Sync
The output .tsv files are loaded into the Microsoft SQL Server database using CData Sync. This section describes how to use CData Sync to upload the dataset generated by DataBridgeAgent to a process app in Process Mining Automation Suite.
In general, you should follow the steps as described in Loading data using CData Sync to set up data loading using CData Sync.
However, some specific settings are required when using DataBridgeAgent.
Create a Source Connection
From the list, select CSV as the source system to which you want to create a connection, and make sure to define the settings as described in Loading data using CData Sync - Create a Source Connection.
Creating a Job
On the Tables tab, add a custom query that mirrors the output of the connector you are using.
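As a sketch only, and assuming CData Sync's REPLICATE-style custom queries together with hypothetical output table names Event_log and Cases_base, the custom query could consist of one statement per output table. The actual table names depend on the connector you use, and the supported query syntax depends on your CData Sync version; consult the CData Sync documentation if in doubt.

```sql
-- One REPLICATE statement per DataBridgeAgent output table (table names are examples).
REPLICATE [Event_log];
REPLICATE [Cases_base];
```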