General AI Center troubleshooting and FAQs
When uploading dataset files, the following error message can occur:
Failed to upload item(s), it may be due to a slow or lost internet connection
To fix this issue, take the following steps:
1. Open the browser console and get the DNS name of the objectstore URL. It is of the form objectstore.xxx.xx.
2. Make sure that the objectstore DNS name is resolvable, either by adding it to your hosts file or by contacting your network administrator.
3. Once the DNS name resolves, if the certificate is not trusted, trust the certificate in your browser before uploading the item again.
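As a quick check on a Linux or macOS client, you can verify resolution and, if needed, add a temporary hosts entry. This is only a sketch: the host name and IP address below are placeholders for your environment, and on Windows the hosts file is C:\Windows\System32\drivers\etc\hosts instead of /etc/hosts.

```sh
# Check whether the objectstore host name taken from the browser console resolves.
nslookup objectstore.example.com

# If it does not resolve, add a temporary hosts entry pointing at the IP that
# serves the objectstore endpoint (or ask your network administrator for a DNS record).
echo "10.0.0.5 objectstore.example.com" | sudo tee -a /etc/hosts
```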
When trying to view or run pipelines, an error can occur on the Pipelines pages even though the permissions to run pipelines are in place.
Issue: Service deployment can get stuck because of the DATABASECHANGELOGLOCK lock not being released by one service
On rare occasions, if you restart the machine twice in a row, service deployment can get stuck because the DATABASECHANGELOGLOCK lock is not released by one of the services. In this case, you will see the AI Center pods restarting continuously.
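DATABASECHANGELOGLOCK is the standard Liquibase lock table, so one way to recover is to clear the stale lock row directly in the AI Center database. This is only a sketch, not the official procedure: it assumes a SQL Server database, and the server, database, and credential values are placeholders; check with your database administrator before running it.

```sh
# Inspect the Liquibase lock row first; a stale lock shows LOCKED = 1 with an old LOCKGRANTED time.
sqlcmd -S <sql-server> -d <aicenter-database> -U <user> -P '<password>' \
  -Q "SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM dbo.DATABASECHANGELOGLOCK;"

# Release the stale lock so the blocked service can finish its schema migration.
sqlcmd -S <sql-server> -d <aicenter-database> -U <user> -P '<password>' \
  -Q "UPDATE dbo.DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL WHERE ID = 1;"
```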
The import/export script fails with the following error message:
cookfile_new.txt: Permission denied
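cookfile_new.txt appears to be a temporary cookie file written by the script, so this error usually points at a file or directory the current user cannot write to. A minimal sketch of things to check; the exact cause varies by environment, and the commands assume you run the script from its own directory.

```sh
# Check whether a stale cookie file exists and who owns it, and whether the
# working directory itself is writable by the current user.
ls -l cookfile_new.txt
ls -ld .

# If a stale copy is owned by another user (for example root), remove it or
# change its owner, then rerun the script from a directory you can write to.
sudo rm -f cookfile_new.txt
./export.sh
```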
When importing or exporting ML Packages using the scripts, the following error message can occur:
./export.sh: line 2: $'\r': command not found
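This message means the shell is tripping over carriage-return characters, which typically happens when the script was edited or transferred with Windows-style (CRLF) line endings. A small sketch of converting the script back to Unix (LF) line endings; dos2unix may not be installed on every machine, in which case the sed command does the same thing.

```sh
# Convert the script to LF line endings, then rerun it.
dos2unix export.sh

# Equivalent without dos2unix: strip the trailing carriage return from every line.
sed -i 's/\r$//' export.sh

./export.sh
```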
Signed URLs for public datasets can fail when running a UiPath Studio automation and uploading validation data for training using a public dataset.
The update-mlskills-cm cronjob is missing in AI Center versions 2021.10.1 and 2021.10.2. To fix this issue, create the cronjob manually using the YAML file below.
```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: update-mlskill-cm
  namespace: uipath
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        metadata:
          annotations:
            sidecar.istio.io/inject: "false"
        spec:
          containers:
            - args:
                - -XPOST
                - ai-deployer-svc.uipath.svc.cluster.local/ai-deployer/v1/system/mlskills:update-cm
              image: registry.uipath.com/aicenter/alpine-curl:7.78.0
              imagePullPolicy: IfNotPresent
              name: update-mlskill-cm
              securityContext:
                allowPrivilegeEscalation: false
                capabilities:
                  drop:
                    - NET_RAW
                privileged: false
                readOnlyRootFilesystem: true
                runAsNonRoot: true
          dnsPolicy: ClusterFirst
          imagePullSecrets:
            - name: regcred
          restartPolicy: OnFailure
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
      ttlSecondsAfterFinished: 120
  schedule: 0 */2 * * *
  startingDeadlineSeconds: 200
  successfulJobsHistoryLimit: 1
  suspend: false
```
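After saving the manifest, you can create the cronjob with kubectl. A short sketch; the file name is arbitrary, and the namespace comes from the manifest itself.

```sh
# Save the manifest above as update-mlskill-cm.yaml, then create the cronjob.
kubectl apply -f update-mlskill-cm.yaml

# Confirm that the cronjob exists in the uipath namespace.
kubectl -n uipath get cronjob update-mlskill-cm
```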
Disabling streaming logs
Versions up to 2021.10.4
To disable streaming logs, set the LOGS_STREAMING_ENABLED environment variable to false.
Versions from 2021.10.5
You can also add a logsStreamingEnabled global variable with the value set to false using ArgoCD, under the aicenter app details. Make sure to sync ArgoCD after the change is done.
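For the environment-variable approach, one way to set it is with kubectl. This is only a sketch: the deployment that reads LOGS_STREAMING_ENABLED differs by installation, so the deployment name below is a placeholder you need to replace.

```sh
# List the AI Center deployments and pick the one that should stop streaming logs.
kubectl -n uipath get deployments

# <ai-center-deployment> is a placeholder; setting the variable triggers a rollout.
kubectl -n uipath set env deployment/<ai-center-deployment> LOGS_STREAMING_ENABLED=false
```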