
4. Run the AI Fabric Application Installer

  1. The Cloud Shell console logs contain the URL for accessing the KOTS Admin Console, so log in to the console via that URL. If you opted not to expose KOTS via a Load Balancer, set the kube context as shown in the section “How to set Kubernetes Cluster Context” (see the sketch after this list), then run the command below.
    kubectl -n aifabric port-forward service/kotsadm 8800:3000
  2. The KOTS Admin Console is now accessible at: http://localhost:8800


  3. If your KOTS Admin password isn’t working, or you want to reset it:
    Option 1: 
    if you have a Linux machine or WSL enabled on Windows, then
    Install the KOTS CLI: curl https://kots.io/install | bash
    Reset the password:   kubectl kots reset-password -n aifabric 
                  or
    log in to Cloud Shell 
    cd aks-arm
    ./kots reset-password -n aifabric
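For reference, setting the cluster context for an AKS cluster (as mentioned in step 1) typically looks like the minimal sketch below. The <resource-group> and <cluster-name> values are placeholders; the authoritative steps remain in “How to set Kubernetes Cluster Context”.

    az aks get-credentials --resource-group <resource-group> --name <cluster-name>   # placeholders: substitute your own values
    kubectl config current-context                                                    # confirm the AI Fabric cluster is the active context
    kubectl -n aifabric get pods                                                      # quick sanity check before port-forwarding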

Uploading license

Upload the license provided to you by your UiPath representative via the UI, as shown below.



Enter Configs

You must enter the details per the screen below.



To obtain the Identity Server access token:

The Identity Server access token can be found by logging in as an admin user with the “host“ tenant (as opposed to the “default“ tenant) at the following address: “https://<IdentityServerEndpoint>/identity/configuration”. This address should already be known to the customer, because it is where external identity providers such as Azure AD, Windows, or Google are configured.

Copy the token and paste it into the Replicated installation console.

Enable HA for Core Service

Enabling this ensures that two replicas of the AI Fabric core services are always running, with horizontal pod autoscaling enabled; that is, the number of core-service pods is automatically scaled up or down based on workload. If HA is not enabled, only one replica of each core service runs, but horizontal pod autoscaling remains enabled, so pods are still scaled out for short durations when the need arises.
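A quick way to confirm what HA mode produced after installation is to look at the replica counts and autoscalers in the aifabric namespace. This is a sketch rather than the definitive check; the exact deployment names vary by release.

kubectl -n aifabric get deployments   # replica counts per core service
kubectl -n aifabric get hpa           # horizontal pod autoscalers with current/desired replica counts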

Enable HA for CPU Based ML Skills

Enabling this ensures that two replicas are deployed for every CPU-based ML Skill, and these replicas are spread across nodes in multiple zones. If HA is not enabled, only one replica is deployed.
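To verify that a skill’s two replicas actually land in different zones, a sketch along these lines can help. It assumes the skill pods are visible in the aifabric namespace; adjust the namespace to your installation, and note that older clusters label zones with failure-domain.beta.kubernetes.io/zone instead of topology.kubernetes.io/zone.

kubectl -n aifabric get pods -o wide                            # shows which node each pod is scheduled on
kubectl get nodes --label-columns topology.kubernetes.io/zone   # shows the availability zone of each node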

Enable HA for GPU Based ML Skills

Enabling this ensures that two replicas are deployed for every GPU-based ML Skill, and these replicas are spread across nodes in multiple zones. If HA is not enabled, only one replica is deployed. Since GPU machines are quite expensive on Azure, this option is provided so that two nodes are not required to deploy an ML Skill, as GPUs can be shared across deployments.
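To see how much GPU a node exposes and how much of it is currently requested, something like the following can be used; the node name is a placeholder, and this is only a sketch for inspecting the cluster, not an official check.

kubectl describe nodes | grep -E "Name:|nvidia.com/gpu"                     # which nodes advertise the nvidia.com/gpu resource
kubectl describe node <gpu-node-name> | grep -A 8 "Allocated resources"     # requests vs. capacity on a specific GPU node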



Configure Max CPU & Memory a GPU based job can consume

Because Standard_NC6 is the smallest VM size available on Azure with a GPU, the max CPU defaults to 5000 (5 CPUs) and the max memory to 50 GB. If the customer is using Standard_NC6s_v2 or another VM size instead of Standard_NC6 for the GPU node pool, the customer can override these defaults, i.e. the max CPU and memory a GPU-based training job can consume.
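To confirm what limits an individual GPU training-job pod actually received, you can inspect its resource spec. The pod name below is a placeholder, and the namespace is an assumption based on the rest of this guide.

kubectl -n aifabric get pod <training-job-pod> -o jsonpath='{.spec.containers[0].resources}'   # prints the requests/limits applied to the job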



On saving the config, KOTS starts validating the inputs, and if all pre-flight checks pass, KOTS triggers the deployment in exactly the same way as a one-box installation.

If all preflight checks pass, the config screen should look something like this. Click Continue to proceed.



Start Deployment

To initiate the installation, click the Deploy button. Once the status is Deployed, the installation has started, and the setup admin can go to the Application tab to check the current status.



Check Deployment Status in Shell

The provisioning job status can be tracked by querying the provisioning pod locally. Ensure you have set up the Kubernetes Cluster Context as described earlier:

kubectl -n aifabric get pods | grep provision
provision-4xls7mzjpnui8j7n-s9tct            0/1     Completed          0          14h
To check the logs of this pod:
kubectl -n aifabric logs -f provision-4xls7mzjpnui8j7n-s9tct
If the AI Fabric deployment is successful, this is what you will see at the end of the pod logs:
Successfully setup cronjob for oob installation run on daily basis.
< Total steps:  Current step: 8  Estimated time: 2s >
AiFabric in Azure AKS has been provisioned successfully
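If you prefer to wait for provisioning non-interactively, or want to confirm that the daily OOB installation cron job mentioned in the log line above was created, a sketch like the following can be used. The job name is an assumption derived from the pod name shown above; adjust it to the name in your cluster.

kubectl -n aifabric wait --for=condition=complete job/provision-4xls7mzjpnui8j7n --timeout=60m   # blocks until the provisioning job finishes
kubectl -n aifabric get cronjobs                                                                  # lists the daily OOB installation cron job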

Troubleshooting

Please see here
