automation-suite
2023.10
Automation Suite on Linux Installation Guide
Last updated Nov 4, 2024

How to manually clean up logs

Cleaning up Ceph logs

Moving Ceph out of read-only mode

If you installed AI Center and use Ceph storage, take the following steps to move Ceph out of read-only mode:

  1. Check if Ceph is at full capacity:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
    If Ceph is at full capacity, you must adjust the read-only threshold to start up the rgw gateways.
  2. Scale down the ML skills (to scale them all down at once, see the sketch after this list):
    kubectl -n uipath scale deployment <skill> --replicas=0
  3. Put the cluster in write mode:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd set-full-ratio 0.96
    # 0.95 is the default value, so increase to 0.96 first and raise it incrementally if needed
  4. Run Garbage Collection:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin gc process --include-all
  5. Verify that storage usage has gone down by running the following commands:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df

    At this point, the storage should be lower, and the cluster should be healthy.
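If you have many deployed skills, you can scale them all down in one pass instead of repeating step 2 for each one. The following is a minimal sketch; the grep -i "skill" filter is an assumption about how the skill deployments are named in your cluster, so review the matched list before scaling anything down.

    # Minimal sketch: scale down every ML skill deployment in the uipath namespace.
    # The grep pattern is an assumption -- adjust it to match your skill deployment names.
    for skill in $(kubectl -n uipath get deployments -o name | grep -i "skill"); do
      echo "Scaling down ${skill}"
      kubectl -n uipath scale "${skill}" --replicas=0
    done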

Disabling streaming logs

To ensure everything is in a good state, disable streaming logs by taking the following steps.

  1. Disable auto-sync on UiPath and AI Center.
  2. Disable streaming logs for AI Center.
  3. If you have ML skills that have already been deployed, run the following command for each one (to loop over all of them, see the sketch after this list):
    kubectl set env deployment [REPLICASET_NAME] LOGS_STREAMING_ENABLED=false
  4. Find out which buckets use the most space:
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- radosgw-admin bucket stats | jq -r '["BucketName","NoOfObjects","SizeInKB"], ["--------------------","------","------"], (.[] | [.bucket, .usage."rgw.main"."num_objects", .usage."rgw.main".size_kb_actual]) | @tsv' | column -ts $'\t'
  5. Install s3cmd to prepare for cleaning up the sf-logs:
    pip3 install awscli s3cmd
    export PATH=/usr/local/bin:$PATH
  6. Clean up the logs stored in the sf-logs bucket. For details, see How to clean up old logs stored in the sf-logs bundle.
  7. Complete the cleanup operation:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- radosgw-admin gc process --include-all
  8. If the previous steps do not solve the issue, clean up the AI Center data.
  9. Check if the storage was reduced:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph df
  10. Once storage is no longer full, set the full ratio back to its default value:
    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd set-full-ratio 0.95
  11. Check if the ML skills are affected by the multipart upload issue:
    echo $(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- radosgw-admin bucket list --max-entries 10000000 --bucket train-data | jq '[.[] | select (.name | contains("_multipart")) | .meta.size] | add') | numfmt --to=iec-i

    If they are affected by this issue and the returned value is high, you may need to perform a backup and restore.
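As referenced in step 3, you can disable log streaming on all deployed skills in one loop rather than per deployment. This is a minimal sketch; the grep -i "skill" filter is again an assumption about deployment naming, so verify the matches before applying the change.

    # Minimal sketch: disable log streaming on every skill deployment in the uipath namespace.
    # The grep pattern is an assumption -- adjust it to match your skill deployment names.
    for deploy in $(kubectl -n uipath get deployments -o name | grep -i "skill"); do
      kubectl -n uipath set env "${deploy}" LOGS_STREAMING_ENABLED=false
    done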

Cleaning up s3 logs

If you use an s3-compatible storage provider, take the following steps to clean up your logs:

  1. Get the key for storage access.
  2. Find the large items:
    export PATH=/usr/local/bin:$PATH
    kubectl -n rook-ceph get secrets ceph-object-store-secret -o yaml

    # Then base64-decode OBJECT_STORAGE_ACCESSKEY, OBJECT_STORAGE_SECRETKEY, and
    # OBJECT_STORAGE_EXTERNAL_HOST (see the decoding sketch after this list)
    aws configure
    aws s3 ls --endpoint-url <AWS-ENDPOINT> --no-verify-ssl --recursive s3://sf-logs
    aws s3 ls --endpoint-url <AWS-ENDPOINT> --no-verify-ssl --recursive s3://sf-logs --human-readable --summarize
  3. Delete the sf-logs. For more details, see the AWS documentation.
    aws s3 rm --endpoint-url <AWS-ENDPOINT> --no-verify-ssl --recursive s3://sf-logs --include="2022*" --exclude="2022_12_8"

    # You can craft an include and exclude filter to narrow the deletion. Use --dryrun first.
  4. Delete the train-data.
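As referenced in step 2, the following is a minimal sketch for decoding the storage credentials directly from the secret instead of copying values out of the YAML by hand. The field names under .data are taken from the key names mentioned in step 2; verify them against your secret's actual contents before relying on the output.

    # Minimal sketch: decode the object storage credentials from the secret.
    # The .data field names follow step 2 -- confirm they exist in your cluster.
    export OBJECT_STORAGE_ACCESSKEY=$(kubectl -n rook-ceph get secret ceph-object-store-secret -o jsonpath='{.data.OBJECT_STORAGE_ACCESSKEY}' | base64 -d)
    export OBJECT_STORAGE_SECRETKEY=$(kubectl -n rook-ceph get secret ceph-object-store-secret -o jsonpath='{.data.OBJECT_STORAGE_SECRETKEY}' | base64 -d)
    export OBJECT_STORAGE_EXTERNAL_HOST=$(kubectl -n rook-ceph get secret ceph-object-store-secret -o jsonpath='{.data.OBJECT_STORAGE_EXTERNAL_HOST}' | base64 -d)
    echo "Endpoint: ${OBJECT_STORAGE_EXTERNAL_HOST}"
    # Use the decoded values with 'aws configure' and as <AWS-ENDPOINT> in the commands above.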
