Logging, Monitoring, and Debugging

Overview

There are a variety of ways in which you can log and debug the execution of platform application services, tools, and APIs.

Note
To learn how to use the platform's default monitoring service and pre-deployed Grafana dashboards to monitor application services, see Monitoring Platform Services.
Note
If you are integrating the platform with other logging tools (for example, Datadog), contact Iguazio Support.

For further troubleshooting assistance, visit Iguazio Support.

Logging Application Services

The platform has a default tenant-wide log-forwarder application service (log-forwarder) for forwarding application-service logs. The logs are forwarded to an instance of the Elasticsearch open-source search and analytics engine by using the open-source Filebeat log-shipper utility. The log-forwarder service is disabled by default. To enable it, edit the log-forwarder service on the Services dashboard page; in the Custom Parameters tab, set the Elasticsearch URL parameter to the Elasticsearch instance that should store and index the logs; then save and apply your changes to deploy the service.
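
After you save and apply the change, you can check from a web-shell or Jupyter terminal that the forwarder pods were deployed. The grep pattern below assumes the pod names contain "forwarder"; the exact names can vary between platform releases:

$ kubectl -n default-tenant get pods | grep -i forwarder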

Typically, the log-forwarder service should be configured to work with your own remote off-cluster instance of Elasticsearch.
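
Once logs start flowing, new Filebeat indices should appear in the target Elasticsearch instance. As a quick sanity check, you can list the indices with the Elasticsearch _cat API; the URL below is a placeholder, and the index-name prefix depends on your Filebeat configuration:

$ curl -s "https://elasticsearch.example.com:9200/_cat/indices?v" | grep filebeat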

Note
  • The default transfer protocol, which is used when the URL doesn't begin with "http://" or "https://", is HTTPS.
  • The default port, which is used when the URL doesn't end with ":<port number>", is port 80 for HTTP and port 443 for HTTPS.
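
For example (the hostname below is a placeholder), the following Elasticsearch URL values resolve as follows:

  elasticsearch.example.com               ->  https://elasticsearch.example.com:443
  http://elasticsearch.example.com        ->  http://elasticsearch.example.com:80
  https://elasticsearch.example.com:9200  ->  https://elasticsearch.example.com:9200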

Checking Service Status

On the Services page, users with both the Service Admin and the Application Read Only view policies can check the status of the pods.

  • Press Inspect to see the status. You can also download a txt file from the popup.

Kubernetes Tools

You can use the Kubernetes kubectl CLI from a platform web-shell or Jupyter Notebook application service to monitor, log, and debug your Kubernetes application cluster:

  • Use the get pods command to display information about the cluster's pods:
    kubectl get pods
    
  • Use the logs command to view the logs for a specific pod; replace POD with the name of one of the pods returned by the get command:
    kubectl logs POD
    
  • Use the top pod command to view pod resource metrics and monitor resource consumption; replace [POD] with the name of one of the pods returned by the get command, or omit it to display metrics for all pods:
    kubectl top pod [POD]
    
Note

To run kubectl commands from a web-shell service, the service must be configured with an appropriate service account; for more information about the web-shell service accounts, see The Web-Shell Service.

  • The get pods and logs commands require the "Log Reader" service account or higher.
  • The top pod command requires the "Service Admin" service account.

For more information about the kubectl CLI, see the Kubernetes documentation.
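
For example, assuming a pod named jupyter-edmond-847d5bb947-25fjz (a placeholder taken from the sample output below; use a name returned by get pods), the following commands stream the pod's logs and break down its resource usage by container; -f (follow) and --containers are standard kubectl flags:

    kubectl logs -f jupyter-edmond-847d5bb947-25fjz
    kubectl top pod jupyter-edmond-847d5bb947-25fjz --containers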

IGZTOP - Performance Reporting Tool

igztop is a small tool that displays useful information about pods in the default-tenant namespace.

Running igztop

Usage:
  igztop pods [--cpu] [--filter=<KEY>=<VALUE>] [--label=<KEY>=<VALUE>] [--columns=<KEY>] [--no-borders] [--no-pager]
  igztop nodes [--no-pager]
  igztop update
  igztop (-h | --help)
  igztop --version

Options:
  -h --help
  -v --version
  -c --cpu                             Sort the table by CPU usage, rather than by memory usage (default).
  -f --filter=<KEY>=<VALUE>            A filtering key-value pair, based on column names, e.g. 'node=k8s-node1', 'name=presto', 'owner=admin'.
  -l --label=<KEY>=<VALUE>             Filter pods by label, e.g. '-l app=v3iod'
  -o --columns=<KEY>                   Show additional columns. Can be one or combination of "projects","gpu","resources", e.g. '--columns projects,gpu'. Partial names are supported, e.g. '-o proj'
  --no-pager                           Print the output table to the terminal without paging

Examples

The default output includes the name, memory usage, CPU usage, and node name for each running pod, sorted by memory usage.
Sorting by CPU usage is supported by passing the --cpu or -c flag.
Pods that aren't currently using resources do not appear in the table.
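
For example, to sort the table by CPU usage instead:

$ igztop pods --cpu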

Information about Pods

$ igztop pods
                                                         Kubernetes Pods
┃ Name                                                            ┃ CPU  ┃ Memory   ┃ Node                                       ┃
│ jupyter-edmond-847d5bb947-25fjz                                 │ 9m   │ 2937Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │
│ v3iod-6mpwh                                                     │ 4m   │ 2802Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │
│ v3iod-9cgnp                                                     │ 4m   │ 2800Mi  │ ip-172-31-0-193.us-east-2.compute.internal │
│ jupyter-amit-59bf47fd7b-vb45f                                   │ 10m  │ 2255Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │
│ jupyter-salesh-75b5b7db68-qttqp                                 │ 9m   │ 2237Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │
│ v3io-webapi-96w5x                                               │ 7m   │ 1867Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │
│ v3io-webapi-2t9lm                                               │ 6m   │ 1864Mi  │ ip-172-31-0-193.us-east-2.compute.internal │
│ docker-registry-778548878-q7tzb                                 │ 1m   │ 1738Mi  │ ip-172-31-0-193.us-east-2.compute.internal │
│ spark-master-d5d47bbb-s7jqp                                     │ 7m   │ 940Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ spark-worker-86cc4d5d9c-4lkdl                                   │ 8m   │ 867Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ jupyter-shapira-57db64967c-vkrm4                                │ 8m   │ 718Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ jupyter-nirs-5589c8d984-h7tzl                                   │ 6m   │ 630Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mlrun-db-67f46884dd-c298c                                       │ 12m  │ 621Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mlrun-api-chief-6f64c75447-xz58h                                │ 51m  │ 585Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-models-shapira-model-monitoring-stream-5fd48d8696-cjl85  │ 3m   │ 508Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mysql-kf-699c4c75bb-rpxpg                                       │ 3m   │ 468Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mlrun-api-worker-5f9c87bc94-qwkfb                               │ 6m   │ 430Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-streaming-test1-shapira-extract-688cc5b858-vbk6q         │ 3m   │ 308Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-fraud-demo-edmond-transactions-ingest-f8854c48f-f9hjf    │ 2m   │ 302Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-streaming-test-shapira-extract-8ff9ddc8c-2gq9z           │ 3m   │ 301Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ jupyter-3720-7c9c959759-2vwsl                                   │ 6m   │ 276Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-models-shapira-serving-model-6b56c8f87c-v486c            │ 1m   │ 212Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-models-shapira-serving-function-65cdc595b4-pwhc8         │ 1m   │ 202Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-serving-steps-shapira-test-steps-7d9b86d5df-cpdpj        │ 1m   │ 161Mi   │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-serving-steps-shapira-test-steps-archive-59cbff6666w8988 │ 1m   │ 159Mi   │ ip-172-31-0-55.us-east-2.compute.internal  │
│ jupyter-itay-68c5b5b87f-wkkwf                                   │ 9m   │ 99Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-dashboard-54df54887c-5vsww                               │ 2m   │ 83Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ ml-pipeline-visualizationserver-685c68cdbd-mj2cw                │ 4m   │ 83Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ monitoring-prometheus-server-d6588dd47-cvfhq                    │ 3m   │ 69Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ provazio-controller-6b8456dff9-v7zmn                            │ 7m   │ 64Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ grafana-75d598cc96-txrw7                                        │ 2m   │ 58Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ metadata-writer-f59d94448-444g4                                 │ 1m   │ 56Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ ml-pipeline-ui-56b9997fc7-nr9vc                                 │ 4m   │ 41Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-streaming-test1-shapira-nuclio-df-69446f7b88-n6kld       │ 1m   │ 31Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-test-func-d894d4b84-tfd5h                                │ 1m   │ 31Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-test-599845c8ff-lp6bd                                    │ 1m   │ 31Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-streaming-test-shapira-nuclio-df-6568fbfcb4-kn54c        │ 1m   │ 30Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ framesd-6c97f79585-9zhr2                                        │ 1m   │ 29Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-controller-6c9c966b56-6hktm                              │ 1m   │ 25Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ ml-pipeline-858b55c5b-g95cl                                     │ 4m   │ 21Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ workflow-controller-564ff94cd4-g4d6w                            │ 1m   │ 20Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ nuclio-scaler-b67469c78-4m2df                                   │ 1m   │ 17Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ metrics-server-exporter-77fb887958-mxs4b                        │ 24m  │ 14Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mpi-operator-7f68c8556f-bjnsn                                   │ 3m   │ 13Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ ml-pipeline-viewer-crd-7f886bbf5f-n6vtv                         │ 1m   │ 13Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ nuclio-dlx-6c8c74d497-mh4gh                                     │ 1m   │ 13Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ mlrun-ui-6768f98785-nb6v8                                       │ 0m   │ 13Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ metadata-envoy-deployment-6c975596-7qwks                        │ 4m   │ 12Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ authenticator-74cf9cd5f9-hbln6                                  │ 1m   │ 12Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ ml-pipeline-persistenceagent-65fd97c56b-g7c8k                   │ 1m   │ 11Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ ml-pipeline-scheduledworkflow-7cf8c6cd4c-4q66r                  │ 1m   │ 11Mi    │ ip-172-31-0-193.us-east-2.compute.internal │
│ spark-operator-6dbc5d9566-4m679                                 │ 1m   │ 10Mi    │ ip-172-31-0-55.us-east-2.compute.internal  │
│ keycloak-oauth2-proxy-redis-master-0                            │ 16m  │ 9Mi     │ ip-172-31-0-193.us-east-2.compute.internal │
│ v3iod-locator-65c5c44957-6nx6v                                  │ 0m   │ 8Mi     │ ip-172-31-0-193.us-east-2.compute.internal │
│ keycloak-oauth2-proxy-54cb56f465-qp5rw                          │ 1m   │ 6Mi     │ ip-172-31-0-193.us-east-2.compute.internal │
│ oauth2-proxy-7b68c8d99d-5fv6v                                   │ 1m   │ 4Mi     │ ip-172-31-0-193.us-east-2.compute.internal │
│ metadata-grpc-deployment-68b6995c89-nq6ns                       │ 1m   │ 3Mi     │ ip-172-31-0-55.us-east-2.compute.internal  │
│ Sum                                                             │ 272m │ 27128Mi │                                            │
└─────────────────────────────────────────────────────────────────┴──────┴─────────┴────────────────────────────────────────────┘

Results can be filtered to match substrings of any column:

$ igztop -f name=jupy
                                        Kubernetes Pods
┃ Name                             ┃ CPU ┃ Memory ┃ Node                                      ┃
│ jupyter-edmond-847d5bb947-25fjz  │ 9m  │ 2935Mi │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-amit-59bf47fd7b-vb45f    │ 9m  │ 2256Mi │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-salesh-75b5b7db68-qttqp  │ 7m  │ 2232Mi │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-shapira-57db64967c-vkrm4 │ 8m  │ 721Mi  │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-nirs-5589c8d984-h7tzl    │ 6m  │ 629Mi  │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-3720-7c9c959759-2vwsl    │ 7m  │ 277Mi  │ ip-172-31-0-55.us-east-2.compute.internal │
│ jupyter-itay-68c5b5b87f-wkkwf    │ 51m │ 134Mi  │ ip-172-31-0-55.us-east-2.compute.internal │
│ Sum                              │ 97m │ 9184Mi │                                           │

The --columns (-o) option can be used to display additional columns. Available values are projects, gpu, and resources (partial strings are supported). For example, to display the pods that are using GPUs:

$ igztop -o gpu
                                        Kubernetes Pods
┃ Name                             ┃ CPU ┃ Memory ┃ Node                                      ┃ GPU ┃ GPU % ┃
│ jupyter-58c5bf598f-z86qm         │ 10m │ 2936Mi │ ip-172-31-0-86.us-east-2.compute.internal │ 1/1 │ 66%   │
│ Sum                              │ 10m │ 2936Mi │                                           │     │       │

Column options can also be combined with other options. The example below expands the table with information about function and job pods that belong to MLRun projects, and then filters the list by a specific project:

$ igztop -o proj -f project=models-shapira

┃ Name                                                           ┃ CPU ┃ Memory ┃ Node                                       ┃ Project        ┃ Owner   ┃ MLRun Job ┃ MLRun Function ┃ MLRun Job Type ┃ Nuclio Function                        ┃
│ nuclio-models-shapira-model-monitoring-stream-5fd48d8696-cjl85 │ 3m  │ 508Mi  │ ip-172-31-0-55.us-east-2.compute.internal  │ models-shapira │ shapira │           │                │ serving        │ models-shapira-model-monitoring-stream │
│ nuclio-models-shapira-serving-model-6b56c8f87c-v486c           │ 1m  │ 212Mi  │ ip-172-31-0-193.us-east-2.compute.internal │ models-shapira │ shapira │           │                │ serving        │ models-shapira-serving-model           │
│ nuclio-models-shapira-serving-function-65cdc595b4-pwhc8        │ 1m  │ 202Mi  │ ip-172-31-0-193.us-east-2.compute.internal │ models-shapira │ shapira │           │                │ remote         │ models-shapira-serving-function        │
│ Sum                                                            │ 5m  │ 922Mi  │                                            │                │         │           │                │                │                                        │

Information about Nodes

$ igztop nodes
                                                  Kubernetes Nodes
┃ Name                                       ┃ Status ┃ IP Address   ┃ Node Group ┃ Instance Type ┃ CPU   ┃ Memory ┃
│ ip-172-31-0-193.us-east-2.compute.internal │ Ready  │ 172.31.0.193 │ initial    │ m5.4xlarge    │ 2.63% │ 22.97% │
│ ip-172-31-0-55.us-east-2.compute.internal  │ Ready  │ 172.31.0.55  │ initial    │ m5.4xlarge    │ 4.10% │ 42.70% │

Event Logs

The Events page of the dashboard displays different platform event logs:

  • The Event Log tab displays system event logs.
  • The Alerts tab displays system alerts.
  • The Audit tab displays a subset of the system events for audit purposes — security events (such as a failed login) and user actions (such as creation and deletion of a container).

The Events page is visible to users with the IT Admin management policy — who can view all event logs — or to users with the Security Admin management policy — who can view only the Audit tab.

You can specify the email of a user with the IT Admin management policy to receive email notification of events. Press the Settings icon, then type the user name in Users to Notify and press Apply. Verify that a test email is received. If not, check its status in the Events > Event Log tab.

Events in the Event Log Tab

Event class | Event kind | Event description
System | System.Cluster.Offline | Cluster 'cluster_name' moved to offline mode
System | System.Cluster.Shutdown | Cluster 'cluster_name' shutdown
System | System.Cluster.Shutdown.Aborted | Cluster 'cluster_name' shutdown aborted
System | System.Cluster.Online | Cluster 'cluster_name' moved to online mode
System | System.Cluster.Maintenance | Cluster 'cluster_name' moved to maintenance mode
System | System.Cluster.OnlineMaintenance | Cluster 'cluster_name' moved to online maintenance mode
System | System.Cluster.Degraded | Cluster 'cluster_name' is running in degraded mode
System | System.Cluster.Failback | Cluster 'cluster_name' moved to failback mode
System | System.Cluster.DataAccessType.ReadOnly | Successfully changed cluster 'cluster_name' data access type to read only
System | System.Cluster.DataAccessType.ReadWrite | Successfully changed cluster 'cluster_name' data access type to read/write
System | System.Cluster.DataAccessType.ContainerSpecific | Successfully changed data access type of data containers
System | System.Node.Down | Node 'node_name' is down
System | System.Node.Offline | Node 'node_name' is offline
System | System.Node.Online | Node 'node_name' is online
System | System.Node.Initialization | Node 'node_name' is in initialization state
Software | Software.ArtifactGathering.Job.Started | Artifact gathering job started on node 'node_name'
Software | Software.ArtifactGathering.Job.Succeeded | Artifact gathering completed successfully on node 'node_name'
Software | Software.ArtifactGathering.Job.Failed | Artifact gathering failed on node 'node_name'
Software | Software.ArtifactBundle.Upload.Succeeded | System logs were uploaded to 'upload_paths' successfully
Software | Software.ArtifactBundle.Upload.Failed | Logs collection could not be uploaded to 'upload_paths'
Software | Software.IDP.Synchronization.Started | IDP synchronization with 'IDP server' has been started.
Software | Software.IDP.Synchronization.Completed | IDP synchronization with 'IDP server' has been completed.
Software | Software.IDP.Synchronization.Periodic.Failed | IDP synchronization with 'IDP server' failed to complete periodic update.
Software | Software.IDP.Synchronization.Failed | IDP synchronization with 'IDP server' failed
Hardware | Hardware.UPS.NoAcPower | UPS 'upsId' connected to Node 'nodeName' lost AC power
Hardware | Hardware.UPS.LowBattery | UPS 'upsId' connected to Node 'nodeName' battery is low
Hardware | Hardware.UPS.PermanentFailure | UPS 'upsId' connected to Node 'nodeName' in failed state
Hardware | Hardware.UPS.AcPowerRestored | UPS 'upsId' connected to Node 'nodeName' AC power restored
Hardware | Hardware.UPS.Reachable | UPS 'upsId' connected to Node 'nodeName' is reachable
Hardware | Hardware.UPS.Unreachable | UPS 'upsId' connected to Node 'nodeName' is unreachable
Hardware | Hardware.Network.Interface.Up | Network interface to 'interfaceName' on node 'nodeName' - link regained
Hardware | Hardware.Network.Interface.Down | Network interface to 'interfaceName' on node 'nodeName' - link disconnected
Hardware | Hardware.temperature.high | Drive on node 'nodeName' temperature is above normal. Temperature is 'temp'.
Capacity | Capacity.StoragePool.UsedSpace.High | Space on storage pool 'pool_name' has reached current% of the total pool size.
Capacity | Capacity.StoragePoolDevice.UsedSpace.High | Space on storage pool device 'storage_pool_device_name' on storage device 'storage_device_name' has reached current% of the total size.
Capacity | Capacity.Tenant.UsedSpace.High | Space on tenant has reached current% of the total size
Alert | Alert.Test.External | Test description
Software | Software.Cluster.Reconfiguration.Completed | Reconfiguration on cluster 'cluster_name' completed successfully
Software | Software.Cluster.Reconfiguration.Failed | Reconfiguration on cluster 'cluster_name' failure
Software | Software.Events.Reconfiguration.Completed | Reconfiguration on cluster 'cluster_name' completed successfully
Software | Software.Events.Reconfiguration.Failed | Reconfiguration on cluster 'cluster_name' failure
Software | Software.AppServices.Reconfiguration.Completed | Reconfiguration on cluster 'cluster_name' completed successfully
Software | Software.AppServices.Reconfiguration.Failed | Reconfiguration on cluster 'cluster_name' failure
Software | Software.ArtifactVersionManifest.Reconfiguration.Completed | Reconfiguration on cluster 'cluster_name' completed successfully
Software | Software.ArtifactVersionManifest.Reconfiguration.Failed | Reconfiguration on cluster 'cluster_name' failure
System | System.DataContainer.Normal | DataContainer 'data_container_id' is running in normal mode.
System | System.DataContainer.Degraded | DataContainer 'data_container_id' is running in degraded mode.
System | System.DataContainer.Mapping.GenerationFailed | Failed to generate container mapping for DataContainer 'data_container_id'
System | System.DataContainer.Mapping.DistributionFailed | Failed to distribute container mapping for DataContainer 'data_container_id'
System | System.DataContainer.Resync.Complete | Resync completed on container 'data_container_id'
System | System.DataContainer.DataAccessType.ReadOnly | Data container 'data_container_id' is running in read only mode
System | System.DataContainer.DataAccessType.ReadWrite | Data container 'data_container_id' is running in read/write mode
System | System.DataContainer.DataAccessType.Update.Failed | Failed to set data access type for data container 'data_container_id'
System | System.Failover.Completed | Failover completed successfully
System | System.Failover.Failed | Failover failed
Software | Software.Email.Sending.Failed | Sending email failed due to 'reason'
Capacity | Capacity.StoragePool.UsableCapacity.CalculationFailed | Failed to calculate usable capacity of storage pool
Hardware | Hardware.Disks.DiskFailed | Storage device 'device_name' on node 'node_name' has failed
System | System.AppCluster.Initialization.Succeeded | App cluster 'name' was initialized successfully
System | System.AppCluster.Initialization.Failed | Failed to initialize app cluster 'name'
System | System.AppCluster.Services.Deployment.Succeeded | Default app services manifest for tenant 'tenant_name' was deployed successfully
System | System.AppCluster.Services.Deployment.Failed | Failed to deploy default app services manifest for tenant 'tenant_name'
System | System.Tenancy.Tenant.Creation.Succeeded | Tenant 'tenant_name' was successfully created
System | System.Tenancy.Tenant.Creation.Failed | Failed to create tenant
System | System.Tenancy.Tenant.Deletion.Succeeded | Tenant 'tenant_name' was successfully deleted
System | System.Tenancy.Tenant.Deletion.Failed | Failed to delete tenant 'tenant_name'
System | System.AppCluster.Tenant.Creation.Succeeded | Tenant 'tenant_name' was successfully created on app cluster 'app_cluster'
System | System.AppCluster.Tenant.Creation.Failed | Failed to create tenant on app cluster
System | System.AppCluster.Tenant.Deletion.Succeeded | Tenant 'tenant_name' was successfully deleted from app cluster 'app_cluster'
System | System.AppCluster.Tenant.Deletion.Failed | Failed to delete tenant 'tenant_name' from app cluster
Capacity | Capacity.StorageDevice.OutOfSpace | Space on storage device under 'service_id' on node 'node_id' is depleted
System | System.AppCluster.Tenant.Update.Succeeded | App services for tenant 'tenant_name' were successfully updated
System | System.AppCluster.Tenant.Update.Failed | Failed to update app services for tenant 'tenant_name'
System | System.AppNode.Created | App node record 'name' was created successfully
System | System.AppNode.Online | App node 'name' is online
System | System.AppNode.Unstable | App node 'name' is unstable
System | System.AppNode.Down | App node 'name' is down
System | System.AppNode.Deleted | App node 'name' was successfully deleted
System | System.AppNode.Offline | App node 'name' is offline
System | System.AppNode.NotReady | App node 'name' is not ready
System | System.AppNode.Preemptible.NotReady | Preemptible app node 'name' is not ready
System | System.AppNode.ScalingUp | App node 'name' is scaling up
System | System.AppNode.ScalingDown | App node 'name' is scaling down
System | System.AppNode.OutOfDisk | App node 'name' is out of disk space
System | System.AppNode.MemoryPressure | App node 'name' is low on memory
System | System.AppNode.DiskPressure | App node 'name' is low on disk space
System | System.AppNode.PIDPressure | App node 'name' has too many processes
System | System.AppNode.NetworkUnavailable | App node 'name' has a network connectivity problem
System | System.AppCluster.Shutdown.Failed | App cluster shutdown failed
System | System.AppCluster.Online | App cluster 'name' is online
System | System.AppCluster.Unstable | App cluster 'name' is unstable
System | System.AppCluster.Down | App cluster 'name' is down
System | System.AppCluster.Offline | App cluster 'name' is offline
System | System.AppCluster.Degraded | App cluster 'name' is degraded
System | System.AppService.Online | App service 'name' is online
System | System.AppService.Offline | App service 'name' is down
System | System.CoreAppService.Online | App service 'name' is online (Core services: v3iod, webapi, framesd, nuclio, docker_registry, pipelines, mlrun)
System | System.CoreAppService.Offline | App service 'name' is down
Background Process | Task.Container.ImportS3.Started | S3 container 'container_id' import started.
Background Process | Task.Container.ImportS3.Failed | S3 container 'container_id' import failed.
Background Process | Task.Container.ImportS3.Completed | S3 container 'container_id' import completed successfully.
Security | Security.User.Login.Succeeded | User 'username' successfully logged into the system
Security | Security.User.Login.Failed | User 'username' failed logging into the system
Security | Security.Session.Verification.Failed | Failed to verify session for user 'username', session id 'session_id'

Events in the Audit Tab

Event class | Event kind | Event description
UserAction | UserAction.Container.Created | container 'container_id' created on cluster 'cluster_name'
UserAction | UserAction.Container.Deleted | container 'container_id' deleted on cluster 'cluster_name'
UserAction | UserAction.Container.Updated | container 'container_id' updated on cluster 'cluster_name'
UserAction | UserAction.Container.Creation.Failed | container 'container_id' on cluster 'cluster_name' could not be created
UserAction | UserAction.Container.Update.Failed | container 'container_id' on cluster 'cluster_name' could not be updated
UserAction | UserAction.Container.Deletion.Failed | container 'container_id' on cluster 'cluster_name' could not be deleted
UserAction | UserAction.User.Created | user 'username' created on cluster 'cluster_name'
UserAction | UserAction.User.Creation.Failed | user 'username' on cluster 'cluster_name' could not be created
UserAction | UserAction.UserGroup.Created | User group 'group' created on cluster 'cluster_name'
UserAction | UserAction.UserGroup.Deletion.Failed | User group 'group' on cluster 'cluster_name' could not be deleted
UserAction | UserAction.User.Deleted | user 'username' deleted on cluster 'cluster_name'
UserAction | UserAction.User.Deletion.Failed | user 'username' on cluster 'cluster_name' could not be deleted
UserAction | UserAction.User.Updated | user 'username' updated on cluster 'cluster_name'
UserAction | UserAction.User.Update.Failed | user 'username' on cluster 'cluster_name' could not be updated
UserAction | UserAction.UserGroup.Updated | User group 'group name' updated on cluster 'cluster_name'
UserAction | UserAction.UserGroup.Update.Failed | User group 'group name' on cluster 'cluster_name' could not be updated
UserAction | UserAction.UserGroup.Creation.Failed | User group 'group name' on cluster 'cluster_name' could not be created
UserAction | UserAction.UserGroup.Deleted | User group 'group name' deleted on cluster 'cluster_name'
UserAction | UserAction.DataAccessPolicy.Applied | Data access policy for container 'name' on cluster 'cluster' applied
UserAction | UserAction.Tenant.Creation.FailedPasswordEmail | Sending password creation email on tenant creation failed
UserAction | UserAction.User.Creation.FailedPasswordEmail | Sending password creation email on user creation failed
UserAction | UserAction.Services.Deployment.Succeeded | App services for tenant 'tenant_name' were deployed successfully
UserAction | UserAction.Services.Deployment.Failed | Failed to deploy app services for tenant 'tenant_name'
UserAction | UserAction.Project.Created | Project 'name' was created successfully
UserAction | UserAction.Project.Creation.Failed | Project 'name' creation failed
UserAction | UserAction.Project.Updated | Project 'name' updated successfully
UserAction | UserAction.Project.Update.Failed | Project 'name' update failed
UserAction | UserAction.Project.Deleted | Project 'name' deleted successfully
UserAction | UserAction.Project.Deletion.Failed | Project 'name' deletion failed
UserAction | UserAction.Project.Owner.Updated | Owner in project 'name' was changed from %s to %s
UserAction | UserAction.Project.User.Role.Updated | Role for user 'username' in project 'name' was updated from 'old_owner' to 'new_owner'
UserAction | UserAction.Project.UserGroup.Role.Updated | Role for user group 'group name' in project 'project_name' was updated from 'old_role' to 'new_role'
UserAction | UserAction.Project.User.Added | User 'username' was added to project 'name' as 'role_name'
UserAction | UserAction.Project.UserGroup.Added | User group 'group name' was added to project 'name' as 'role_name'
UserAction | UserAction.Project.User.Removed | User 'username' was removed from project 'name'
UserAction | UserAction.Project.UserGroup.Removed | User group 'group name' was removed from project 'name'
UserAction | UserAction.Network.Created | Network 'name' created on cluster 'cluster_name'
UserAction | UserAction.Network.Creation.Failed | Network 'name' on cluster 'cluster_name' could not be created
UserAction | UserAction.Network.Updated | Network 'name' updated on cluster 'cluster_name'
UserAction | UserAction.Network.Update.Failed | Network 'name' on cluster 'cluster_name' could not be updated
UserAction | UserAction.Network.Deleted | Network 'name' deleted on cluster 'cluster_name'
UserAction | UserAction.Network.Deletion.Failed | Network 'name' on cluster 'cluster_name' could not be deleted
UserAction | UserAction.StoragePool.Created | storage pool 'name' created on cluster 'cluster_name'
UserAction | UserAction.StoragePool.Creation.Failed | storage pool 'name' on cluster 'cluster_name' could not be created
UserAction | UserAction.Cluster.Updated | Cluster 'cluster_name' updated
UserAction | UserAction.Cluster.Update.Failed | Cluster 'cluster_name' could not be updated
UserAction | UserAction.Cluster.Deleted | Cluster 'cluster_name' deleted
UserAction | UserAction.Cluster.Deletion.Failed | cluster 'cluster_name' could not be deleted
UserAction | UserAction.Cluster.Shutdown | Cluster 'cluster_name' is down per user request 'username'

Cluster Support Logs

Users with the IT Admin management policy can collect and download support logs for the platform clusters from the dashboard. Log collection is triggered for a data cluster, but the logs are collected from both the data and application cluster nodes.

You can trigger collection of cluster support logs from the dashboard in one of two ways (note that you cannot run multiple collection jobs concurrently):

  • On the Clusters page, open the action menu for a data cluster in the clusters table (Type = "Data"); then select the Collect logs menu option.

  • On the Clusters page, display the Support Logs tab for a specific data cluster — either by selecting the Support logs option from the cluster's action menu or by selecting the data cluster and then selecting the Support Logs tab. Then select Collect Logs from the action toolbar. Optionally, select filter criteria in the Select a filter dialog and press Collect Logs again.

    Filters reflect both the log source and the log level. The non-full options produce more concise logs; the full versions provide complete logs, which might be requested by Customer Support. The context filter is usually used by Customer Support, who supplies the context string, if required.

You can view the status of all collection jobs and download archive files of the collected logs from the data-cluster's Support Logs dashboard tab.

API Error Information

The platform APIs return error codes and error and warning messages to help you debug problems with your application. See, for example, the Error Information documentation in the Data-Service Web-API General Structure reference documentation.
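
For example, when calling a platform web API directly, you can have curl print the HTTP status code alongside the response body to speed up debugging; the endpoint URL, port, path, and credentials below are placeholders:

$ curl -s -w '\nHTTP status: %{http_code}\n' \
      -H "Authorization: Basic <base64-credentials>" \
      "https://webapi.example.com:8443/mycontainer/mydir/"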

See Also