Working with the TSDB CLI

Overview

The V3IO TSDB includes the V3IO TSDB command-line interface (“the TSDB CLI”), which enables users to easily create, update, query, and delete time-series databases (TSDBs), as demonstrated in this tutorial. Before you get started, please read the setup and usage information in this section:

Setup

The TSDB CLI can be run locally on a platform cluster or remotely from any computer with a network connection to the cluster. The platform’s web shell and Jupyter Notebook services include a compatible Linux version of the TSDB CLI — tsdbctl, which is found in the $IGUAZIO_HOME/bin directory; the installation directory is included in the shell path ($PATH) to simplify execution from anywhere in the shell. For remote execution, download the CLI from the Releases page of the V3IO TSDB GitHub repository.

In the web shell and Jupyter terminal environments there’s also a predefined tsdbctl alias to the native CLI that preconfigures the --server flag to the URL of the web-APIs service and the --access-key flag to the authentication access key for the running user of the parent shell or Jupyter Notebook service; you can override the default configurations in your CLI commands. When running the CLI from an on-cluster Jupyter notebook or remotely, you need to configure the web-APIs service and authentication credentials yourself, either in the CLI command or in a configuration file, as outlined in this tutorial.
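In environments that lack the predefined alias (such as an on-cluster Jupyter notebook), the same wiring can be reproduced with a small shell function. This is a sketch, not part of the platform; it assumes the V3IO_WEBAPI_SERVICE_HOST, V3IO_WEBAPI_SERVICE_PORT, and V3IO_ACCESS_KEY environment variables that the platform predefines, as described later in this tutorial:

```shell
# Sketch: a function that mimics the platform's tsdbctl alias by wiring the
# --server and --access-key flags to the platform-provided environment
# variables. $IGUAZIO_HOME/bin is where the platform installs the CLI.
tsdbctl() {
  "$IGUAZIO_HOME/bin/tsdbctl" \
    -s "$V3IO_WEBAPI_SERVICE_HOST:$V3IO_WEBAPI_SERVICE_PORT" \
    -k "$V3IO_ACCESS_KEY" \
    "$@"
}
```

As with the platform alias, you can still pass -s or -k explicitly in a command to override the preconfigured values.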

Note
  • Version 2.2.0 of the platform is compatible with version 0.9.1 of the V3IO TSDB. Please consult Iguazio’s support team before using another version of the CLI.
  • When using a downloaded version of the CLI (namely for remote execution), it’s recommended that you add the file or a symbolic link to it (such as tsdbctl) to the execution path on your machine ($PATH), as done in the platform command-line environments. For the purpose of this tutorial, it’s assumed that tsdbctl is found in your path and is used to run the relevant version of the CLI.

Reference

Use the CLI’s embedded help for a detailed reference:

  • Run the general help command to get information about all available commands:

    tsdbctl help
  • Run tsdbctl help <command> or tsdbctl <command> -h to view the help reference for a specific command. For example, use either of the following variations to get help for the query command:

    tsdbctl help query
    tsdbctl query -h

Mandatory Command Configurations

All CLI commands demonstrated in this tutorial require that you configure the following flags. This can be done either in the CLI command itself or in a configuration file. As explained in the Setup section, when running the CLI locally from an on-cluster web shell or Jupyter terminal, you can use the tsdbctl alias, which preconfigures the --server and --access-key flags.

  • User-authentication flags — one of the following alternatives:

    • For access-key authentication —
      • -k|--access-key — a valid access key for logging into the configured web-APIs service. You can get the access key from the platform dashboard: select the user-profile picture or icon from the top right corner of any page, and select Access Keys from the menu. In the Access Keys window, either copy an existing access key or create a new key and copy it.
        Note
        • The tsdbctl alias that’s available in the platform’s web shell and Jupyter terminal environments preconfigures the --access-key flag for the running user.
        • The platform’s web shell and Jupyter notebook and terminal environments store the access token of the running user of the service in a V3IO_ACCESS_KEY environment variable. When running the native V3IO TSDB CLI locally — for example, from a Jupyter notebook, which doesn’t have the tsdbctl alias — you can set the -k or --access-key flag to this variable.
    • For username-password authentication —
      • -u|--username — a valid username for logging into the configured web-APIs service.
      • -p|--password — the password of the configured web-APIs service user.
  • -s|--server — the endpoint of your platform’s web-APIs (web-gateway) service. To get this endpoint, copy the API URL of the “Web APIs” service from the Services platform dashboard page, and remove the http:// or https:// prefix. For example, "webapi.default-tenant.app.mycluster.iguazio.com".

    Note
    • The tsdbctl alias that’s available in the platform’s web shell and Jupyter terminal environments preconfigures this flag for the running user.
    • The platform’s web shell and Jupyter notebook and terminal environments store the IP address and port of the web-APIs service for the running user of the service in V3IO_WEBAPI_SERVICE_HOST and V3IO_WEBAPI_SERVICE_PORT environment variables. When running the native V3IO TSDB CLI locally — for example, from a Jupyter notebook, which doesn’t have the tsdbctl alias — you can set the -s or --server flag to $V3IO_WEBAPI_SERVICE_HOST:$V3IO_WEBAPI_SERVICE_PORT.
  • -c|--container — the name of the parent data container of the TSDB instance (table). For example, "bigdata" or "mycontainer".

  • -t|--table-path — the path to the TSDB instance (table) within the configured container. For example, "my_metrics_tsdb" or "tsdbs/metrics". (Any component of the path that doesn’t already exist will be created automatically.) The TSDB table path should not be set in a CLI configuration file.

Some commands require additional configurations, as detailed in the command-specific documentation.
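In on-cluster environments, the server endpoint can be assembled from the predefined environment variables mentioned in the notes above. The following sketch uses dummy values standing in for the platform-provided ones (in an on-cluster shell or notebook the variables are already set, and the port value here is illustrative):

```shell
# Dummy values for illustration; on-cluster shells and notebooks predefine
# these variables for the running user.
V3IO_WEBAPI_SERVICE_HOST="webapi.default-tenant.app.mycluster.iguazio.com"
V3IO_WEBAPI_SERVICE_PORT="8081"

# The value to pass to the -s|--server flag:
server="${V3IO_WEBAPI_SERVICE_HOST}:${V3IO_WEBAPI_SERVICE_PORT}"
echo "$server"
```

You could then run, for example, tsdbctl info -t mytsdb -s "$server" -k "$V3IO_ACCESS_KEY" -c bigdata.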

Using a Configuration File

Some of the CLI configurations can be defined in a YAML file instead of setting the equivalent flags in the command line. By default, the CLI checks for a v3io-tsdb-config.yaml configuration file in the current directory. You can use the global CLI -g|--config flag to provide a path to a different configuration file. Command-line configurations override file configurations.

You can use the template examples/v3io-tsdb-config.yaml.template configuration file in the V3IO TSDB GitHub repository as the basis for your custom configuration file. The template includes descriptive comments to explain each key.

To simplify the examples in this tutorial and focus on the unique options of each CLI command, the examples assume that you have created a v3io-tsdb-config.yaml file in the directory from which you’re running the CLI (the default path) and that this file configures the following keys (note that the web-APIs service and user-authentication configurations aren’t required if you use the on-cluster tsdbctl alias, which preconfigures these flags for the running user):

  • webApiEndpoint — the equivalent of the CLI -s|--server flag.
  • container — the equivalent of the CLI -c|--container flag.
  • accessKey — the equivalent of the CLI -k|--access-key flag.

    Alternatively, you can set username and password keys (the equivalents of the CLI -u|--username and -p|--password flags) for username-password authentication.

Following is an example configuration file. Replace the endpoint and access key in the values of the webApiEndpoint and accessKey keys with your specific data; you can also choose to replace the accessKey key with username and password keys:

# File:         v3io-tsdb-config.yaml
# Description:  V3IO TSDB Configuration File

# Endpoint of an Iguazio Data Science Platform web APIs (web-gateway) service,
# consisting of an IP address or resolvable host domain name
webApiEndpoint: "webapi.default-tenant.app.mycluster.iguazio.com"

# Name of an Iguazio Data Science Platform container for storing the TSDB table
container: "bigdata"

# Authentication credentials for the web-APIs service
accessKey: "MYACCESSKEY"
# OR
#username: "MYUSER"
#password: "MYPASSWORD"

For example, the following CLI command for getting information about a “mytsdb” TSDB in the “bigdata” container —

tsdbctl info -c bigdata -t mytsdb -n -m -s webapi.default-tenant.app.mycluster.iguazio.com -k MYACCESSKEY

— is equivalent to the following command when the current directory has the aforementioned example v3io-tsdb-config.yaml file:

tsdbctl info -t mytsdb -n -m

As indicated above, you can override any of the file configurations in the command line. For example, you can add -c metrics to the previous command to override the default “bigdata” container configuration and get information for a “mytsdb” table in a custom “metrics” container:

tsdbctl info -t mytsdb -n -m -c metrics

Creating a New TSDB

Use the CLI’s create command to create a new TSDB instance (table) — i.e., create a new TSDB. The command receives a mandatory -r|--ingestion-rate flag, which defines the TSDB’s metric-samples ingestion rate. The rate is specified as a string of the format "[0-9]+/[smh]" (where ‘s’ = seconds, ‘m’ = minutes, and ‘h’ = hours) and should be calculated according to the slowest expected ingestion rate; for example, "1/s" (1 sample per second), "20/m" (20 samples per minute), or "50/h" (50 samples per hour).
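Before running create, you can sanity-check a rate string against the documented format with a quick shell sketch (illustrative only; the CLI performs its own validation):

```shell
# Check a candidate ingestion-rate string against the "[0-9]+/[smh]" format.
rate="20/m"
if printf '%s\n' "$rate" | grep -Eq '^[0-9]+/[smh]$'; then
  echo "valid rate: $rate"
else
  echo "invalid rate: $rate"
fi
```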

Note

It’s recommended to use similar ingestion rates for metrics ingested into a specific TSDB instance (up to a 10x difference). For example, don’t use both 1/s and 1/m ingestion rates for the same TSDB.

The following command creates a new “tsdb_example” TSDB in the configured “bigdata” container with an ingestion rate of one sample per second:

tsdbctl create -t tsdb_example -r 1/s

Defining TSDB Aggregates

You can optionally use the -a|--aggregates flag of the create CLI command to configure a list of aggregates that will be executed in real time for each metric item during the ingestion of the metric samples into the TSDB. The aggregates are provided as a string containing a comma-separated list of one or more supported aggregation functions; for example, "avg" (average sample values) or "max,min,last" (maximum, minimum, and latest sample values).

When configuring TSDB aggregates, you should also use the -i|--aggregation-granularity flag to specify the aggregation granularity — a time interval for executing the aggregation functions. The aggregation granularity is provided as a string of the format "[0-9]+[mhd]" (where ‘m’ = minutes, ‘h’ = hours, and ‘d’ = days); for example, "90m" (90 minutes = 1.5 hours) or "2h" (2 hours). The default aggregation granularity is one hour (1h).

Aggregation Notes
  • When using the -a|--aggregates flag, the CLI automatically adds the count aggregate to the TSDB’s aggregates list. However, it’s recommended to set this aggregate explicitly if you need it.
  • Some aggregates are calculated from other aggregates. For example, the avg aggregate is calculated from the count and sum aggregates.

The following command creates a new “tsdb_example_aggr” TSDB with an ingestion rate of one sample per second in a tsdb_tests directory in the default configured “bigdata” container. The TSDB is created with the count, avg, min, and max aggregates and an aggregation interval of 1 hour:

tsdbctl create -t tsdb_example_aggr -r 1/s -a "count,avg,min,max" -i 1h

Supported Aggregation Functions

Version 0.9.1 of the CLI supports the following aggregation functions, which are all applied to the samples of each metric item according to the TSDB’s aggregation granularity (interval):

  • avg — the average of the sample values.
  • count — the number of ingested samples.
  • last — the value of the last sample (i.e., the sample with the latest time).
  • max — the maximal sample value.
  • min — the minimal sample value.
  • rate — the change rate of the sample values, which is calculated as (<last sample value of the current interval> - <last sample value of the previous interval>) / <aggregation granularity>.
  • stddev — the standard deviation of the sample values.
  • stdvar — the standard variance of the sample values.
  • sum — the sum of the sample values.

Adding Samples to a TSDB

Use the CLI’s add command (or its append alias) to add (ingest) metric samples to a TSDB. You must provide the name of the ingested metric and one or more sample values for the metric. You can also optionally provide the samples’ generation times (the default is the current time) and metric labels. Each unique metric name and optional labels combination corresponds to a metric item (row) in the TSDB with attributes (columns) for each label.

The ingestion input can be provided in one of two ways:

  • Using command-line arguments and flags —

    • metric argument [Required] — a string containing the name of the ingested metric. For example, "cpu".
    • labels argument [Optional] — a string containing a comma-separated list of <label name>=<label value> key-value pairs. For example, "os=mac,host=A".
    • -d|--values flag [Required] — a string containing a comma-separated list of integer or float sample data values. For example, "67.0,90.2,70.5".
    • -m|--times flag [Optional] — a string containing a comma-separated list of sample generation times (“sample times”) for the provided sample values. A sample time can be specified as a Unix timestamp in milliseconds or as a relative time of the format "now" or "now-[0-9]+[mhd]" (where ‘m’ = minutes, ‘h’ = hours, and ‘d’ = days). For example, "1537971020000,now-2d,now-95m,now".
      The default sample time is the current time (i.e., the TSDB ingestion time) — now.

      Note

      An ingested sample time cannot be earlier than or equal to the latest previously ingested sample time for the same metric item. This applies also to samples ingested in the same command, so specify the ingestion times in ascending chronological order. For example, an add command with -d "1,2" -m "now,now-1m" will ingest only the first sample (1) and not the second sample (2), because the time of the second sample (now-1m) is earlier than that of the first sample (now). To ingest both samples, change the order in the command to -d "2,1" -m "now-1m,now".

Note

When ingesting samples at scale, use a CSV file or a Nuclio function rather than providing the ingestion input in the command line.

  • Using the -f|--file flag to provide the path to a CSV metric-samples input file that contains one or more items (rows) of the following format:

    <metric name>,[<labels>],<sample data value>[,<sample time>]

    The CSV columns (attributes) are the equivalent of the arguments and flags described for the command-line arguments method in the previous bullet and their values are subject to the same guidelines. Note that all rows in the CSV file must have the same number of columns.
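Absolute sample times for the -m|--times flag (or the CSV sample-time column) are Unix timestamps in milliseconds. The following sketch computes such timestamps in the shell for the "now" and "now-2d" cases; it assumes only a POSIX date command:

```shell
# Sketch: compute millisecond Unix timestamps for use with the -m|--times flag.
now_ms=$(( $(date +%s) * 1000 ))             # the equivalent of "now"
two_days_ms=$(( 2 * 24 * 60 * 60 * 1000 ))   # two days, in milliseconds
two_days_ago_ms=$(( now_ms - two_days_ms ))  # the equivalent of "now-2d"
echo "$two_days_ago_ms,$now_ms"
```

The echoed pair could then be passed as the -m flag value in place of "now-2d,now".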

The following commands ingest three samples and a label for a temperature metric and multiple samples with different label combinations and no labels for a cpu metric into the tsdb_example TSDB. Only the first command specifies the sample times (using the -m flag). The sample times for the remaining commands are the default time — now; the time of each sample will be slightly newer than the time of the previous sample because of the chronological ingestion order:

tsdbctl add temperature -t tsdb_example "degrees=Celsius" -d "32,29.5,25.3" -m "now-2d,now-1d,now"
tsdbctl add cpu -t tsdb_example -d "90,82.5"
tsdbctl add cpu "host=A,os=linux" -t tsdb_example -d "23.87,47.3"
tsdbctl add cpu "host=A" -t tsdb_example -d "50.2"
tsdbctl add cpu "os=linux" -t tsdb_example -d "88.8,91"
tsdbctl add cpu "host=A,os=linux,arch=amd64" -t tsdb_example -d "70.2,55"

The same ingestion can also be done by providing the samples input in a CSV file, as demonstrated in the following command:

tsdbctl add -t tsdb_example -f ~/metric_samples.csv

The command uses this example metric_samples.csv file, which you can also download here. Copy the file to your home directory (~/) or change the file path in the ingestion command:

temperature,degrees=Celsius,32,now-2d
temperature,degrees=Celsius,29.5,now-1d
temperature,degrees=Celsius,25.3,now
cpu,,90,
cpu,,82.5,
cpu,"host=A,os=linux",23.87,
cpu,"host=A,os=linux",47.3,
cpu,host=A,50.2,
cpu,os=linux,88.8,
cpu,os=linux,91,
cpu,"host=A,os=linux,arch=amd64",70.2,
cpu,"host=A,os=linux,arch=amd64",55,
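A metric-samples CSV file can of course be produced with any tool; as a sketch, a here-document writes an equivalent two-row file (the samples.csv file name is hypothetical):

```shell
# Sketch: write a minimal metric-samples CSV in the
# <metric name>,[<labels>],<sample data value>[,<sample time>] format.
# All rows must have the same number of columns, so the trailing comma
# stands in for the omitted (default) sample time.
cat > samples.csv <<'EOF'
cpu,"host=A,os=linux",23.87,
cpu,"host=A,os=linux",47.3,
EOF
wc -l < samples.csv
```

You would then ingest the file with a command such as tsdbctl add -t tsdb_example -f samples.csv.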

The following command demonstrates ingestion of samples for an m1 metric with host and os labels using a CSV file that is found in the directory from which the CLI is run:

tsdbctl add -t tsdb_example_aggr -f tsdb_example_aggr.csv

The command uses this example tsdb_example_aggr.csv file, which you can also download here:

m1,"os=darwin,host=A",1,1514802220000
m1,"os=darwin,host=A",2,1514812086000
m1,"os=darwin,host=A",3,1514877315000
m1,"os=linux,host=A",1,1514797500000
m1,"os=linux,host=A",2,1514799605000
m1,"os=linux,host=A",3,1514804625000
m1,"os=linux,host=A",4,1514818759000
m1,"os=linux,host=A",5,1514897354000
m1,"os=linux,host=A",6,1514897858000
m1,"os=windows,host=A",1,1514803048000
m1,"os=windows,host=A",2,1514808826000
m1,"os=windows,host=A",3,1514812736000
m1,"os=windows,host=A",4,1514881791000
m1,"os=darwin,host=B",1,1514802842000
m1,"os=darwin,host=B",2,1514818576000
m1,"os=darwin,host=B",3,1514891100000
m1,"os=linux,host=B",1,1514798275000
m1,"os=linux,host=B",2,1514816100000
m1,"os=linux,host=B",3,1514895734000
m1,"os=linux,host=B",4,1514900599000
m1,"os=windows,host=B",1,1514799605000
m1,"os=windows,host=B",2,1514810326000
m1,"os=windows,host=B",3,1514881791000
m1,"os=windows,host=B",4,1514900597000

Getting TSDB Configuration and Metrics Information

Use the CLI’s info command to retrieve basic information about a TSDB. The command returns the TSDB’s configuration (schema) — which includes the version, storage class, sample retention period, chunk interval, partitioning interval, aggregates, and aggregation granularity for the entire table and for each partition (currently this is the same for all partitions); the partitions’ start times (which are also their names); the number of sharding buckets; and the schema of the TSDB’s item attributes.

You can optionally use the -n|--names flag to also display the names of the metrics contained in the TSDB, and you can use the -m|--performance flag to display the number of metric items in the TSDB (i.e., the number of unique metric-name and labels combinations).

The following command returns the full schema and metrics information for the tsdb_example_aggr TSDB:

tsdbctl info -t tsdb_example_aggr -m -n

Querying a TSDB

Use the CLI’s query command (or its get alias) to query a TSDB and retrieve filtered information about the ingested metric samples. The command requires that you either set the metric string argument to the name of the queried metric (for example, "noise"), or set the -f|--filter flag to a filter-expression string that defines the scope of the query (see Filter Expression). To reference a metric name in the query filter, use the __name__ attribute (for example, "(__name__=='cpu1') OR (__name__=='cpu2')" or "starts(__name__,'cpu')"). To reference labels in the query filter, just use the label name as the attribute name (for example, "os=='linux' AND arch=='amd64'").

Range-Scan Note

Queries that set the metric argument use a range scan and are therefore faster. However, you can’t use such queries to scan multiple metrics, because the metric argument accepts only a single metric name.

When using the -f|--filter flag to define a query filter, you don’t necessarily need to include a metric name in the query. You can choose, for example, to filter the query only by labels. You can also query all metric samples in the query’s time range by omitting the metric argument and using a filter expression that always evaluates to true, such as "1==1"; to query the full TSDB content, also set the -b|--begin flag to 0.

You can optionally use the -b|--begin and -e|--end flags to specify start (minimum) and end (maximum) times that restrict the query to a specific time range. The start and end times are each specified as a string that contains an RFC 3339 time string, a Unix timestamp in milliseconds, or a relative time of the format "now" or "now-[0-9]+[mhd]" (where ‘m’ = minutes, ‘h’ = hours, and ‘d’ = days); the start time can also be set to zero (0) for the earliest sample time in the TSDB.
Alternatively, you can use the -l|--last flag to define the time range as the last <n> minutes, hours, or days ("[0-9]+[mdh]").
The default end time is the current time (now) and the default start time is one hour earlier than the end time. Therefore, the default time range when neither flag is set is the last hour. Note that the time range applies to the samples’ generation times (“the sample times”) and not to the times at which they were ingested into the TSDB.

By default, the command returns the query results in plain-text format ("text"), but you can use the -o|--output flag to specify a different format — "csv" (CSV) or "json" (JSON).

The following query returns all metric samples contained in the tsdb_example TSDB:

tsdbctl query -t tsdb_example -f "1==1" -b 0

The following queries both return tsdb_example TSDB cpu metric samples that were generated within the last hour and have a host label whose value is 'A' and an os label whose value is 'linux':

tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -b now-1h
tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -l 1h

The following query returns, in CSV format, all tsdb_example TSDB metric samples that have a degrees label and were generated in 2019:

tsdbctl query -t tsdb_example -f "exists(degrees)" -b 2019-01-01T00:00:00Z -e 2019-12-31T23:59:59Z -o csv

Aggregation Queries

You can use the optional -a|--aggregates flag of the query command to provide a comma-separated list of aggregation functions (“aggregates”) to apply to the raw samples data; for example, "sum,stddev,stdvar". For a description of the supported aggregates, see Supported Aggregation Functions. You can use the -i|--aggregation-interval flag to specify the time interval for applying the specified aggregates. The interval is specified as a string of the format "[0-9]+[mhd]" (where ‘m’ = minutes, ‘h’ = hours, and ‘d’ = days); for example, "3h" (3 hours). The default aggregation interval is the difference between the query’s end and start times; for example, for the default query start and end times of now-1h and now, the default aggregation interval will be one hour (1h).

You can also submit aggregation queries for a TSDB that was created without aggregates. However, if the TSDB has aggregates that match the query aggregates and the query’s aggregation interval is evenly divisible by the TSDB’s aggregation granularity, the query uses the real-time aggregation information that is stored in the TSDB’s aggregation attributes instead of performing a new calculation, and is therefore more efficient. See also the Aggregation Notes for the create command.
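The divisibility condition can be illustrated with simple shell arithmetic; the values below assume a TSDB created with a 1h aggregation granularity, as in the earlier create example:

```shell
# Sketch: does a query's aggregation interval divide evenly by the TSDB's
# aggregation granularity? If so, the query can use the TSDB's pre-computed
# aggregates instead of recalculating from the raw samples.
granularity_minutes=60   # TSDB aggregation granularity: 1h
interval_minutes=120     # query aggregation interval:   2h
if [ $(( interval_minutes % granularity_minutes )) -eq 0 ]; then
  echo "pre-computed aggregates can be used"
else
  echo "aggregates are recalculated at query time"
fi
```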

The following query returns for each tsdb_example TSDB metric item whose metric name begins with “cpu”, the minimal and maximal sample values and the standard deviation over two-hour aggregation intervals for samples that were generated in the last two days:

tsdbctl query -t tsdb_example -f "starts(__name__,'cpu')" -a "min,max,stddev" -i 2h -l 2d

The following queries return for each m1 metric item in the tsdb_example_aggr TSDB, the daily, bi-hourly, or hourly samples count and data-values average (depending on the aggregation interval) beginning with 1 Jan 2018 at 00:00 until the current time (default). (Note that results are returned only for interval periods that contain samples.) The example outputs below, for the -i 1d, -i 2h, and -i 1h interval variants, match the tsdb_example_aggr.csv ingestion example that was used earlier in this tutorial:

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 1d -b 2018-01-01T00:00:00Z

    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.50
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T00:00:00Z  v=4.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T00:00:00Z  v=2.50
      2018-01-02T00:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T00:00:00Z  v=3.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.50
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.00

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 2h -b 2018-01-01T00:00:00Z

    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=3.00
      2018-01-02T12:00:00Z  v=4.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T08:00:00Z  v=2.00
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.50
      2018-01-01T10:00:00Z  v=3.00
      2018-01-01T14:00:00Z  v=4.00
      2018-01-02T12:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T06:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T06:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.50
      2018-01-02T08:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T12:00:00Z  v=3.50
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T10:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T10:00:00Z  v=3.00

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 1h -b 2018-01-01T00:00:00Z

    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
      2018-01-02T13:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=3.00
      2018-01-02T13:00:00Z  v=4.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T09:00:00Z  v=2.00
      2018-01-01T11:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.50
      2018-01-01T11:00:00Z  v=3.00
      2018-01-01T14:00:00Z  v=4.00
      2018-01-02T12:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=1.00
      2018-01-02T07:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=2.00
      2018-01-02T07:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-01T13:00:00Z  v=3.00
      2018-01-02T08:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=1.00
      2018-01-02T13:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T12:00:00Z  v=3.00
      2018-01-02T13:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T11:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T11:00:00Z  v=3.00

As explained above, you can also submit aggregation queries for TSDBs that were not created with aggregates. In such cases, the aggregations are calculated when the query is processed. For example, the following query returns a three-day average for the tsdb_example TSDB’s temperature metric samples:

tsdbctl query temperature -t tsdb_example -a avg -i 3d -b 0

Deleting a TSDB

Use the CLI’s delete command (or its del alias) to delete a TSDB or delete content from a TSDB.

With version 0.9.1 of the V3IO TSDB, always use the -a|--all flag to delete the entire TSDB — i.e., delete the TSDB table, including its schema (which contains the configuration information) and all its content. (Future releases will support partial content deletions.) For example, the following command completely deletes the tsdb_example_aggr TSDB (subject to user confirmation in the command line):

tsdbctl delete -t tsdb_example_aggr -a

To avoid inadvertent deletions, by default the command prompts you to confirm the delete operation. You can use the -f|--force flag to delete without prompting for confirmation.

You can also use the -i|--ignore-errors flag to skip errors that might occur during the delete operation and attempt to proceed to the next step.