The V3IO TSDB CLI (tsdbctl)

Overview

The V3IO TSDB includes the V3IO TSDB command-line interface ("the TSDB CLI"), which enables users to easily create, update, query, and delete time-series databases (TSDBs), as demonstrated in this tutorial. Before you get started, read the setup and usage information in this section and review the TSDB software specifications and restrictions.

Setup

The TSDB CLI can be run locally on a platform application cluster or remotely from any computer with a network connection to the cluster. The platform's web shell and Jupyter Notebook services include a compatible Linux version of the TSDB CLI — tsdbctl, which is found in the $IGUAZIO_HOME/bin directory; the installation directory is included in the shell path ($PATH) to simplify execution from anywhere in the shell. For remote execution, download the CLI from the V3IO TSDB GitHub repository.

In the web shell and Jupyter terminal environments there's also a predefined tsdbctl alias to the native CLI that preconfigures the --server flag to the URL of the web-APIs service and the --access-key flag to the authentication access key for the running user of the parent shell or Jupyter Notebook service; you can override the default configurations in your CLI commands. When running the CLI from an on-cluster Jupyter notebook or remotely, you need to configure the web-APIs service and authentication credentials yourself, either in the CLI command or in a configuration file, as outlined in this tutorial.

Note
  • Version 3.5.5 of the platform is compatible with version 0.13 of the V3IO TSDB. Please consult Iguazio's support team before using another version of the CLI.
  • When using a downloaded version of the CLI (namely for remote execution), it's recommended that you add the file or a symbolic link to it (such as tsdbctl) to the execution path on your machine ($PATH), as done in the platform command-line environments. For the purpose of this tutorial, it's assumed that tsdbctl is found in your path and is used to run the relevant version of the CLI.

Reference

Use the CLI's embedded help for a detailed reference:

  • Run the general help command to get information about all available commands:

    tsdbctl help
    
  • Run tsdbctl help <command> or tsdbctl <command> -h to view the help reference for a specific command. For example, use either of the following variations to get help for the query command:

    tsdbctl help query
    tsdbctl query -h
    

Mandatory Command Configurations

All CLI commands demonstrated in this tutorial require that you configure the following flags. This can be done either in the CLI command itself or in a configuration file. As explained in the Setup section, when running the CLI locally from an on-cluster web shell or Jupyter terminal, you can use the tsdbctl alias, which preconfigures the --server and --access-key flags.

  • User-authentication flags — one of the following alternatives:

    • For access-key authentication —
      • -k|--access-key — a valid access key for logging into the configured web-APIs service. You can get the access key from the Access Keys window that's available from the dashboard user-profile menu, or by copying the value of the V3IO_ACCESS_KEY environment variable in a web-shell or Jupyter Notebook service.
        Note
        • The tsdbctl alias that's available in the platform's web shell and Jupyter terminal environments preconfigures the --access-key flag for the running user.
        • When running the native V3IO TSDB CLI locally — for example, from a Jupyter notebook, which doesn't have the tsdbctl alias — you can set the -k or --access-key flag to $V3IO_ACCESS_KEY.
    • For username-password authentication —
      • -u|--username — a valid username for logging into the configured web-APIs service.
      • -p|--password — the password of the configured web-APIs service user.
  • -s|--server — the endpoint of your platform's web-APIs (web-gateway) service. The tsdbctl alias that's available in the platform's web shell and Jupyter terminal environments preconfigures this flag for the running user. If you're not using the alias — for example, if you're running the native TSDB CLI from a Jupyter notebook or remotely — set this flag to <web-APIs IP>:<web-APIs HTTP port>:

    • <web-APIs IP> — the IP address of the web-APIs service; for example, webapi.default-tenant.app.mycluster.iguazio.com. The IP address is stored in a V3IO_WEBAPI_SERVICE_HOST environment variable in the platform's web shells and Jupyter notebooks and terminals. You can also get this address from the web-APIs HTTPS URL: copy the HTTPS API link of the web-APIs service (webapi) from the Services dashboard page, and then remove https:// from the start of the URL.
    • <web-APIs HTTP port> — the HTTP port of the web-APIs service. The port number is stored in a V3IO_WEBAPI_SERVICE_PORT environment variable in the platform's web shells and Jupyter notebooks and terminals.
  • -c|--container — the name of the parent data container of the TSDB instance (table). For example, "projects" or "mycontainer".

  • -t|--table-path — the path to the TSDB instance (table) within the configured container. For example, "my_metrics_tsdb" or "tsdbs/metrics". (Any component of the path that doesn't already exist will be created automatically.) The TSDB table path should not be set in a CLI configuration file.

Some commands require additional configurations, as detailed in the command-specific documentation.
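
To illustrate, the following command sets all of the mandatory flags explicitly. This is a sketch that assumes you're running the native CLI from an on-cluster Jupyter notebook (where the tsdbctl alias isn't defined), that the environment variables described above are set, and that a "mytsdb" table exists in a "projects" container:

    # The server endpoint and access key are taken from the platform environment variables
    tsdbctl info -t mytsdb -c projects -s ${V3IO_WEBAPI_SERVICE_HOST}:${V3IO_WEBAPI_SERVICE_PORT} -k ${V3IO_ACCESS_KEY}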

Using a Configuration File

Some of the CLI configurations can be defined in a YAML file instead of setting the equivalent flags in the command line. By default, the CLI checks for a v3io-tsdb-config.yaml configuration file in the current directory. You can use the global CLI -g|--config flag to provide a path to a different configuration file. Command-line configurations override file configurations.
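
For example, the following command uses the -g flag to point the CLI at a configuration file outside the current directory; the file path is hypothetical and should be replaced with the location of your own file:

    # Hypothetical path to a custom configuration file
    tsdbctl info -t mytsdb -g ~/my-tsdb-configs/v3io-tsdb-config.yaml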

You can use the template examples/v3io-tsdb-config.yaml.template configuration file in the V3IO TSDB GitHub repository as the basis for your custom configuration file. The template includes descriptive comments to explain each key.

To simplify the examples in this tutorial and focus on the unique options of each CLI command, the examples assume that you have created a v3io-tsdb-config.yaml file in the directory from which you're running the CLI (default path) and that this file configures the following keys; note that the web-APIs service and user-authentication configurations aren't required if you use the on-cluster tsdbctl alias, which preconfigures these flags for the running user:

  • webApiEndpoint — the equivalent of the CLI -s|--server flag.

  • container — the equivalent of the CLI -c|--container flag.

  • accessKey — the equivalent of the CLI -k|--access-key flag.

    Alternatively, you can set the username and password keys — the equivalents of the CLI -u|--username and -p|--password flags — for username-password authentication.

Following is an example configuration file. Replace the web-APIs endpoint and access key in the values of the webApiEndpoint and accessKey keys with your specific data; you can also choose to replace the accessKey key with username and password keys:

# File:         v3io-tsdb-config.yaml
# Description:  V3IO TSDB Configuration File

# Endpoint of an Iguazio MLOps Platform web APIs (web-gateway) service,
# consisting of an IP address or resolvable host domain name
webApiEndpoint: "webapi.default-tenant.app.mycluster.iguazio.com"

# Name of an Iguazio MLOps Platform container for storing the TSDB table
container: "projects"

# Authentication credentials for the web-APIs service
accessKey: "MYACCESSKEY"
# OR
#username: "MYUSER"
#password: "MYPASSWORD"

For example, the following CLI command for getting information about a "mytsdb" TSDB in the "projects" container —

tsdbctl info -c projects -t mytsdb -n -m -s webapi.default-tenant.app.mycluster.iguazio.com -k MYACCESSKEY

— is equivalent to the following command when the current directory has the aforementioned example v3io-tsdb-config.yaml file:

tsdbctl info -t mytsdb -n -m

As indicated above, you can override any of the file configurations in the command line. For example, you can add -c metrics to the previous command to override the default "projects" container configuration and get information for a "mytsdb" table in a custom "metrics" container:

tsdbctl info -t mytsdb -n -m -c metrics

Creating a New TSDB

Use the CLI's create command to create a new TSDB instance (table) — i.e., create a new TSDB. The command receives a mandatory -r|--ingestion-rate flag, which defines the TSDB's metric-samples ingestion rate. The rate is specified as a string of the format "[0-9]+/[smh]" (where 's' = seconds, 'm' = minutes, and 'h' = hours); for example, "1/s" (1 sample per second), "20/m" (20 samples per minute), or "50/h" (50 samples per hour). It's recommended that you set the rate to the average expected ingestion rate for a unique label set (for example, for a single server in a data center), and that the ingestion rates for a given TSDB table don't vary significantly; when there's a big difference in the ingestion rates (for example, x10), consider using separate TSDB tables.

Note
In the current release, the create command doesn't support the -l|--cross-label flag.

Examples

The following command creates a new "tsdb_example" TSDB in the configured "projects" container with an ingestion rate of one sample per second:

tsdbctl create -t tsdb_example -r 1/s

Defining TSDB Aggregates

You can optionally use the -a|--aggregates flag of the create CLI command to configure a list of aggregation functions ("aggregators") that will be executed for each metric item in real time during the ingestion of the metric samples into the TSDB. The aggregation results are stored in the TSDB as array attributes ("pre-aggregates") and used to handle relevant aggregation queries. The aggregators are provided as a string containing a comma-separated list of one or more supported aggregation functions; for example, "avg" (average sample values) or "max,min,last" (maximum, minimum, and latest sample values).

When configuring the TSDB's pre-aggregates, you should also use the -i|--aggregation-granularity flag to specify the aggregation granularity — a time interval for executing the aggregation functions. The aggregation granularity is provided as a string of the format "[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); for example, "90m" (90 minutes = 1.5 hours) or "2h" (2 hours). The default aggregation granularity is one hour (1h).

Aggregation Notes
  • You can also perform aggregation queries for TSDB tables without pre-aggregates, but when configured correctly, pre-aggregation queries are more efficient. To ensure that pre-aggregation is used to process aggregation queries and improve performance —

    • When creating the TSDB table, set its aggregation granularity (-i|--aggregation-granularity) to an interval that's significantly larger than the table's metric-samples ingestion rate (-r|--ingestion-rate).
    • When querying the table, set the aggregation interval (-i|--aggregation-interval) to a sufficient multiplier of the table's aggregation granularity. For example, if the table's ingestion rate is 1 sample per second ("1/s") and you want to use hourly queries (i.e., use a query aggregation interval of "1h"), you might set the table's pre-aggregation granularity to 20 minutes ("20m").
  • When using the aggregates flag, the CLI automatically adds count to the TSDB's aggregators. However, it's recommended to set this aggregator explicitly if you need it.

  • Some aggregates are calculated from other aggregates. For example, the avg aggregate is calculated from the count and sum aggregates.

The following command creates a new "tsdb_example_aggr" TSDB with an ingestion rate of one sample per second in the default configured "projects" container. The TSDB is created with the count, avg, min, and max aggregators and an aggregation granularity of one hour:

tsdbctl create -t tsdb_example_aggr -r 1/s -a "count,avg,min,max" -i 1h

Supported Aggregation Functions

Version 0.13 of the CLI supports the following aggregation functions, which are all applied to the samples of each metric item according to the TSDB's aggregation granularity (interval):

  • avg — the average of the sample values.
  • count — the number of ingested samples.
  • last — the value of the last sample (i.e., the sample with the latest time).
  • max — the maximal sample value.
  • min — the minimal sample value.
  • rate — the change rate of the sample values, calculated as (<last sample value of the current interval> - <last sample value of the previous interval>) / <aggregation granularity>.
  • stddev — the standard deviation of the sample values.
  • stdvar — the standard variance of the sample values.
  • sum — the sum of the sample values.

Adding Samples to a TSDB

Use the CLI's add command (or its append alias) to add (ingest) metric samples to a TSDB. You must provide the name of the ingested metric and one or more sample values for the metric. You also need to provide the samples' generation times; when ingesting a single sample, the default sample time is the current time. In addition, you can optionally specify metric labels. Each unique metric name and optional labels combination corresponds to a metric item (row) in the TSDB with attributes (columns) for each label.

The ingestion input can be provided in one of two ways:

  • Using command-line arguments and flags —

    • metric argument [Required] — a string containing the name of the ingested metric. For example, "cpu".

    • labels argument [Optional] — a string containing a comma-separated list of <label name>=<label value> key-value pairs. The label values must be of type string and cannot contain commas. For example, "os=mac,host=A".

    • -d|--values flag [Required] — a string containing a comma-separated list of sample data values. The values can be of type integer or float and cannot themselves contain commas; note that all values for a given metric must be of the same type. For example, "67.0,90.2,70.5".

    • -m|--times flag [Optional for a single metric; Required for multiple samples] — a string containing a comma-separated list of sample generation times ("sample times") for the provided sample values. A sample time can be specified as a Unix timestamp in milliseconds or as a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days). For example, "1537971020000,now-2d,now-95m,now".
      The default sample time when ingesting a single sample is the current time (i.e., the TSDB ingestion time) — now.

      Note
      An ingested sample time cannot be earlier than or equal to the latest previously ingested sample time for the same metric item. This applies also to samples ingested in the same command, so specify the ingestion times in ascending chronological order. For example, an add command with -d "1,2" -m "now,now-1m" will ingest only the first sample (1) and not the second sample (2), because the time of the second sample (now-1m) is earlier than that of the first sample (now). To ingest both samples, change the order in the command to -d "2,1" -m "now-1m,now".
    Note
    When ingesting samples at scale, use a CSV file or a Nuclio function rather than providing the ingestion input in the command line.
  • Using the -f|--file flag to provide the path to a CSV metric-samples input file that contains one or more items (rows) of the following format:

    <metric name>,[<labels>],<sample data value>[,<sample time>]
    

    The CSV columns (attributes) are the equivalent of the arguments and flags described for the command-line arguments method in the previous bullet, and their values are subject to the same guidelines. Note that all rows in the CSV file must have the same number of columns. When ingesting multiple samples, specify the sample times.

Examples

The following commands ingest, into the tsdb_example TSDB, three samples with a degrees label for a temperature metric, and multiple samples for a cpu metric with different label combinations (including no labels). The sample times are specified using the -m flag:

tsdbctl add temperature -t tsdb_example "degrees=Celsius" -d "32,29.5,25.3" -m "now-2d,now-1d,now"
tsdbctl add cpu -t tsdb_example -d "90,82.5" -m "now-2d,now-1d"
tsdbctl add cpu "host=A,os=linux" -t tsdb_example -d "23.87,47.3" -m "now-18h,now-12h"
tsdbctl add cpu "host=A" -t tsdb_example -d "50.2" -m "now-6h"
tsdbctl add cpu "os=linux" -t tsdb_example -d "88.8,91" -m "now-1h,now-30m"
tsdbctl add cpu "host=A,os=linux,arch=amd64" -t tsdb_example -d "70.2,55" -m "now-15m,now"

The same ingestion can also be done by providing the samples input in a CSV file, as demonstrated in the following command:

tsdbctl add -t tsdb_example -f ~/metric_samples.csv

The command uses the following example metric_samples.csv file. Copy the file to your home directory (~/) or change the file path in the ingestion command:

temperature,degrees=Celsius,32,now-2d
temperature,degrees=Celsius,29.5,now-1d
temperature,degrees=Celsius,25.3,now
cpu,,90,now-2d
cpu,,82.5,now-1d
cpu,"host=A,os=linux",23.87,now-18h
cpu,"host=A,os=linux",47.3,now-12h
cpu,host=A,50.2,now-6h
cpu,os=linux,88.8,now-1h
cpu,os=linux,91,now-30m
cpu,"host=A,os=linux,arch=amd64",70.2,now-15m
cpu,"host=A,os=linux,arch=amd64",55,now

The following command demonstrates ingestion of samples for an m1 metric with host and os labels, using a CSV file that's found in the directory from which the CLI is run:

tsdbctl add -t tsdb_example_aggr -f tsdb_example_aggr.csv

The command uses the following example tsdb_example_aggr.csv file:

m1,"os=darwin,host=A",1,1514802220000
m1,"os=darwin,host=A",2,1514812086000
m1,"os=darwin,host=A",3,1514877315000
m1,"os=linux,host=A",1,1514797500000
m1,"os=linux,host=A",2,1514799605000
m1,"os=linux,host=A",3,1514804625000
m1,"os=linux,host=A",4,1514818759000
m1,"os=linux,host=A",5,1514897354000
m1,"os=linux,host=A",6,1514897858000
m1,"os=windows,host=A",1,1514803048000
m1,"os=windows,host=A",2,1514808826000
m1,"os=windows,host=A",3,1514812736000
m1,"os=windows,host=A",4,1514881791000
m1,"os=darwin,host=B",1,1514802842000
m1,"os=darwin,host=B",2,1514818576000
m1,"os=darwin,host=B",3,1514891100000
m1,"os=linux,host=B",1,1514798275000
m1,"os=linux,host=B",2,1514816100000
m1,"os=linux,host=B",3,1514895734000
m1,"os=linux,host=B",4,1514900599000
m1,"os=windows,host=B",1,1514799605000
m1,"os=windows,host=B",2,1514810326000
m1,"os=windows,host=B",3,1514881791000
m1,"os=windows,host=B",4,1514900597000

Getting TSDB Configuration and Metrics Information

Use the CLI's info command to retrieve basic information about a TSDB. The command returns the TSDB's configuration (schema) — which includes the version, storage class, sample retention period, chunk interval, partitioning interval, pre-aggregates, and aggregation granularity for the entire table and for each partition (currently this is the same for all partitions); the partitions' start times (which are also their names); the number of sharding buckets; and the schema of the TSDB's item attributes.

You can optionally use the -n|--names flag to also display the names of the metrics contained in the TSDB, and you can use the -m|--performance flag to display a count of the number of metric items in the TSDB (i.e., the number of unique metric-name and labels combinations).

The following command returns the full schema and metrics information for the tsdb_example_aggr TSDB:

tsdbctl info -t tsdb_example_aggr -m -n

Querying a TSDB

Use the CLI's query command (or its get alias) to query a TSDB and retrieve filtered information about the ingested metric samples. The command requires that you either set the metric string argument to the name of the queried metric (for example, "noise"), or set the -f|--filter flag to a filter-expression string that defines the scope of the query (see Filter Expression). To reference a metric name in the query filter, use the __name__ attribute (for example, "(__name__=='cpu1') OR (__name__=='cpu2')" or "starts(__name__,'cpu')"). To reference labels in the query filter, just use the label name as the attribute name (for example, "os=='linux' AND arch=='amd64'").

Note
  • Currently, only labels of type string are supported; see the Software Specifications and Restrictions. Therefore, ensure that you embed label attribute values in your filter expression within quotation marks even when the values represent a number (for example, "node == '1'"), and don't apply arithmetic operators to such attributes (unless you want to perform a lexicographic string comparison).

  • Queries that set the metric argument use range scan and are therefore faster.

  • In the current release, the query command doesn't support cross-series aggregation (-a|--aggregates with *_all aggregation functions) or the -w|--aggregation-window and --groupBy flags.

When using the -f|--filter flag to define a query filter, you don't necessarily need to include a metric name in the query. You can select, for example, to filter the query only by labels. You can also query all metric samples in the query's time range by omitting the metric argument and using a filter expression that always evaluates to true, such as "1==1"; to query the full TSDB content, also set the -b|--begin flag to 0.

You can optionally use the -b|--begin and -e|--end flags to specify start (minimum) and end (maximum) times that restrict the query to a specific time range. The start and end times are each specified as a string that contains an RFC 3339 time string, a Unix timestamp in milliseconds, or a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); the start time can also be set to zero (0) for the earliest sample time in the TSDB.
Alternatively, you can use the -l|--last flag to define the time range as the last <n> minutes, hours, or days ("[0-9]+[mdh]").
The default end time is the current time (now) and the default start time is one hour earlier than the end time. Therefore, the default time range when neither flag is set is the last hour. Note that the time range applies to the samples' generation times ("the sample times") and not to the times at which they were ingested into the TSDB.

By default, the command returns the query results in plain-text format ("text"), but you can use the -o|--output flag to specify a different format — "csv" (CSV) or "json" (JSON).

Examples

The following query returns all metric samples contained in the tsdb_example TSDB:

tsdbctl query -t tsdb_example -f "1==1" -b 0

The following queries both return tsdb_example TSDB cpu metric samples that were generated within the last hour and have a host label whose value is 'A' and an os label whose value is "linux":

tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -b now-1h
tsdbctl query cpu -t tsdb_example -f "host=='A' AND os=='linux'" -l 1h

The following query returns, in CSV format, all tsdb_example TSDB metric samples that have a degrees label and were generated in 2022:

tsdbctl query -t tsdb_example -f "exists(degrees)" -b 2022-01-01T00:00:00Z -e 2022-12-31T23:59:59Z -o csv
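
The -o flag also accepts "json"; for example, the following variation of the previous query returns the results in JSON format:

tsdbctl query -t tsdb_example -f "exists(degrees)" -b 2022-01-01T00:00:00Z -e 2022-12-31T23:59:59Z -o json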

Aggregation Queries

You can use the optional -a|--aggregates flag of the query command to provide a comma-separated list of aggregation functions ("aggregators") to apply to the raw samples data; for example, "sum,stddev,stdvar". See Supported Aggregation Functions for details. You can use the -i|--aggregation-interval flag to specify the time interval for applying the specified aggregators. The interval is specified as a string of the format "[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); for example, "3h" (3 hours). The default aggregation interval is the difference between the query's end and start times; for example, for the default query start and end times of now-1h and now, the default aggregation interval will be one hour (1h).

You can also submit aggregation queries for a TSDB without pre-aggregates. However, when the TSDB has pre-aggregates that match the query aggregators and the query's aggregation interval is a sufficient multiplier of the TSDB's aggregation granularity, the query processing is sped up by using the TSDB's pre-aggregates (the aggregation data that's stored in the TSDB's aggregation attributes) instead of performing a new calculation. See also the Aggregation Notes for the create command.

The following query returns, for each tsdb_example TSDB metric item whose metric name begins with "cpu", the minimal and maximal sample values and the standard deviation over two-hour aggregation intervals for samples that were generated in the last two days:

tsdbctl query -t tsdb_example -f "starts(__name__,'cpu')" -a "min,max,stddev" -i 2h -l 2d

The following queries return, for each m1 metric item in the tsdb_example_aggr TSDB, the daily, bi-hourly, or hourly samples count and data-values average (depending on the aggregation interval), beginning with 1 Jan 2018 at 00:00 until the current time (default). (Note that results are returned only for interval periods that contain samples.) The example outputs below match the tsdb_example_aggr.csv ingestion example that was used earlier in this tutorial; the first command and output use a daily aggregation interval (-i 1d), followed by the outputs for two-hour (-i 2h) and one-hour (-i 1h) intervals:

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 1d -b 2018-01-01T00:00:00Z
    
    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.50
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T00:00:00Z  v=4.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T00:00:00Z  v=2.50
      2018-01-02T00:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T00:00:00Z  v=3.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.50
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T00:00:00Z  v=2.00
      2018-01-02T00:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T00:00:00Z  v=1.50
      2018-01-02T00:00:00Z  v=3.00
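
The following command and output use a two-hour aggregation interval (-i 2h):

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 2h -b 2018-01-01T00:00:00Z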
    
    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=3.00
      2018-01-02T12:00:00Z  v=4.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T08:00:00Z  v=2.00
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.50
      2018-01-01T10:00:00Z  v=3.00
      2018-01-01T14:00:00Z  v=4.00
      2018-01-02T12:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T06:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T06:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.50
      2018-01-02T08:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T08:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T12:00:00Z  v=3.50
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T10:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T10:00:00Z  v=3.00
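
The following command and output use a one-hour aggregation interval (-i 1h):

    tsdbctl query m1 -t tsdb_example_aggr -a "count,avg" -i 1h -b 2018-01-01T00:00:00Z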
    
    Name: m1  Labels: host=B,os=windows,Aggregate=count
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
      2018-01-02T13:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=windows,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-02T08:00:00Z  v=3.00
      2018-01-02T13:00:00Z  v=4.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=count
      2018-01-01T09:00:00Z  v=2.00
      2018-01-01T11:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=2.00
    
    Name: m1  Labels: host=A,os=linux,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.50
      2018-01-01T11:00:00Z  v=3.00
      2018-01-01T14:00:00Z  v=4.00
      2018-01-02T12:00:00Z  v=5.50
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=1.00
      2018-01-02T07:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=2.00
      2018-01-02T07:00:00Z  v=3.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=1.00
      2018-01-01T13:00:00Z  v=1.00
      2018-01-02T08:00:00Z  v=1.00
    
    Name: m1  Labels: host=A,os=windows,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T12:00:00Z  v=2.00
      2018-01-01T13:00:00Z  v=3.00
      2018-01-02T08:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=count
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T12:00:00Z  v=1.00
      2018-01-02T13:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=linux,Aggregate=avg
      2018-01-01T09:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T12:00:00Z  v=3.00
      2018-01-02T13:00:00Z  v=4.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=count
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=1.00
      2018-01-02T11:00:00Z  v=1.00
    
    Name: m1  Labels: host=B,os=darwin,Aggregate=avg
      2018-01-01T10:00:00Z  v=1.00
      2018-01-01T14:00:00Z  v=2.00
      2018-01-02T11:00:00Z  v=3.00
    

As explained above, you can also submit aggregation queries for TSDBs without pre-aggregates. In such cases, the aggregations are calculated when the query is processed. For example, the following query returns a three-day average for the tsdb_example TSDB's temperature metric samples:

tsdbctl query temperature -t tsdb_example -a avg -i 3d -b 0

Deleting a TSDB

Use the CLI's delete command (or its del alias) to delete a TSDB or delete content from a TSDB.

Use the -a|--all flag to delete the entire TSDB — i.e., delete the TSDB table, including its schema (which contains the configuration information) and all its content.

You can optionally use the -b|--begin and -e|--end flags to define a sample-times range for the delete operation. As with the query command, the start and end times are each specified as a string that contains an RFC 3339 time string, a Unix timestamp in milliseconds, or a relative time of the format "now" or "now-[0-9]+[mhd]" (where 'm' = minutes, 'h' = hours, and 'd' = days); the start time can also be set to zero (0) for the earliest sample time in the TSDB. The default end (maximum) time is the current time (now) and the default start (minimum) time is one hour earlier than the end time.

To avoid inadvertent deletes, by default the command prompts you to confirm the delete operation. You can use the --force flag to perform a forceful deletion without prompting for confirmation.

You can also use the -i|--ignore-errors flag to skip errors that might occur during the delete operation and attempt to proceed to the next step.


Examples

The following command completely deletes the tsdb_example_aggr TSDB (subject to user confirmation in the command line):

tsdbctl delete -t tsdb_example_aggr -a

You can add the --force flag to enforce the delete operation and bypass the confirmation prompt:

tsdbctl delete -t tsdb_example_aggr -a --force

The following command deletes all tsdb_example TSDB partitions (and contained metric items) between the earliest sample time in the TSDB and Unix time 1569887999000 (2019-09-30T23:59:59Z):

tsdbctl delete -t tsdb_example -b 0 -e 1569887999000
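
For example, you might add the -i (--ignore-errors) flag to the previous command so that the CLI attempts to continue past errors during the deletion:

tsdbctl delete -t tsdb_example -b 0 -e 1569887999000 -i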
    
