Timescale Cloud is a PostgreSQL-based database platform that is optimized for time-series data. It provides a range of features and optimizations that supercharge your queries while keeping costs down. For example:
  • The hypercore row-columnar engine makes queries up to 350x faster, ingests 44% faster, and reduces storage by 90%.
The following figure shows how Timescale Cloud optimizes your data for superfast real-time analytics:

[Figure: Main features and tiered data]

This page shows you how to rapidly implement the features that enable you to ingest and query data faster while keeping costs low.

Prerequisites

To follow the steps on this page:
  • Create a target Timescale Cloud service with time-series and analytics enabled.

    You need your connection details. This procedure also works for self-hosted TimescaleDB.

Optimize time-series data in hypertables

Time-series data represents the way a system, process, or behavior changes over time. Hypertables are PostgreSQL tables that help you improve insert and query performance by automatically partitioning your data by time. Each hypertable is made up of child tables called chunks. Each chunk is assigned a range of time, and only contains data from that range. When you run a query, TimescaleDB identifies the correct chunks and runs the query on them, instead of going through the entire table. You can also tune hypertables to increase performance even more.

[Figure: Hypertable structure]

Hypercore is the hybrid row-columnar storage engine used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities. Hypercore dynamically stores data in the most efficient format for its lifecycle:
  • Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single-record queries. Additionally, row-based storage is used as a write-through for inserts and updates to columnar storage.
  • Columnar storage for analytical performance: older chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.
Unlike traditional columnar databases, TimescaleDB allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics, all within a single database.

Hypertables exist alongside regular PostgreSQL tables. You use regular tables for relational data, and interact with hypertables and regular tables in the same way. This section shows you how to create regular tables and hypertables, and import relational and time-series data from external files.
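Once a hypertable holds data, you can see the time-based partitioning directly. A minimal sketch using the `show_chunks` function from the TimescaleDB API (the hypertable name and interval are illustrative):

```sql
-- List the chunks of a hypertable that contain data from the last week.
-- 'crypto_ticks' is the hypertable created later in this guide.
SELECT show_chunks('crypto_ticks', newer_than => INTERVAL '7 days');
```

Each row returned names one chunk (a child table such as `_timescaledb_internal._hyper_1_2_chunk`); queries that filter on time only touch the chunks in range.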
  1. Import some time-series data into your hypertables
    1. Unzip crypto_sample.zip to a <local folder>. This test dataset contains:
      • Second-by-second data for the most-traded crypto-assets. This time-series data is best suited for optimization in a hypertable.
      • A list of asset symbols and company names. This is best suited for a regular relational table.
      To import up to 100GB of data directly from your current PostgreSQL-based database, migrate with downtime using native PostgreSQL tooling. To seamlessly import 100GB-10TB+ of data, use the live migration tooling supplied by Timescale. To add data from non-PostgreSQL data sources, see Import and ingest data.
    2. Upload data into a hypertable: To more fully understand how to create a hypertable, how hypertables work, and how to optimize them for performance by tuning chunk intervals and enabling chunk skipping, see the hypertables documentation.
      1. In Terminal, navigate to <local folder> and connect to your service.
        psql -d "postgres://<username>:<password>@<host>:<port>/<database-name>"
        
        You use your connection details to fill in this connection string.
      2. Create tables for the data to import:
        • For the time-series data:
          1. In your SQL client, create a hypertable for your time-series data using CREATE TABLE. For efficient queries, remember to segmentby the column you will use most often to filter your data. For example:
            CREATE TABLE crypto_ticks (
              "time" TIMESTAMPTZ,
              symbol TEXT,
              price DOUBLE PRECISION,
              day_volume NUMERIC
            ) WITH (
               tsdb.hypertable,
               tsdb.partition_column='time',
               tsdb.segmentby = 'symbol'
            );
            
            If you are self-hosting TimescaleDB v2.19.3 and below, create a relational table, then convert it using create_hypertable. You then enable hypercore with a call to ALTER TABLE.
        • For the relational data: In your SQL client, create a normal PostgreSQL table:
          CREATE TABLE crypto_assets (
           symbol TEXT NOT NULL,
           name TEXT NOT NULL
          );
          
      3. Upload the dataset to your service:
        \COPY crypto_ticks from './tutorial_sample_tick.csv' DELIMITER ',' CSV HEADER;
        
        \COPY crypto_assets from './tutorial_sample_assets.csv' DELIMITER ',' CSV HEADER;
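For the self-hosted path mentioned in the note above (TimescaleDB v2.19.3 and below), the table setup is a two-step conversion rather than a single CREATE TABLE. A sketch, reusing the crypto_ticks schema from this guide; on releases older than 2.18, the ALTER TABLE options are timescaledb.compress and timescaledb.compress_segmentby instead:

```sql
-- 1. Create a regular PostgreSQL table.
CREATE TABLE crypto_ticks (
  "time" TIMESTAMPTZ,
  symbol TEXT,
  price DOUBLE PRECISION,
  day_volume NUMERIC
);

-- 2. Convert it to a hypertable partitioned by the "time" column.
SELECT create_hypertable('crypto_ticks', by_range('time'));

-- 3. Enable hypercore (columnstore) with ALTER TABLE, segmenting by symbol.
ALTER TABLE crypto_ticks SET (
  timescaledb.enable_columnstore = true,
  timescaledb.segmentby = 'symbol'
);
```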
        
  2. Have a quick look at your data

    You query hypertables in exactly the same way as you would a relational table. Use one of the following SQL editors to run a query and see the data you uploaded:
    • psql: easily run queries on your Timescale Cloud service or self-hosted deployment from Terminal.
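For example, a quick sanity check on the imported time-series data might look like the following sketch (the symbol value is illustrative; use any symbol present in the dataset):

```sql
-- Inspect the ten most recent ticks for one asset.
SELECT "time", symbol, price
FROM crypto_ticks
WHERE symbol = 'BTC/USD'
ORDER BY "time" DESC
LIMIT 10;
```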

Enhance query performance for analytics

Hypercore is the hybrid row-columnar storage engine in TimescaleDB, designed specifically for real-time analytics and powered by time-series data. The advantage of hypercore is its ability to seamlessly switch between row-oriented and column-oriented storage. This flexibility enables TimescaleDB to deliver the best of both worlds, solving the key challenges in real-time analytics.

[Figure: Move from rowstore to columnstore in hypercore]

When TimescaleDB converts chunks from the rowstore to the columnstore, multiple records are grouped into a single row. The columns of this row hold an array-like structure that stores all the data. Because a single row takes up less disk space, you can reduce your chunk size by more than 90%, and can also speed up your queries. This helps you save on storage costs, and keeps your queries operating at lightning speed.

The columnstore is enabled by default when you call CREATE TABLE. Best practice is to compress data that is no longer needed for highest-performance queries, but is still accessed regularly in the columnstore. For example, yesterday's market data.
  1. Add a policy to convert chunks to the columnstore at a specific time interval. For example, data older than one day:
    CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d');
    
    If you have not configured a segmentby column, TimescaleDB chooses one for you based on the data in your hypertable. For more information on how to tune your hypertables for the best performance, see efficient queries.
  2. View your data space saving

    When you convert data to the columnstore, as well as being optimized for analytics, it is compressed by more than 90%. This helps you save on storage costs and keeps your queries operating at lightning speed. To see the amount of space saved, click Explorer > public > crypto_ticks.

    [Figure: Columnstore data savings]
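You can also check the space saving from SQL. A sketch, assuming a TimescaleDB release where hypertable_columnstore_stats is available (earlier releases expose the same figures through hypertable_compression_stats):

```sql
-- Compare on-disk size before and after conversion to the columnstore.
SELECT
  pg_size_pretty(before_compression_total_bytes) AS before,
  pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_columnstore_stats('crypto_ticks');
```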

Write fast and efficient analytical queries

Aggregation is a way of combining data to get insights from it. Average, sum, and count are all examples of simple aggregates. However, with large amounts of data, aggregation slows things down, quickly. Continuous aggregates are a kind of hypertable that is refreshed automatically in the background as new data is added, or old data is modified. Changes to your dataset are tracked, and the hypertable behind the continuous aggregate is automatically updated in the background.

[Figure: Reduced data calls with continuous aggregates]

You create continuous aggregates on uncompressed data in high-performance storage. They continue to work on data in the columnstore and rarely accessed data in tiered storage. You can even create continuous aggregates on top of your continuous aggregates.

You use time buckets to create a continuous aggregate. Time buckets aggregate data in hypertables by time interval. For example, a 5-minute, 1-hour, or 3-day bucket. The data grouped in a time bucket uses a single timestamp. Continuous aggregates minimize the number of records that you need to look up to perform your query.

This section shows you how to run fast analytical queries using time buckets and continuous aggregates in the data mode in Timescale Console. You can also do this using psql.
  1. Connect to your service: In Timescale Console, select your service in the connection drop-down in the top right.
  2. Create a continuous aggregate: For a continuous aggregate, data grouped using a time bucket is stored in a PostgreSQL MATERIALIZED VIEW in a hypertable. timescaledb.continuous ensures that this data is always up to date. In data mode, use the following code to create a continuous aggregate on the real-time data in the crypto_ticks table:
    CREATE MATERIALIZED VIEW assets_candlestick_daily
    WITH (timescaledb.continuous) AS
    SELECT
      time_bucket('1 day', "time") AS day,
      symbol,
      max(price) AS high,
      first(price, time) AS open,
      last(price, time) AS close,
      min(price) AS low
    FROM crypto_ticks srt
    GROUP BY day, symbol;
    
    This creates the candlestick chart data you use to visualize the price change of an asset.
  3. Create a policy to refresh the view every three hours
    SELECT add_continuous_aggregate_policy('assets_candlestick_daily',
    start_offset => INTERVAL '3 weeks',
    end_offset => INTERVAL '24 hours',
    schedule_interval => INTERVAL '3 hours');
    
  4. Have a quick look at your data

    You query continuous aggregates exactly the same way as your other tables. To query the assets_candlestick_daily continuous aggregate for all assets:
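The query itself can be as simple as the following sketch (the ordering is an illustrative choice, not part of the view definition):

```sql
-- Read the daily candlestick data for all assets, newest first.
SELECT *
FROM assets_candlestick_daily
ORDER BY day DESC, symbol;
```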
To see the change in terms of query time and data returned between a regular query and a continuous aggregate, run the query part of the continuous aggregate ( SELECT ... GROUP BY day, symbol; ) and compare the results.
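One way to make that comparison concrete is to time both forms with EXPLAIN ANALYZE (psql's \timing works too). A sketch:

```sql
-- Aggregate computed from the raw hypertable at query time:
EXPLAIN ANALYZE
SELECT time_bucket('1 day', "time") AS day, symbol,
       max(price) AS high, min(price) AS low
FROM crypto_ticks
GROUP BY day, symbol;

-- The same shape of result served from the continuous aggregate:
EXPLAIN ANALYZE
SELECT day, symbol, high, low
FROM assets_candlestick_daily;
```

On large datasets the second plan reads far fewer rows, because the aggregation work was done incrementally in the background.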