Introduction to Time Series Databases and InfluxDB 2.0: Architecture, Features, Installation, and Practical Applications
This article explains what time series databases are, introduces InfluxDB as the leading TSDB, describes its TICK architecture and storage engine, provides step‑by‑step installation and configuration of InfluxDB and Telegraf, demonstrates visualization, JMeter integration, and Flux queries in Python, and highlights the rapid market growth of TSDBs.
Time series databases (TSDBs) store large volumes of time‑stamped data, such as CPU usage, housing prices, or temperature trends, and support fast writes, durable persistence, multi‑dimensional queries, and aggregation.
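As a toy illustration of the kind of time‑stamped aggregation a TSDB performs, the sketch below computes a windowed mean over simulated CPU samples with pandas (the data and window size are hypothetical, chosen only to show the idea):

```python
import pandas as pd

# Simulated CPU-usage samples, one per minute (hypothetical data).
idx = pd.date_range("2021-06-01 00:00", periods=6, freq="min")
cpu = pd.Series([10.0, 20.0, 30.0, 40.0, 50.0, 60.0], index=idx)

# Aggregate into 3-minute windows, much like a TSDB range + mean query.
windowed = cpu.resample("3min").mean()
print(windowed)
```

A real TSDB performs the same kind of range-and-aggregate operation, but over billions of points and with indexes tuned for time-ordered data.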
InfluxDB is the most popular TSDB, ranking first on DB‑Engines; it has evolved from the InfluxQL‑based 1.x series to the Flux‑driven 2.x series, offering both cloud and OSS editions.
The TICK architecture (Telegraf, InfluxDB, Chronograf, Kapacitor) provides data collection, storage, visualization, and processing; Chronograf can be replaced by Grafana as the visualization layer.
InfluxDB’s storage engine uses a Time‑Structured Merge Tree (TSM), a variant of the Log‑Structured Merge Tree (LSM), with components such as the WAL (write‑ahead log), an in‑memory cache playing the role of an LSM memtable, immutable TSM files analogous to SSTables, and a compactor.
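The LSM write path underlying TSM can be sketched in a few lines of Python. This is an illustrative toy, not InfluxDB's implementation: real TSM files are columnar and compressed, the WAL has its own binary format, and compaction merges many levels of files.

```python
import json

# Minimal sketch of an LSM-style write path (illustrative only).
class TinyLSM:
    def __init__(self, flush_threshold=3):
        self.wal = []          # write-ahead log: replayed after a crash
        self.memtable = {}     # in-memory, mutable buffer
        self.sstables = []     # immutable, sorted segments (simulated)
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.wal.append(json.dumps({key: value}))  # durability first
        self.memtable[key] = value
        if len(self.memtable) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # The memtable becomes an immutable, sorted SSTable;
        # the WAL can then be truncated.
        self.sstables.append(dict(sorted(self.memtable.items())))
        self.memtable = {}
        self.wal = []

    def read(self, key):
        # Newest data wins: check memtable first, then SSTables newest-first.
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.sstables):
            if key in table:
                return table[key]
        return None

db = TinyLSM()
for ts, val in [("t1", 10), ("t2", 20), ("t3", 30), ("t4", 40)]:
    db.write(ts, val)
print(db.read("t2"), db.read("t4"))  # t2 from an SSTable, t4 from memtable
```

The same pattern — append to a WAL, buffer in memory, flush immutable sorted runs, compact in the background — is what gives TSM its high write throughput.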
Installation steps include downloading the .deb package, installing it with sudo dpkg -i influxdb2-2.0.7-amd64.deb, starting and stopping the service with sudo service influxdb start and sudo service influxdb stop, and checking its status with sudo service influxdb status.
Telegraf is installed similarly and started with systemctl start telegraf; its status can be checked with systemctl status telegraf.
Using InfluxDB 2.0’s built‑in UI, a bucket is created, a Telegraf configuration is generated, and data are visualized directly or via Grafana; visualizations can be exported and saved to dashboards.
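Whether points arrive via Telegraf or are written directly, they enter the bucket in InfluxDB line protocol: a measurement name, comma-separated tags, fields, and an optional nanosecond timestamp. A representative (illustrative) point matching the memory data queried later would look like:

```
mem,host=ubuntu available=2147483648i 1622547800000000000
```

Here mem is the measurement, host=ubuntu is a tag, available is an integer field (the trailing i marks an integer), and the final number is the timestamp in nanoseconds since the epoch; the values shown are examples only.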
JMeter can write performance metrics to InfluxDB by generating a read/write token and configuring the BackendListener with the appropriate URL, organization, bucket and token.
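A Backend Listener configuration for InfluxDB 2.0 typically resembles the sketch below. The host, organization, bucket, and application names are placeholders, and the exact parameter set depends on the JMeter version (a dedicated token parameter exists only in recent releases):

```
influxdbMetricsSender: org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender
influxdbUrl:           http://192.168.79.147:8086/api/v2/write?org=org_demo&bucket=jmeter_bucket
influxdbToken:         <read/write token generated in the InfluxDB UI>
application:           demo_app
measurement:           jmeter
```

The key points are that the URL targets InfluxDB 2.0's /api/v2/write endpoint with org and bucket as query parameters, and that the token must carry write permission on that bucket.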
Flux, the new query language, can be used from Python via the InfluxDBClient; example code shows connecting, querying a DataFrame, and inserting points:
import pandas as pd
from influxdb_client import InfluxDBClient

# Token, URL, and org come from the local InfluxDB 2.0 setup.
my_token = "PePwz1xFzM_edpm6NB0DyR2B04XWqDNQEFPmp9i8hxVW8DmlTTSzywrTyh_p5uv_k1h0Qdxy3U99J2S7TV9X7A=="
client = InfluxDBClient(url='http://192.168.79.147:8086', token=my_token, org='org_demo')
query_api = client.query_api()

# Flux query: available memory on host "ubuntu" over the last five weeks.
mem_query = '''
from(bucket: "demo_bucket")
    |> range(start: -5w, stop: now())
    |> filter(fn: (r) => r["_measurement"] == "mem")
    |> filter(fn: (r) => r["_field"] == "available")
    |> filter(fn: (r) => r["host"] == "ubuntu")
    |> yield(name: "mean")
'''
table = query_api.query_data_frame(mem_query, "org_demo")
mem_example = pd.DataFrame(table, columns=['_start', '_value', '_field', 'host'])
print(mem_example.head(5))
client.close()

Data insertion with Flux can also be performed from Python:
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

my_token = "ENL3dUfGzTBFGcHzJ8iCIfbKF0fF7C7-P5PDkGpDWLzvvHuP2v9tKVgeZAFqV3y8sLXJt8alK0e-jicHVDgOEg=="
client = InfluxDBClient(url='http://192.168.79.147:8086', token=my_token, org='org_demo')
write_api = client.write_api(write_options=SYNCHRONOUS)  # blocking writes

# Each Point takes a measurement name ("_measurement" is used here as a
# placeholder), plus tags and fields.
_point1 = Point("_measurement").tag("location", "Beijing").field("temperature", 36.0)
_point2 = Point("_measurement").tag("location", "Shanghai").field("temperature", 32.0)
write_api.write(bucket="python_bucket", record=[_point1, _point2])
client.close()

The article concludes that TSDBs are the fastest‑growing segment of the database market, driven by big‑data workloads that require real‑time analytics and forecasting, giving enterprises predictive and decision‑making capabilities.
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.