---
title: Storage
sort_rank: 5
---
# Storage
Prometheus includes a local on-disk time series database, but also optionally integrates with remote storage systems.
## Local storage
Prometheus's local time series database stores data in a custom, highly efficient format on local storage.
### On-disk layout
Ingested samples are grouped into blocks of two hours. Each two-hour block consists
of a directory containing a chunks subdirectory containing all the time series samples
for that window of time, a metadata file, and an index file (which indexes metric names
and labels to time series in the chunks directory). The samples in the chunks directory
are grouped together into one or more segment files of up to 512MB each by default. When
series are deleted via the API, deletion records are stored in separate tombstone files
(instead of deleting the data immediately from the chunk segments).

The current block for incoming samples is kept in memory and is not fully
persisted. It is secured against crashes by a write-ahead log (WAL) that can be
replayed when the Prometheus server restarts. Write-ahead log files are stored
in the `wal` directory in 128MB segments. These files contain raw data that
has not yet been compacted; thus they are significantly larger than regular block
files. Prometheus will retain a minimum of three write-ahead log files.
High-traffic servers may retain more than three WAL files in order to keep at
least two hours of raw data.

A Prometheus server's data directory looks something like this:
```
./data
├── 01BKGV7JBM69T2G1BGBGM6KB12
│ └── meta.json
├── 01BKGTZQ1SYQJTR4PB43C8PD98
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
├── 01BKGTZQ1HHWHV8FBJXW1Y3W0K
│ └── meta.json
├── 01BKGV7JC0RY8A6MACW02A2PJD
│ ├── chunks
│ │ └── 000001
│ ├── tombstones
│ ├── index
│ └── meta.json
├── chunks_head
│ └── 000001
└── wal
├── 000000002
└── checkpoint.00000001
└── 00000000
```
Note that a limitation of local storage is that it is not clustered or
replicated. Thus, it is not arbitrarily scalable or durable in the face of
drive or node outages and should be managed like any other single node
database.
[Snapshots](querying/api.md#snapshot) are recommended for backups. Backups
made without snapshots run the risk of losing data that was recorded since
the last WAL sync, which typically happens every two hours. With proper
architecture, it is possible to retain years of data in local storage.

Alternatively, external storage may be used via the
[remote read/write APIs](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
Careful evaluation is required for these systems, as they vary greatly in durability,
performance, and efficiency.

For further details on the file format, see [TSDB format](/tsdb/docs/format/README.md).
## Compaction
The initial two-hour blocks are eventually compacted into longer blocks in the background.
Compaction will create larger blocks containing data spanning up to 10% of the retention time,
or 31 days, whichever is smaller.
## Operational aspects
Prometheus has several flags that configure local storage. The most important are:

- `--storage.tsdb.path`: Where Prometheus writes its database. Defaults to `data/`.
- `--storage.tsdb.retention.time`: How long to retain samples in storage. If neither
  this flag nor `--storage.tsdb.retention.size` is set, the retention time defaults to
  `15d`. Supported units: y, w, d, h, m, s, ms.
- `--storage.tsdb.retention.size`: The maximum number of bytes of storage blocks to retain.
  The oldest data will be removed first. Defaults to `0` (disabled). Supported units:
  B, KB, MB, GB, TB, PB, EB. Example: "512MB". Units are based on powers of two, so
  1KB is 1024B. Only the persistent blocks are deleted to honor this retention, although
  the WAL and memory-mapped chunks are counted in the total size. The minimum disk
  requirement is therefore the peak space taken by the `wal` (the WAL and checkpoint)
  and `chunks_head` (memory-mapped head chunks) directories combined, which peaks every
  two hours.
- `--storage.tsdb.wal-compression`: Enables compression of the write-ahead log (WAL).
  Depending on your data, you can expect the WAL size to be halved with little extra
  CPU load. This flag was introduced in 2.11.0 and enabled by default in 2.20.0.
  Note that once enabled, downgrading Prometheus to a version below 2.11.0 will
  require deleting the WAL.

Prometheus stores an average of only 1-2 bytes per sample. Thus, to plan the
capacity of a Prometheus server, you can use the rough formula:
```
needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample
```
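As a sketch, plugging hypothetical numbers into this formula (15 days of retention, 100,000 ingested samples per second, and 2 bytes per sample, the pessimistic end of the 1-2 byte range) gives a rough estimate:

```shell
# Hypothetical capacity estimate; all three inputs are illustrative.
retention_time_seconds=$((15 * 24 * 3600))    # 15 days = 1296000 seconds
ingested_samples_per_second=100000
bytes_per_sample=2
needed_disk_space=$((retention_time_seconds * ingested_samples_per_second * bytes_per_sample))
echo "$((needed_disk_space / 1024 / 1024 / 1024)) GiB"    # prints "241 GiB"
```

Remember that this estimate excludes the WAL and the in-memory head block, so leave headroom on top of it.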
To lower the rate of ingested samples, you can either reduce the number of
time series you scrape (fewer targets or fewer series per target), or you
can increase the scrape interval. However, reducing the number of series is
likely more effective, due to compression of samples within a series.

If your local storage becomes corrupted for whatever reason, the best
strategy to address the problem is to shut down Prometheus, then remove the
entire storage directory. You can also try removing individual block directories
or the WAL directory to resolve the problem. Note that this means losing
approximately two hours of data per block directory. Again, Prometheus's local
storage is not intended to be durable long-term storage; external solutions
offer extended retention and data durability.

CAUTION: Non-POSIX compliant filesystems are not supported for Prometheus's
local storage, as unrecoverable corruption may happen. NFS filesystems
(including AWS's EFS) are not supported. NFS could be POSIX-compliant,
but most implementations are not. It is strongly recommended to use a
local filesystem for reliability.

If both time and size retention policies are specified, whichever triggers first
will be used.
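For example, a server might be started with both policies set; the flag values below are illustrative only, not recommendations:

```shell
# Illustrative only: blocks are deleted once they are older than 30 days
# OR once total block size exceeds 100GB, whichever limit is hit first.
prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.retention.time=30d \
  --storage.tsdb.retention.size=100GB
```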

Expired block cleanup happens in the background. It may take up to two hours
to remove expired blocks. Blocks must be fully expired before they are removed.
## Right-Sizing Retention Size
If you are utilizing `storage.tsdb.retention.size` to set a size limit, you
will want to consider the right size for this value relative to the storage you
have allocated for Prometheus. It is wise to reduce the retention size to provide
a buffer, ensuring that older entries will be removed before the allocated storage
for Prometheus becomes full.
At present, we recommend setting the retention size to, at most, 80-85% of your
allocated Prometheus disk space. This increases the likelihood that older entries
will be removed prior to hitting any disk limitations.
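As a quick sketch, with a hypothetical 500GB disk allocated to Prometheus, an 80% cap works out as follows:

```shell
# Hypothetical sizing: 500GB allocated, retention capped at 80% of it
# to leave headroom for the WAL, checkpoints, and in-progress compactions.
allocated_gb=500
retention_gb=$((allocated_gb * 80 / 100))
echo "--storage.tsdb.retention.size=${retention_gb}GB"    # prints "--storage.tsdb.retention.size=400GB"
```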
## Remote storage integrations
Prometheus's local storage is limited to a single node's scalability and durability.
Instead of trying to solve clustered storage in Prometheus itself, Prometheus offers
a set of interfaces that allow integrating with remote storage systems.
### Overview
Prometheus integrates with remote storage systems in four ways:

- Prometheus can write samples that it ingests to a remote URL in a [Remote Write format](https://prometheus.io/docs/specs/remote_write_spec_2_0/).
- Prometheus can receive samples from other clients in a [Remote Write format](https://prometheus.io/docs/specs/remote_write_spec_2_0/).
- Prometheus can read (back) sample data from a remote URL in a [Remote Read format](https://github.com/prometheus/prometheus/blob/main/prompb/remote.proto#L31).
- Prometheus can return sample data requested by clients in a [Remote Read format](https://github.com/prometheus/prometheus/blob/main/prompb/remote.proto#L31).

![Remote read and write architecture](images/remote_integrations.png)

The remote read and write protocols both use a snappy-compressed protocol buffer
encoding over HTTP. The read protocol is not yet considered a stable API.

The write protocol has a [stable specification for the 1.0 version](https://prometheus.io/docs/specs/remote_write_spec/)
and an [experimental specification for the 2.0 version](https://prometheus.io/docs/specs/remote_write_spec_2_0/),
both supported by the Prometheus server.

For details on configuring remote storage integrations in Prometheus as a client, see the
[remote write](configuration/configuration.md#remote_write) and
[remote read](configuration/configuration.md#remote_read) sections of the Prometheus
configuration documentation.

Note that on the read path, Prometheus only fetches raw series data for a set of
label selectors and time ranges from the remote end. All PromQL evaluation on the
raw data still happens in Prometheus itself. This means that remote read queries
have some scalability limit, since all necessary data needs to be loaded into the
querying Prometheus server first and then processed there. However, supporting
fully distributed evaluation of PromQL was deemed infeasible for the time being.

Prometheus also serves both protocols. The built-in remote write receiver can be enabled
by setting the `--web.enable-remote-write-receiver` command line flag. When enabled,
the remote write receiver endpoint is `/api/v1/write`. The remote read endpoint is
available on [`/api/v1/read`](https://prometheus.io/docs/prometheus/latest/querying/remote_read_api/).
### Existing integrations
To learn more about existing integrations with remote storage systems, see the
[Integrations documentation](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
## Backfilling from OpenMetrics format
### Overview
If a user wants to create blocks into the TSDB from data that is in
[OpenMetrics](https://openmetrics.io/) format, they can do so using backfilling.
However, they should be careful and note that it is not safe to backfill data
from the last 3 hours (the current head block) as this time range may overlap
with the current head block Prometheus is still mutating. Backfilling will
create new TSDB blocks, each containing two hours of metrics data. This limits
the memory requirements of block creation. Compacting the two hour blocks into
larger blocks is later done by the Prometheus server itself.

A typical use case is to migrate metrics data from a different monitoring system
or time-series database to Prometheus. To do so, the user must first convert the
source data into [OpenMetrics](https://openmetrics.io/) format, which is the
input format for the backfilling as described below.
2020-12-10 07:29:44 -08:00
2024-05-21 05:44:55 -07:00
Note that native histograms and staleness markers are not supported by this
procedure, as they cannot be represented in the OpenMetrics format.
### Usage
Backfilling can be used via the Promtool command line. Promtool will write the blocks
to a directory. By default, this output directory is `./data/`; you can change it by
passing the name of the desired output directory as an optional argument to the sub-command.
```
promtool tsdb create-blocks-from openmetrics <input file> [<output directory>]
```
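A minimal end-to-end sketch, with a made-up metric and made-up file names, might look like this (note the `# EOF` terminator that the OpenMetrics format requires):

```shell
# Write a tiny OpenMetrics file; the metric, labels, values, and
# timestamps (in seconds) are all hypothetical.
cat > backfill.om <<'EOF'
# HELP http_requests_total Total HTTP requests served.
# TYPE http_requests_total counter
http_requests_total{job="app"} 100 1617079873
http_requests_total{job="app"} 200 1617083473
# EOF
EOF

# Then create two-hour blocks under ./out (requires promtool on the PATH):
# promtool tsdb create-blocks-from openmetrics backfill.om ./out
```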
After the creation of the blocks, move them to the data directory of Prometheus.
If there is an overlap with the existing blocks in Prometheus, the flag
`--storage.tsdb.allow-overlapping-blocks` needs to be set for Prometheus versions
v2.38 and below. Note that any backfilled data is subject to the retention
configured for your Prometheus server (by time or size).
#### Longer Block Durations
By default, promtool will use the default block duration (2h) for the blocks;
this behavior is the most generally applicable and correct. However, when backfilling
data over a long range of time, it may be advantageous to use a larger value for
the block duration to backfill faster and prevent additional compactions by the TSDB later.

The `--max-block-duration` flag allows the user to configure a maximum duration of blocks.
The backfilling tool will pick a suitable block duration no larger than this.
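For instance, a long historical export might be backfilled with blocks of up to 31 days; the file and directory names here are hypothetical:

```shell
# Illustrative only: allow promtool to create blocks up to 31 days long
# when backfilling a year's worth of exported data.
promtool tsdb create-blocks-from openmetrics \
  --max-block-duration=31d \
  yearly-export.om ./out
```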

While larger blocks may improve the performance of backfilling large datasets,
drawbacks exist as well. Time-based retention policies must keep the entire block
around if even one sample of the (potentially large) block is still within the
retention policy. Conversely, size-based retention policies will remove the entire
block even if the TSDB only goes over the size limit in a minor way.

Therefore, backfilling with few blocks, thereby choosing a larger block duration,
must be done with care and is not recommended for any production instances.
## Backfilling for Recording Rules
### Overview
When a new recording rule is created, there is no historical data for it.
Recording rule data only exists from the creation time on.
`promtool` makes it possible to create historical recording rule data.
### Usage
To see all options, use: `$ promtool tsdb create-blocks-from rules --help`.
Example usage:
```
$ promtool tsdb create-blocks-from rules \
--start 1617079873 \
--end 1617097873 \
--url http://mypromserver.com:9090 \
rules.yaml rules2.yaml
```
The recording rule files provided should be a normal
[Prometheus rules file](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).

The output of the `promtool tsdb create-blocks-from rules` command is a directory that
contains blocks with the historical rule data for all rules in the recording rule
files. By default, the output directory is `data/`. In order to make use of this
new block data, the blocks must be moved to a running Prometheus instance's data
directory `storage.tsdb.path` (for Prometheus versions v2.38 and below, the flag
`--storage.tsdb.allow-overlapping-blocks` must be enabled). Once moved, the new
blocks will merge with existing blocks when the next compaction runs.
### Limitations
- If you run the rule backfiller multiple times with overlapping start/end times,
  blocks containing the same data will be created each time the rule backfiller is run.
- All rules in the recording rule files will be evaluated.
- If the `interval` is set in the recording rule file, it will take priority over
  the `eval-interval` flag in the rule backfill command.
- Alerts are currently ignored if they are in the recording rule file.
- Rules in the same group cannot see the results of previous rules. This means that rules
  that refer to other rules being backfilled are not supported. A workaround is to
  backfill multiple times and create the dependent data first (and move the dependent
  data to the Prometheus server data dir so that it is accessible from the Prometheus API).