// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package remote

import (
	"fmt"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"github.com/prometheus/common/model"
)
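
// TestStorageClient is an in-memory StorageClient stub: it records every
// sample handed to Store() and lets a test register the samples it expects,
// so that waitForExpectedSamples() can assert on what was delivered.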
type TestStorageClient struct {
	receivedSamples map[string]model.Samples
	expectedSamples map[string]model.Samples
	wg              sync.WaitGroup
	mtx             sync.Mutex
}

func NewTestStorageClient() *TestStorageClient {
	return &TestStorageClient{
		receivedSamples: map[string]model.Samples{},
		expectedSamples: map[string]model.Samples{},
	}
}
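
// expectSamples registers the samples the test expects to arrive, keyed by
// the metric's string form, and grows the WaitGroup by the number of samples
// so waitForExpectedSamples() can block until they have all been stored.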
func (c *TestStorageClient) expectSamples(ss model.Samples) {
	c.mtx.Lock()
	defer c.mtx.Unlock()

	for _, s := range ss {
		ts := s.Metric.String()
		c.expectedSamples[ts] = append(c.expectedSamples[ts], s)
	}
	c.wg.Add(len(ss))
}
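
// waitForExpectedSamples blocks until Store() has received as many samples as
// were registered via expectSamples(), then compares received and expected
// samples pairwise per timeseries.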
func (c *TestStorageClient) waitForExpectedSamples(t *testing.T) {
	c.wg.Wait()

	c.mtx.Lock()
	defer c.mtx.Unlock()
	for ts, expectedSamples := range c.expectedSamples {
		for i, expected := range expectedSamples {
			if !expected.Equal(c.receivedSamples[ts][i]) {
				t.Fatalf("%d. Expected %v, got %v", i, expected, c.receivedSamples[ts][i])
			}
		}
	}
}
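
// Store records the received samples per timeseries and decrements the
// WaitGroup accordingly, unblocking waitForExpectedSamples() once everything
// the test expected has arrived.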
func (c *TestStorageClient) Store(ss model.Samples) error {
	c.mtx.Lock()
	defer c.mtx.Unlock()

	for _, s := range ss {
		ts := s.Metric.String()
		c.receivedSamples[ts] = append(c.receivedSamples[ts], s)
	}
	c.wg.Add(-len(ss))
	return nil
}

func (c *TestStorageClient) Name() string {
	return "teststorageclient"
}
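
// TestSampleDelivery appends twice the queue capacity worth of samples and
// expects only the first half to be delivered: the second half is appended
// before the manager is started, so the queue is already full and those
// samples are dropped, as noted in the inline comments below.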
func TestSampleDelivery(t *testing.T) {
	// Let's create an even number of send batches so we don't run into the
	// batch timeout case.
	n := defaultQueueCapacity * 2

	samples := make(model.Samples, 0, n)
	for i := 0; i < n; i++ {
		name := model.LabelValue(fmt.Sprintf("test_metric_%d", i))
		samples = append(samples, &model.Sample{
			Metric: model.Metric{
				model.MetricNameLabel: name,
			},
			Value: model.SampleValue(i),
		})
	}

	c := NewTestStorageClient()
	c.expectSamples(samples[:len(samples)/2])

	m := NewQueueManager(QueueManagerConfig{
		Client: c,
		Shards: 1,
	})

	// These should be received by the client.
	for _, s := range samples[:len(samples)/2] {
		m.Append(s)
	}
	// These will be dropped because the queue is full.
	for _, s := range samples[len(samples)/2:] {
		m.Append(s)
	}
	m.Start()
	defer m.Stop()

	c.waitForExpectedSamples(t)
}
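
// TestSampleDeliveryOrder interleaves samples from several timeseries with
// increasing timestamps and, with a queue large enough that nothing is
// dropped, checks that each series reaches the client in the order it was
// appended.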
func TestSampleDeliveryOrder(t *testing.T) {
	ts := 10
	n := defaultMaxSamplesPerSend * ts

	samples := make(model.Samples, 0, n)
	for i := 0; i < n; i++ {
		name := model.LabelValue(fmt.Sprintf("test_metric_%d", i%ts))
		samples = append(samples, &model.Sample{
			Metric: model.Metric{
				model.MetricNameLabel: name,
			},
			Value:     model.SampleValue(i),
			Timestamp: model.Time(i),
		})
	}

	c := NewTestStorageClient()
	c.expectSamples(samples)
	m := NewQueueManager(QueueManagerConfig{
		Client: c,
		// Ensure we don't drop samples in this test.
		QueueCapacity: n,
	})

	// These should be received by the client.
	for _, s := range samples {
		m.Append(s)
	}
	m.Start()
	defer m.Stop()

	c.waitForExpectedSamples(t)
}

// TestBlockingStorageClient is a queue_manager StorageClient which will block
// on any calls to Store(), until the `block` channel is closed, at which point
// the `numCalls` field will contain a count of how many times Store() was
// called.
type TestBlockingStorageClient struct {
	numCalls uint64
	block    chan bool
}

func NewTestBlockedStorageClient() *TestBlockingStorageClient {
	return &TestBlockingStorageClient{
		block:    make(chan bool),
		numCalls: 0,
	}
}
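
// Store counts the call, then blocks until unlock() closes the channel; the
// count is read back via NumCalls() to verify how many sends were in flight.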
func (c *TestBlockingStorageClient) Store(s model.Samples) error {
	atomic.AddUint64(&c.numCalls, 1)
	<-c.block
	return nil
}

func (c *TestBlockingStorageClient) NumCalls() uint64 {
	return atomic.LoadUint64(&c.numCalls)
}

func (c *TestBlockingStorageClient) unlock() {
	close(c.block)
}

func (c *TestBlockingStorageClient) Name() string {
	return "testblockingstorageclient"
}
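
// queueLen reports how many samples are still sitting in the per-shard
// queues; the test below polls it to detect when the queue has drained.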
func (t *QueueManager) queueLen() int {
	queueLength := 0
	for _, shard := range t.shards {
		queueLength += len(shard)
	}
	return queueLength
}
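
// TestSpawnNotMoreThanMaxConcurrentSendsGoroutines overfills the queue while
// Store() is blocked and then checks that the number of Store() calls equals
// the number of shards, i.e. that no extra sender goroutines were spawned.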
func TestSpawnNotMoreThanMaxConcurrentSendsGoroutines(t *testing.T) {
	// Our goal is to fully empty the queue:
	// `MaxSamplesPerSend*Shards` samples should be consumed by the
	// per-shard goroutines, and then another `MaxSamplesPerSend`
	// should be left on the queue.
	n := defaultMaxSamplesPerSend*defaultShards + defaultMaxSamplesPerSend

	samples := make(model.Samples, 0, n)
	for i := 0; i < n; i++ {
		name := model.LabelValue(fmt.Sprintf("test_metric_%d", i))
		samples = append(samples, &model.Sample{
			Metric: model.Metric{
				model.MetricNameLabel: name,
			},
			Value: model.SampleValue(i),
		})
	}

	c := NewTestBlockedStorageClient()
	m := NewQueueManager(QueueManagerConfig{
		Client:        c,
		QueueCapacity: n,
	})

	m.Start()

	defer func() {
		c.unlock()
		m.Stop()
	}()

	for _, s := range samples {
		m.Append(s)
	}

	// Wait until the runShard() loops drain the queue. If things went right,
	// they should then immediately block in sendSamples(); in case of error
	// they would spawn too many goroutines, and we would see more calls to
	// client.Store().
	//
	// The timed wait is maybe non-ideal, but, in order to verify that we're
	// not spawning too many concurrent goroutines, we have to wait on the
	// Run() loop to consume a specific number of elements from the
	// queue... and it doesn't signal that in any obvious way, except by
	// draining the queue. We cap the waiting at 1 second -- that should give
	// plenty of time, and keeps the failure fairly quick if we're not draining
	// the queue properly.
	for i := 0; i < 100 && m.queueLen() > 0; i++ {
		time.Sleep(10 * time.Millisecond)
	}

	if m.queueLen() != defaultMaxSamplesPerSend {
		t.Fatalf("Failed to drain QueueManager queue, %d elements left",
			m.queueLen(),
		)
	}

	numCalls := c.NumCalls()
	if numCalls != uint64(defaultShards) {
		t.Errorf("Saw %d concurrent sends, expected %d", numCalls, defaultShards)
	}
}