prometheus/main.go
Julius Volz 740d448983 Use custom timestamp type for sample timestamps and related code.
So far we've been using Go's native time.Time for anything related to sample
timestamps. Since the range of time.Time is much bigger than what we need, this
has created two problems:

- time.Time values could fall outside the range/precision of the time type
  that we persist to disk, producing incorrectly ordered keys. One bug caused
  by this was:

  https://github.com/prometheus/prometheus/issues/367

  It would be better to use a timestamp type that is closely aligned with
  what the underlying storage supports.

- sizeof(time.Time) is 192 bits (24 bytes on a 64-bit platform), while
  Prometheus should be fine with a single 64-bit Unix timestamp (possibly
  even a 32-bit one); the sketch after this list illustrates the difference.
  Since we store samples in large numbers, this seriously affects memory
  usage, and smaller data is also faster to copy and work with.
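
A quick way to see the size difference (a hypothetical snippet, not part of
this change; sizes assume a 64-bit platform, and Timestamp below merely
stands in for the single 64-bit Unix timestamp described above):

  package main

  import (
      "fmt"
      "time"
      "unsafe"
  )

  // Timestamp stands in for the new type: one 64-bit Unix timestamp.
  type Timestamp int64

  func main() {
      fmt.Println(unsafe.Sizeof(time.Time{}))  // 24 bytes (192 bits)
      fmt.Println(unsafe.Sizeof(Timestamp(0))) // 8 bytes (64 bits)
  }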

*MEMORY USAGE RESULTS*
Initial memory usage comparisons for a running Prometheus with one timeseries
and 100,000 samples show roughly a 13% decrease in total (VIRT) memory usage.
In my tests, this advantage shrank somewhat as the number of samples in the
timeseries grew (to 5-7% for millions of samples). I can't fully explain
this, but garbage collection behavior may be a factor.

*WHEN TO USE THE NEW TIMESTAMP TYPE*
The new clientmodel.Timestamp type should be used whenever time
calculations are either directly or indirectly related to sample
timestamps.

For example:
- the timestamp of a sample itself
- all kinds of watermarks
- anything that may become or is compared to a sample timestamp (like the timestamp
  passed into Target.Scrape()).

When to still use time.Time:
- for measuring durations/times not related to sample timestamps, such as
  duration telemetry exporting or timers that control how frequently to
  execute some action. A sketch of the distinction follows below.
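
A minimal sketch of the distinction (only clientmodel.Now() appears in this
change's main.go; the Add and Before methods assumed here are illustrative):

  package main

  import (
      "fmt"
      "time"

      clientmodel "github.com/prometheus/client_golang/model"
  )

  func main() {
      // Anything that is, or is compared against, a sample timestamp uses
      // the new type.
      scrapeTime := clientmodel.Now()
      watermark := scrapeTime.Add(-5 * time.Minute)
      if watermark.Before(scrapeTime) {
          fmt.Println("watermark", watermark, "precedes scrape at", scrapeTime)
      }

      // Durations unrelated to sample timestamps keep using the standard
      // library, e.g. a ticker controlling how often to run some action.
      ticker := time.NewTicker(15 * time.Second)
      defer ticker.Stop()
  }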

*NOTE ON OPERATOR OPTIMIZATION TESTS*
We no longer use the operator optimization code, but it still lives in the
tree as dead code. It still has tests, but I couldn't get all of them to pass
with the new timestamp format. I have commented out the failing cases for
now; we should remove the dead code soon, but I didn't want to do that in the
same change as this one.

Change-Id: I821787414b0debe85c9fffaeb57abd453727af0f
2013-12-03 09:11:28 +01:00


// Copyright 2013 Prometheus Team
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"flag"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/golang/glog"

	"github.com/prometheus/client_golang/extraction"
	clientmodel "github.com/prometheus/client_golang/model"

	"github.com/prometheus/prometheus/config"
	"github.com/prometheus/prometheus/notification"
	"github.com/prometheus/prometheus/retrieval"
	"github.com/prometheus/prometheus/rules"
	"github.com/prometheus/prometheus/storage/metric"
	"github.com/prometheus/prometheus/web"
	"github.com/prometheus/prometheus/web/api"
)

const deletionBatchSize = 100

// Commandline flags.
var (
	configFile                   = flag.String("configFile", "prometheus.conf", "Prometheus configuration file name.")
	metricsStoragePath           = flag.String("metricsStoragePath", "/tmp/metrics", "Base path for metrics storage.")
	alertmanagerUrl              = flag.String("alertmanager.url", "", "The URL of the alert manager to send notifications to.")
	samplesQueueCapacity         = flag.Int("storage.queue.samplesCapacity", 4096, "The size of the unwritten samples queue.")
	diskAppendQueueCapacity      = flag.Int("storage.queue.diskAppendCapacity", 1000000, "The size of the queue for items that are pending writing to disk.")
	memoryAppendQueueCapacity    = flag.Int("storage.queue.memoryAppendCapacity", 10000, "The size of the queue for items that are pending writing to memory.")
	headCompactInterval          = flag.Duration("compact.headInterval", 3*time.Hour, "The amount of time between head compactions.")
	bodyCompactInterval          = flag.Duration("compact.bodyInterval", 5*time.Hour, "The amount of time between body compactions.")
	tailCompactInterval          = flag.Duration("compact.tailInterval", 7*time.Hour, "The amount of time between tail compactions.")
	headGroupSize                = flag.Int("compact.headGroupSize", 500, "The minimum group size for head samples.")
	bodyGroupSize                = flag.Int("compact.bodyGroupSize", 5000, "The minimum group size for body samples.")
	tailGroupSize                = flag.Int("compact.tailGroupSize", 10000, "The minimum group size for tail samples.")
	headAge                      = flag.Duration("compact.headAgeInclusiveness", 5*time.Minute, "The relative inclusiveness of head samples.")
	bodyAge                      = flag.Duration("compact.bodyAgeInclusiveness", time.Hour, "The relative inclusiveness of body samples.")
	tailAge                      = flag.Duration("compact.tailAgeInclusiveness", 24*time.Hour, "The relative inclusiveness of tail samples.")
	deleteInterval               = flag.Duration("delete.interval", 11*time.Hour, "The amount of time between deletion of old values.")
	deleteAge                    = flag.Duration("delete.ageMaximum", 15*24*time.Hour, "The relative maximum age for values before they are deleted.")
	arenaFlushInterval           = flag.Duration("arena.flushInterval", 15*time.Minute, "The period at which the in-memory arena is flushed to disk.")
	arenaTTL                     = flag.Duration("arena.ttl", 10*time.Minute, "The relative age of values to purge to disk from memory.")
	notificationQueueCapacity    = flag.Int("alertmanager.notificationQueueCapacity", 100, "The size of the queue for pending alert manager notifications.")
	concurrentRetrievalAllowance = flag.Int("concurrentRetrievalAllowance", 15, "The number of concurrent metrics retrieval requests allowed.")
	printVersion                 = flag.Bool("version", false, "print version information")
)

// prometheus bundles the server's long-lived state: the compaction and
// deletion timers, curation bookkeeping, the ingestion queue, and the
// rule-manager, notification, and storage subsystems.
type prometheus struct {
	headCompactionTimer      *time.Ticker
	bodyCompactionTimer      *time.Ticker
	tailCompactionTimer      *time.Ticker
	deletionTimer            *time.Ticker
	curationSema             chan bool
	stopBackgroundOperations chan bool
	unwrittenSamples         chan *extraction.Result
	ruleManager              rules.RuleManager
	notifications            chan notification.NotificationReqs
	storage                  *metric.TieredStorage
	curationState            metric.CurationStateUpdater
}

// interruptHandler waits for SIGINT or SIGTERM and then shuts the server
// down cleanly.
func (p *prometheus) interruptHandler() {
	// The channel is buffered so a signal delivered before we block on the
	// receive is not dropped (the signal package does not block on sends).
	notifier := make(chan os.Signal, 1)
	signal.Notify(notifier, os.Interrupt, syscall.SIGTERM)
	<-notifier

	glog.Warning("Received SIGINT/SIGTERM; Exiting gracefully...")
	p.close()
	os.Exit(0)
}

// compact tries to acquire the curation semaphore and, if no other curation
// operation is running, compacts samples older than olderThan into groups of
// at least groupSize.
func (p *prometheus) compact(olderThan time.Duration, groupSize int) error {
	select {
	case p.curationSema <- true:
	default:
		glog.Warningf("Deferred compaction for %s and %d due to existing operation.", olderThan, groupSize)
		return nil
	}
	defer func() {
		<-p.curationSema
	}()

	processor := metric.NewCompactionProcessor(&metric.CompactionProcessorOptions{
		MaximumMutationPoolBatch: groupSize * 3,
		MinimumGroupSize:         groupSize,
	})
	defer processor.Close()

	curator := metric.NewCurator(&metric.CuratorOptions{
		Stop:      p.stopBackgroundOperations,
		ViewQueue: p.storage.ViewQueue,
	})
	defer curator.Close()

	return curator.Run(olderThan, clientmodel.Now(), processor, p.storage.DiskStorage.CurationRemarks,
		p.storage.DiskStorage.MetricSamples, p.storage.DiskStorage.MetricHighWatermarks, p.curationState)
}

// delete tries to acquire the curation semaphore and, if no other curation
// operation is running, deletes values older than olderThan in batches of
// batchSize.
func (p *prometheus) delete(olderThan time.Duration, batchSize int) error {
	select {
	case p.curationSema <- true:
	default:
		glog.Warningf("Deferred deletion for %s due to existing operation.", olderThan)
		return nil
	}
	// Release the semaphore once this run is done; otherwise all subsequent
	// compactions and deletions would be deferred forever.
	defer func() {
		<-p.curationSema
	}()

	processor := metric.NewDeletionProcessor(&metric.DeletionProcessorOptions{
		MaximumMutationPoolBatch: batchSize,
	})
	defer processor.Close()

	curator := metric.NewCurator(&metric.CuratorOptions{
		Stop:      p.stopBackgroundOperations,
		ViewQueue: p.storage.ViewQueue,
	})
	defer curator.Close()

	return curator.Run(olderThan, clientmodel.Now(), processor, p.storage.DiskStorage.CurationRemarks,
		p.storage.DiskStorage.MetricSamples, p.storage.DiskStorage.MetricHighWatermarks, p.curationState)
}

// close stops the timers and background operations, then shuts down the rule
// manager, storage, and notification queue.
func (p *prometheus) close() {
	// Keep new curation operations from starting while shutting down.
	select {
	case p.curationSema <- true:
	default:
	}

	if p.headCompactionTimer != nil {
		p.headCompactionTimer.Stop()
	}
	if p.bodyCompactionTimer != nil {
		p.bodyCompactionTimer.Stop()
	}
	if p.tailCompactionTimer != nil {
		p.tailCompactionTimer.Stop()
	}
	if p.deletionTimer != nil {
		p.deletionTimer.Stop()
	}

	if len(p.stopBackgroundOperations) == 0 {
		p.stopBackgroundOperations <- true
	}

	p.ruleManager.Stop()
	p.storage.Close()

	close(p.notifications)
	close(p.stopBackgroundOperations)
}

func main() {
	// TODO(all): Future additions to main should be, where applicable, glumped
	// into the prometheus struct above---at least where the scoping of the
	// entire server is concerned.
	flag.Parse()

	versionInfoTmpl.Execute(os.Stdout, BuildInfo)

	if *printVersion {
		os.Exit(0)
	}

	conf, err := config.LoadFromFile(*configFile)
	if err != nil {
		glog.Fatalf("Error loading configuration from %s: %v", *configFile, err)
	}

	ts, err := metric.NewTieredStorage(uint(*diskAppendQueueCapacity), 100, *arenaFlushInterval, *arenaTTL, *metricsStoragePath)
	if err != nil {
		glog.Fatal("Error opening storage: ", err)
	}

	unwrittenSamples := make(chan *extraction.Result, *samplesQueueCapacity)
	ingester := &retrieval.MergeLabelsIngester{
		Labels:          conf.GlobalLabels(),
		CollisionPrefix: clientmodel.ExporterLabelPrefix,
		Ingester:        retrieval.ChannelIngester(unwrittenSamples),
	}

	// Coprime numbers, fool!
	headCompactionTimer := time.NewTicker(*headCompactInterval)
	bodyCompactionTimer := time.NewTicker(*bodyCompactInterval)
	tailCompactionTimer := time.NewTicker(*tailCompactInterval)
	deletionTimer := time.NewTicker(*deleteInterval)

	// Queue depth will need to be exposed.
	targetManager := retrieval.NewTargetManager(ingester, *concurrentRetrievalAllowance)
	targetManager.AddTargetsFromConfig(conf)

	notifications := make(chan notification.NotificationReqs, *notificationQueueCapacity)

	// Queue depth will need to be exposed.
	ruleManager := rules.NewRuleManager(&rules.RuleManagerOptions{
		Results:            unwrittenSamples,
		Notifications:      notifications,
		EvaluationInterval: conf.EvaluationInterval(),
		Storage:            ts,
		PrometheusUrl:      web.MustBuildServerUrl(),
	})
	if err := ruleManager.AddRulesFromConfig(conf); err != nil {
		glog.Fatal("Error loading rule files: ", err)
	}
	go ruleManager.Run()

	notificationHandler := notification.NewNotificationHandler(*alertmanagerUrl, notifications)
	go notificationHandler.Run()

	flags := map[string]string{}
	flag.VisitAll(func(f *flag.Flag) {
		flags[f.Name] = f.Value.String()
	})

	prometheusStatus := &web.PrometheusStatusHandler{
		BuildInfo:   BuildInfo,
		Config:      conf.String(),
		RuleManager: ruleManager,
		TargetPools: targetManager.Pools(),
		Flags:       flags,
		Birth:       time.Now(),
	}

	alertsHandler := &web.AlertsHandler{
		RuleManager: ruleManager,
	}

	databasesHandler := &web.DatabasesHandler{
		Provider:        ts.DiskStorage,
		RefreshInterval: 5 * time.Minute,
	}

	metricsService := &api.MetricsService{
		Config:        &conf,
		TargetManager: targetManager,
		Storage:       ts,
	}

	webService := &web.WebService{
		StatusHandler:    prometheusStatus,
		MetricsHandler:   metricsService,
		DatabasesHandler: databasesHandler,
		AlertsHandler:    alertsHandler,
	}

	prometheus := &prometheus{
		bodyCompactionTimer:      bodyCompactionTimer,
		headCompactionTimer:      headCompactionTimer,
		tailCompactionTimer:      tailCompactionTimer,
		deletionTimer:            deletionTimer,
		curationState:            prometheusStatus,
		curationSema:             make(chan bool, 1),
		unwrittenSamples:         unwrittenSamples,
		stopBackgroundOperations: make(chan bool, 1),
		ruleManager:              ruleManager,
		notifications:            notifications,
		storage:                  ts,
	}
	defer prometheus.close()
defer prometheus.close()
storageStarted := make(chan bool)
go ts.Serve(storageStarted)
<-storageStarted
go prometheus.interruptHandler()
go func() {
for _ = range prometheus.headCompactionTimer.C {
glog.Info("Starting head compaction...")
err := prometheus.compact(*headAge, *headGroupSize)
if err != nil {
glog.Error("could not compact: ", err)
}
glog.Info("Done")
}
}()
go func() {
for _ = range prometheus.bodyCompactionTimer.C {
glog.Info("Starting body compaction...")
err := prometheus.compact(*bodyAge, *bodyGroupSize)
if err != nil {
glog.Error("could not compact: ", err)
}
glog.Info("Done")
}
}()
go func() {
for _ = range prometheus.tailCompactionTimer.C {
glog.Info("Starting tail compaction...")
err := prometheus.compact(*tailAge, *tailGroupSize)
if err != nil {
glog.Error("could not compact: ", err)
}
glog.Info("Done")
}
}()
go func() {
for _ = range prometheus.deletionTimer.C {
glog.Info("Starting deletion of stale values...")
err := prometheus.delete(*deleteAge, deletionBatchSize)
if err != nil {
glog.Error("could not delete: ", err)
}
glog.Info("Done")
}
}()
go func() {
err := webService.ServeForever()
if err != nil {
glog.Fatal(err)
}
}()
// TODO(all): Migrate this into prometheus.serve().
for block := range unwrittenSamples {
if block.Err == nil {
ts.AppendSamples(block.Samples)
}
}
}