Describe the bug
Prometheus scrape error for the scrape_pool=GPFSDiskCap
Mar 20 21:08:56 prometheus[12576]: ts=2025-03-20T21:08:56.114Z caller=scrape.go:1747 level=warn component="scrape manager" scrape_pool=GPFSDiskCap target=http://10.xxx.x.xxx:9250/metrics_gpfs_diskcap msg="Error on ingesting samples that are too old or are too far into the future" num_dropped=9
The metrics don't show up in the metrics explorer, but the timestamp is over 24 hours old.
Hi @ussxs01,
the GPFSDiskCap sensor runs only once per day, which is unusual for Prometheus data collection: most scrape jobs run every 5s, 1m, and so on. The Prometheus TSDB therefore considers samples that arrive only once a day to be out of range.
To avoid this, add the following setting to your Prometheus config file (prometheus.yml) and restart the service.
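A minimal sketch of the snippet, assuming Prometheus v2.39 or later (where out-of-order ingestion is available); the 1d window is an assumption sized to the sensor's once-per-day period, so tune it to your environment:

storage:
  tsdb:
    # accept samples with timestamps up to one day in the past
    out_of_order_time_window: 1d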
Thanks for reporting this issue!
I'll update the examples of prometheus.yml in our wiki.
I'll also add this attribute to the Prometheus config generator API in our code.
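For reference, a sketch of what such a scrape job might look like; the job name, metrics path, and port are taken from the log line above, while honor_timestamps (the Prometheus default) and the hourly scrape_interval are assumptions, since the sensor only refreshes its data once a day:

scrape_configs:
  - job_name: GPFSDiskCap
    metrics_path: /metrics_gpfs_diskcap
    honor_timestamps: true   # keep the exporter-supplied sample timestamps
    scrape_interval: 1h      # assumption: poll hourly even though the data changes daily
    static_configs:
      - targets: ['10.xxx.x.xxx:9250']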
Here are the /metrics_gpfs_diskcap results. Note the trailing value 1742428800000 on each sample: it is the exporter-supplied timestamp in milliseconds, i.e. 2025-03-20T00:00:00Z, roughly 21 hours before the scrape that logged the dropped samples.
# HELP gpfs_disk_disksize Desc not found
# TYPE gpfs_disk_disksize gauge
gpfs_disk_disksize{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-01",gpfs_diskpool_name="system",gpfs_disk_name="NSD001"} 104857600.0 1742428800000
gpfs_disk_disksize{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-02",gpfs_diskpool_name="system",gpfs_disk_name="NSD002"} 104857600.0 1742428800000
gpfs_disk_disksize{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-03",gpfs_diskpool_name="system",gpfs_disk_name="NSD003"} 104857600.0 1742428800000
# HELP gpfs_disk_free_fragkb Desc not found
# TYPE gpfs_disk_free_fragkb gauge
gpfs_disk_free_fragkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-01",gpfs_diskpool_name="system",gpfs_disk_name="NSD001"} 224880.0 1742428800000
gpfs_disk_free_fragkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-02",gpfs_diskpool_name="system",gpfs_disk_name="NSD002"} 48696.0 1742428800000
gpfs_disk_free_fragkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-03",gpfs_diskpool_name="system",gpfs_disk_name="NSD003"} 11896.0 1742428800000
# HELP gpfs_disk_free_fullkb Desc not found
# TYPE gpfs_disk_free_fullkb gauge
gpfs_disk_free_fullkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-01",gpfs_diskpool_name="system",gpfs_disk_name="NSD001"} 96059392.0 1742428800000
gpfs_disk_free_fullkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-02",gpfs_diskpool_name="system",gpfs_disk_name="NSD002"} 102068224.0 1742428800000
gpfs_disk_free_fullkb{gpfs_cluster_name="par-scale52.local",gpfs_fs_name="par-scale52-03",gpfs_diskpool_name="system",gpfs_disk_name="NSD003"} 103133184.0 1742428800000