Prometheus Metrics Setup for GCP
Learn how to set up Prometheus metrics collection for GCP, with an OpenTelemetry Collector running on GKE to scrape and forward the Aviator Prometheus metrics.
This how-to guide explains how to set up Prometheus metrics collection for GCP. We are going to use an OpenTelemetry Collector running on GKE to scrape and forward the Aviator Prometheus metrics.
GKE cluster
We are going to create a GCP service account that has the necessary permissions to write metrics. This service account is used via GKE's Workload Identity Federation.
Create a GCP SA metric-collector@YOUR_PROJECT.iam.gserviceaccount.com and give it roles/monitoring.metricWriter and roles/monitoring.viewer permissions at the project level.
On the GCP service account, allow the Kubernetes Service Account to use it: grant roles/iam.workloadIdentityUser to serviceAccount:YOUR_PROJECT.svc.id.goog[YOUR_K8S_NAMESPACE/metric-collector].
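A minimal sketch of these steps with the gcloud CLI is shown below, assuming metric-collector is also the name of the Kubernetes Service Account and that YOUR_PROJECT and YOUR_K8S_NAMESPACE are placeholders you substitute.

```bash
# Create the GCP service account (all YOUR_* values are placeholders)
gcloud iam service-accounts create metric-collector --project=YOUR_PROJECT

# Grant metric write and read roles at the project level
gcloud projects add-iam-policy-binding YOUR_PROJECT \
  --member="serviceAccount:metric-collector@YOUR_PROJECT.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"
gcloud projects add-iam-policy-binding YOUR_PROJECT \
  --member="serviceAccount:metric-collector@YOUR_PROJECT.iam.gserviceaccount.com" \
  --role="roles/monitoring.viewer"

# Let the Kubernetes Service Account use the GCP SA via Workload Identity Federation
gcloud iam service-accounts add-iam-policy-binding \
  metric-collector@YOUR_PROJECT.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:YOUR_PROJECT.svc.id.goog[YOUR_K8S_NAMESPACE/metric-collector]"
```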
The OpenTelemetry Collector requires a configuration that tells it where to scrape metrics from and where to send them. Create the following Kubernetes Secret for the configuration. Update the YOUR_ORG/YOUR_REPO and YOUR API KEY HERE parts in the config. The API key can be obtained from . Apply it with kubectl apply -n YOUR_K8S_NAMESPACE -f FILE.
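A sketch of such a Secret is shown below. The Secret name, the scrape target, and the metrics path are assumptions used for illustration; use the scrape endpoint Aviator provides for your repository, and substitute YOUR_ORG/YOUR_REPO and YOUR API KEY HERE as described above.

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The Secret name is an assumption; keep it consistent with the collector Deployment below
  name: aviator-otel-collector-config
type: Opaque
stringData:
  config.yaml: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: aviator
              scheme: https
              # Hypothetical host and path; use the scrape URL Aviator provides for YOUR_ORG/YOUR_REPO
              metrics_path: /api/v1/prometheus/YOUR_ORG/YOUR_REPO
              static_configs:
                - targets: ["api.aviator.co"]
              authorization:
                type: Bearer
                credentials: "YOUR API KEY HERE"
    exporters:
      # Writes the scraped metrics to Google Cloud Managed Service for Prometheus
      googlemanagedprometheus: {}
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [googlemanagedprometheus]
```

Depending on your environment you may also want processors such as batch or resourcedetection in the pipeline; the sketch above keeps only the minimum.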
Deploy the OpenTelemetry Collector to your GKE cluster and apply this config to your namespace.
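A sketch of the Kubernetes Service Account and collector Deployment is shown below. The resource names, image tag, and mount path are assumptions; the iam.gke.io/gcp-service-account annotation is what ties the Kubernetes Service Account to the GCP SA created earlier, and the Secret name matches the sketch above.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metric-collector
  annotations:
    # Binds this Kubernetes SA to the GCP SA via Workload Identity Federation
    iam.gke.io/gcp-service-account: metric-collector@YOUR_PROJECT.iam.gserviceaccount.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      serviceAccountName: metric-collector
      containers:
        - name: otel-collector
          # Pin a specific version in practice; the contrib image is needed for
          # the googlemanagedprometheus exporter
          image: otel/opentelemetry-collector-contrib:latest
          args: ["--config=/etc/otel/config.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/otel
              readOnly: true
      volumes:
        - name: config
          secret:
            secretName: aviator-otel-collector-config
```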
The metrics should appear as Prometheus metrics in Cloud Monitoring. You can set up a dashboard to monitor the queue length and GitHub API usage on your account.
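If the metrics do not show up, one quick check, assuming the Deployment name from the sketch above (otel-collector), is to look at the collector logs for scrape or export errors:

```bash
# Inspect the collector output for scrape or export errors
kubectl logs -n YOUR_K8S_NAMESPACE deploy/otel-collector
```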