Kubernetes API Aggregation Layer & Extension apiservers

Bibin Wilson

The Kubernetes API server contains an aggregation layer that allows you to extend the Kubernetes API (with almost infinite flexibility) by creating custom API resources that are not natively available in Kubernetes.

Why would you need a custom API?

Custom APIs allow you to add new features and resources to Kubernetes without modifying its core code. This enables you to tailor Kubernetes to your specific needs.

Extension API Server

To enable custom APIs, you need to build and deploy an extension API server.

The aggregation layer sits in front of the main Kubernetes API server, routing requests to either the core API server or to extension API servers you've set up.
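The routing is driven by APIService objects: each one tells the aggregation layer which API group/version to proxy to which backing service. As a sketch, the registration for the Metrics Server looks roughly like this (field values follow the upstream metrics-server deployment, but check your own manifests):

```yaml
# APIService registering v1beta1.metrics.k8s.io with the
# aggregation layer. Requests to /apis/metrics.k8s.io/v1beta1
# are proxied to the metrics-server Service in kube-system.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 100
```

When the `service` field is omitted, the API is served locally by the main API server, which is what the `Local` entries in the `kubectl get apiservices` output mean.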

Real World Example

A classic example of using the Aggregation Layer in Kubernetes is the implementation of the Metrics Server. The Metrics Server is usually deployed as an add-on component in Kubernetes clusters.

The Metrics Server is widely used in Kubernetes clusters to collect CPU and memory metrics, which the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) retrieve to make scaling decisions.

Metrics server implements the Metrics API as an add-on API server, meaning it extends the Kubernetes API using the Aggregation Layer we discussed earlier.

Here is how it works:

  1. The Metrics Server collects resource metrics from Kubelets on each node in your cluster.
  2. It then serves these metrics via the Metrics API, which is exposed through Kubernetes API aggregation.
  3. This allows the Kubernetes API server to serve these metrics alongside other core APIs.
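You can see this in action by querying the Metrics API through the main API server. The commands below assume a cluster with the Metrics Server deployed; the aggregation layer transparently proxies the raw request to the metrics-server pod:

```shell
# Fetch node metrics directly from the aggregated Metrics API.
# The path belongs to metrics-server, not the core API server.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# The same aggregated API backs the familiar top commands:
kubectl top nodes
kubectl top pods -n kube-system
```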

You can view the API services in a cluster that has the Metrics Server deployed by running the following command:

$ kubectl get apiservices

NAME                                   SERVICE
v1.storage.k8s.io                      Local
v1beta1.metallb.io                     Local
v1beta1.metrics.k8s.io                 kube-system/metrics-server

In the output,

  • Local means the API is served directly by the main Kubernetes API server.
  • kube-system/metrics-server indicates that this API is served by the metrics-server running in the kube-system namespace.

This means that even though the Metrics Server is a separate service, its API can be accessed as if it were a native part of the Kubernetes API.

So when the Metrics Server is deployed in your cluster, it acts as an add-on API server that collects and serves resource metrics (like CPU and memory usage) through the Metrics API, which is accessible via the Kubernetes API server.

Another example is the Prometheus Adapter. It exposes arbitrary Prometheus metrics through the Kubernetes custom metrics API, which is used for autoscaling on custom application metrics and for more complex monitoring scenarios, whereas the Metrics Server supports only basic resource-based autoscaling. This API is served under the path /apis/custom.metrics.k8s.io/v1beta1.
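To illustrate how such a custom metric would be consumed, here is a hypothetical HPA manifest that scales on a Prometheus-derived request-rate metric exposed by the adapter. The metric name, deployment name, and threshold are illustrative, not part of any standard:

```yaml
# Hypothetical HPA scaling a Deployment on a custom metric
# (http_requests_per_second) served by prometheus-adapter via
# the aggregated custom metrics API.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"
```

The HPA controller resolves this metric by querying the custom metrics API on the main API server, which the aggregation layer routes to the adapter.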

Building Extension API Server

To build an extension API server you can make use of the apiserver-builder repo.

apiserver-builder is a tool designed to simplify the process of creating Kubernetes extension API servers.

It automates much of the boilerplate code generation needed to create a Kubernetes API server. For example,

  • It generates API definitions
  • It creates controller scaffolding
  • It handles common tasks like setting up CRUD operations
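As a sketch, a typical scaffold with the `apiserver-boot` CLI (shipped with apiserver-builder) looks like the following; the domain, group, version, and kind names are placeholders you would replace with your own:

```shell
# Initialize a new extension API server project
# (example.com is an illustrative domain).
apiserver-boot init repo --domain example.com

# Generate an API group, version, and resource with
# controller scaffolding and CRUD wiring.
apiserver-boot create group version resource \
  --group mygroup --version v1alpha1 --kind MyResource

# Build and run the extension API server locally for testing.
apiserver-boot run local
```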

Here is an example API server implementation.

When to use Aggregation Layer

In practice, it's common to start with CRDs and move to the Aggregation Layer if you encounter limitations.

For the majority of scenarios where you need to extend Kubernetes, such as defining application-specific resources or operational patterns, CRDs provide all the necessary functionality. The Aggregation Layer is worth the extra operational cost only when you need behavior CRDs cannot provide, such as a custom storage backend or custom API semantics.

Many projects, including major Kubernetes extensions like Istio and Knative, use a combination of both approaches.

Further Reading

  1. Configure the Aggregation Layer
  2. Set up an Extension API Server