Provisioning Admission Check Controller

An admission check controller providing Kueue integration with the cluster autoscaler.

The Provisioning AdmissionCheck Controller is an AdmissionCheck Controller designed to integrate Kueue with the Kubernetes cluster-autoscaler. Its primary function is to create ProvisioningRequests for workloads holding a Quota Reservation and to keep their AdmissionCheckState in sync.

The controller is part of Kueue and is enabled by default. You can disable it via the ProvisioningACC feature gate. Check the Installation guide for details on feature gate configuration.
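
For example, a minimal sketch of disabling it, assuming Kueue runs as the kueue-controller-manager Deployment and accepts feature gates as a container argument (the exact placement depends on your installation method; see the Installation guide):

# Fragment of the kueue-controller-manager container spec (illustrative)
args:
- --feature-gates=ProvisioningACC=false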

The Provisioning Admission Check Controller is supported on Kubernetes cluster-autoscaler versions 1.29 and later. However, some cloud providers may not implement it.

Check the list of supported Provisioning Classes and their prerequisites in the ClusterAutoscaler documentation.

Usage

To use the Provisioning AdmissionCheck, create an AdmissionCheck with kueue.x-k8s.io/provisioning-request as its .spec.controllerName and create a ProvisioningRequest configuration using a ProvisioningRequestConfig object.

Next, you need to reference the AdmissionCheck from the ClusterQueue, as detailed in Admission Check usage.
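
For instance, the simplest form references the check by name in the ClusterQueue’s .spec.admissionChecks list (the full setup below uses the more granular admissionChecksStrategy instead):

apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  admissionChecks:
  - sample-prov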

See below for a full setup.

ProvisioningRequest configuration

There are two ways to configure the ProvisioningRequests that Kueue creates for your Jobs.

  • ProvisioningRequestConfig: This configuration, referenced from the AdmissionCheck, applies to all jobs that go through that check. It enables you to set provisioningClassName, managedResources, and parameters.
  • Job annotation: This configuration enables you to set parameters for a specific job. If both the annotation and the ProvisioningRequestConfig set the same parameter, the annotation value takes precedence, as shown in the sketch below.
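
As a sketch of the precedence rule, assume both mechanisms set the ValidUntilSeconds parameter (the values here are illustrative); the created ProvisioningRequest would use the annotation’s value, "60":

# In the ProvisioningRequestConfig: applies to all jobs using the check
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  parameters:
    ValidUntilSeconds: "120"
---
# On a specific Job: overrides the config value for this job only
metadata:
  annotations:
    provreq.kueue.x-k8s.io/ValidUntilSeconds: "60"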

ProvisioningRequestConfig

A ProvisioningRequestConfig looks like the following:

apiVersion: kueue.x-k8s.io/v1beta1
kind: ProvisioningRequestConfig
metadata:
  name: prov-test-config
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  managedResources:
  - nvidia.com/gpu
  retryStrategy:
    backoffLimitCount: 2
    backoffBaseSeconds: 60
    backoffMaxSeconds: 1800
  podSetMergePolicy: IdenticalWorkloadSchedulingRequirements

Where:

  • provisioningClassName - describes the mode of provisioning the resources. Supported ProvisioningClasses are listed in the ClusterAutoscaler documentation; also check your cloud provider’s documentation for other ProvisioningRequest classes it supports.
  • managedResources - contains the list of resources managed by autoscaling.
  • retryStrategy.backoffLimitCount - indicates how many times a ProvisioningRequest should be retried in case of failure. Defaults to 3.
  • retryStrategy.backoffBaseSeconds - provides the base for calculating the backoff time that a ProvisioningRequest waits before being retried. Defaults to 60.
  • retryStrategy.backoffMaxSeconds - indicates the maximum backoff time (in seconds) before retrying a ProvisioningRequest. Defaults to 1800.
  • podSetMergePolicy - allows merging similar PodSets into a single PodTemplate used by the ProvisioningRequest.
  • podSetUpdates - allows updating the Workload’s PodSets with node selectors based on a successful ProvisioningRequest. This restricts scheduling of the PodSets’ pods to the newly provisioned nodes.

PodSet merge policy
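
When podSetMergePolicy is set, Kueue merges a Workload’s similar PodSets into a single PodTemplate in the ProvisioningRequest it creates. With IdenticalWorkloadSchedulingRequirements, as used in the example above, PodSets are merged when the fields relevant to scheduling, such as resource requests, node selectors, tolerations, and affinities, are identical; IdenticalPodTemplates merges only PodSets whose pod templates are fully identical. Check the API definition for the authoritative list of supported policies.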

Retry strategy

If a ProvisioningRequest fails, it may be retried after a backoff period. The backoff time (in seconds) is calculated using the following formula, where n is the retry number (starting at 1):

time = min(backoffBaseSeconds^n, backoffMaxSeconds)
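
For example, with backoffBaseSeconds: 60 and backoffMaxSeconds: 1800, the first retry happens after min(60^1, 1800) = 60 seconds and the second after min(60^2, 1800) = min(3600, 1800) = 1800 seconds.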

When a ProvisioningRequest fails, the quota reserved for a Workload is released, and the Workload needs to restart the admission cycle.

PodSet updates

To restrict scheduling of the workload’s Pods to the newly provisioned nodes, you can use the podSetUpdates API, which injects node selectors targeting those nodes.

For example:

podSetUpdates:
  nodeSelector:
  - key: autoscaling.cloud-provider.com/provisioning-request
    valueFromProvisioningClassDetail: RequestKey

This snippet in the ProvisioningRequestConfig instructs Kueue to update the Job’s PodTemplate after provisioning so that it targets the newly provisioned nodes, which carry the label autoscaling.cloud-provider.com/provisioning-request with the value taken from the ProvisioningClassDetails map, under the “RequestKey” key.

Note that this assumes the provisioning class (which can be cloud-provider specific) supports setting a unique node label on the newly provisioned nodes.
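
For instance, assuming the successful ProvisioningRequest exposes RequestKey: req-abc123 in its ProvisioningClassDetails (the value is illustrative), the workload’s pods would end up scheduled with a node selector like:

nodeSelector:
  autoscaling.cloud-provider.com/provisioning-request: req-abc123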

Reference

Check the API definition for more details.

Job annotations

Another way to pass ProvisioningRequest parameters is by using Job annotations. Every annotation with the provreq.kueue.x-k8s.io/ prefix is passed directly to the created ProvisioningRequest. For example, provreq.kueue.x-k8s.io/ValidUntilSeconds: "60" passes the ValidUntilSeconds parameter with a value of 60. See more examples below.

Once Kueue creates a ProvisioningRequest for the job you submitted, modifying the values of these annotations on the job has no effect on the ProvisioningRequest.
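
As a rough sketch, the ProvisioningRequest created for such a job would carry the annotation value in its spec.parameters; the object name and PodTemplate reference below are illustrative, since Kueue generates them, and the API version depends on your cluster-autoscaler version:

apiVersion: autoscaling.x-k8s.io/v1
kind: ProvisioningRequest
metadata:
  name: sample-job-xxxxx-sample-prov-1   # illustrative; generated by Kueue
  namespace: default
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  parameters:
    ValidUntilSeconds: "60"
  podSets:
  - count: 3
    podTemplateRef:
      name: ppt-sample-job   # illustrative; generated by Kueue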

Example

Setup

apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: "default-flavor"
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: "cluster-queue"
spec:
  namespaceSelector: {} # match all.
  resourceGroups:
  - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
    flavors:
    - name: "default-flavor"
      resources:
      - name: "cpu"
        nominalQuota: 9
      - name: "memory"
        nominalQuota: 36Gi
      - name: "nvidia.com/gpu"
        nominalQuota: 9
  admissionChecksStrategy:
    admissionChecks:
      - name: "sample-prov"
        onFlavors: [default-flavor] # optional if the admission check targets all flavors.
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  namespace: "default"
  name: "user-queue"
spec:
  clusterQueue: "cluster-queue"
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: AdmissionCheck
metadata:
  name: sample-prov
spec:
  controllerName: kueue.x-k8s.io/provisioning-request
  parameters:
    apiGroup: kueue.x-k8s.io
    kind: ProvisioningRequestConfig
    name: prov-test-config
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ProvisioningRequestConfig
metadata:
  name: prov-test-config
spec:
  provisioningClassName: check-capacity.autoscaling.x-k8s.io
  managedResources:
  - nvidia.com/gpu
  retryStrategy:
    backoffLimitCount: 2
    backoffBaseSeconds: 60
    backoffMaxSeconds: 1800

Job using a ProvisioningRequest

apiVersion: batch/v1
kind: Job
metadata:
  generateName: sample-job-
  namespace: default
  labels:
    kueue.x-k8s.io/queue-name: user-queue
  annotations:
    provreq.kueue.x-k8s.io/maxRunDurationSeconds: "600"
spec:
  parallelism: 3
  completions: 3
  suspend: true
  template:
    spec:
      tolerations:
      - key: "nvidia.com/gpu"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: dummy-job
        image: registry.k8s.io/e2e-test-images/agnhost:2.53
        args: ["pause"]
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
            nvidia.com/gpu: 1
          limits:
            nvidia.com/gpu: 1
      restartPolicy: Never
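
After you submit this Job to user-queue, Kueue reserves quota for the workload, creates the ProvisioningRequest, and keeps the Job suspended; once the cluster-autoscaler marks the request as provisioned and the admission check becomes Ready, Kueue admits the workload and unsuspends the Job.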