Kubernetes API-Server with Multiple IdPs (and GitHub Actions)
In my article on Authentik, I mentioned the possibility of having SSO on the Kubernetes API-Server.
As a reminder, to enable OIDC authentication when using `kubectl`, you first need to configure the Kubernetes API server so it can accept JWT tokens issued by a compatible OIDC identity provider (IdP). This is done by adding parameters to the Kubernetes API server command line:
```yaml
cluster:
  apiServer:
    extraArgs:
      oidc-issuer-url: "https://goauthentik.une-tasse-de.cafe/application/o/k8s-lucca-poc/"
      oidc-client-id: my-super-client-id
      oidc-username-claim: email
      oidc-groups-claim: groups
```
By installing `oidc-login` (a plugin for `kubectl`), you get a workflow that opens your browser to log in to the IdP and obtain a JWT token, which is then used to authenticate to the Kubernetes API.
It’s clean, efficient, and works very well!
But if you want to authenticate multiple populations of users with different IdPs… tough luck!
Typically, I use Dex and Authentik on my Kubernetes clusters for internal users, and I need machine-to-machine authentication for GitHub Actions pipelines!
Note that GitHub does not offer OIDC for users (so if you want social login with GitHub, you'll need an intermediary)… but for pipelines, it does!
So, technically, a GitHub Actions workflow can authenticate to the API-Server directly with a JWT token issued by GitHub (going through Dex or another intermediary would have been more complex).
But there's a big limitation: the arguments passed to the API-Server are global, which means that if you have multiple IdPs, you can't use them at the same time. You have to choose between your users and your pipelines!
But that was before I discovered the game-changing feature: API-Server configuration via a file, known as structured authentication configuration (which allows configuring multiple issuers).
Goodbye to the `--oidc-issuer-url`, `--oidc-client-id`, etc. arguments, and hello to the configuration file. Here's what it looks like:
```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://goauthentik.une-tasse-de.cafe/application/o/k8s-lucca-poc/ # equivalent to oidc-issuer-url
      audiences:
        - my-client-id-on-goauthentik # equivalent to oidc-client-id
      audienceMatchPolicy: MatchAny
    claimValidationRules:
      - expression: "claims.email_verified == true"
        message: "email must be verified"
    claimMappings:
      username:
        expression: 'claims.email + ":authentik"' # equivalent to oidc-username-claim
      groups:
        expression: "claims.groups" # equivalent to oidc-groups-claim
      uid:
        expression: "claims.sub"
    userValidationRules:
      - expression: "!user.username.startsWith('system:')"
        message: "username cannot use reserved system: prefix"
      - expression: "user.groups.all(group, !group.startsWith('system:'))"
        message: "groups cannot use reserved system: prefix"
```
In addition to finally being able to configure this in a file rather than via arguments, you also get access to many new features:
- Validate a specific field in the JWT token (for example, check that the email is verified as in the example above).
- Map JWT claims to specific fields (using CEL processing).
- Validate fields (to validate the JWT or the user) with CEL rules.
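To get a feel for what these rules do, here's a plain-Python sketch of the three stages (validation, mapping, user validation) for the Authentik issuer above. The API server really evaluates CEL expressions, not Python; this only illustrates the semantics, and the sample claim values are made up.

```python
# Hypothetical sketch: plain Python mimicking the three stages of the
# AuthenticationConfiguration above (the API server evaluates CEL, not Python).

def authenticate(claims: dict) -> dict:
    # 1. claimValidationRules: reject tokens whose email is not verified
    if not claims.get("email_verified", False):
        raise ValueError("email must be verified")

    # 2. claimMappings: build the Kubernetes user info from the claims
    user = {
        "username": claims["email"] + ":authentik",
        "groups": claims.get("groups", []),
        "uid": claims["sub"],
    }

    # 3. userValidationRules: refuse identities that collide with
    #    Kubernetes' reserved "system:" prefix
    if user["username"].startswith("system:"):
        raise ValueError("username cannot use reserved system: prefix")
    if any(g.startswith("system:") for g in user["groups"]):
        raise ValueError("groups cannot use reserved system: prefix")
    return user

claims = {
    "email": "goauthentik@une-pause-cafe.fr",  # made-up sample claims
    "email_verified": True,
    "groups": ["authentik Admins"],
    "sub": "abc123",
}
print(authenticate(claims)["username"])  # goauthentik@une-pause-cafe.fr:authentik
```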
To configure the API-Server with this file, you need to add the `--authentication-config` argument to the Kubernetes API server command line. In a Talos or RKE2 context, keep in mind that the API server itself runs in a container, so you need to add the file to the host and create a mount point for the API server's container.
For Talos (I couldn’t write an article without mentioning it), here’s an example configuration:
```yaml
cluster:
  apiServer:
    extraArgs:
      authentication-config: /var/lib/apiserver/authentication.yaml
    extraVolumes:
      - hostPath: /var/lib/apiserver
        mountPath: /var/lib/apiserver
        readonly: true
machine:
  files:
    - content: |
        apiVersion: apiserver.config.k8s.io/v1beta1
        kind: AuthenticationConfiguration
        jwt:
          - issuer:
              url: https://goauthentik.une-tasse-de.cafe/application/o/k8s-lucca-poc/ # equivalent to oidc-issuer-url
              audiences:
                - my-client-id-on-goauthentik # equivalent to oidc-client-id
              audienceMatchPolicy: MatchAny
            claimValidationRules:
              - expression: "claims.email_verified == true"
                message: "email must be verified"
            claimMappings:
              username:
                expression: 'claims.email + ":authentik"' # equivalent to oidc-username-claim (+ a suffix)
              groups:
                expression: "claims.groups" # equivalent to oidc-groups-claim
              uid:
                expression: "claims.sub"
            userValidationRules:
              - expression: "!user.username.startsWith('system:')"
                message: "username cannot use reserved system: prefix"
              - expression: "user.groups.all(group, !group.startsWith('system:'))"
                message: "groups cannot use reserved system: prefix"
      permissions: 0o444
      path: /var/lib/apiserver/authentication.yaml
      op: create
```
For RKE2, you’re on your own—I’m a Talos ambassador, not Rancher! (But if Rancher is reading this, I’m open to collaboration 😉)
To test this:
```bash
$ kubectl config set-credentials oidc \
    --exec-api-version=client.authentication.k8s.io/v1beta1 \
    --exec-command=kubectl \
    --exec-arg=oidc-login \
    --exec-arg=get-token \
    --exec-arg=--oidc-issuer-url=https://goauthentik.une-tasse-de.cafe/application/o/k8s-lucca-poc/ \
    --exec-arg=--oidc-client-id=my-client-id-on-goauthentik \
    --exec-arg=--oidc-client-secret=my-secret-id-on-goauthentik \
    --exec-arg=--oidc-extra-scope=profile \
    --exec-arg=--oidc-extra-scope=email

$ kubectl auth whoami --user=oidc
ATTRIBUTE   VALUE
Username    goauthentik@une-pause-cafe.fr:second
UID         6d47d08157e7d71d1a3b18087dc068ae689b142934cbb3517562d9b74162edba
Groups      [authentik Admins Tech Omni system:authenticated]
```
Now that we know how to apply our configuration, let’s go a bit further by adding our second IdP: GitHub Actions.
The GH issuer is `https://token.actions.githubusercontent.com`; you can freely choose the client ID (as long as it's identical on the API-Server and in the GitHub Actions workflow) and map the JWT claims as you wish.
```yaml
# ... after the first issuer
  - issuer:
      url: https://token.actions.githubusercontent.com
      audiences:
        - coffee-lucca-poc
      audienceMatchPolicy: MatchAny
    claimMappings:
      username:
        expression: '"github-actions:" + claims.sub'
      uid:
        expression: "claims.sub"
      extra:
        - key: "github.com/repository"
          valueExpression: "claims.repository"
        - key: "github.com/repository_owner"
          valueExpression: "claims.repository_owner"
        - key: "github.com/ref"
          valueExpression: "claims.ref"
```
It’s more or less the same configuration as for Authentik, but with different claims (and I add extra claims to get info about the repository and branch in my audit logs).
How do you test this? First, create a kubeconfig to use in the pipeline:
```yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJpakNDQVRDZ0F3SUJBZ0lSQVBydklDT2xMMGNCZ05iLzI4QlNtaUl3Q2dZSUtvWkl6ajBFQXdJd0ZURVQKTUJFR0ExVUVDaE1LYTNWaVpYSnVaWFJsY3pBZUZ3MHlOVEEzTVRVd09EVXlOREJhRncwek5UQTNNVE13T0RVeQpOREJhTUJVeEV6QVJCZ05WQkFvVENtdDFZbVZ5Ym1WMFpYTXdXVEFUQmdjcWhrak9QUUlCQmdncWhrak9QUU1CCkJ3TkNBQVJQcGpwWmM5blc4Sm5YQXV2VWVXdjlNeG1IeDRuUXVhbGdFVWtuT1VmMTZYRlU4S0N1M1NvY0tLRS8KUHNQY3ZYclVHQnV3V21Ib1lSamZWOE8yZVZ0a28yRXdYekFPQmdOVkhROEJBZjhFQkFNQ0FvUXdIUVlEVlIwbApCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPCkJCWUVGQ2Q0TnllTlppRFlQSHBSeWtUYXd4TVdIelJQTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDY1lzTzgKOHUyUXZMKzN6UlY2UGJNMWF3Tk0zSUNPaHZwU2tTZElEVzEydXdJZ0ZrUjcyRFhRVTZhMFU0ZlREZ09pU1FVRQpZZEJFT0VxdzFFMHNrT0UreGlZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
      server: https://192.168.1.170:6443
    name: lucca-oidc
contexts:
  - context:
      cluster: lucca-oidc
      namespace: default
      user: does-not-exist
    name: admin@lucca-oidc
current-context: admin@lucca-oidc
kind: Config
preferences: {}
```
And no, you're not dreaming: there is no user defined in the kubeconfig! We'll inject the JWT ourselves into the `kubectl` command to authenticate.
We’ll need to create a repository to test our kubeconfig in a workflow.
```yaml
name: K8S OIDC Authentication Test

on:
  push:
  workflow_dispatch:

jobs:
  get-nodes:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # This permission is required to obtain the OIDC token
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Fetch kubeconfig secret
        id: kubeconfig
        run: |
          echo "$KUBECONFIG_B64" | base64 -d > kubeconfig
        env:
          KUBECONFIG_B64: ${{ secrets.KUBECONFIG_B64 }}

      - name: Set KUBECONFIG env
        run: echo "KUBECONFIG=$PWD/kubeconfig" >> $GITHUB_ENV

      - name: Install kubectl
        uses: azure/setup-kubectl@v3

      - name: Get OIDC token from GitHub
        id: get_token
        run: |
          token_json=$(curl -s -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=coffee-lucca-poc")
          oidc_token=$(echo "$token_json" | jq -r '.value')
          echo "token=${oidc_token}" >> $GITHUB_OUTPUT

      - name: Debug JWT token claims
        run: |
          echo "Decoding JWT token for debugging..."
          echo "${{ steps.get_token.outputs.token }}" | cut -d'.' -f2 | base64 -d 2>/dev/null | jq . || echo "Failed to decode JWT payload"

      - name: Test authentication with kubectl
        run: |
          echo "Testing kubectl auth..."
          kubectl auth whoami --token="${{ steps.get_token.outputs.token }}" || echo "Auth failed"
```
The variables `ACTIONS_ID_TOKEN_REQUEST_TOKEN` and `ACTIONS_ID_TOKEN_REQUEST_URL` are automatically set by GitHub Actions to obtain the OIDC token.
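The "Debug JWT token claims" step is just base64url-decoding the token's second segment: a JWT payload is plain JSON, so you can peek at the claims without verifying the signature (the API server, of course, does verify it against the issuer's keys). Here's a Python sketch of the same trick, using a fabricated token for illustration:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload (second segment) of a JWT.
    No signature check -- debugging only."""
    payload = token.split(".")[1]
    # base64url without padding: re-add the '=' padding before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated example token (header.payload.signature), for illustration only
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(
    json.dumps({"iss": "https://token.actions.githubusercontent.com",
                "aud": "coffee-lucca-poc",
                "sub": "repo:qjoly/lucca-oidc-poc:ref:refs/heads/main"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{body}.fake-signature"

print(jwt_claims(token)["aud"])  # coffee-lucca-poc
```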
This workflow will:
- Retrieve the kubeconfig encoded in base64 from the repository secrets.
- Retrieve the OIDC token from GitHub with a specific audience (the one expected by the API-Server).
- Authenticate by injecting the token into the `kubectl` command (which avoids having to contact the issuer to obtain a token, as Kubelogin does).
Let’s see what gets decoded in the JWT token:
{
"actor": "qjoly",
"actor_id": "82603435",
"aud": "coffee-lucca-poc",
"base_ref": "",
"event_name": "workflow_dispatch",
"exp": 1752630402,
"head_ref": "",
"iat": 1752608802,
"iss": "https://token.actions.githubusercontent.com",
"job_workflow_ref": "qjoly/lucca-oidc-poc/.github/workflows/test.yaml@refs/heads/main",
"job_workflow_sha": "81ae75f767dd22d31f21ae4cff3460bcffbcea1c",
"jti": "72b5e0be-16a1-46d2-930a-6a184407f2fc",
"nbf": 1752608502,
"ref": "refs/heads/main",
"ref_protected": "false",
"ref_type": "branch",
"repository": "qjoly/lucca-oidc-poc",
"repository_id": "1020257999",
"repository_owner": "qjoly",
"repository_owner_id": "82603435",
"repository_visibility": "private",
"run_attempt": "1",
"run_id": "16302845430",
"run_number": "2",
"runner_environment": "github-hosted",
"sha": "81ae75f767dd22d31f21ae4cff3460bcffbcea1c",
"sub": "repo:qjoly/lucca-oidc-poc:ref:refs/heads/main",
"workflow": "K8S OIDC Get Nodes",
"workflow_ref": "qjoly/lucca-oidc-poc/.github/workflows/test.yaml@refs/heads/main",
"workflow_sha": "81ae75f767dd22d31f21ae4cff3460bcffbcea1c"
}
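Applying the `claimMappings` for the GitHub issuer to these claims by hand gives exactly the identity we're about to see. This is a hypothetical Python rendition (the API server evaluates CEL, not Python):

```python
def map_github_claims(claims: dict) -> dict:
    """Hypothetical Python rendition of the claimMappings for the
    GitHub Actions issuer (illustration only, not real CEL)."""
    return {
        "username": "github-actions:" + claims["sub"],
        "uid": claims["sub"],
        "extra": {
            "github.com/repository": claims["repository"],
            "github.com/repository_owner": claims["repository_owner"],
            "github.com/ref": claims["ref"],
        },
    }

claims = {
    "sub": "repo:qjoly/lucca-oidc-poc:ref:refs/heads/main",
    "repository": "qjoly/lucca-oidc-poc",
    "repository_owner": "qjoly",
    "ref": "refs/heads/main",
}
user = map_github_claims(claims)
print(user["username"])  # github-actions:repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
```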
And here is the output of the `kubectl auth whoami` step:
```
ATTRIBUTE                                          VALUE
Username                                           github-actions:repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
UID                                                repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
Groups                                             [system:authenticated]
Extra: authentication.kubernetes.io/credential-id  [JTI=72b5e0be-16a1-46d2-930a-6a184407f2fc]
Extra: github.com/ref                              [refs/heads/main]
Extra: github.com/repository                       [qjoly/lucca-oidc-poc]
Extra: github.com/repository_owner                 [qjoly]
```
It's all beautiful, but as it stands, it won't take long before we find a bitcoin miner in our cluster: every GitHub repository has access to our Kubernetes, even those that don't belong to us (as long as their tokens carry the correct `audience` claim). Oops!
To be extra safe, we need to do a two-step verification:
- Check that the JWT actually comes from an authorized repository (via `claimValidationRules`) and reject those not on the list.
- Limit permissions based on the repository and branch (via Kubernetes RBAC).

For the first part, we'll add a rule that checks the repository is in the list of authorized ones. You can do this using `claimValidationRules` in your API-Server configuration:
```diff
   - issuer:
       url: https://token.actions.githubusercontent.com
       audiences:
         - coffee-lucca-poc
       audienceMatchPolicy: MatchAny
     claimMappings:
       username:
         expression: '"github-actions:" + claims.sub'
       uid:
         expression: "claims.sub"
       extra:
         - key: "github.com/repository"
           valueExpression: "claims.repository"
         - key: "github.com/repository_owner"
           valueExpression: "claims.repository_owner"
         - key: "github.com/ref"
           valueExpression: "claims.ref"
+    claimValidationRules:
+      - expression: 'claims.repository in ["qjoly/lucca-oidc-poc", "qjoly/another-repo", "myorg/yet-another-repo"]'
+        message: "repository must be in the allowed list"
```
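In plain Python terms, the added rule boils down to a simple allowlist check (again, the real rule is a CEL expression evaluated by the API server; this sketch only mirrors its logic):

```python
# Sketch of the CEL rule `claims.repository in [...]` from the config above
ALLOWED_REPOSITORIES = [
    "qjoly/lucca-oidc-poc",
    "qjoly/another-repo",
    "myorg/yet-another-repo",
]

def validate_repository(claims: dict) -> None:
    """Raise if the token's repository claim is not on the allowlist."""
    if claims.get("repository") not in ALLOWED_REPOSITORIES:
        raise ValueError("repository must be in the allowed list")

validate_repository({"repository": "qjoly/lucca-oidc-poc"})  # passes silently
```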
On the Kubernetes RBAC side, it's a bit tricky: you can't use wildcards in `subjects` (so no `repo:qjoly/lucca-oidc-poc:ref:.*`), nor can you rely on the "extra" claims… you'll need to be a bit more creative.
I distinguish two (complementary) cases where you can do RBAC in Kubernetes with our GitHub Actions setup:
- Give different permissions based on the repository.
- Give different permissions based on the branch.

In a (Cluster)RoleBinding, `subjects` can only reference two kinds of external identities:
- a `User`
- a `Group`
```yaml
# Example ClusterRoleBinding with a Group and a User as subjects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
  - kind: Group
    name: sysadmins-of-doom
    apiGroup: rbac.authorization.k8s.io
  - kind: User
    name: qjoly
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: can-destroy-everything
  apiGroup: rbac.authorization.k8s.io
```
The conditions in `subjects` are OR-ed: even if you manage to put the branch in a JWT group (via `claimMappings`), you can't write a `RoleBinding` rule like "if the repository is `qjoly/lucca-oidc-poc` AND the branch is `main`, then grant permissions".
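To make that limitation concrete, here's a small Python sketch of how a (Cluster)RoleBinding matches its subjects: each subject is checked independently, and a single match grants the `roleRef`, so there's no way to combine conditions:

```python
def binding_matches(subjects: list, username: str, groups: list) -> bool:
    """Sketch of RoleBinding subject matching: subjects are OR-ed,
    any single match grants the bound role."""
    for s in subjects:
        if s["kind"] == "User" and s["name"] == username:
            return True
        if s["kind"] == "Group" and s["name"] in groups:
            return True
    return False

# A binding whose only subject is the exact repo+branch username...
subjects = [{"kind": "User", "name": "repo:qjoly/lucca-oidc-poc:ref:refs/heads/main"}]

# ...matches main:
print(binding_matches(subjects, "repo:qjoly/lucca-oidc-poc:ref:refs/heads/main", []))  # True
# ...but not any other branch of the same repo (and no wildcard can help):
print(binding_matches(subjects, "repo:qjoly/lucca-oidc-poc:ref:refs/heads/feat", []))  # False
```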
What you can do instead, still through `claimMappings`, is to store the repository in a group while keeping the repository+branch pair (`repo:qjoly/lucca-oidc-poc:ref:refs/heads/main`) in the `username`. This way, you can create one `RoleBinding` with elevated permissions for a specific branch of the repository, and another `RoleBinding` targeting the group named after the repository. You thus avoid creating one `RoleBinding` per branch (which wouldn't be feasible anyway, since branch names are unpredictable).
```diff
   - issuer:
       url: https://token.actions.githubusercontent.com
       audiences:
         - coffee-lucca-poc
       audienceMatchPolicy: MatchAny
     claimMappings:
       username:
         expression: '"github-actions:" + claims.sub'
       uid:
         expression: "claims.sub"
+      groups:
+        expression: "claims.repository"
       extra:
         - key: "github.com/repository"
           valueExpression: "claims.repository"
         - key: "github.com/repository_owner"
           valueExpression: "claims.repository_owner"
         - key: "github.com/ref"
           valueExpression: "claims.ref"
     claimValidationRules:
       - expression: 'claims.repository in ["qjoly/lucca-oidc-poc", "qjoly/another-repo", "myorg/yet-another-repo"]'
         message: "repository must be in the allowed list"
```
As a result, Kubernetes will now recognize the group `qjoly/lucca-oidc-poc`, and you can give it specific permissions in a `RoleBinding`:
```
ATTRIBUTE                                          VALUE
Username                                           repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
UID                                                repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
Groups                                             [qjoly/lucca-oidc-poc system:authenticated]
Extra: authentication.kubernetes.io/credential-id  [JTI=133d3a4d-631c-48e8-ba3f-8d55315b4bfd]
Extra: github.com/ref                              [refs/heads/main]
Extra: github.com/repository                       [qjoly/lucca-oidc-poc]
Extra: github.com/repository_owner                 [qjoly]
```
So, here are our two `ClusterRoleBinding` objects that differentiate permissions based on the repository and branch:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: godmode
subjects:
  - kind: User
    name: repo:qjoly/lucca-oidc-poc:ref:refs/heads/main
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: can-destroy-everything
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: list-pod-pods
subjects:
  - kind: Group
    name: qjoly/lucca-oidc-poc
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: can-list-pods # Safe ClusterRole
  apiGroup: rbac.authorization.k8s.io
```
The pipelines of the repository `qjoly/lucca-oidc-poc` can only list pods, but those running on the `main` branch get much more permissive access via the `can-destroy-everything` ClusterRole.
If I wrote this article (which is a mix between an expresso and a real blog post), it's because I found very few resources on `AuthenticationConfiguration` and on GitHub Actions authentication with Kubernetes. It was a good opportunity to share my discoveries and experiments.
There are probably many cases I haven’t covered (or foreseen). If you have additional information or suggestions, feel free to share them below 😄