A Deeper Look at GKE Basic Auth
Basic Authentication in GKE
Until GKE 1.12, the default behavior for new GKE clusters was to generate a set of credentials with cluster-admin access to the cluster. The username was hardcoded as `admin`, and the password was dynamically generated. This Stack Overflow post from 2014, around the time of Kubernetes 0.5.x and 0.6.x, provides the early guidance of the era on how to retrieve them via `gcloud` commands after a cluster was created.
As soon as OAuth authentication was available in GKE, OAuth became the preferred method, but “Basic Auth” stayed around. In GKE 1.19, several years later, “Basic Auth” is finally gone. For some organizations, though, upgrading might be 6-12 more months away, and the risks may be present right now.
Credentials in `gcloud container clusters describe`?
The following commands allow those with `container.clusters.get` AND `container.clusters.getCredentials` permissions to view these critical credentials:
```shell
gcloud container clusters list --format yaml
```
or
```shell
gcloud container clusters describe <clustername> --format yaml
```
will output contents containing a `masterAuth` block with fields for `username` and `password` populated. A trimmed example output:
```yaml
---
...snip...
endpoint: 35.224.32.184
initialClusterVersion: 1.17.14-gke.1600
...snip...
ipAllocationPolicy:
  ...snip...
location: us-central1-c
masterAuth:
  clusterCaCertificate: LS0tL...snip...
  password: MC6OeUR3v8A21W4q
  username: admin
...snip...
name: cluster-1
...snip...
```
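Spotting exposed clusters in that output can be scripted. A minimal sketch, assuming JSON output from the same `list`/`describe` calls (the helper name and sample data below are hypothetical, shaped like the example above):

```python
import json

def clusters_with_basic_auth(clusters):
    """Return names of clusters whose masterAuth block has basic auth set."""
    flagged = []
    for cluster in clusters:
        auth = cluster.get("masterAuth", {})
        # A populated username or password means basic auth is enabled
        if auth.get("username") or auth.get("password"):
            flagged.append(cluster["name"])
    return flagged

# Hypothetical sample shaped like `gcloud container clusters list --format json`
sample = json.loads("""
[
  {"name": "cluster-1",
   "masterAuth": {"username": "admin", "password": "MC6OeUR3v8A21W4q"}},
  {"name": "cluster-2",
   "masterAuth": {"clusterCaCertificate": "LS0tL..."}}
]
""")

print(clusters_with_basic_auth(sample))  # only cluster-1 has basic auth
```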
Without `.getCredentials`, the `clusters list` and `clusters describe` calls return the same cluster configuration information, just without the sensitive contents of the `masterAuth` block. Therefore, it is incredibly important not to grant `container.clusters.getCredentials` to non-administrative users/accounts in conjunction with clusters configured with basic authentication.
Using These Credentials with kubectl
To leverage these credentials from the `describe clusters` call above using `kubectl`, skipping the creation of a kubeconfig entry, run the following command, which dumps all `secrets` contents from the cluster in all namespaces:
```shell
kubectl -s https://35.224.32.184 \
  --insecure-skip-tls-verify=true \
  --username=admin \
  --password=MC6OeUR3v8A21W4q \
  get secrets -A --output yaml
```
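Under the hood, `--username`/`--password` simply become an HTTP Basic `Authorization` header on each request to the cluster API. A minimal sketch of how that header is constructed (the function name is illustrative; the credentials are the example values from above):

```python
import base64

def basic_auth_header(username, password):
    """HTTP Basic Auth: base64-encode 'username:password' per RFC 7617."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("admin", "MC6OeUR3v8A21W4q")
print(headers["Authorization"])
# Sending any GET to https://35.224.32.184/api/v1/secrets with this header
# is equivalent to the kubectl call above.
```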
Determining Who Has Access
The following IAM Roles have `container.clusters.getCredentials` by default:
- Cloud Composer API Service Agent
- Composer Worker
- Editor
- Kubernetes Engine Admin
- Kubernetes Engine Service Agent
- Owner
However, the `container.clusters.getCredentials` permission can sometimes be included by accident in a custom IAM Role that is assigned to users or groups that are not intended to have `cluster-admin`-like abilities. This can be challenging to audit properly, as it requires inspecting all IAM Roles and bindings for this combination at the resource, project, folder, and organization levels.
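Absent a graph database, the core of that audit can be approximated by walking role definitions and bindings collected from each level of the hierarchy. A minimal sketch over hypothetical, pre-collected data (the role names, bindings, and function name here are illustrative):

```python
# Built-in roles that legitimately carry the permission (from the list above)
EXPECTED_ROLES = {
    "roles/owner", "roles/editor", "roles/container.admin",
    "roles/composer.worker", "roles/composer.serviceAgent",
    "roles/container.serviceAgent",
}

def unexpected_grants(role_permissions, bindings):
    """Find (member, role) pairs granting container.clusters.getCredentials
    via a role outside the expected built-in set."""
    findings = []
    for role, members in bindings.items():
        if role in EXPECTED_ROLES:
            continue
        if "container.clusters.getCredentials" in role_permissions.get(role, ()):
            findings.extend((member, role) for member in members)
    return findings

# Hypothetical data collected from IAM role and policy listings
role_permissions = {
    "projects/demo/roles/customViewer": [
        "container.clusters.get",
        "container.clusters.getCredentials",  # accidental inclusion
    ],
}
bindings = {"projects/demo/roles/customViewer": ["user:dev@example.com"]}

print(unexpected_grants(role_permissions, bindings))
```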
When all GCP Resources and IAM Resources are loaded into OpenCSPM’s graph database backend, this introspection can be performed with a Cypher query like the following:
```cypher
MATCH (cluster:GCP_CONTAINER_CLUSTER)-[:IN_HIERARCHY*1..5]->(resource)<-[:IS_GRANTEDTO]-(role:GCP_IAM_ROLE)<-[:HAS_ACCESSVIA]-(gi:GCP_IDENTITY)
WHERE cluster.resource_data_masterAuth_username IS NOT NULL
MATCH (role)-[:HAS_PERMISSION]->(perm:GCP_IAM_PERMISSION)
WHERE perm.name = 'container.clusters.getCredentials'
  AND NOT role.name ENDS WITH '.serviceAgent'
  AND NOT role.name = 'roles/composer.worker'
  AND NOT role.name = 'roles/container.admin'
  AND NOT role.name = 'roles/owner'
  AND NOT role.name = 'roles/editor'
RETURN DISTINCT gi.name, role.name, cluster.name
```
This query says: “Find all GKE Clusters where Basic Auth is enabled. Get the IAM Roles bound to identities anywhere in the resource hierarchy. Return the identities, the role name, and the cluster name where the IAM Role granted `container.clusters.getCredentials` but is not one of the known/built-in roles that have it.” Essentially, “Show me all the identities that unintentionally have a path to privilege escalation.”
Remediation and Considerations
Basic Auth credentials can be revoked by modifying the live cluster in the Console UI. Under Kubernetes Engine, click the cluster name, click Edit at the top, flip the Basic Authentication dropdown to Disabled, and hit Save. This triggers a cluster update operation just as if you bumped the cluster version, so zonal clusters will have their control plane unavailable for a few minutes, while on regional clusters you likely wouldn't notice.
The equivalent via `gcloud`:
```shell
gcloud container clusters update cluster-name --no-enable-basic-auth
```
Note: For GKE clusters managed via Terraform, changing the basic auth fields in the `master_auth` block will trigger a cluster destroy and create. This is most likely not what you want. You can disable it via the Console UI or `gcloud` first and then update the Terraform state to match by “blanking” the fields:
```hcl
resource "google_container_cluster" "primary" {
  name     = "my-gke-cluster"
  location = "us-central1"
  ...snip...
  master_auth {
    username = ""
    password = ""
  }
}
```
and running `terraform plan` to validate that there are no changes, then `terraform apply` to update the local state.
You also might want to alert on GKE API access logs for the user/subject `admin` to know whether these credentials have been or are in use.
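Requests made with these credentials authenticate as the user `admin`, so they stand out in exported API server audit entries. A minimal sketch of such a filter, over a hypothetical, simplified entry shape (real audit log entries carry more fields):

```python
def basic_auth_activity(entries):
    """Flag audit log entries where the request authenticated as 'admin'."""
    return [entry for entry in entries
            if entry.get("user", {}).get("username") == "admin"]

# Hypothetical, simplified audit log entries
entries = [
    {"user": {"username": "admin"}, "verb": "list", "resource": "secrets"},
    {"user": {"username": "alice@example.com"}, "verb": "get", "resource": "pods"},
]

for entry in basic_auth_activity(entries):
    # Any hit means the basic auth credentials have been used
    print(entry["verb"], entry["resource"])
```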