Google Kubernetes Engine IAM Roles

If you are a Google Cloud Administrator planning your IAM strategy around the built-in Google Kubernetes Engine (GKE) IAM roles, there are a few details that might be confusing or surprising and could have unintended consequences.

Clarifying the GKE Predefined IAM Roles

GKE Built-in Roles listing

Here’s a listing of the predefined IAM Roles for Kubernetes Engine purposes, their official descriptions (as of the time of this writing), and a few notes on potential areas for confusion:

  • Kubernetes Engine Admin
    • Description: “Full management of Kubernetes Clusters and their Kubernetes API objects.”
    • Intended use: Typically assigned to users or service accounts that manage “everything” about GKE clusters in a project such as a platform team. They have the ability to create/manage/destroy clusters and node pools plus create/manage/destroy all workloads on them.
    • Point of confusion: Commonly confused with Kubernetes Engine Cluster Admin during assignment. Because it is a superset of that role’s permissions, nothing “breaks”, but the assignment no longer follows least privilege.
  • Kubernetes Engine Cluster Admin
    • Description: “Management of Kubernetes Clusters.”
    • Intended use: Typically assigned to users or automation accounts that are just responsible for creating/managing/destroying GKE clusters but should not have access to the workloads running on them via this role.
    • Points of confusion: Commonly confused with the Kubernetes-native cluster-admin RBAC ClusterRole, which grants full access to all Kubernetes API resources; this IAM role, however, includes no direct in-cluster permissions. That said, binding this role via IAM together with the RBAC cluster-admin ClusterRole via a ClusterRoleBinding grants permissions roughly equivalent to the IAM-only Kubernetes Engine Admin approach.
  • Kubernetes Engine Cluster Viewer
    • Description: “Get and list access to GKE Clusters.”
    • Intended use: Typically assigned to users or automation accounts that need “just enough” IAM access to query the GCP APIs to find and connect to a given GKE cluster but have their permissions to Kubernetes API resources delegated to and managed solely via in-cluster RBAC bindings (see the sketch after this list for how such grants look in practice).
    • Point of confusion: Commonly confused with the Kubernetes native RBAC bindings that would give access to in-cluster resources but actually has none.
  • Kubernetes Engine Developer
    • Description: “Full access to Kubernetes API objects inside Kubernetes Clusters.”
    • Intended use: Typically assigned to users who need enough permissions to work comfortably with most Kubernetes API resources, short of a few privileged ones.
    • Point of surprise: Being named “Developer”, it appears at first glance to carry far less privilege than Kubernetes Engine Admin. While the “Developer” cannot manage the cluster itself, it has near-full control of the Kubernetes API resources in all namespaces, including kube-system.
  • Kubernetes Engine Host Service Agent User
    • Description: “Allows the Kubernetes Engine service account in the host project to configure shared network resources for cluster management. Also gives access to inspect the firewall rules in the host project.”
    • Intended use: A role used for granting the GKE service project’s “robot” account the necessary access to a host project in a Shared VPC scenario. Not terribly useful otherwise.
  • Kubernetes Engine Service Agent
    • Description: “Gives Kubernetes Engine account access to manage cluster resources. Includes access to service accounts.”
    • Intended use: Intended to be assigned only to the project’s GKE “robot” account. Not really ideal for tenant/customer use, as it contains ~1000 permissions and iam.serviceAccounts.actAs along with broad access to compute, container, and networkservices resources.
  • Kubernetes Engine Viewer
    • Description: “Read-only access to Kubernetes Engine resources.”
    • Intended use: Typically assigned to users or automation accounts that need to be able to find GKE clusters and have “read-only” access to all non-sensitive Kubernetes API resources.
    • Point of confusion: Commonly confused with Kubernetes Engine Cluster Viewer during assignment. Because it is a superset of that role’s permissions, nothing “breaks”, but the assignment no longer follows least privilege.
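
To make the intended split above concrete (as referenced in the Cluster Viewer notes), here is a minimal sketch, in Python shelling out to the gcloud CLI, of granting Kubernetes Engine Cluster Admin to a cluster-provisioning service account and Kubernetes Engine Cluster Viewer to a developer whose in-cluster access will come from RBAC. The project ID and member identities are illustrative assumptions, not values from this post.

```python
import subprocess

# Illustrative values; substitute your own project and identities.
PROJECT_ID = "my-gke-project"
PROVISIONER_SA = "serviceAccount:cluster-provisioner@my-gke-project.iam.gserviceaccount.com"
DEVELOPER = "user:alice@example.com"


def add_binding(member: str, role: str) -> None:
    # Adds a project-level IAM binding using the gcloud CLI.
    subprocess.run(
        ["gcloud", "projects", "add-iam-policy-binding", PROJECT_ID,
         "--member", member, "--role", role],
        check=True,
    )


# Automation that creates/upgrades/deletes clusters but should not reach workloads:
add_binding(PROVISIONER_SA, "roles/container.clusterAdmin")

# A developer who only needs to discover clusters and fetch credentials; their
# Kubernetes API access is granted separately via in-cluster RBAC bindings.
add_binding(DEVELOPER, "roles/container.clusterViewer")
```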

Potential Risks and Privilege Escalation Paths

  1. Kubernetes Engine Viewer
    • Having container.pods.list (and list on cronjobs, deployments, jobs, and statefulsets) and container.configMaps.list can potentially leak sensitive credentials and/or details from pod environment variables or from what is stored in configMaps (see the first sketch after this list). Note that viewing secrets is not allowed by this role.
  2. Kubernetes Engine Developer
    • Having container.secrets.list allows reading secret contents in all namespaces in the GKE cluster, including kube-system. Kubernetes secrets are also where Kubernetes serviceaccount tokens are stored, so a “Developer” effectively holds the union of all permissions granted to all serviceaccounts in the cluster. If a controller like Anthos Config Management, GKE Config Sync, Weaveworks’ Flux, or Helm v2 is deployed, or any serviceaccount was manually bound to the cluster-admin ClusterRole, a direct path exists to reading those JWT tokens from their secrets resources and using them to authenticate against the API server, a scenario that is very likely in most GKE clusters (see the second sketch after this list). We previously wrote about the power of LIST permissions in this related blog post that covers this scenario in greater detail.
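
As a first sketch, for the Kubernetes Engine Viewer concern in item 1, the following Python snippet (using the kubernetes client library against an already-configured kubeconfig) lists pods and configMaps in one namespace and flags values whose names look credential-like. The namespace and keyword list are illustrative assumptions.

```python
from kubernetes import client, config

# Assumes a kubeconfig already populated, e.g. via
#   gcloud container clusters get-credentials <cluster> --zone <zone>
config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "dev"  # illustrative
KEYWORDS = ("PASSWORD", "TOKEN", "KEY", "SECRET")

# Pod specs (readable via container.pods.list) expose inline env var values.
for pod in v1.list_namespaced_pod(NAMESPACE).items:
    for container in pod.spec.containers:
        for env in container.env or []:
            if env.value and any(k in env.name.upper() for k in KEYWORDS):
                print(f"pod {pod.metadata.name}: {env.name}={env.value}")

# configMaps (readable via container.configMaps.list) often hold similar material.
for cm in v1.list_namespaced_config_map(NAMESPACE).items:
    for key, value in (cm.data or {}).items():
        if any(k in key.upper() for k in KEYWORDS):
            print(f"configMap {cm.metadata.name}: {key}={value[:40]}...")
```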

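As a second sketch, for the Kubernetes Engine Developer escalation path in item 2, the snippet below enumerates serviceaccount token secrets across all namespaces, which container.secrets.list permits. It assumes a cluster where serviceaccount token secrets exist (the default in Kubernetes releases prior to 1.24) and a kubeconfig obtained with gcloud container clusters get-credentials by an identity holding only Kubernetes Engine Developer.

```python
import base64

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# container.secrets.list allows listing secrets in every namespace, including
# kube-system, where controller and operator serviceaccount tokens often live.
for secret in v1.list_secret_for_all_namespaces().items:
    if secret.type == "kubernetes.io/service-account-token" and secret.data:
        token = base64.b64decode(secret.data["token"]).decode()
        print(f"{secret.metadata.namespace}/{secret.metadata.name}: {token[:20]}...")

# Any recovered token bound to cluster-admin (for example, by an installed
# controller) can then be replayed against the same API server:
#   kubectl --token="$TOKEN" auth can-i '*' '*'
```
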
Diving into Kubernetes Engine Admin vs Developer

The following permissions are available to the Kubernetes Engine Admin IAM Role but are not directly assigned to the Kubernetes Engine Developer IAM Role. Those that act on in-cluster Kubernetes API objects (as opposed to cluster lifecycle and project-level operations such as container.clusters.*, container.operations.*, and container.hostServiceAgent.use) can potentially be gained back via the previously mentioned container.secrets.list escalation path. The sketch after this list shows how to reproduce the comparison yourself:

  • container.certificateSigningRequests.approve
  • container.certificateSigningRequests.getStatus
  • container.clusterRoleBindings.create
  • container.clusterRoleBindings.delete
  • container.clusterRoleBindings.update
  • container.clusterRoles.bind
  • container.clusterRoles.create
  • container.clusterRoles.delete
  • container.clusterRoles.escalate
  • container.clusterRoles.update
  • container.clusters.create
  • container.clusters.delete
  • container.clusters.getCredentials
  • container.clusters.update
  • container.controllerRevisions.create
  • container.controllerRevisions.delete
  • container.controllerRevisions.update
  • container.hostServiceAgent.use
  • container.mutatingWebhookConfigurations.create
  • container.mutatingWebhookConfigurations.delete
  • container.mutatingWebhookConfigurations.update
  • container.operations.get
  • container.operations.list
  • container.podSecurityPolicies.create
  • container.podSecurityPolicies.delete
  • container.podSecurityPolicies.update
  • container.podSecurityPolicies.use
  • container.roleBindings.create
  • container.roleBindings.delete
  • container.roleBindings.update
  • container.roles.bind
  • container.roles.create
  • container.roles.delete
  • container.roles.escalate
  • container.roles.update
  • container.validatingWebhookConfigurations.create
  • container.validatingWebhookConfigurations.delete
  • container.validatingWebhookConfigurations.update
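
If you want to reproduce this comparison, for example to track changes as Google updates the predefined roles, a minimal Python sketch that shells out to gcloud and diffs the two roles’ includedPermissions lists might look like the following. It assumes the gcloud CLI is installed and authenticated.

```python
import json
import subprocess


def role_permissions(role: str) -> set:
    # Predefined roles expose their permissions via `gcloud iam roles describe`.
    out = subprocess.run(
        ["gcloud", "iam", "roles", "describe", role, "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return set(json.loads(out).get("includedPermissions", []))


admin = role_permissions("roles/container.admin")          # Kubernetes Engine Admin
developer = role_permissions("roles/container.developer")  # Kubernetes Engine Developer

# Permissions held by Admin but not directly assigned to Developer.
for perm in sorted(admin - developer):
    print(perm)
```

If Kubernetes Engine Admin is a strict superset of Kubernetes Engine Developer, the reverse diff (developer - admin) should come back empty; running it is a quick sanity check.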

Conclusion

Be aware that the names of these GKE IAM roles can be confusing and potentially risky if applied without a deeper understanding of how they interact with other configuration settings like GKE Basic Authentication and RBAC ClusterRoleBindings. Our recommendation is to ensure GKE Basic Authentication isn’t in place and to handle authorization for users via native RBAC permissions in-cluster. This allows permissions to be granted to a subset of the clusters in the project and at the per-namespace level, which provides the ideal level of granularity.
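
As a concrete sketch of that recommendation, an administrator could pair the Kubernetes Engine Cluster Viewer grant shown earlier with a namespace-scoped RoleBinding inside only the clusters a given user should reach. The cluster, zone, namespace, and user email below are illustrative; the snippet shells out to the gcloud and kubectl CLIs.

```python
import subprocess

# Illustrative values; these steps are run by a cluster administrator.
PROJECT_ID = "my-gke-project"
CLUSTER, ZONE = "prod-cluster-1", "us-central1-a"
NAMESPACE, USER = "team-a", "alice@example.com"


def run(*args: str) -> None:
    subprocess.run(list(args), check=True)


# Fetch credentials for the one cluster this user should work in.
run("gcloud", "container", "clusters", "get-credentials", CLUSTER,
    "--zone", ZONE, "--project", PROJECT_ID)

# Grant the built-in "edit" ClusterRole, scoped to a single namespace, to a
# Google identity that holds only Kubernetes Engine Cluster Viewer in IAM.
run("kubectl", "create", "rolebinding", f"{NAMESPACE}-edit",
    "--clusterrole", "edit", "--user", USER, "--namespace", NAMESPACE)
```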
