Simplifying Kubernetes Access with kubectl-tokensshtunnel

Kubernetes k3s SSH tunnel

4 min read | by Jordi Prats

Kubernetes is a powerful container orchestration platform used by many organizations to deploy and manage their applications. Interacting with a Kubernetes cluster requires configuring the kubeconfig file with the necessary credentials. However, managing these credentials can be challenging, especially in scenarios where a bastion host or SSH tunnel is required.

With kubectl-tokensshtunnel we can automate the process of creating an SSH tunnel to a remote server and retrieving the Kubernetes credentials from there. This tool simplifies access to remote Kubernetes clusters by securely caching the credentials for a specified duration.

Imagine a scenario where you have a remote Kubernetes cluster that you need to access securely. This cluster may be running on a cloud provider like AWS, and you don't want to expose the Kubernetes API server's port (6443) directly to the internet. Instead, you want to establish an SSH tunnel to the remote server and securely retrieve the Kubernetes credentials.

Doing so manually can be a pain, and keeping the tunnel permanently established is a waste of resources. With kubectl-tokensshtunnel we'll be able to set it up on demand, and even if the credentials get rotated we'll always use the ones that are currently available.
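To see what the tool is saving us from, this is roughly what the manual workflow looks like (the bastion host name and the local port 16443 are placeholders, not values the tool requires):

```shell
# Manual equivalent of what kubectl-tokensshtunnel automates.
# bastion.example.com and port 16443 are placeholders.
local_port=16443
bastion="user@bastion.example.com"

# Open the tunnel in the background and fetch the remote kubeconfig:
#   ssh -f -N -L "${local_port}:localhost:6443" "$bastion"
#   ssh "$bastion" sudo cat /etc/rancher/k3s/k3s.yaml > k3s.yaml
# ...then edit k3s.yaml so its server points at https://127.0.0.1:${local_port}

# Print the tunnel command we would run:
echo "ssh -f -N -L ${local_port}:localhost:6443 ${bastion}"
```

Every time the tunnel dies, or the cluster credentials rotate, these steps have to be repeated by hand.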


Installing kubectl-tokensshtunnel

Before we dive into using kubectl-tokensshtunnel, let's first cover the installation process:

  • Clone the kubectl-tokensshtunnel repository or manually download the script.
  • Copy the kubectl-tokensshtunnel script to any location within your PATH, for example /usr/local/bin.
  • Make sure it has execution permissions: chmod +x kubectl-tokensshtunnel
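The steps above can be scripted. The sketch below generates a stub script so it is self-contained; in reality you would copy the script obtained from the repository:

```shell
# Install sketch: the real script comes from the kubectl-tokensshtunnel
# repository; here we generate a stub so the example is self-contained.
bindir="$HOME/bin"                      # any directory on your PATH works
mkdir -p "$bindir"

printf '#!/bin/sh\necho stub\n' > kubectl-tokensshtunnel

# kubectl discovers plugins as executables named kubectl-<plugin> on the PATH,
# so the file name matters. install(1) copies it and sets the mode in one go:
install -m 0755 kubectl-tokensshtunnel "$bindir/kubectl-tokensshtunnel"
export PATH="$bindir:$PATH"

command -v kubectl-tokensshtunnel
```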

Configuring kubectl-tokensshtunnel

The tool provides several options for configuration:

  • -c <ssh command>: Set the SSH command to use. This command establishes an SSH connection to the remote server. For example, you can specify the command to connect to a bastion host.
  • -k <kube config> (optional): Set the remote kube config file location. By default, kubectl-tokensshtunnel looks for the kube config file at /etc/rancher/k3s/k3s.yaml.
  • -L <ssh tunnel> (optional): Set the SSH tunnel configuration. The format is [<local_bind>:]<local_port>:<remote_host>:<remote_port>. This option allows you to forward a local port to the remote Kubernetes API server.
  • -s (optional): Add sudo to the SSH command. Use this option if sudo access is required for the SSH connection.
  • -t <tmp pattern> (optional): Set the location to store the cached credentials. This is the location where the generated Kubernetes config file will be stored.
  • -d <tunnel duration> (optional): Set the duration for which the SSH tunnel will be available. Specify the duration in a format compatible with the date command. The default duration is 1 hour.
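Putting the options together, a typical invocation might look like the commented command below (the bastion host and local port are placeholders). Since the -d value is handed to the date command, we can also check how a duration like "2 hours" is interpreted:

```shell
# Hypothetical invocation (commented out; host and port are placeholders):
#   kubectl tokensshtunnel -c "ssh user@bastion.example.com" -s \
#       -L 16443:localhost:6443 -k /etc/rancher/k3s/k3s.yaml -d "2 hours"
#
# The -d value is interpreted by date(1), so anything GNU date understands
# works. For example, "2 hours" from now:
now=$(date +%s)
expiry=$(date -d "now + 2 hours" +%s)
echo $(( expiry - now ))    # ~7200 seconds
```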

Updating kubeconfig

To use kubectl-tokensshtunnel, you need to update your kubeconfig file and add the necessary configuration to the contexts section. Follow the steps below:

  • Locate the users section and add the following, updating the options to your needs:
- name: sshtunnel
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - tokensshtunnel
      - -c
      - awstools ec2 ssh bastion
      - -s
      - -L
      - 16443:localhost:6443
      - -k
      - /etc/rancher/k3s/k3s.yaml
      command: kubectl
      env: null
      interactiveMode: IfAvailable
      provideClusterInfo: false

In this example we are using awstools to connect to an EC2 instance, but we can use a plain ssh command instead, for example: ssh user@
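For the plain-ssh case, only the -c argument inside the args list changes; the rest of the exec block stays the same (the host below is a placeholder):

```
args:
- tokensshtunnel
- -c
- ssh user@bastion.example.com
- -s
```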

  • In the clusters section, make sure the server address points to the local end of the tunnel, that is, 127.0.0.1 and the local port you passed to -L:
- cluster:
    certificate-authority-data: LS0....
    server: https://127.0.0.1:16443
  name: awsk3s
  • In the contexts section you need to make sure you are linking the pieces together:
- context:
    cluster: awsk3s
    namespace: pet2cattle
    user: sshtunnel
  name: awsk3s

If the context already exists and you just need to update it, rather than editing the file you can change it using kubectl config:

kubectl config set-context awsk3s --user=sshtunnel

Posted on 18/05/2023