Using kubectl via an SSH Tunnel

16 Jun 2020 · Filed in Tutorial

In this post, I'd like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you're interested). I'm sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

If you're unfamiliar with SSH bastion hosts, see this post for an overview.

To use kubectl via an SSH tunnel through a bastion host to a Kubernetes cluster, there are two steps required:

  1. The Kubernetes API server needs an appropriate Subject Alternative Name (SAN) on its certificate.
  2. The Kubeconfig file needs to be updated to reflect the tunnel details.

Ensuring an Appropriate SAN for the API Server

As is the case with just about any TLS-secured connection, if the destination to which you're connecting with kubectl doesn't match any of the SANs on the API server's certificate, the kubectl commands will fail with an error (server name mismatch or similar). In the case of wanting to use an SSH tunnel with kubectl, this means that the API server certificate is going to need a SAN entry for 127.0.0.1. Why 127.0.0.1? Although there are several different ways to use SSH tunnels, in this instance you're going to take a local port and forward that local port across the tunnel to a remote system and remote port. Thus, kubectl will be talking to a local port (a port listening on 127.0.0.1)—and now you see why this SAN is needed on the API server.

For existing clusters, this means you'll have to go back and add a name to the Kubernetes API server certificate.

For new clusters, you can 'bake' the extra SAN in easily with a kubeadm configuration file. This YAML snippet shows how:

```yaml
# kubeadm configuration file adding an extra SAN for the API server
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs:
  - 127.0.0.1
```

(See here for the full reference of the kubeadm v1beta2 API.)

For new workload clusters spawned by Cluster API, you can add the SAN via the KubeadmConfigSpec, part of the KubeadmControlPlane object, as shown in this YAML (this is for CAPI v1alpha3):

```yaml
# KubeadmControlPlane excerpt; the object name is illustrative
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: workload-cluster-control-plane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        certSANs:
        - 127.0.0.1
```


Regardless of the method you use, the commands outlined in this article and this article can be re-purposed to help you verify that 127.0.0.1 is indeed listed as a SAN on the API server's certificate.
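For example, one quick check (the control plane address here is illustrative) is to pull the API server's serving certificate with openssl and inspect its SAN list:

```shell
# Fetch the API server's serving certificate and print its SANs;
# 127.0.0.1 should appear in the output
echo | openssl s_client -connect 10.0.0.10:6443 2>/dev/null | \
  openssl x509 -noout -ext subjectAltName
```

(The `-ext` option needs OpenSSL 1.1.1 or later; on older versions, use `-text` and look for the "Subject Alternative Name" section.)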


Once the API server's certificate is correctly configured, you're ready for step 2—updating the Kubeconfig.

Updating the Kubeconfig with SSH Tunnel Information

The change to the Kubeconfig file for your cluster is pretty straightforward:
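For example, the clusters section of the file might end up looking something like this (the cluster name and the original server address are illustrative):

```yaml
clusters:
- cluster:
    certificate-authority-data: <snip>
    # server: https://203.0.113.10:6443   # original entry, kept for reference
    server: https://127.0.0.1:12345
  name: my-cluster
```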

Substitute whatever local port you're going to forward across the SSH tunnel for the 12345. Don't worry; you can change this later without any major ramifications (the API server certificate doesn't carry any port information, so this is easily changed as needed). I prefer to leave the original server line in the file but commented out, just in case I need that information later.


Once the SAN entry for 127.0.0.1 is on the API server certificate and your local Kubeconfig file has been updated, then it is just a matter of opening the SSH tunnel:
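Assuming the bastion is reachable as bastion.example.com and the API server's private address is 10.0.0.10 (both illustrative), the command looks something like this:

```shell
# Forward local port 12345 across the bastion to the API server's port 6443;
# -N skips running a remote command, -f backgrounds the session
ssh -f -N -L 12345:10.0.0.10:6443 user@bastion.example.com
```

With the tunnel up, a quick kubectl get nodes is an easy way to confirm everything is wired together.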

(Your ssh parameters may be slightly different, depending on your SSH version and OS.)

Again, change 12345 to match whatever you specified in the Kubeconfig file. After you've established the tunnel, running kubectl commands should work without any issues. Voila!

As I mentioned at the start of this post, this is just one way of using SSH to help access a Kubernetes cluster that isn't otherwise directly accessible. There are other ways! If you're interested in having me explore some of those other ways in future posts, let me know—either find me on Twitter or on the Kubernetes Slack community.
