Recently I had the pleasure of setting up an SSH
tunnel between two virtual machines that share no route and are located in two different subnets.
They can, however, reach each other via SSH by hopping through their common host.
Let's assume the following setup:
client1 (Arch Linux) has 10.0.5.2/24
client2 (Arch Linux) has 10.0.6.2/24
host (Debian) is 10.0.5.1/24 to client1 and 10.0.6.1/24 to client2
As I needed the two clients to be able to send mail to each other and reach each other's services, I did some digging and opted for an SSH connection using TUN devices (aka "poor man's VPN").
The following is needed to set this up:
root access on both virtual machines (client1 & client2)
a user account on the host system
SSH (OpenSSH assumed) installed on all three machines
Connect the clients
The following two settings have to be made in each client's /etc/ssh/sshd_config (to allow root login and the creation of TUN devices):
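For OpenSSH these are the two relevant options (the values shown are the straightforward choice; PermitRootLogin also accepts prohibit-password if you go the key-only route):

```
PermitRootLogin yes
PermitTunnel yes
```

Restart the sshd service on both clients after changing them.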
I hope it is needless to say that permitting root access via SSH has its caveats. You should make sure to set a very strong password, or only allow SSH keys for login.
Generate and exchange keys
Generate SSH keys on client1 (you can of course use other key types if your OpenSSH installation supports them):
ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -I)"
Here you can choose between setting a password for the key (to unlock the key with ssh-add yourself) or not setting one (to be able to use the key on system boot with an automated service).
Add the public key to your user at host like this:
ssh-copy-id -i .ssh/id_rsa user@host
Also add it to /root/.ssh/authorized_keys on client2.
Use ProxyCommand to connect
To make a first connection between the clients, one can use the following settings in /root/.ssh/config of client1 to hop host and connect to client2 (the Host alias and the user name on host are examples):
Host client2
    HostName 10.0.6.2
    User root
    ForwardAgent yes
    ProxyCommand ssh firstname.lastname@example.org -W 10.0.6.2:%p
The ForwardAgent yes setting here is especially interesting, as it forwards the SSH key of client1 to client2.
On client1, a simple
ssh client2
should now directly connect to client2 by hopping host.
Start the tunnel
Now to the fun part: Creating the tunnel.
OpenSSH supports a VPN-like feature that creates a TUN device on both ends of the connection. As the "direct" (hopping host) connection between client1 and client2 has been set up already, let's try the tunnel:
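A first attempt could look like this (assuming the client2 Host alias from the SSH config above; 5:5 requests tunnel device 5 on the local and the remote side):

```
ssh -w 5:5 client2
```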
The -w switch will create a TUN device (tun5 to be exact) on each client.
Now, to start the tunnel without executing a remote command (-N), compression of the data (-C) and disabling pseudo-tty allocation (-T), one can use the following:
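Putting those switches together (same client2 alias assumed):

```
ssh -NCT -w 5:5 client2
```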
Setting up the TUN devices
A look at the network interfaces on client1 and client2 shows that the tun5 devices have been created on both clients. However, they are not configured yet.
This can be achieved by setting up a systemd network with the help of systemd-networkd. By placing a .network file in /etc/systemd/network/, the TUN device will be configured as soon as it shows up.
Here I chose the 10.0.10.0/24 subnet, but you could use any other private subnet (that's still available in your setup).
On client1 (/etc/systemd/network/client1-tun.network):
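A minimal file could look like this (a sketch: it matches the device by name and assigns client1 the 10.0.10.1 address from the subnet chosen above):

```
[Match]
Name=tun5

[Network]
Address=10.0.10.1/24
```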
On client2 (/etc/systemd/network/client2-tun.network):
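And the counterpart (same sketch, with client2 taking the 10.0.10.2 address):

```
[Match]
Name=tun5

[Network]
Address=10.0.10.2/24
```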
After adding the files, a restart of the systemd-networkd service on both machines is necessary:
systemctl restart systemd-networkd
Now starting the tunnel again should give a fully working point-to-point IP connection (carried over SSH's TCP session) between the two (virtual) machines using the TUN devices.
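To verify, a simple ping across the tunnel from client1 should now get answers (addresses as configured above):

```
ping -c 3 10.0.10.2
```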
If you need a more complex setup (i.e. to access the other client's subnet), you will have to add some routes (and possibly netfilter rules), depending on your individual setup.
To make both hosts know about each other by hostname (and domain, if any), too, those can be added to the clients' /etc/hosts files.
On client1 (/etc/hosts):
10.0.10.2 client2.org client2
On client2 (/etc/hosts):
10.0.10.1 client1.org client1
If using postfix, the service has to be configured to use /etc/hosts before falling back to your network's DNS resolution.
On client1 and client2 (/etc/postfix/main.cf):
lmtp_host_lookup = native
smtp_host_lookup = native
ignore_mx_lookup_error = yes
Autossh and system boot
Wrapping it all up, it's usually intended to have the tunnel service started on system boot. Plain SSH tunnels are known to die silently when connectivity drops. One way to get around this issue is to manage them with autossh.
On client1 (/etc/systemd/system/tunnel@.service) the following unit can be used (everything beyond the Description and ExecStart lines is a minimal sketch):
[Unit]
Description=AutoSSH tunnel to a host
After=network-online.target

[Service]
ExecStart=/usr/bin/autossh -M 0 -NCTv -o ServerAliveInterval=45 -o ServerAliveCountMax=2 -o TCPKeepAlive=yes -w 5:5 %I

[Install]
WantedBy=multi-user.target
systemctl enable tunnel@client2
systemctl start tunnel@client2