My shell scripting skills suck. So it comes naturally that I learn a lot every time I need to write a script. The trick I’m about to describe is so neat that I want to share it.
Assume for a second that you need to execute a series of commands on a remote machine, but you can’t pass them to SSH at once. Or consider that you might need to transfer a file over to a remote host and then extract it there. Normally, you’d need to create several SSH connections for this. In the “transfer, then extract” scenario, you need one connection for scp(1) and another one to extract the file on the remote host. Unless you have your public key in the remote host’s ~/.ssh/authorized_keys file, you need to enter your password for the remote host twice. It’s probably not a big deal for most, but it’s annoying at times.
It might be obvious to some, but I recently found out that ssh(1) offers a solution to the problem described above. It allows you to open one connection in "master" mode to which other SSH processes can connect through a socket. The "master" connection is opened with the following options:
$ ssh -M -S /path/to/socket user@rhost
Alternatively, the ControlPath and ControlMaster options can be set in the appropriate SSH configuration files. Other SSH processes that want to connect to the "master" connection only need to use the -S option. The authentication of the "master" connection will be re-used for all other connections that go through the socket. As far as I can tell, these additional sessions are multiplexed over the master's existing TCP connection, so SSH doesn't even open a separate TCP connection for them.
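For reference, here is a minimal sketch of what that could look like in ~/.ssh/config; the host alias "rhost" and the socket path are just placeholders:

Host rhost
    # "auto" re-uses an existing master connection, or creates one if none exists
    ControlMaster auto
    # socket location; the tokens make it unique per user, host and port
    ControlPath ~/.ssh/ctl-%r@%h:%p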
Going further, this can be used in scripts to save the user a lot of password typing, especially if the execution flow switches between local and remote commands frequently. At the beginning of the script, simply create a "master" connection like this:
$ ssh -M -S /path/to/socket -f user@rhost "while true ; do sleep 3600 ; done"
It should be noted that the path to the socket can be made unique by using a combination of the placeholders %l, %h, %p and %r for the local host, remote host, port and remote username, respectively. The -f parameter makes SSH go into the background just before command execution. However, it requires that a command is specified, hence the endless loop of sleep(1) calls on the command line. Other SSH connections can then be opened like this, without requiring any further user input for authentication:
$ ssh -S /path/to/socket user@rhost
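Put together, a rough sketch of the "transfer, then extract" scenario from the beginning could look like this; the socket path, user, host and file names are made up for illustration:

#!/bin/sh
# open the "master" connection; the tokens make the socket unique (see above)
ssh -M -S ~/.ssh/ctl-%r@%h:%p -f user@rhost "while true ; do sleep 3600 ; done"

# copy the archive through the existing connection, so no further password prompt
scp -o ControlPath=~/.ssh/ctl-%r@%h:%p backup.tar.gz user@rhost:/tmp/

# extract it on the remote host, again re-using the "master" connection
ssh -S ~/.ssh/ctl-%r@%h:%p user@rhost "tar -xzf /tmp/backup.tar.gz -C /tmp"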
This leaves the problem of how the "master" process can be terminated when the script has finished. Some versions of SSH offer the -O parameter which can be used to check if the "master" is still running and, more importantly, to tell the "master" to exit. Note that in addition to the socket, the remote user and host need to be specified.
$ ssh -S /path/to/socket -O check user@rhost
$ ssh -S /path/to/socket -O exit user@rhost
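On versions that support -O, the exit command can be wired into a trap so the "master" is closed whenever the script finishes. A small sketch, again with made-up paths:

#!/bin/sh
SOCKET=~/.ssh/ctl-%r@%h:%p

# close the "master" connection when the script exits, no matter how
trap 'ssh -S "$SOCKET" -O exit user@rhost' EXIT

ssh -M -S "$SOCKET" -f user@rhost "while true ; do sleep 3600 ; done"
# ... local and remote commands re-using the socket go here ...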
However, there are still two problems to be solved. First, when the "master" quits, the dummy sleep loop continues to run. And second, how can the "master" be terminated if the given SSH version doesn’t offer the -O parameter (short of killing the process by its PID)?