
An excruciatingly detailed guide to SSH (but only the things I find useful)

449 points | weeha | 2 years ago | grahamhelton.com

112 comments


withinboredom|2 years ago

There is an amazingly simple directive missing here:

    # in sshd_config:

    AuthorizedKeysCommand /usr/bin/php /etc/ssh/auth.php %u
    AuthorizedKeysCommandUser nobody

    # in /etc/ssh/auth.php

    <?php
    $user = $argv[1] ?? '';
    $user = rawurlencode($user);
    echo file_get_contents("https://gihub.com/{$user}.keys");
This is obviously not production quality code, but just demonstrates the gist of the configuration. Basically, you can do a number of things, like verify the user is part of your org and in a certain group on Github. Then, if the user exists (and is rewritten via nss-ato or something), they can log in to the server.

This saves a lot of trouble when off/on-boarding folks, since you can simply add/remove them from a github group to revoke or grant access to your machines.

pxc|2 years ago

Amazon Linux does something sort of like this, which I guess is 'production quality', meaning much more complex. It annoys me on older versions of Amazon Linux (2 and earlier) because it involves, among other things, an invocation of the openssl CLI to verify the format of individual keys in the authorized_keys file, and that invocation is hardcoded to use RSA. So you can't authenticate to Amazon Linux 2 hosts using ed25519 even though the version of OpenSSH on them supports it.

In theory it's kinda nice because it can let you do fancy things¹, but my actual experiences with it breaking basic functionality even for people who don't use those fancy things has ultimately made me trust Amazon Linux less.

It was especially frustrating because when I first encountered this, I was trying to SSH into a box owned by one of our cloud-first DevOps guys. I couldn't diagnose the box because I didn't have hands on it. He couldn't diagnose the issue because he knows AWS better than he knows Linux and didn't know where to look. He'd chosen Amazon Linux because it's by the owner of the cloud platform, so it must be 'more compatible', right? But here, 'more compatible' actually meant 'more full of stupid surprises'.

Bleh.

--

1: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-...

zamadatix|2 years ago

Things like this make me wish I wasn't on the network side (where we miss out on awesome shit like this because "the network isn't working right" is part of our job).

uses|2 years ago

little typo there: "gihub"

fmajid|2 years ago

You really should use ssh certificates for this instead.

bomewish|2 years ago

why php?

wahern|2 years ago

Here's something I bet few people know: the OpenSSH configuration parser ignores duplicate directives; only the first such directive has any effect. This is more than a little counter intuitive as IME the more common semantic in configuration parsers and rules engines is for subsequent directives to take precedence over previous ones.

This may seem inconsequential, but IME when changing defaults in, e.g., /etc/ssh/sshd_config, people and software tend to append their changes to the end of a file or directive block, not the beginning, expecting those changes to be effective. Even security companies and organizations get this wrong, including various SSH bastion products I've seen. CIS Benchmarks recommendations (IIRC) and most (all?) third-party CIS audit suites don't consider precedence at all or get it wrong--e.g. by recommending appending a value, or by providing a compliance test that accepts a broken configuration. FWIW, the proper way to check whether an OpenSSH configuration directive is defined as expected is to use `sshd -T` or `ssh -G` to dump the derived, internal configuration, not by directly inspecting the configuration file(s).
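
The first-wins rule is easy to demonstrate with a toy sketch (this is not OpenSSH's actual parser, just an illustration of its precedence behavior):

```python
# Toy illustration of OpenSSH's first-directive-wins config semantics.
def parse_first_wins(lines):
    config = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        key = key.lower()
        # Later duplicates are silently ignored, unlike most config formats.
        config.setdefault(key, value.strip())
    return config

cfg = parse_first_wins([
    "PermitRootLogin no",
    "PermitRootLogin yes",  # appended later; has no effect
])
print(cfg["permitrootlogin"])  # prints "no"
```

The reliable check remains `sshd -T` / `ssh -G`, which dump the effective configuration after this precedence has been applied.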

pxc|2 years ago

> Here's something I bet few people know: the OpenSSH configuration parser ignores duplicate directives; only the first such directive has any effect. This is more than a little counter intuitive

This is how the sudoers file works as well. I think this is desirable in software that authenticates or authorizes users, and maybe more broadly wherever security concerns are essential. That's because this logic makes it easy to create settings that can't be overridden by adding a new file in a whatever.conf.d directory: you define those settings in the main config file before you source whatever.conf.d/* and you put some kind of special protections on that file.

Even where you're not worried about somebody evading your controls per se, it can be nice from a configuration management perspective in giving you a soft 'guarantee' that if some new hire who doesn't have the whole picture adds a new file in there, or some package tries to install a stupid default for its own service, your baseline settings can retain priority.

In other contexts you probably see the opposite behavior because what you really want is not a 'baseline configuration' but a collection of defaults in the strict sense: fallback settings to be used in case nothing is explicitly configured by the user, developer, or administrator (as the case may be).

n8henrie|2 years ago

I think some directives can be duplicated -- like AllowUsers.

But I got bit by something related yesterday when NixOS suddenly changed the merge order and put the AllowUsers from my own file below a Match from another file and locked me out :(

ggm|2 years ago

> The best way I found to remember this is local forwarding with -L means local is on the left-hand side of the address. Remote forwarding with -R means the local port is on the right-hand side of the address.

This is the most important, succinct statement made in this piece. -L and -R confused me from the get-go. Having which port L or R treats as "local" change is, in some ways, annoying. I "get" that -L and -R change the direction of intentionality (where initiator and responder are), but I think it might have been sensible to make a port:address:port phrase ALWAYS refer to local:binding:remote and have -L and -R define which end listens and which end sends.

tuukkah|2 years ago

I think it's easiest to remember that -L listens on a local port whose number comes right after, and -R listens on a remote port whose number comes right after.

Then the rest (the host:port) is just the normal way to tell where to connect.

Since we're doing port forwarding over an SSH tunnel, it's obvious that the host is contacted from the other side of the tunnel than where the listening port is.
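
A hedged sketch of the two forms (the hostnames and ports here are just examples):

    # -L: listen on LOCAL port 8080; connections go out to db.internal:5432 as seen from the server
    ssh -L 8080:db.internal:5432 user@gateway.example.com

    # -R: listen on REMOTE port 8080; connections come back to localhost:3000 as seen from the client
    ssh -R 8080:localhost:3000 user@gateway.example.com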

sophacles|2 years ago

A lesser known but quite useful bit of ssh is connection multiplexing. Rather than establish a new tcp connection, doing the auth dance, etc, you can tell ssh to reuse an existing connection. (The protocol itself has a notion of channels, a bit of metadata with every data frame to distinguish different streams, and this functionality uses that).

The big thing with it is that you don't have to do a full auth for subsequent sessions - nice if you don't have tmux (etc) on the remote, and do multiple panes via multiple terminal windows. Particularly when auth involves a passphrase and hsm touch or similar that can take several seconds.

It also has a "connection persistence" setting so when you're bouncing around between a handful of servers you don't have to auth each and every time you switch between servers.

Overall I think of it as one of those features that's nice to have, but not really life changing or anything. Some servers I connect to have it turned off and I notice its absence more than I notice when it's working.
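
A minimal sketch of enabling it in ~/.ssh/config (the socket path is just a common convention, and the sockets directory must already exist):

    Host *
        ControlMaster auto
        ControlPath ~/.ssh/sockets/%r@%h-%p
        ControlPersist 10m

`ssh -O check host` and `ssh -O exit host` can then query or tear down the shared master connection.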

More info: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing

fanf2|2 years ago

It makes a huge difference if there’s significant latency between the client and server: ssh is a very chatty protocol, which has not been optimized to reduce round trips like TLS has been - apart from this multiplexing option.

e12e|2 years ago

This can make Ansible much more usable when going through a (ProxyJump) bastion host.

cholmon|2 years ago

If you have a lot of hosts listed in your ~/.ssh/config file, you can keep the file from getting too cluttered by using the Include directive, which supports wildcards...

    # in ~/.ssh/config
    Include config.d/*.conf

    # in ~/.ssh/config.d/work.conf
    host work
        hostname myoffice.example.com
        user myuser

    # in ~/.ssh/config.d/client1.conf
    host client1.dev
        hostname dev.client.example.net
        user someuser
    
    host client1.prod
        hostname prod.client.example.net
        user someuser

sureglymop|2 years ago

The host directive also supports wildcards.

For example, add `host *_work` and then some stuff that is the same for all work hosts like host1_work.

labawi|2 years ago

There's an additional trick: you can put an Include inside a Host/Match directive.

  # in ~/.ssh/config
  Host proj1.*.corp
    Include ~/.ssh/proj1.conf

  # in ~/.ssh/proj1.conf
  ...
This way, I can put project-specific matches at or near the top, while being sure I don't have to wade through numerous individual files during review.

zamadatix|2 years ago

For forwarding I almost never do -f. It can be a footgun, making it hard to tell which forwards are still open or operational.

-t is a cool trick, didn't know about that one.

An important note that's easy to overlook in the ~ escape command list is you can nest the escape when in nested sessions (i.e. if you're not using -J for whatever reason).

Cool list, it definitely lines up with what I've found useful and had a few more.

progbits|2 years ago

-t is great, I use

$ ssh -t my-dev-vps 'tmux new-session -A -s main'

So each time I run it I'm right back where I left off.

ilyt|2 years ago

> For forwarding I almost never do -f. It can be a footgun in making it hard to tell which forwards are still open or operational.

That kinda still is a problem when you have multiple shells open to the target server. I wish SSH exported it in any reasonable way aside from trying to get it myself from the process list...

rwmj|2 years ago

There's a current pull request for adding AF_UNIX support, which should make all kinds of exciting forwarding possible, since it will make it easy to proxy ssh connections through an arbitrary local process which can do anything to forward the data to the remote end.

https://github.com/openssh/openssh-portable/pull/431

joveian|2 years ago

The one I am interested in is -D using AF_UNIX, but good to see everything possible working over AF_UNIX. It looks like curl, as of about a year ago, can use AF_UNIX SOCKS via the ALL_PROXY syntax socks5://localhost/path (or socks5h). It looks like this was added due to Tor using an AF_UNIX SOCKS proxy. I want to be able to configure network access via standard unix permissions (and ideally, IMO, kick TCP/IP out of the kernel entirely).

pram|2 years ago

The SSH console blew my mind when I first saw it. A coworker showed me ~# and it felt like discovering some kind of secret cheat menu you'd see in a SEGA Genesis game.

ggm|2 years ago

Why tilde? Because rlogin, rsh used it.

Why did rlogin, rsh use tilde? because cu used it.

Why cu? Because if you had a modem or serial line, cu was the way you talked to it, to send Hayes codes, and you can't use Hayes codes breakouts because they will break to the modem, so you need a signal to break to cu.

Why not ^[ ? Because that's telnet's escape. So if you had telnetted to a host, to connect to the modem over cu, you needed a distinct break-back for cu, to not break back to telnet.

It's breakout syntax all the way down.

Also, it's not actually tilde, it's <cr>tilde.

lost_tourist|2 years ago

I thought it was CR, tilde, <dot>? Have I been doing it wrong?

eduction|2 years ago

> ssh-copy-id

This section starts out talking about how the command uploads your public key then seamlessly switches to saying it uploads your private key (which I am guessing are typos).

Also that command does not just upload the key, it appends it to ~/.ssh/authorized_keys which is considerably more useful.

Finally, in the ssh-keygen section, ed25519 is from everything I’ve read preferred these days to ecdsa.

detuur|2 years ago

Some years ago, I read a post on HN where someone made a text-mode game(? or something similar?) available through SSH. People could play the game by opening an SSH session and play from their terminals. This was non-trivial, and they explained all the ways they configured sshd to prevent players from running binaries other than the game.

I didn't bookmark that post, and I haven't been able to find it again to my great dismay. If anyone remembers this post and still has it, I'd love to read it again.

e12e|2 years ago

> they configured sshd to prevent players from running binaries other than the game

You might be interested in how to limit users to nologin shell coupled with subsystem access:

https://news.ycombinator.com/item?id=3527754

walth|2 years ago

I'd add to this list that in 2023 you should be securely storing your key in an HSM.

On Mac, that's easy to do via the Secure Enclave: https://github.com/maxgoedjen/secretive

aidos|2 years ago

For people using 1Password you can get it to act as an agent to lookup keys directly in your vaults. Again, great integration on a Mac where you can use your fingerprint each time the key is required.

Foxboron|2 years ago

I've been hacking on `ssh-tpm-agent`, which allows you to create or import TPM-sealed keys. This is practical as it prevents key extraction, and its dictionary-attack protection allows you to have 4-digit PINs instead of passphrases to protect your private keys.

https://github.com/Foxboron/ssh-tpm-agent

Currently hacking up better support for `HostKeyAgent` and `HostKey` for `sshd`.

b112|2 years ago

It's never enough for you walth, is it?

I used to use an easily memorizable password, but you said that was wrong, and set me straight. Now my password is so complex, I have to rely upon a 3rd party service, that keeps getting hacked.

Then you insisted I use keys. After, you became irate if I left the keys on my work dir.

Now you want me to lug around a 2U HSM appliance?!

For shame!

josephcsible|2 years ago

One more useful trick I'm surprised wasn't mentioned since -D and -R both were: if you do "ssh -R 8080 somehost", that does dynamic port forwarding just like -D, but on the remote end instead of the local end.
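
Side by side (assuming a reasonably recent OpenSSH; reverse dynamic forwarding arrived in 7.6):

    ssh -D 1080 somehost   # SOCKS proxy listens on the local machine
    ssh -R 1080 somehost   # SOCKS proxy listens on somehost, traffic exits via the local machine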

ScottEvtuch|2 years ago

The remote port forwarding example seems wrong. It's specifying the loopback address which would be pointing to vuln-server (where we are connecting via SSH) and not internal-web, right? How is vuln-server accessing the site hosted on the loopback of internal-web?

Edit: Okay now I see that command is supposed to be run from internal-web and not campfire. I guess you would also have to ProxyJump through vuln-server to internal-web to even run that command!

badrabbit|2 years ago

This is really cool stuff.

I just wanted to mention that not only does it have subsystems like sftp, but you can make up your own subsystem (I use paramiko!) to do just about whatever you want. Cool stuff like exposing the remote host's sound or random devices over ssh, cool BBS displays/apps, or, for the sneaky: a command-and-control protocol between your malware/implant that proxies normal ssh for normal ssh clients as usual.
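
On the server side, a custom subsystem is just one line in sshd_config (the subsystem name and handler path here are made up for illustration):

    # in sshd_config
    Subsystem mysub /usr/local/bin/mysub-handler

    # on the client, request the subsystem instead of a shell
    ssh -s somehost mysub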

temporallobe|2 years ago

Tangentially related, I love that VSCode has an extension that lets you ssh into a remote host folder and treat it like a workspace. Most useful thing ever.

fanf2|2 years ago

In emacs land "tramp" is the thing that does this. It is pervasive enough that if the files you are editing on some remote host are in a git repository, then magit just works, same as on the local machine (mod latency). Tramp is the modern-ish version of this facility, I think a little more than 20 years old now? There was a predecessor that I used back in the 1990s whose name I forget, which allowed remote editing of files and browsing remote filesystems with dired, but not remote commands as needed for features like compilation-mode.

TheRealPomax|2 years ago

I can't read this unless I hack up the CSS to not be dark text using a thin typeface on a black background. I could edit the CSS so that it's normal fonts with higher contrast colors, but instead I think I'll go "someone made something on the internet and I'm not the audience, good for them, but I'm closing this tab again."

callalex|2 years ago

Doesn’t your browser of choice have some kind of reader mode?

aftbit|2 years ago

`-g` was new to me. I believe I have done something similar by providing an explicit bind address to -L, like this:

ssh -L 0.0.0.0:2222:10.0.0.1:22 host

I think this will bind to 0.0.0.0:2222 (allowing remote hosts to connect) and forward all traffic to that port to 10.0.0.1:22 (from the server's perspective).

The biggest gap in this collection of tricks (IMO) is SSH certificate support.

sureglymop|2 years ago

One thing I would find interesting would be how to read someone's private key or agent from RAM. For example, when forwarding an ssh agent to a machine, root on that machine could probably extract that agent.

semi|2 years ago

I don't believe a remote host that has access to your forwarded agent can extract the keys().

But they can tell the agent to authenticate with any key loaded in your agent, not just the one you used to ssh into the machine you forwarded your agent to

So, e.g., if you have a distinct ssh key for GitHub and a different one for all other uses and you ssh to a compromised server with agent forwarding, the attacker can then ssh to GitHub as you.

() there was a vulnerability not too long ago involving getting the remote agent to load arbitrary shared objects for remote code execution, which obviously changes things

pastage|2 years ago

Hard to do with secure enclaves. You should protect your agent on your local machine to not allow requests willy-nilly if the machine you ssh through is part of the threat model. You may need to rethink whether to use agent forwarding at all if that is something you need to worry about.

There are a lot of details in this that can go wrong. I seldom use agent forwarding in unknown/undesigned environments because of this.
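
Two common mitigations, sketched (the key path is illustrative):

    # make the agent prompt for confirmation on every use of this key
    ssh-add -c ~/.ssh/id_ed25519

    # or skip agent forwarding entirely and hop via ProxyJump
    ssh -J bastion.example.com target.internal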

lmz|2 years ago

I'm not sure about extracting keys from remote agents, but using a remotely forwarded agent is just like using a local agent. Just point the client at the right socket.

_dev_urandom|2 years ago

Random tidbit: the -g option isn't "global ports" but rather "gateway ports". Also don't forget to enable GatewayPorts in sshd_config.
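
That is, remote forwards only bind non-loopback addresses if the server permits it:

    # in sshd_config
    GatewayPorts yes   # or "clientspecified" to let the client pick the bind address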

tedunangst|2 years ago

Another member of the intergovernmental agency domain squatters club.

zamadatix|2 years ago

For those wondering what this is referring to it's the use of ".int" for the TLD of the examples. ".test" and ".example" are the only really good reserved TLDs for this (".local" isn't quite what people think it is).

sevenseventen|2 years ago

Great article that collected a lot of info that I usually wind up looking for separately.

But I have to ask: do people really find color schemes like this easier to read? I'm squinting at it throughout.

deepspace|2 years ago

> I'm squinting at it throughout

Same here. I have anecdotally noted that many people, like myself, who were around in the days of actual green screen terminals and later green monochrome monitors are far less likely to prefer dark mode than younger people.

For me, the advent of color screens that made black-on-white text possible was a huge improvement in terms of readability and eye strain reduction, and I cannot imagine going back to the old ways.

onetom|2 years ago

I grew up on green CGA, goldenrod Hercules monitors and grayscale mono VGA ones.

Indeed, for the past decade I also prefer the light themes, though I find these dark themes pretty usable too: https://protesilaos.com/emacs/modus-themes-pictures

As I've noticed, the main problem with dark themes is the low contrast usually. These modus themes were designed scientifically, to meet the contrast ratios recommended by the Web Content Accessibility Guidelines.

Also, back in those mono-monitor days, people just kept the brightness at the same level they used during the day and complained about how tiring it was for their eyes to "use the computer for so many hours".

My eyes never got tired, because I was constantly adjusting the brightness to match the ambient lighting, and I drastically lowered it when I was coding in the dark.

I've noticed that dark-theme users (e.g. during live streams) crank up their brightness/contrast to compensate for the low-contrast dark themes, then are surprised to be blinded when they open a webpage, which is extremely likely to be light-themed, and they keep complaining about light themes...

layer8|2 years ago

> But I have to ask: do people really find color schemes like this easier to read?

No, and also not the monospace font for prose.

pwg|2 years ago

> do people really find color schemes like this easier to read

I simply turned off style sheets in Firefox (Alt+V Y M [View menu->Page Style->No Style]) to get rid of the website color scheme and have it render using the colors I have set as default in Firefox.

king_geedorah|2 years ago

Do you mean the color scheme of the web page itself or the color scheme of the terminal in the screen captures? They're pretty close so maybe the distinction doesn't matter. The former seems more legible than most pages I see posted here since most of them have poor contrast. The latter is approximately what I use in my own terminal, except my text color is closer to that of the text editor screen capture than any of the shell ones for the same reason of contrast. I've pretty poor eyesight in general though so perhaps there's something to that.

h2odragon|2 years ago

> do people really find color schemes like this easier to read

That scheme works great for me. Not just "dark mode," if it were my site i'd have made the colors more neon-ish.