I believe the security models for HTTP and SSH are pretty different. HTTP is usually public and anonymous, SSH is usually private and authenticated. While QUIC is definitely a great technology for the HTTP use case, I'm not so sure about SSH. Not saying it isn't, just that it's something to reason about.
For example, x509 seems like a disadvantage to me. I do not want anyone with a cheap DigiCert certificate to be able to log in to my server, even as a result of some fat-finger misconfiguration. OAuth assumes that both the client and the service provider can reach the identity provider. Is that true for most servers? I am used to seeing pretty restricted setups, where many servers have no internet access and only update from a private package repository.
On one hand, I really like the idea of reusing HTTP. Who designs new protocols these days? Everything is JSON or XML over HTTP. And it's good enough for most cases. But is it good enough for SSH? WinRM works over HTTP, but it uses Kerberos for authentication.
Are there any significant real practical advantages? I don't see any. Are there any vulnerabilities, possibilities for misconfiguration, architectural flaws? Quite possible.
I don't see anything about using an X.509 certificate for logging in, just for a client authenticating the remote server. And even then, TLS supports mutual authentication, so someone with a cheap DigiCert certificate logging into your server is not really a problem if you configure mTLS on the server side to accept only certificates in a certain chain.
SSH certs are not related to x509 PKI certs. SSH certs are created with ssh-keygen and are the result of one key signing another. The public portion of the signing key (i.e. the CA key) needs to be distributed separately.
You can already use certificates to log in via SSH. Usually you set up your own certificate authority and sign your own certs because they need special attributes.
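For anyone who hasn't tried it, the whole flow is a few ssh-keygen invocations. A minimal sketch (names, identities and validity are made up for the demo; -N "" means no passphrase, demo only):

```shell
# Work in a scratch directory so nothing real gets touched.
cd "$(mktemp -d)"

# Create a CA keypair and a user keypair (illustrative names).
ssh-keygen -q -t ed25519 -N "" -f user_ca -C "example user CA"
ssh-keygen -q -t ed25519 -N "" -f id_ed25519 -C "alice"

# Sign alice's public key with the CA key:
#   -s = CA key, -I = cert identity, -n = allowed principals, -V = validity
ssh-keygen -s user_ca -I alice -n alice -V +52w id_ed25519.pub

# Inspect the resulting certificate (id_ed25519-cert.pub).
ssh-keygen -L -f id_ed25519-cert.pub

# The server side then only needs one line in sshd_config:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
```

The "special attributes" mentioned above (principals, validity window, forced commands, etc.) all live inside the certificate itself, which is why this works without any X.509 machinery.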
QUIC is damn good though! Its minimal header has a very tiny overhead, and the protocol gives us so much for free. What’s to lament? The userspace impl?
QUIC is indeed impressive and one of the developments that excite me.
I meant, I wish there were more application-layer protocols (like a new IRC, NNTP, rather than say Matrix or ActivityPub which is just JSON shunting over HTTP), the rest of the OSI stack gives us plenty of choice already :)
Why is there http/3 in the middle? SSH over QUIC makes a lot of sense and was something I thought about before.
The SSH protocol is designed to multiplex many "channels" over an encrypted TCP socket. Over each channel you can run things like a shell or SFTP.
It would need some engineering, but you could keep the same SSH features and replace the multiplexed channels over TCP with QUIC streams over UDP. Where does HTTP/3 fit in, besides adding overhead?
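As a toy illustration of what that multiplexing means (this is not the real SSH wire format, which uses its own SSH_MSG_CHANNEL_DATA packets; it just shows the idea of tagging frames with a channel id over one reliable stream):

```python
import struct

# Toy demultiplexer: SSH-style channels share one reliable byte
# stream by prefixing each data frame with (channel_id, length).
def frame(channel_id: int, payload: bytes) -> bytes:
    return struct.pack(">II", channel_id, len(payload)) + payload

def demux(stream: bytes) -> dict[int, bytes]:
    channels: dict[int, bytes] = {}
    off = 0
    while off < len(stream):
        cid, length = struct.unpack_from(">II", stream, off)
        off += 8
        channels[cid] = channels.get(cid, b"") + stream[off:off + length]
        off += length
    return channels

# e.g. a shell session on channel 0 and an SFTP transfer on channel 1
wire = frame(0, b"shell output") + frame(1, b"sftp data") + frame(0, b" more")
assert demux(wire) == {0: b"shell output more", 1: b"sftp data"}
```

The practical difference QUIC makes: over TCP, one lost segment stalls every channel queued behind it (head-of-line blocking), while QUIC streams are retransmitted and delivered independently.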
I don't see any advantage to layering HTTP/3 here. It adds more friction, and the only advantage it brings is being able to "hide" the SSH server behind a URL path. I guess x.509 certificates would be fine, but SSH host keys, SSHFP, or TOFU are enough and far more secure (because they implicitly pin the server public key).
It's a relatively new project from the looks of it, so I'd definitely not use it anywhere half important, but they have created something interesting with QUIC and HTTP/3.
They list other advantages in the README such as tying into the web authentication model, which is pretty big for enterprise use as everything moves towards OIDC. If they could eventually use passkeys that’d be really nice.
With hardware tokens, yes, I use that. I was thinking that building it on a web server would be really handy with an integrated client you could use with iCloud, Windows Hello, etc.
With /passkeys/, actually! It's more generic than just hardware keys. I don't know of any good implementation yet, but there were a few projects on github mentioned in some passkey-related discussions here.
I do not use anything like iCloud or Windows Hello and I don't know what these services actually use, but if they implement these open standards, it's only a matter of adding some glue code. I'd say it's likely that PuTTY will implement this on Windows eventually. That is my speculation; as I said, I don't actually use any of this.
I mentioned those because key management is the hard part and most people are going to be using platform authenticators for that reason. In some cases there are APIs (this was one of the features in the last macOS / iOS release) but I was also thinking that moving it closer to a browser is interesting because between platform passkeys and SSO, there are a lot of people who have all of their credentials & MFA ready in a browser and would like to reuse that.
Some searching suggests there’s at least one implementation of the SSH agent protocol using Windows Hello, which is great.
Why are people obsessed with implanting CloudFlare right in the middle of everything they do? There is absolutely nobody that needs DDoS protection for their SSH server.
I get that CloudFlare has been a well behaved netizen so far, but let's be real, it won't last forever. It never does. Eventually the shareholders start turning the screws and CloudFlare is going to succumb to the same pressures every company does and they're going to start extracting advertising value from their "customers".
How about we save the CDNs for the serious stuff and just run our SSH servers and low traffic HTTP sites ourselves?
Absolutely nothing to do with DDoS in my case. I want censorship regimes to have to break large portions of the internet for their citizens to stop even the most simple leak vector. Let them block Cloudflare, Akamai, and Cloudfront.
Which still means that the HTTP-server can be behind CloudFlare, but nobody accesses your blog through SSH (hence not necessary to put it behind CloudFlare).
You don't need buttflare to handle HN levels of traffic. And even if your wordpress or other shitty blog software falls over for a couple of hours? So what, it's a personal website.
In the 90s it was commonplace to design a new protocol on top of TCP/IP. These days, all the tooling and infrastructure is for HTTP. Designing a new protocol, you'd be starting from scratch; HTTP is much, much easier to build an application on top of.
I doubt QUIC is easier than TCP to build on. But it's much easier to get your new protocol through firewalls and other middleware when using port 443 than trying to introduce a new port (or worse: a new protocol number)
If hosts are configured with SSH certificates as part of their setup, you can definitely skip TOFU and determine trust on the first connection. That won't work for the "I need to connect to a random IP address" scenario, but any cloud server exposing SSH can be configured with a certificate signed by a company/personal SSH certificate authority.
You could configure something delightfully atrocious like https://github.com/mjg59/ssh_pki but I think for most use cases where you connect to loads of SSH servers, host keys and certificate authorities will work just fine. We could do with an ACME-like protocol for distributing these certificates, though.
Strict SSHFP can theoretically solve it [1], assuming it's used in the first place and DNSSEC is deployed. I personally use it for all servers I manage, purely because I like the additional security, but it's not at all common and DNSSEC isn't all that perfect either.
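For reference, the record itself is simple: an algorithm number, a fingerprint type, and a hash of the host key blob. A sketch of how the type-2 (SHA-256) fingerprint is computed, using a fabricated key blob rather than any real host's key:

```python
import base64, hashlib, struct

def sshfp_sha256(hostkey_b64: str) -> str:
    """Hex fingerprint for an 'SSHFP <alg> 2' record: SHA-256 over
    the raw (base64-decoded) host key blob."""
    return hashlib.sha256(base64.b64decode(hostkey_b64)).hexdigest()

# Build a syntactically valid but entirely made-up ed25519 blob:
# SSH wire format is length-prefixed strings: "ssh-ed25519" + 32 key bytes.
def ssh_string(b: bytes) -> bytes:
    return struct.pack(">I", len(b)) + b

fake_blob = ssh_string(b"ssh-ed25519") + ssh_string(b"\x01" * 32)
fake_b64 = base64.b64encode(fake_blob).decode()
print(sshfp_sha256(fake_b64))  # the hex digest that goes in the record
```

In practice you don't compute this by hand: `ssh-keygen -r <hostname>` prints ready-made SSHFP records for a host's keys.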
It's crazy that SSHFP hasn't taken off. I don't think a single person on earth has ever verified a host key before attempting to connect, and deploying DNSSEC is trivial now that you can use ECC and Ed25519.
* Deploying DNSSEC is obviously not trivial, as doing so has taken some of the largest companies on the Internet fully off the Internet for multiple hours, within the last year, so much so that it has become a running joke when companies have prolonged outages to suggest that DNSSEC is the culprit.
* There are still resolvers that can't handle Ed25519
* Being able to use Ed25519 was never the ops problem with getting DNSSEC rolled out!
* It's weird to assume that people would want to enroll their server integrity --- something that doesn't in any way depend on an Internet PKI designed to allow strangers to verify your identity, and that enlists de facto government support to make that use case work --- in a global PKI, especially when SSH already has a perfectly good certificate system that solves the same problem without any of the above liabilities.
What boggles my mind, and I mean this sincerely, not as snark, is that anybody in the entire world takes SSHFP seriously. Even if you stipulate that DNSSEC (and/or DANE) works, just arguendo, it's still a totally different use case than resolving SSH key continuity problems.
My favorite thing about normal ssh keys is the way it does not come with the obnoxious assumption that if I don’t rent my cert from some rent-seeker, it is super scary and invalid. And yet it gives me both identity verification and encryption. I get the reasons we got there for HTTPS due to how unsophisticated the least experienced users are, but I am glad this idea here won’t catch on in real life, because we don’t need any of this stuff for SSH — mainly because it already has these features.
TLS certificates are free. Unsigned certificates are super scary and invalid. SSH and TLS do not work with the same threat model; first-contacts with SSH servers are comparatively rare (if they aren't, you should be worried about your SSH threat model, too).
Certificates aren't "free." They cost time at minimum, and complexity (you're forced to add a bunch of tooling to renew them) to satisfy arbitrary requirements based around the idea that somehow their bits get old. The "free" ones think they get old in 90 days, and Apple decided for the whole world that one or two years is the absolute limit and anything longer-lived is also "invalid."
A cert signed by your own CA isn't scary. You can trust and install your own CA, and unless you're an idiot and publish your private key you're not harming your security profile one bit, but like I said, I understand how we got here because if it were easy and not presented as scary, naive users would be tricked into installing new CAs every day. It's just annoying how it has (in practice) imposed this external prerequisite to use TLS encryption itself, which I think is pretty obviously the reason it took like 15 years for SSL to become ubiquitous.
I've done ssh over websocket before (to bypass a corp proxy)... been thinking about it a lot lately. I would love if mosh got support for different transports than just udp and it would be cool if the initial handshake could be done over http instead of ssh.
I know this isn't an actual v3 of the SSH protocol, but if there ever is a version 3 of SSH, it really needs some kind of (encrypted) SNI, or at least a standardized metadata block that can be passed to any jumphost without having to know the specifics of the ProxyCommand on that middlebox.
SNI is absolutely needed. Over at https://pico.sh we have to request an IP for each ssh server even though from a resource perspective we really only need 1 VM. It increases the complexity of our deployments and overall makes us want to figure out how to merge all of our SSH apps into one.
I feel like they're missing some benchmarks here; show off the benefit that QUIC brings! OpenSSH's fixed window size significantly bottlenecks throughput on long fat links. I'd love to see ssh+rsync running at 2+ Gbps.
> HPN-SSH is a series of modifications to OpenSSH, the predominant implementation of the ssh protocol. It was originally developed to address performance issues when using ssh on high speed long distance networks (also known as Long Fat Networks: LFNs). By taking advantage of automatically optimized receive buffers HPN-SSH could improve performance dramatically on these paths. Later advances include; disabling encryption after authentication to transport non-sensitive bulk data, modifying the AES-CTR cipher to use multiple CPU cores, more detailed connection logging, and peak throughput values in the scp progress bar. More information can be found on HPN-SSH page on the PSC website.
SSH started out with a maximum window size of 128K, which was bumped to 2M in the mid-2000s. It'd be entirely reasonable to bump this to the 64M to 128M range; it's not a fixed buffer allocated for each channel, and the peers explicitly manage the window size, so there really shouldn't be any compatibility issues. This would already solve most of these issues, the more complicated parts of HPN-SSH aren't really needed, and things like multithreaded crypto are entirely unnecessary with modern CPUs unless you need to saturate a 100G link with one connection.
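The arithmetic behind this: with a receive window W you can have at most W bytes in flight per round trip, so throughput is capped at W/RTT regardless of link capacity. A quick sketch (RTT value is just an example):

```python
# Receive-window cap on throughput: never more than one window of
# unacknowledged data in flight, so throughput <= window / RTT.
def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes / rtt_seconds * 8 / 1e6

rtt = 0.100  # a 100 ms transatlantic-ish path
print(max_throughput_mbps(2 * 1024**2, rtt))    # 2 MiB window: ~168 Mbps
print(max_throughput_mbps(64 * 1024**2, rtt))   # 64 MiB window: ~5369 Mbps
```

So on a 100 ms path, even a 10G link behaves like ~168 Mbps with the 2 MiB window, while 64 MiB already gets you past 5 Gbps on a single connection.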
> unless you need to saturate a 100G link with one connection
Maybe not 100gig, but I routinely transfer data over 10gig links. I used to be a heavy user of HPN, but Gentoo pretty much stopped supporting it because the multithreading is supposedly broken.
It's an interesting project, but I think a clarification would be important: is SSH3 supposed to be "SSH-over-HTTP" that happens to use QUIC as a transport, or is it "SSH-over-QUIC" that happens to use HTTP as an auth/addressing layer?
The difference is not just philosophical, it also has practical implications, in particular what part of the different protocols (SSH, HTTP, TLS and QUIC) clients, servers and intermediates are expected to implement and which can be left out.
E.g., if it's "SSH-over-HTTP", I'd expect the protocol to work well with HTTP proxies and application servers, and I'd expect to be able to run an SSH3 server and a regular HTTP server on the same port. On the other hand, I'd expect features that require precise control over the low-level QUIC connection - like UDP port forwarding and session resumption - to be less reliable.
If it's "SSH-over-QUIC", the expectations would be the opposite: That you can treat QUIC (and TLS) as an integral part of the protocol, in the same way that the encryption, auth and transport layers in standard SSH are seen as an integral part of SSH. However then the server should generally be deployed as a standalone process on a separate port and should not be considered a fully compatible HTTP endpoint. That might diminish the "stealth" ability of the protocol a bit.
Or to sum it up, which parts of the protocol stack would an SSH3 client or server be expected to provide by themselves and which parts would be delegated to the OS/infrastructure/intermediaries etc?
Although not relevant to my post above, tunneling WireGuard over SSH sounds like an interesting challenge...
Because WireGuard and SSH are at different layers of the network stack, it might be necessary (though slow) to bridge two WireGuard networks through a single TCP socket port-forwarded by SSH. I'm actually curious now what tools would best be used to accomplish this, how much effort would be needed to configure things, and how badly performance would suffer when faced with normal internet traffic congestion.
The reason they're using HTTP is to allow hiding the SSH server: it pretends to be a dummy HTTP server that responds 404 to all requests unless you know the special random URL that hosts the SSH capabilities. It's a neat idea but overkill when you're not using that capability (I didn't dig into the code, so maybe it's bypassed if you don't ask for a secret URL). It does make me hesitant, since I don't know how secure Go's HTTP stack is or whether it's been hardened for direct hosting; an exploit there could expose quite a bit. It may be worth hand-rolling a custom server to do the routing, but at the same time that makes it easier to fingerprint. I think it makes more sense to move the routing secret to a standard reverse proxy, which is harder to fingerprint. One could also imagine that the secret-URL idea in a normal HTTP stack is susceptible to scanning techniques, since there's only one route to guess.
HTTP/3 is almost indistinguishable from any other protocol running over QUIC, and QUIC itself is almost indistinguishable from random noise in UDP packets. If you want to masquerade as HTTP/3 traffic, just using UDP on port 443 will generally be sufficient.
(Only “almost” indistinguishable, because it’s possible to decrypt the first packets of the client’s handshake and examine the ALPN parameters used to negotiate an application protocol. And QUIC may be further distinguishable from other UDP traffic through statistical analysis of packet sizes and response latencies, as well as the few unencrypted header bits.)
Yeah, it's overkill.
Its like trying to hide a meth lab by just putting up a sign that says daycare.
(and if you do a special knock on the door you go to the secret room)
Edit: vs having an invisible building which if you knock on it the right way, materializes...
The best thing about QUIC being UDP is definitely that it could be made un-port-scannable.
If the URL path is meant to be kept secret, as here, the server should store only a hash of it (computed with a password hashing algorithm such as bcrypt or scrypt), hash the path of every incoming request, and compare the hashes instead of the paths themselves.
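A minimal sketch of that check in Python (the path and parameters are made up; note that for a long random path a plain salted hash would already do, since key-stretching mainly protects low-entropy secrets):

```python
import hashlib, hmac, os

# Sketch: store only a digest of the secret path; digest each
# incoming request's path and compare in constant time.
SALT = os.urandom(16)

def path_digest(path: str) -> bytes:
    return hashlib.scrypt(path.encode(), salt=SALT, n=2**14, r=8, p=1)

stored = path_digest("/3f9c-example-secret")  # configured at startup

def authorized(request_path: str) -> bool:
    # constant-time comparison avoids leaking prefix matches via timing
    return hmac.compare_digest(path_digest(request_path), stored)

assert authorized("/3f9c-example-secret")
assert not authorized("/index.html")
```

The constant-time compare matters as much as the hashing: a naive string comparison would let an attacker probe the path byte by byte via timing.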
It might be easier to integrate, but you've got a custom server and client in this case, so it should be possible to do both at the server/client layer without HTTP being involved. At least I think that's right, but it's been a long, long while since I wrote OAuth code.
Seems like it’s primarily to implement the masking feature to pretend to be a normal HTTP server hosted on a port until the shared secret URL is knocked.
People are being perhaps a little too dramatic here.
Yes, this is not SSHv3 as defined by a standards body. It is very much SSHv2 over HTTP/3. (Which sorta sounds like how HTTP/3 is actually HTTP/2 over QUIC)
But there are lots of SSH servers and clients, such as Dropbear SSH, OpenSSH, libssh, libssh2 (which is a very different project from libssh, though both support SSHv2), and more. So I don't blame the creators for putting SSH in the name.
The code itself looks like mostly glue code to other more well established libraries. I'm not saying that they didn't introduce new flaws, just that they did not roll their own crypto here.
I kinda hope this succeeds. The faster connection time is nice, but really, OpenSSH is so change-averse that it's painful.
E.g., I have to maintain a pretty large patch set for OpenSSH, one part being HPN-SSH, to get any kind of reasonable throughput over high-latency links. This patch set is decades old and the problems well known, but the OpenSSH maintainers do not care. Replacing the transport layer would force things like reasonable window scaling.
Another is load balancing and routing SSH connections. You cannot know where a client wants to connect to until after they've done a full handshake. This is pretty painful. If we had something like SNI, we could route clients to the correct servers using only a single IP and port.
I fully welcome these ideas and am glad a group is working on testing these concepts.
I don't think this is SSHv2, though: the GitHub README talks about a reimplementation on HTTP semantics, and the paper illustrates SSHv2 vs SSH3 session setup as being extremely different.
On naming: Francois also explains that SSH3 is a concatenation of SSH and HTTP/3. We can dislike that here on HN (due seemingly to the lack of IETF involvement?) but it's what the project creators picked.
Sure, but that project name is still in the title; what I changed was the description. I don't know enough to say whether the description needs to be more accurate. Others here surely do?
This is cool, but calling that SSH3 is not appropriate. It's an independent project, not a new version of the SSH protocol. Sure, it's "SSH3" and not "SSHv3". Still, introducing a confusion with something that could be an official protocol is not nice.
What if you already run a web server which uses port 443? Strange that the readme doesn't mention that scenario because it's extremely common.
Presumably you'd choose a different port but then it'd be pretty obvious you're running something if your server has a random HTTPS server exposed on port 444 or whatever.
Can nginx proxy-pass encrypted data now? I tried this before and failed pretty hard, had to use HAproxy at the time and pass based on the hostname in the SNI header. Was still pretty unreliable.
If so, I assume the encryption on the SSH is handled separately from the http headers.
You can run different hosts on one web server, like company.com goes to localhost:5555 (your app or whatever) and ssh.company.com goes to localhost:8443 (let's say you're running ssh3 on that port)
* depending on your web server/reverse proxy configuration
[for instance, I run Kestrel and it really isn't designed to target more than one site; I do it, but it's like bending that Lego brick to make it fit where it shouldn't go]
That already has a (brutal) solution now - sslh https://www.rutschle.net/tech/sslh/README.html - the current version is more sophisticated, but it was originally just a perl script that would send the connection to sshd or the https web server, based on regex matching on an initial string (and probably timing out and going to sshd if it didn't see one? Something like that, I haven't dug out the old code to check.)
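The core of that trick fits in a few lines; a toy sketch (not sslh's actual code):

```python
# Toy version of sslh's trick: peek at the first bytes from the
# client and route the connection. SSH clients send an identification
# string; a TLS ClientHello starts with a handshake record (0x16).
def classify(first_bytes: bytes) -> str:
    if first_bytes.startswith(b"SSH-"):
        return "sshd"
    if first_bytes[:1] == b"\x16":
        return "https"
    return "unknown"

# Real sslh also applies a timeout: HTTPS clients speak first, while
# some SSH clients wait for the server's banner, so silence routes to sshd.
assert classify(b"SSH-2.0-OpenSSH_9.6") == "sshd"
assert classify(b"\x16\x03\x01\x00\xc8") == "https"
```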
SSHv2 is likewise trivially probable though ("nc $HOST 22" replies with "SSH-2.0-whatever"!), and that never hurt it. If you want to hide your services from attackers, there are many tools for that. I don't see why it needs to be part of the application protocol.
For faster session establishment in OpenSSH consider ControlMaster in ssh_config(5), which multiplexes multiple sessions in one connection instead of creating a new connection for each session.
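For reference, a minimal ssh_config stanza (the host name and socket path are just examples):

```
Host myserver.example.com
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

ControlPersist keeps the master connection alive in the background, so later sessions skip the handshake entirely.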
Interesting. Having HTTP/3 layered over the top, which I presume allows for SSL certificates to be applied to the connection, might result in the SSH connection appearing to observers as standard - uninteresting - website traffic.
Assuming one could connect to an SSH server this way and tunnel ports, could this allow for a means to bypass China's GFW?
China's firewall allows HTTP and HTTPS connections through; however, VPNs, SSH and similar are detected upon connection and blocked on demand.
Hiding a VPN connection by tunneling to a remote SSH server over HTTP/3, forwarding the VPN port and connecting to it might fly under the radar as it could be perceived as regular web traffic.
So let me get this straight: from reading the README, the only tangible benefit is faster session establishment? With the downsides being a more complicated protocol, which apparently has slower throughput? I guess this is a cool experiment, but why would anyone use this over OpenSSH or libssh2?
The interesting thing here isn't so much the improved latency or whatever, it's the ability to ssh from a client that's on a network which restricts access to anything other than 80/443
- SSH3 is a bad name: this isn't a successor to SSHv2 and will only cause confusion
- The authors don't seem to understand that SSHv2 predates all of their chosen technologies, and provides "robust and time-tested mechanisms" they claim to be adding
- How is "hiding your server behind a secret link" a feature? This is, at best, security through obscurity, which can be layered on any network protocol (e.g. https://en.wikipedia.org/wiki/Port_knocking); this implies that the authors don't have much of a security background...?
I concur. They seem to have reinvented a part of the protocol without actually addressing many of the issues of SSH. The paper also doesn't bother to go into detail on any of the advancements that have been made to SSH since the original RFC, such as keyboard-interactive, GSSAPI, etc.
> Some SSH implementations such as OpenSSH or Tectia support other ways to authenticate users. Among them is the certificate-based user authentication: only users in possession of a certificate signed by a trusted certificate authority (CA) can gain access to the remote server [12]. Available for more than 10 years, this authentication method requires setting up a CA and distributing the certificates to new users and is still not commonly used nowadays.
Somebody had an agenda to make SSH look as bad as possible. You can implement OIDC authentication with keyboard-interactive, no need for HTTP/3 for that. However, it gets very tricky if you want automated / script access, so it doesn't solve the authentication problem.
As an aside, Tatu Ylonen, the original author of the SSH protocol, published a paper in 2019 titled "SSH Key Management Challenges and Requirements"[1], which is an interesting read. It would seem the authors of this paper should have at least read it.
> This is, at best, security through obscurity, which can be layered on any network protocol (e.g. https://en.wikipedia.org/wiki/Port_knocking); this implies that the authors don't have much of a security background...?
This isn't security through obscurity. The url would be a secret. This is a form of capability security, where to connect to the server you must be able to name the server.
A URL with a secret is, in my opinion, far more sane than port knocking, and will be much more efficient as well.
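A back-of-envelope comparison of the guessing effort, assuming a blind attacker (numbers are illustrative):

```python
import math, secrets

# 16 bits of entropy per knocked port vs a random URL path.
def knock_bits(num_knocks: int) -> float:
    return num_knocks * math.log2(65536)

secret_path = "/" + secrets.token_urlsafe(16)  # 16 random bytes = 128 bits
print(knock_bits(3), secret_path)  # a 3-port knock is only 48 bits
```

And unlike knock packets, which cross the network in cleartext and can be observed and replayed, the secret path travels inside the TLS-encrypted QUIC connection.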
Your points are great but SSH is extensible so openid connect support doesn't mean much since you can do it with existing ssh.
"Security by obscurity" is only a thing if you're relying on that mechanism for security. People already configure SSH port knocking as you noted. It can be considered attack surface reduction and is a good feature given they're not using a secret link for any security control.
One benefit of their approach might be that you can use TLS PKI now instead of setting up SSH CAs. Potentially you would need to manage less PKI.
But a criticism I have is that HTTP has many more vulns and new attack techniques being developed all the time, unlike SSH. I can imagine LFI or request smuggling on the same HTTP/2 web server causing RCE via their protocol.
I'd agree with you. The readme calls out "Significantly faster session establishment" and goes into greater detail later on.
> Establishing a new session with SSHv2 can take 5 to 7 network round-trip times, which can easily be noticed by the user. SSH3 only needs 3 round-trip times. The keystroke latency in a running session is unchanged.
I, for one, can say that sometimes session establishment can take a little while but not to the extent that it would be a selling point (so to speak) for me to adopt SSH3.
So if you want to execute uptime on a remote machine, the session will only be open for a few ms, and those extra RTTs are a problem. (Yes, I know about OpenSSH ControlMaster...)
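To put numbers on it (the RTT value is just an example):

```python
# Session cost for a short command is almost entirely handshake RTTs:
# running `uptime` itself takes ~0 time.
def session_ms(round_trips: int, rtt_ms: float) -> float:
    return round_trips * rtt_ms

rtt = 150  # ms, e.g. a mobile or intercontinental link
print(session_ms(7, rtt))  # SSHv2 worst case per the README: 1050
print(session_ms(3, rtt))  # SSH3 claim: 450
```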
SSH over HTTP/(url) is a killer feature if you're working on hostile networks that block SSH and go even as far as to try and detect the protocol over the wire.
If you want SSH via UDP, try mosh. If you have it installed on both the client and server side, it just works, reusing auth, sessions etc. from ssh itself and only replacing the sending of actual session bytes back and forth. It doesn't break on unstable connections and has way lower latency.
Mosh and this project have fairly different goals.
Mosh uses regular TCP SSHv2 to authenticate and set up the UDP connection. As such, your initial connection time is actually slower than plain v2, and you cannot auth with something like OAuth.
Mosh is heavily focused on interactive sessions. You could not use mosh for batch programs easily.
> Mosh is heavily focused on interactive sessions. You could not use mosh for batch programs easily.
Correct, the goal is better human interaction over a high-delay internet or server: effectively allowing the client side to guess a bit at where your input went (it does decently at it). But the key thing that I've loved is that even if my client machine goes to sleep and I go to a different building, I'm still connected to the server. That is wonderful. Agreed, the connection time is slower. Mosh = Mobile shell.
Right there, Eternal doesn't even try to cover the same use case as Mosh. It might be an alternative, same way regular SSH is an alternative, but there's no way it can be "better"
SSH3 seems a bit of a clickbait project name, it's not clear to me that this project uses anything protocol-wise from SSH though it offers similar functionality.
A PhD project from Belgium that combines several Golang libraries to offer HTTP-based authentication on top of backwards compatibility with OpenSSH keys, configuration, agents, etc. -- it looks pretty solid but the associated paper titled "Towards SSH3" acknowledges "This article is a first step" in the conclusion.
Anyone should treat any new crypto project which hasn't seen a lot of testing from others as such, no matter who it came from. Even if this was some sort of proposal of the OpenSSH people.
The project is associated with the Louvain university; I would rate the risk of outright malicious tomfoolery to be quite low.
My understanding is limited, but HTTP/3 boils down roughly to HTTP/2+QUIC.
Major cloud providers are still shaking issues out of HTTP/2, like "Rapid Reset" 2 months ago; the nesting and layering open gaps and new edge cases, as naive implementations were clearly not yet battle-hardened even against old attack families like amplification/resource exhaustion.
The standards bodies don’t seem to buy the “bad things” argument and appear resolute on making it harder to MITM traffic on the wire and attempting to force IDS/IPS to all be run on the client.
Is there a 5-10 year future where you just can’t do this as a middlebox?
Protocols supported MITM with correct configurations and it led to complete ossification of said protocols because middleboxes suck at following standards.
It seems that at the time these features were dropped, most middleboxes have ignored features like exporting keys or configuring static RSA keys and went for CA-MitM attacks instead. You should expect these tools to break if they're actively trying to subvert protocols to do things they're not designed to support.
I don't really see what changed, though. I guess static keys were dropped to provide forward secrecy, but other than that running your own rogue CA is as possible as it was 20 years ago. Middleboxes lagging behind in support for features like HTTP/3 is probably annoying, but that's because of a lack of implementation more than anything.
You can still use your domain tools/MDM configuration/settings to configure an HTTPS proxy and firewall off the normal ports if you want to MitM your network reliably. If your proxy doesn't support HTTP/3, it will happily downgrade your connection to HTTP/1.1 for you. Android's insistence on not actually applying user-installed certificates is a pain for many apps, but other operating systems will happily and silently drop security measures like certificate transparency when they encounter a user-operated MitM CA.
The lack of MitMability comes down to Android being fussy, IoT devices you had no chance of ever controlling needing workarounds, and devices you don't have permissions to manage not being manageable. I really do wish Android would let MDM solutions inject certificates into the system store (though I can see why they don't with the wide range of stalkerware in the wild).
I assume he means with the encrypted metadata in HTTP/3 / QUIC that it makes it harder as a security admin to "peek" at what is going on in the network.
In my opinion it's short-sighted, because if we care about security, then we should care about user security and privacy as well. Because if the security admin has the ability to packet-inspect stuff, so does a potentially malicious app.
SSH3 is a complete revisit of the SSH protocol, mapping its semantics on top of the HTTP mechanisms. In a nutshell, SSH3 uses QUIC+TLS1.3 for secure channel establishment and the HTTP Authorization mechanisms for user authentication.
So, it has nothing to do with SSH2; it's more about HTTP/3-QUIC security theater: the hostname is still sent during the TLS 1.3 negotiation (in the SNI, absent ECH).
To be clear, my reading of the parent post is that the grandparent doesn't like HTTP/3-QUIC making it harder to read data off of the wire (ie: for internal security analytics).
But I don't see how this is worse than SSHv2. In both cases retrieving the hostname / IP is obviously trivial since you just instrument DNS for the hostname and, of course, the IP is cleartext.
Not sure why this is downvoted. HTTP3/QUIC is a lot more complex to implement than SSH.
SSL is a very well-studied standard, but it is clearly a committee product with lots of features built on enterprise standards like X.509, whereas SSH was made by a few protocol engineers with a razor-sharp use case.
It is easy to see why someone who audits parsers for a living would be much more comfortable with SSH than with something layered over HTTP/SSL/QUIC.
What does “official” mean? The OpenSSH team? IETF?
Anyway, SSH authentication is extremely inflexible, and the protocol is not particularly performant, especially on high bandwidth-delay-product links. Moving to HTTP/3 seems like an excellent idea if it's implemented well.
(Although… we really need a way to do TLS/QUIC to an endpoint without a domain name.)
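On the performance point: the usual back-of-the-envelope argument is that a fixed per-channel flow-control window caps throughput at window/RTT, regardless of link speed. The figures below are illustrative assumptions (the ~2 MiB figure is OpenSSH's commonly cited default channel window), not measurements:

```shell
# Illustrative numbers: why SSH2 struggles on high bandwidth-delay links.
awk 'BEGIN {
  bdp = 1e9 / 8 * 0.1                # 1 Gbit/s link, 100 ms RTT: bytes in flight to fill the pipe
  window = 2 * 1024 * 1024           # assumed ~2 MiB default SSH channel window
  printf "BDP: %.1f MB\n", bdp / 1e6
  printf "Window-limited throughput: %.1f Mbit/s\n", window / 0.1 * 8 / 1e6
}'
```

So on this hypothetical link the pipe wants ~12.5 MB outstanding but the window allows ~2 MiB, capping throughput well below line rate; this is the problem projects like HPN-SSH patch around by tuning the window handling.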
The RFCs for SSH2 were all published by IETF, so I would definitely expect IETF to be involved in a project that claims to be "SSH3". If some random person started an OS project and called it "Windows 12", people would rightfully be confused.
Yeah, right now for auth, if you want to use e.g. OIDC I think the best you can do is to essentially shove everything in the square hole using very short-lived SSH certificates and OOB auth flow; e.g. "open this browser link to get a cert for the next 15 minutes/24 hours." So, you're basically treating short-lived certs like session tokens, more or less. I got this working with my own homegrown SSH CA infrastructure last year, but never took it out of prototype stage. Even slightly more flexible authentication would be very welcome.
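For anyone curious, the short-lived-cert trick described above boils down to something like this (file names and identities are made up, and the OOB/OIDC browser step is elided; only the CA signing is shown):

```shell
# One-time setup: a signing key that acts as your SSH CA.
ssh-keygen -q -t ed25519 -f ca_key -N '' -C 'demo CA'

# The user's ordinary key pair.
ssh-keygen -q -t ed25519 -f user_key -N '' -C 'demo user'

# After the out-of-band auth flow succeeds, the CA signs a certificate
# valid for only 15 minutes (-V +15m), for principal "alice" (-n).
ssh-keygen -s ca_key -I alice@example.com -n alice -V +15m user_key.pub

# Inspect the resulting certificate (written to user_key-cert.pub).
ssh-keygen -L -f user_key-cert.pub
```

The server side then only needs to trust the CA's public key (e.g. via `TrustedUserCAKeys` in sshd_config); once the `-V +15m` window lapses, the certificate simply stops validating, which is what makes it behave like a session token.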
Cloudflare offers this as a service. It’s obviously a second class citizen (it’s intensely buggy, has low availability, doesn’t work especially well even on a good day, has incoherent configuration, and no support whatsoever).
On the other hand, all Cloudflare configuration seems incoherent, and it gets more so over time. I was recently highly entertained when I tried to access one of the Zero Trust [0] pages. The UI cheerfully informed me that only the new UI could configure Zero Trust, and it redirected me to a new domain that was IIRC “one.dash.cloudflare.com”. You can’t make this up — maybe it’s called One Trust internally? The new panel looked quite a lot like the old one except that the Zero Trust pages worked.
Well, “worked”. None of the Zero Trust config makes any sense.
[0] Is there any logic at all to what lives under the Zero Trust umbrella?
It's just someone's project. As far as I can tell it's unrelated to IETF, if that's what you mean by "official". In any case it's presumptuous for the author to call this "SSH3".