An overview of how FusionAuth selected our SOC 2 software vendor for 2023.


Generally, the HN community prefers that you disclose the company you are with. It helps ensure transparency in the discussion.

I'm the founder of FusionAuth and have deep knowledge of OAuth and SAML. The groupings Dan used seem like a decent assessment, with some caveats.


I'm not with any company.

> caveats

And I'm pointing those out.


I missed that you left Okta back in 2022, so pardon my asking for a disclosure. In any case, I think your last sentence was the part that seemed a bit defensive. Dan was pretty clear that his assessment had overlap and that things get fuzzy. Of course, YMMV.


I think that's actually what most people are waiting for with Loom. They want it to be fully baked into the JDK and ready for production first. Then they will start using it.

For java-http, once it has been tested a bit more, we'll likely release a 1.0.0 version, but it's already in production, so it works right now.

In terms of Netty or Jetty, why pull in all the dependencies and overhead if all you need is an HTTP server? java-http solves the 25-year-old HTTP server problem we've had in Java.

In the past, you either had to learn Netty or use Jetty/Tomcat/JBoss/Glassfish/WebLogic/etc. In my opinion, these tools are complex, bloated, and legacy. Most other platforms have a simple HTTP server (Node, Ruby, etc.). Java has lacked this for a long time, and we've been forced to use things like Tomcat when we didn't need JEE or WARs or the complexity.

The JDK team plans to add one in version 20 or 21, I think. But they have specifically stated it won't be production quality (ugh). Not sure why they made that decision, honestly.

Need a simple HTTP server that only takes a few lines of code to start, is production quality, requires no dependencies, is crazy fast, scales well, and is only 140k? No problem. Just use java-http! :)
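To give a feel for it, here's roughly what startup looks like (a sketch from memory; the exact package, class, and method names may differ slightly from the README, so treat them as illustrative):

  // Sketch only: package/class/method names are my best recollection of the
  // java-http API; check the project README for the real ones.
  import io.fusionauth.http.server.HTTPHandler;
  import io.fusionauth.http.server.HTTPListenerConfiguration;
  import io.fusionauth.http.server.HTTPServer;

  public class Main {
    public static void main(String[] args) {
      // Handle every request with a lambda
      HTTPHandler handler = (req, res) -> res.setStatus(200);

      new HTTPServer().withHandler(handler)
                      .withListener(new HTTPListenerConfiguration(8080))
                      .start();
    }
  }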


Actually, Loom is about threading and complements NIO. You'll still need Selectors, Channels, and ByteBuffers with Loom; you'll just be able to pass off the parsing and handling to a Fiber. You might be able to get away with doing blocking IO on Fibers, but it likely won't scale. Non-blocking IO is still way faster at the OS level, so my guess is that Loom will simply replace 10-20 lines of code in java-http and the majority of the IO will stay the same.
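To make the shape concrete, here's a minimal sketch of the Selector loop I'm describing (my own illustration, not java-http's actual code), where ready channels would be handed off to Threads or Fibers for parsing and handling:

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.nio.channels.SelectionKey;
  import java.nio.channels.Selector;
  import java.nio.channels.ServerSocketChannel;
  import java.nio.channels.SocketChannel;

  public class SelectorLoop {
    public static void main(String[] args) throws IOException {
      Selector selector = Selector.open();
      ServerSocketChannel server = ServerSocketChannel.open();
      server.bind(new InetSocketAddress(8080));
      server.configureBlocking(false);
      server.register(selector, SelectionKey.OP_ACCEPT);

      while (true) {
        selector.select(); // block until the OS reports readiness
        for (SelectionKey key : selector.selectedKeys()) {
          if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            if (client != null) {
              client.configureBlocking(false);
              client.register(selector, SelectionKey.OP_READ);
            }
          } else if (key.isReadable()) {
            // Hand the ready channel off to a worker (Thread or Fiber) to
            // read into a ByteBuffer and parse/handle the request.
          }
        }
        selector.selectedKeys().clear();
      }
    }
  }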


All blocking IO, when run in fibers, is potentially non-blocking. That is literally the point of Loom. Also, non-blocking IO is almost never faster than blocking IO, but you can handle way more connections with it. You use NIO for scale, not for latency. Due to fairness concerns, you will also want to hand off heavy computation to native threads rather than fibers. Fibers are for IO.
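For example, the thread-per-connection style Loom enables looks something like this sketch (requires the Loom preview, or Java 21+ where the API is final):

  import java.io.IOException;
  import java.net.ServerSocket;
  import java.net.Socket;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  public class VirtualThreadServer {
    public static void main(String[] args) throws IOException {
      try (ServerSocket listener = new ServerSocket(8080);
           ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
        while (true) {
          Socket socket = listener.accept();
          // One cheap virtual thread per connection; blocking calls inside
          // handle() park the fiber instead of pinning an OS thread.
          executor.submit(() -> handle(socket));
        }
      }
    }

    static void handle(Socket socket) {
      try (socket) {
        socket.getInputStream().read(); // "blocking" read parks the fiber
        // ... parse the request, write the response ...
      } catch (IOException ignored) {
      }
    }
  }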


I don't think this is accurate. The Loom documentation specifically says that it is a concurrency model to replace native threads with fibers. It says very little about non-blocking IO, except that it is a use case it assists with:

https://openjdk.org/jeps/425

The NIO section covers a bit about how the fibers release the channels, but the OS will not know that. Non-blocking IO is about interrupts at the hardware level that let the application code know when bytes are ready to be read or written. Fibers help by fanning out the work, but they don't replace this concept.


FTA, it's pretty clear.

  Typically, a virtual thread will unmount when it blocks on I/O or some other blocking operation in the JDK, 
  such as BlockingQueue.take(). When the blocking operation is ready to complete (e.g., bytes have been 
  received on a socket), it submits the virtual thread back to the scheduler, which will mount the virtual 
  thread on a carrier to resume execution.

  The mounting and unmounting of virtual threads happens frequently and transparently, and without blocking any 
  OS threads. For example, the server application shown earlier included the following line of code, which 
  contains calls to blocking operations: response.send(future1.get() + future2.get());

  These operations will cause the virtual thread to mount and unmount multiple
  times, typically once for each call to get() and possibly multiple times in
  the course of performing I/O in send(...).

> Non-blocking IO is about interrupts at the hardware level

This isn't relevant. We only care from the perspective of userspace.


I'm not clear on why there is a distinction here. Any HTTP server can easily use a non-blocking Selector to handle the I/O operations and then perform the application logic on Threads or Fibers. My point is that Loom is fundamentally a threading model that works well for blocking I/O.

What is unclear is whether Loom will increase the performance of the server I/O (not the application logic code) that is already working well with non-blocking Selectors. I doubt it. But if someone has a Loom implementation of a plain HTTP server (nothing complex or JEE), I'd be up for some benchmarking exercises.

Overall though, I don't think Loom negates the usefulness of java-http. We built a super simple API, with no dependencies, that works with Java 17 and above, supports TLS natively, and is insanely fast.

Am I missing something?


True. At the OS level, the JVM libs/frameworks are going to rely on some thread pool doing non-blocking IO. Maybe a fairer comparison, if we're calling them the same abstraction, is event-loop/callbacks versus fibers.


Thanks! Feel free to log any issues you encounter. TLS is complex, but I think I have it working properly now.


I thought so as well. See my comment on the other thread about Netty. I'm sure someone who is a Netty expert or committer could figure it out, but it's so complex that it's nearly untenable.


I ran the load tests and couldn't explain it either. I adjusted the thread pools, buffer sizes, and a bunch of other parameters and still couldn't get Netty to scale.

I think Netty tries too hard to be everything to everyone. This makes it really hard to determine how to configure it properly across a bunch of different versions with lots of incompatibilities.

I wrote java-http with the concept of not doing that. It's purpose built for HTTP and high performance.

Once I have some time, I'll publish my Netty setup and let the community bang on it and see if they can beat my RPS. At 65k, it might be hard though. :)


What's the hardware being used for your test? I get 55k RPS with a basic 200 responder with zio-http[0] (which uses Netty) on my i5-6600K, and over 20k RPS for an e2e POST endpoint that does write batching to Postgres (committing the insert before responding to all of the clients in the batch with their individual db-generated ids). Postgres, the client (vegeta[1]), and the app are all on the same machine. I think that was with keep-alive, with something like 256 clients for the basic responder and 1024 for the one that writes to the db. There's a recently merged PR for zio-http that does 1M RPS on whatever machine they test on[2], so Netty can absolutely scale to high RPS.

[0] https://github.com/zio/zio-http

[1] https://github.com/tsenart/vegeta

[2] https://github.com/zio/zio-http/pull/1659


Would love to see your setup!


Sounds good. Once I get the project published, it will include all of the load tests for each server as well as the setup and code for it all. Might be a couple of weeks or so, but it will be a separate GH project. Something like java-http-performance.


I wrote most of the server code for the project and I actually looked extensively at Loom.

We decided to anchor the project to LTS Java versions only. The issue with Loom is that, as a preview feature, using it produces compiled code that is tied to the exact JDK version it was built with, which makes it hard to carry forward to new LTS versions. We ran into this with Java 14, where we used a bunch of the preview features in that release. When we upgraded to Java 17, it caused a number of issues and we had to rebuild almost everything.

Once the next LTS has been released and Loom is out of preview, we'll definitely look at using it, as long as the Java community is willing to make the jump with us.

In the meantime, I think our threading implementation works quite nicely and scales really well. Let me know what you think if you review the code.


As this applies to access tokens, if your application doesn't need a JWT, it shouldn't care whether the authorization server returns a JWT or an opaque token. On the flip side, if your app needs a JWT, then the authorization server must return one.

Revocation is either offered by the authorization server or managed by the app. If the authorization server manages it, then JWTs vs. opaque tokens are not a concern because the authorization server issues and revokes its own tokens. If the app manages it, then generally it does so based on the token type. If the app revokes based on opaque tokens, it can handle any type of token, including JWTs. If it revokes based on JWTs, then JWTs are required.
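For example, an app-managed denylist that treats every token as an opaque string might look like this hypothetical sketch (my own illustration, not from any particular library):

  import java.nio.charset.StandardCharsets;
  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;
  import java.util.Base64;
  import java.util.Set;
  import java.util.concurrent.ConcurrentHashMap;

  public class TokenDenylist {
    private final Set<String> revoked = ConcurrentHashMap.newKeySet();

    // Works for opaque tokens and JWTs alike, because it only ever
    // stores a hash of the raw token string.
    public void revoke(String accessToken) {
      revoked.add(fingerprint(accessToken));
    }

    public boolean isRevoked(String accessToken) {
      return revoked.contains(fingerprint(accessToken));
    }

    private static String fingerprint(String token) {
      try {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(token.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
      } catch (NoSuchAlgorithmException e) {
        throw new IllegalStateException(e); // SHA-256 is always present
      }
    }
  }

A JWT-based scheme would instead decode the token and revoke by subject or jti, which is exactly why that approach only works when the tokens are JWTs.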

Beyond that, the only differences between the two token types are size and data leaks. Size rarely matters (hehe), so just ignore that. Data leaks are only an issue if your app is leaking JWTs, which is usually considered a critical vulnerability. Remember that access tokens are the main units of identity; if I steal an access token, I effectively become that user/client.


I agree with you, but it should be WAY faster than every 90 days. I'm trying to find articles that address the fact that NIST and others are worthless since they recommend every 1-2 years.

