
That would be a misunderstanding of Woo. You can have as many workers running as you like with Woo. As for asynchronously processing requests, the issue isn't Woo; that's the server-side application doing things with the request once received, and for that you run lparallel with great results. You can simply send every request off in a future, and it will go off and run on another thread.
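The "send every request off in a future" pattern described here uses lparallel in Common Lisp; as an illustration only, here is the same idea sketched with Python's `concurrent.futures` as a stand-in (the `handle_request` function is hypothetical):

```python
# Dispatch each incoming request to a worker-pool future, so the
# accept loop stays responsive while work happens on other threads.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)  # analogous to lparallel's worker kernel

def handle_request(request):
    # Stand-in for the slow work done once the server has received the request.
    return f"processed {request}"

# Each request goes off in a future; results are collected later.
futures = [pool.submit(handle_request, r) for r in ("a", "b", "c")]
results = [f.result() for f in futures]
print(results)  # ['processed a', 'processed b', 'processed c']
```

The caller blocks only when it calls `.result()`; until then the dispatching thread is free to accept more requests.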



So you're not running anything truly async, you're just using threading at a lower level? I don't get what Woo really provides to the async ecosystem in CL.


Woo is a super-fast web server. It can receive about 4x as many requests as Node.js can. Once the server gets the request, you can control how the response works with futures etc. Our front-end web app uses Vue CLI, which is clearly JavaScript. I assumed the discussion here was about the server, not the front-end web app in the browser.


The discussion was about an app server written in CL (Wookie) that supports a backend app written in CL, all in async (via cl-async).

It seems to me you're using Woo as an Nginx replacement, not as a CL app server, which makes total sense to me after you described the model you're using.


I wouldn’t describe 4x Node.js as “super fast”. Node.js is one of the slowest popular backends, or at least it was when I did a comparison a few years ago.


In the latest benchmarks, Woo beat them all, including the Go web server. Scroll down just a bit to see the comparisons here: https://github.com/fukamachi/woo


Nice. Node has apparently gotten a lot faster than when I last looked. In fact, I’m a bit suspicious of the benchmark they’re using.


What is your distinction between "async" and "using threading at a lower level"?


In this case, an async (via cl-async) application served by an async app server (http://wookie.lyonbros.com/), so async all the way down using evented I/O vs an async server (Woo) that farms out all requests to a synchronous thread pool.

I've been asked multiple times why I "didn't just use Woo" by people who don't understand that Turtl's server was async and Woo doesn't support async.


I guess the question is: what does that have to do with it? Why not just work asynchronously with requests that hit the server, using a library like lparallel and its futures? Isn't it about the response to the request?


> Isn't it about the response to the request?

High level, yes. Mid/low-level, it really depends on what you're doing. If you're serving static files, sure, use Nginx/Woo. If you're running an evented CL application that deals mostly with network I/O, threading is going to be a tank when you need a hummingbird.

I built cl-async/Wookie as parallels to Node.js/Express in Common Lisp.


Oh, are you the developer mentioned in the main post?

If so, I agree a lot with what you said!


I am! Turtl is my baby.


Awesome stuff!


The post talks about using libuv, which uses event loops all the way down. I'll be honest, this whole line of questioning feels quite dismissive; OP probably knows exactly _why_ they want an event loop design, and you shouldn't try to talk them into using granular thread pools.


I think you're responding to the wrong person as I didn't "try to talk them into using granular thread pools".

I asked for clarification on the distinction between "async" and "using threading at a lower level", which, for me, are equivalent things. The distinction, to the extent there is one, being that "async" often means "an existing library or language feature versus rolling it myself".


Evented ("async") concurrency, as found in Node, Python, Rust/Tokio, libuv, and OCaml, is based on building chains of events that are waited on by some fast polling mechanism like epoll or kqueue. Any I/O call, say a socket read, tells kqueue/epoll to notify some handler to service the event. The flow of events drives execution.
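The readiness-notification flow described here can be sketched with Python's `selectors` module (which sits on top of epoll/kqueue where available); the `socketpair` stands in for a real client connection:

```python
# One turn of an event loop: register interest in readability, then
# block in the poller (epoll/kqueue), not in the read itself.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()  # stand-in for a client connection
a.setblocking(False)

def on_readable(sock):
    # Safe to call recv() here: the poller told us data is waiting.
    return sock.recv(1024)

sel.register(a, selectors.EVENT_READ, on_readable)
b.send(b"hello")  # makes the other end readable

for key, _events in sel.select():
    data = key.data(key.fileobj)  # the event wakes up the handler
print(data)  # b'hello'
```

The key point from the comment above: the only blocking call is `sel.select()`, and it can watch thousands of sockets at once; `recv()` runs only after the kernel reports readiness.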

This is distinct from thread-pool models, where you still block the entire thread for an I/O call. While a sufficiently smart scheduler can probably context-switch out of this thread onto something else as the thread waits for an I/O response, that is distinct from having the event directly wake up a handler.
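For contrast, the thread-pool model just described looks like this sketch: the worker thread blocks inside the read itself, and the OS scheduler, not an event, resumes it:

```python
# Thread-per-blocking-operation: the worker is parked inside recv()
# until data arrives; the OS scheduler switches it back in.
import socket
import threading

a, b = socket.socketpair()
result = []

def worker():
    result.append(a.recv(1024))  # this thread blocks right here

t = threading.Thread(target=worker)
t.start()
b.send(b"hello")  # unblocks the parked worker thread
t.join()
print(result[0])  # b'hello'
```

Each concurrent blocking operation costs a whole thread, which is the "tank vs. hummingbird" trade-off mentioned earlier in the thread.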

That's usually what I associate as the difference between an event loop model and a threaded model. You can certainly make your threads highly granular and isolate each distinct blocking operation to its own thread pool, but it's different from actually being notified and woken up for events.

> I think you're responding to the wrong person as I didn't "try to talk them into using granular thread pools".

Yeah I think my wires got a bit crossed there. Apologies. That's what I get for being snarky while not paying full attention.



