
Wow, that's awesome. I'll have a look at it ASAP. We've actually just converted our lambda code to run on a multi-core machine and switched to much smarter algorithms, which massively sped up the process.

I haven't looked deeply into your library yet, but how do you handle de/serialising? We use https://www.npmjs.com/package/class-transformer to correctly de/serialise TS objects.
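To illustrate why a library like class-transformer is needed: a plain JSON round trip drops the prototype chain, so methods on a class instance are lost. This sketch is not class-transformer itself; the `Exercise` class is made up for illustration.

```typescript
// A plain JSON round trip keeps data but loses the class prototype.
class Exercise {
  constructor(public id: string, public difficulty: number) {}
  isHard(): boolean {
    return this.difficulty > 3;
  }
}

const original = new Exercise("ex-1", 5);
const revived = JSON.parse(JSON.stringify(original)) as Exercise;

console.log(typeof original.isHard); // "function"
console.log(typeof revived.isHard);  // "undefined" — the method was lost in transit

// Libraries like class-transformer restore the prototype chain (e.g. via
// plainToInstance). A crude manual equivalent:
const restored = Object.assign(new Exercise("", 0), revived);
console.log(restored.isHard()); // true
```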

Also, do you create a new web worker per function call, or do you create only as many workers as there are threads/cores on the machine and run the functions inside those? Starting a web worker can be very expensive if the serialised data is large.
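The pooled approach described above can be sketched with Node's `worker_threads`. This is a hedged illustration, not faast.js internals; the `WorkerPool` class and its `run` method are hypothetical names, and it assumes at most one in-flight task per worker.

```typescript
import { Worker } from "node:worker_threads";
import { cpus } from "node:os";

// Fixed-size pool: spawn at most one worker per core and reuse them, so the
// worker startup cost is paid once rather than on every call.
class WorkerPool {
  private workers: Worker[] = [];
  private next = 0;

  constructor(workerSource: string, size = cpus().length) {
    for (let i = 0; i < size; i++) {
      this.workers.push(new Worker(workerSource, { eval: true }));
    }
  }

  // Round-robin dispatch; assumes one in-flight task per worker at a time.
  run<T>(payload: unknown): Promise<T> {
    const worker = this.workers[this.next];
    this.next = (this.next + 1) % this.workers.length;
    return new Promise<T>((resolve, reject) => {
      worker.once("message", resolve);
      worker.once("error", reject);
      worker.postMessage(payload);
    });
  }

  async destroy(): Promise<void> {
    await Promise.all(this.workers.map((w) => w.terminate()));
  }
}

// Example worker that doubles a number.
const source = `
  const { parentPort } = require("node:worker_threads");
  parentPort.on("message", (n) => parentPort.postMessage(n * 2));
`;

async function main() {
  const pool = new WorkerPool(source, 2);
  const doubled = await pool.run<number>(21);
  console.log(doubled); // 42
  await pool.destroy();
}
main();
```

A production pool would also queue tasks when all workers are busy and correlate responses with request ids instead of relying on one message per worker.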

PS: each lambda function ran a specialised parser for complex mathematics exercises. We are an ed-tech company ;)




The serialization/deserialization is just JSON for now, though I plan to add some configurability and perhaps change the implementation at some point. There is some runtime checking to make sure the arguments are actually serializable.
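A minimal sketch of what such a runtime check could look like (this is not faast.js's actual implementation; `assertSerializable` is a hypothetical helper name):

```typescript
// Verify a value survives a JSON round trip before shipping it to a remote
// function, failing loudly instead of silently corrupting arguments.
function assertSerializable(value: unknown): void {
  let encoded: string | undefined;
  try {
    encoded = JSON.stringify(value); // throws on circular references
  } catch (err) {
    throw new Error(`argument is not JSON-serializable: ${err}`);
  }
  if (typeof encoded !== "string") {
    // JSON.stringify(undefined) and JSON.stringify(() => {}) return undefined.
    throw new Error("argument serializes to nothing (a function or undefined?)");
  }
  // A stricter check could also deep-compare JSON.parse(encoded) with the
  // original to catch silently dropped fields (Dates become strings,
  // Maps/Sets become {}, undefined properties vanish).
}

assertSerializable({ id: "ex-1", args: [1, 2, 3] }); // ok
// assertSerializable(() => {});                     // would throw
```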

In local mode, processes are created up to the concurrency limit you specify, and each process is reused for subsequent calls (mimicking how Lambda reuses containers, so you can use the same caching behavior you'd use on Lambda). I'm not currently using web workers, but I could easily see adding a new mode for that. For larger data, I would recommend storing arguments and return values directly in cloud storage like S3, or on local disk in local mode.
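The caching behavior mentioned above relies on module-scope state surviving between calls when a process (or Lambda container) is reused. A hedged sketch, where `parseExercise` is a hypothetical stand-in for expensive work:

```typescript
// Module-scope state persists as long as the process/container is reused,
// so repeat calls with the same input skip the expensive work.
const cache = new Map<string, string>();
let computeCount = 0;

function parseExercise(input: string): string {
  computeCount++; // stands in for real, expensive parsing work
  return input.toUpperCase();
}

function handler(input: string): string {
  const hit = cache.get(input);
  if (hit !== undefined) return hit;
  const result = parseExercise(input);
  cache.set(input, result);
  return result;
}

handler("x^2 + 1"); // computed
handler("x^2 + 1"); // served from cache; computeCount is still 1
```

The same code works unchanged on Lambda: a warm container keeps the `cache` map alive, while a cold start simply recomputes.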

I would be interested to learn how your experiment with faast.js goes!




