Hacker News

Considering the author's use case (mathematical modeling) and language (Python), threading and event-based models would have no real performance benefit.

Event-based models only shine when you are doing IO-bound tasks. They won't help when you are chewing CPU.
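To make that concrete, here's a minimal asyncio sketch of the IO-bound case. `asyncio.sleep` stands in for real network or disk waits (an assumption for illustration); because the waits overlap on one event loop, total time is roughly the longest wait, not the sum.

```python
import asyncio
import time

async def fake_io(delay):
    # Stand-in for a network/disk call: yields to the event loop while waiting.
    await asyncio.sleep(delay)
    return delay

async def gather_all(delays):
    # All waits run concurrently on one thread; total time ~ max(delays).
    return await asyncio.gather(*(fake_io(d) for d in delays))

def run(delays):
    start = time.perf_counter()
    results = asyncio.run(gather_all(delays))
    elapsed = time.perf_counter() - start
    return results, elapsed
```

Swap `fake_io` for a CPU-bound loop and the event loop buys you nothing: a coroutine that never awaits just blocks the loop.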

Threading models in Python aren't attractive because of the GIL: only the thread holding the GIL executes Python bytecode, so even a parallel matrix operation written with threads can only ever use one CPU core.
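A quick sketch of the contrast (illustrative pure-Python workload, assumed worker counts): the pure-Python loop holds the GIL, so two threads serialize, while two processes each get their own interpreter and GIL and can actually run on two cores.

```python
import threading
from multiprocessing import Pool

def busy_sum(n):
    # Pure-Python CPU work: holds the GIL for its whole run.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_threads(n, workers=2):
    # Threads: correct results, but the GIL lets only one run bytecode
    # at a time, so wall time is about the same as doing it serially.
    results = [0] * workers
    def worker(idx):
        results[idx] = busy_sum(n)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

def run_processes(n, workers=2):
    # Processes: separate interpreters, separate GILs, real parallelism.
    with Pool(workers) as pool:
        return pool.map(busy_sum, [n] * workers)
```

(Caveat: C extensions like NumPy often release the GIL inside their own kernels, so threads can help there; the limit above is for Python-level loops.)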




Any thoughts how software-transactional memory might apply for a use case like this?


STM is not a great fit for this kind of problem; there's no need for all the transaction machinery when the problem is embarrassingly parallel. In an ideal world what you want is threads that just split the work into sections, as they do in C/C++/Haskell.
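The "split the work into sections" pattern can be sketched in Python using processes instead of threads (to sidestep the GIL). The per-element function `f` and the chunking scheme here are illustrative assumptions, roughly like a static OpenMP schedule:

```python
from concurrent.futures import ProcessPoolExecutor

def f(x):
    # Stand-in for one independent unit of model evaluation.
    return x * x

def chunked(seq, n_chunks):
    # Split seq into n_chunks contiguous sections of near-equal size.
    size, rem = divmod(len(seq), n_chunks)
    out, start = [], 0
    for i in range(n_chunks):
        end = start + size + (1 if i < rem else 0)
        out.append(seq[start:end])
        start = end
    return out

def map_chunk(chunk):
    return [f(x) for x in chunk]

def parallel_map(seq, workers=2):
    # Each worker process takes one contiguous section; no shared state,
    # so no transactions (or locks) are needed at all.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(map_chunk, chunked(seq, workers))
    return [y for part in parts for y in part]
```

Because the sections are independent, correctness doesn't depend on any synchronization beyond joining the results, which is exactly why STM buys nothing here.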





