Considering the author's use case (mathematical modeling) and language (Python), threading and event-based models would have no real performance benefit.
Event-based models only shine when you are doing IO-bound tasks. They won't help you when you are chewing CPU.
Threading in Python isn't attractive because of the GIL: even in a "parallel" matrix operation, only one thread can execute Python bytecode at a time, so you only ever use one CPU.
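A minimal sketch (not from the original answer) of why this bites: a pure-Python CPU-bound function run on four threads takes roughly the same wall time as running it four times serially, because the GIL serialises the bytecode execution.

```python
import time
from threading import Thread

def burn(n):
    # Pure-Python CPU work; holds the GIL the whole time it runs.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 5_000_000

start = time.perf_counter()
for _ in range(4):
    burn(N)
print("serial:  ", time.perf_counter() - start)

start = time.perf_counter()
threads = [Thread(target=burn, args=(N,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Expect roughly the same wall time as the serial loop on CPython,
# despite four threads "running in parallel".
print("threaded:", time.perf_counter() - start)
```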
STM is not a great fit for this kind of problem either; there's no need for all the transaction machinery if the problem is embarrassingly parallel. In an ideal world what you want is threads that just split the work into sections, like they do in C/C++/Haskell.
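The original answer doesn't name a specific workaround, but the usual way to get that kind of work splitting in CPython is to swap threads for processes (each with its own GIL), e.g. with the standard-library multiprocessing module. A hedged sketch:

```python
from multiprocessing import Pool

def work(x):
    # Stand-in for one independent chunk of numerical work.
    return x * x

if __name__ == "__main__":
    # One worker process per core; each chunk is handed out independently,
    # so the GIL in any single process is not a bottleneck.
    with Pool(processes=4) as pool:
        results = pool.map(work, range(1_000_000), chunksize=10_000)
    print(results[:5])
```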