
Precisely my thought :).

Also, what about the builtin array module? https://docs.python.org/3.7/library/array.html




Well, given that they didn't even use pythonic constructs, I'm not quite sure what to think of the article:

    In [1]: import random
    In [2]: r = [random.randrange(100) for _ in range(100000)]
    In [3]: x, y = random.sample(r, 1000), random.sample(r, 1000)
    In [4]: %timeit z = [x[i] + y[i] for i in range(1000)]
    106 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
    In [5]: %timeit z = [i + j for i, j in zip(x, y)]
    67.3 µs ± 3.38 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(under python 3.6.3)

For those who "don't see it": instead of looping over a zip of the iterables, as they should, they use an "index range" to access each element by index -- not best practice, and it also results in a noticeable slowdown.

Edit: And for the curious who might suspect the two produce different results:

    In [6]: z1 = [x[i] + y[i] for i in range(1000)]
    In [7]: z2 = [i + j for i, j in zip(x, y)]
    In [8]: z1 == z2
    Out[8]: True
Final edit: all in all, I'd say this is a low-effort post, aimed at gathering attention and showing "look how good we are that we know how to speed up python loops using numpy" (/s)...

And I've successfully been python-nerd-sniped.
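
For reference, a minimal sketch of the NumPy-vectorized equivalent (my guess at what the article benchmarks, not taken from it), reusing x, y and z2 from the session above:

    import numpy as np

    xa = np.array(x)            # convert the Python lists from above
    ya = np.array(y)
    za = xa + ya                # elementwise addition, no Python-level loop
    assert za.tolist() == z2    # same result as the zip-based comprehension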


> builtin array module?

It's not for arithmetic, but for packing large amounts of data in memory efficiently. I've never seen it used, since NumPy is usually easier.
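
For the curious, a minimal stdlib-only example (not from the article) showing why it doesn't help here: the storage is compact, but arithmetic still happens element by element in Python.

    from array import array

    a = array('i', range(1000))                    # typed, compact storage of C ints
    b = array('i', range(1000))
    c = array('i', (i + j for i, j in zip(a, b)))  # still a Python-level loop
    print(a.itemsize, a.buffer_info())             # bytes per element, (address, length)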



