PyPy Project looking for sponsorship to add support for Apple Silicon (morepypy.blogspot.com)
262 points by fniephaus on Dec 31, 2020 | 163 comments



For those asking "who uses PyPy?": the truth is that very few people who use it get back to us to say how it helps them or what they are doing with it. For instance, crossbar.io uses it [0]. Some shops use it as a second step until they refactor code into a compiled language, but often that second step never materializes.

But the value of a second implementation of a language goes beyond the immediate "who uses this in practice". It can be a fertile bed for innovation and for new ideas, and provides a contrast to the naysayers. For instance, the recent pitch to vastly improve CPython's speed has some roots in ideas that were tested out in PyPy. CFFI [1], revdb [2] and vmprof [3] all started as PyPy projects. Some of these turned out to be very popular, some less so. The next project in this line is HPy [4] (still alpha-quality), which is trying to re-think the C-API for Python to make it even easier to interface with.

RPython [5], the language behind PyPy, is also an accessible playground for dynamic language research.

[0] https://crossbar.io/about/FAQ/#python-runtime

[1] https://cffi.readthedocs.io/en/latest/

[2] https://morepypy.blogspot.com/2016/07/reverse-debugging-for-...

[3] https://vmprof.readthedocs.io/en/latest/

[4] https://hpy.readthedocs.io/en/latest/

[5] https://rpython.readthedocs.io/en/latest/examples.html


If you want more examples of real world use cases, PyPy is pretty stress-tested by the competitive programming community already.

https://codeforces.com/contests has around 20-30k participants per contest, with contests happening roughly twice a week. I would say around 10% of them use python, with the vast majority choosing pypy over cpython.

I would guesstimate at least 100k lines of code targeting PyPy are written per week just from these contests. This covers virtually every textbook algorithm you can think of, all automatically graded for correctness/speed/memory. Note that there's no special time multiplier for choosing a slower language, so if you're not within 2x the speed of the equivalent C++, your solution won't pass! (hence the popularity of pypy over cpython)
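
A typical submission skeleton looks something like this (a sketch of the common pattern, not any particular contestant's code): read all input at once via sys.stdin, since per-call input() overhead alone can blow a tight time limit.

    import sys

    def main():
        # Read everything up front; input() called per line is far too
        # slow for large inputs under either CPython or PyPy.
        data = sys.stdin.buffer.read().split()
        n = int(data[0])
        nums = list(map(int, data[1:n + 1]))
        print(sum(nums))

    main()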

The sheer volume of advanced algorithms executed in pypy gives me a huge amount of confidence in it. There was only one instance where I remember a contestant running into a bug with the JIT, but it was fixed within a few days of being reported: https://codeforces.com/blog/entry/82329?#comment-693711 https://foss.heptapod.net/pypy/pypy/-/issues/3297


> If you want more examples of real world use cases, PyPy is pretty stress-tested by the competitive programming community already.

I think this is the first time I've seen someone suggest that competitive programming has much bearing on “real-world use cases”.


I guess it tells you that loops and basic containers work?


So essentially all code right?


Right, but most code does things other than that as well.


Why do competitive Python programmers use PyPy instead of CPython?


PyPy is easily 10x faster than CPython at numeric stuff, which is 99% of these contest problems.

For example, using CPython, if you try to make an array of a million ints, you won't get `int[1000000]` in your memory layout. Each int is actually a full heap object, which is huge and inefficient to reference (something like 24+ bytes each).

PyPy, on the other hand, works as expected.
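
You can see the boxing directly on CPython (a quick sketch; sizes are for a 64-bit CPython 3.x build):

    import sys

    print(sys.getsizeof(1))       # 28 bytes for a single small int object

    nums = list(range(1_000_000))
    print(sys.getsizeof(nums))    # ~8 MB of pointers alone, before
                                  # counting the million boxed ints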

I think the more important point is that Python written like C code can, under PyPy, actually get within 2x of the performance of the equivalent C. If it were any slower, Python wouldn't be a viable language in competitive programming at all.

(CPython is sometimes still used on other platforms like atcoder.jp, but only because they allow third party libraries like numba and numpy which can fill the same role pypy does)


For that particular use case, how does PyPy perform in comparison to CPython's array module[1]?

[1] https://docs.python.org/3/library/array.html
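
For concreteness, this is the comparison being asked about (a sketch): the array module stores elements unboxed on CPython, but each read still creates a fresh int object, so arithmetic loops over it remain interpreter-bound.

    from array import array

    packed = array('q', range(1_000_000))  # 8 bytes per element, unboxed
    boxed = list(range(1_000_000))         # pointer + ~28-byte object each

    # Reading packed[i] boxes the value back into a Python int, so a
    # pure-Python loop over the array is still slow on CPython.
    total = sum(packed)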


Why doesn't the community collectively switch over to PyPy? It seems like it's better in all regards.


Library support?


The cabal that runs CPython ignores standardization, the standard library, and interop with other implementations.

PyPy shouldn't be the new default, but neither should be CPython.


Don't you have to choose one or the other tho lol


PyPy is much faster than CPython.


For some cases, and the picture often changes.

We were using pypy because it was better for our use case at one point but then later on we retested cpython and found the picture had changed.

We believe this was due to significant improvements in the regex engine for cpython over the period, but could also be due to our code base changing.

The point being it is not a given that pypy is faster.


AFAIK pypy improvements also led to cpython compact dictionaries (which led to dictionaries being ordered) and many many cpython performance improvements (the most recent one being LOAD_ATTR caching, with huge performance gains).
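
For example, the compact-dict layout (explored in PyPy first) is why insertion order is now preserved on CPython, guaranteed since 3.7:

    d = {}
    d["b"] = 1
    d["a"] = 2
    print(list(d))  # ['b', 'a'] -- insertion order, not arbitrary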

I wonder if sponsoring pypy instead of the PSF might be a way for people who want "a focus on performance improvements, instead of fancy features" to vote with their wallets.


It is worth noting that none of the PSF donations arrive at core development anyway. They support the bureaucracy and the infrastructure.

python.tar.gz would be exactly the same if the donations were zero.

All the more reason to switch to donating PyPy.


Instead of downvoting, google the financial statements and see for yourself. You are supporting bureaucracy/infra/conferences.

All of which would not exist without python.tar.gz.


Going by https://www.python.org/psf/annual-report/2020/ , 75% goes to running the annual conference, 12.6% to grants open to _anyone_, and 3.8% to maintaining a central registry primarily staffed by volunteers (and I imagine paying for quite a sizeable set of origin servers at this stage).

As for the PyCon expenditure, it's certainly the kind of thing I'd never attend, but there can be no doubt the existence of the conference has strengthened and grown Python's user base, and it is open to everyone should they wish to attend. Meanwhile, $97k annually to operate a package registry with literally millions of downloads per month that tens of thousands of companies depend on sounds like an amazing bargain to me.


Speaking as someone who had a core role in organizing PyCon US a decade+ ago and was active with the PSF for some time, I’ve never seen an organization more dedicated to good stewardship and inclusiveness at scale. I’m sure there are other great examples but the PSF was always commendable.

PyCon was always meant to be affordable for as many people as it could be - via low prices and financial aid for many. Those conferences helped create many of the relationships that drove key parts of the Python ecosystem. In addition, there was typically dedicated time for code sprints following the main conference. Important work happened there, often on key parts of the Python codebase or key libraries. While the budget numbers don't explicitly say "development support", people shouldn't presume PyCon does nothing for the language itself. Open Source conferences are not like commercial conferences. They're community driven rather than marketing driven. That difference matters.

It was a smaller world when I was involved but it was and still is something special.


> literally millions of downloads per month

I was surprised it would be so low given all the Dockerfiles building in CI systems everywhere and apparently from

https://pypistats.org/top

it looks like tens of millions per day and billions per month.

edit: that's ~115 downloads per second, which is nothing a beefy VM couldn't handle.


BigQuery:

    #standardSQL
    SELECT EXTRACT(HOUR FROM timestamp) hour, COUNT(*) AS num_downloads
    FROM `the-psf.pypi.file_downloads`
    WHERE DATE(timestamp)
        BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
        AND CURRENT_DATE()
    GROUP BY hour
    ORDER BY num_downloads
Gives us 6.956 billion downloads in the past 30 days.

With hour 15 clocking in at around 2,964 requests/sec (mean). This is still not representative of peak instantaneous traffic, which I'd hazard a guess is somewhere closer to 10x that number.

Now factor in the 'usual' case where the CDN absorbs 70% of this, and you're still left with a service that is non-trivial to run and support 24/7. PyPI manages to have only a few small downtimes per year.


Hadn't thought to look up more precise numbers, and the comment on the CDN is a good one. It's an example of where smart choices in the client (pip) could keep the backend (PyPI) scalable.


PyCon makes a sizeable profit, which supports the staff and grantmaking activities of the PSF.


PyCon totally does affect python.tar.gz.

https://github.com/python/cpython/pulls?q=+label%3Asprint (and that link doesn’t capture the PEPs hashed out among other things.)

Also, why the hell make a new account for each comment?


Here's the 2020 financial statement: https://www.python.org/psf/annual-report/2020/

Total expenses for 2019 were a bit under $3.7m. "Program service expenses" were $2.6m, about 70% of total expenses. Of this $2.6m, 75.3% was spent on PyCon. So overall, around 53% of PSF spending in 2019 went to PyCon ($1.9m). In 2018, it was around 57%.

The next-biggest expense was staffing ($694k), at around 19% in 2019.


To add: I believe PyPy also inspired asm.js, the precursor to WebAssembly.


I have turned to pypy quite extensively for pure-Python text processing tasks, where I often get a 10-100x speed up just by changing the command line invocation. For example, I wrote a proof of concept Rabin-Karp hashing approximate string matching algorithm to perform plagiarism analysis while I was working at Udacity in around 2017. The system never went into production, but pypy was super helpful in crunching all historical user submissions for analysis.
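
For the curious, a minimal Rabin-Karp rolling-hash sketch (illustrative only, not the Udacity code; names and parameters are made up): hash every length-k window of a document so windows shared between two documents can be detected cheaply. It's exactly the kind of tight pure-Python loop PyPy's JIT handles well.

    def window_hashes(text, k, base=256, mod=(1 << 61) - 1):
        """Rolling hashes of all length-k windows of text."""
        if len(text) < k:
            return set()
        h = 0
        for ch in text[:k]:
            h = (h * base + ord(ch)) % mod
        hashes = {h}
        top = pow(base, k - 1, mod)  # weight of the outgoing character
        for i in range(k, len(text)):
            h = (h - ord(text[i - k]) * top) % mod   # drop old char
            h = (h * base + ord(text[i])) % mod      # add new char
            hashes.add(h)
        return hashes

    def similarity(a, b, k=32):
        """Fraction of shared k-character windows between two texts."""
        ha, hb = window_hashes(a, k), window_hashes(b, k)
        if not ha or not hb:
            return 0.0
        return len(ha & hb) / min(len(ha), len(hb))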

I’ve also had great success using pypy to accelerate preprocessing steps (when they don’t rely on incompatible c libraries) for machine learning pipelines. It’s the most painless performance enhancement trick in my toolbox before I reach for concurrency (in which case I reach for joblib or Dask).

The one oddity I’ve noticed is that using tuples (including things like named tuples) often speeds up CPython by a lot, but even plain tuples can slow down pypy on the same code—in some cases pypy winds up slower than CPython.

In any case, I’m low key in love with pypy, even though I can’t use it for _everything_.


> The one oddity I’ve noticed is that using tuples (including things like named tuples) often speeds up CPython by a lot, but even plain tuples can slow down pypy on the same code—in some cases pypy winds up slower than CPython.

I'd love for this to be addressed. namedtuple should be lowered into a fixed-size struct, a great opportunity for PyPy to show wins in memory, memory bandwidth, and compute. Add in type stability and PyPy should approach native speeds just by adding in namedtuple.


I created an issue [0] to address this. Could you add a microbenchmark that demonstrates the problem?

[0] https://foss.heptapod.net/pypy/pypy/-/issues/3373
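
Something along these lines, perhaps (a hypothetical microbenchmark, not one from the issue tracker):

    import time
    from collections import namedtuple

    Point = namedtuple("Point", ["x", "y"])

    def bench(points):
        # Sum fields by index so tuples and namedtuples run the same code;
        # swapping in p.x + p.y exercises the attribute-access path instead.
        t0 = time.perf_counter()
        total = 0
        for p in points:
            total += p[0] + p[1]
        return total, time.perf_counter() - t0

    n = 2_000_000
    plain = [(i, i + 1) for i in range(n)]
    named = [Point(i, i + 1) for i in range(n)]
    print("tuple     :", bench(plain)[1])
    print("namedtuple:", bench(named)[1])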


We use pypy pretty extensively at our firm for analytics/olap work.

We've actually tried using some of the more traditional libs (Pandas et al) with CPython, but there's always a pure-python bottleneck (e.g. SQLAlchemy).

Performance is important to our clients and trying to keep everything performance critical in C extensions / NumPy would be kind of risky for us when adding new functionality, so pypy's guarantee of more speed pretty much across the board is awesome.

There are downsides of course - higher memory usage, longer boot times, some more obscure libraries being unsupported - but on the whole, it's a good choice for us


Why isn't Ubuntu shipping it as the default python?


Some popular libraries don't work on PyPy because of their C code.


A couple of years ago we reached out to the python community about wheels and arm64 - how it should be handled and whether they plan on embedding non-x86 blobs. We received the standard "we'll think about it and let you know". Now that Apple switched to arm64, all communities are suddenly interested in porting things to arm64.

And of course, Apple is not investing in these ports, at least as far as I know. They just rely on what other arm64 players did in the ecosystem before Apple rolled out the M1, and otherwise let developers figure out the remaining porting.

As much as I hate to say this, IBM does handle porting things to ppc64 right - you can find IBM contributed code and optimizations anywhere you look. For many packages, porting to arm64 was a matter of "does it have ppc64 support? if so, it can be reused for arm64" ...

Disclaimer - used to be a contractor porting stuff to arm64 for a couple of years.


> And of course, Apple is not investing in these ports

What counts as "investing in these ports"? Does submitting patches to CPython (and many other open source projects, including NumPy) for macOS 11 and Apple Silicon count? Here's a list of Apple-submitted PRs on python/cpython: https://github.com/python/cpython/pulls?q=is%3Apr+author%3Al... Also some co-authored patches excluded by the search. See https://bugs.python.org/issue41100 for more related PRs. Your subsequent comments about IBM seem to imply that these do count.

I'm also curious about who's "we" in "we reached out to the python community about wheels and arm64".

---

Edit: Forgot to say, arm wheels have been supported for many years now (not sure about the specific timeline of aarch64 support, but if 32-bit arm was supported I don't see why aarch64 wouldn't be). Maybe most famously there's https://www.piwheels.org/ for RPis. Are you talking about aarch64 support on PyPI/warehouse?


By investing in these ports I was thinking about any contribution that enables/improves arm64 support. So yes, PRs for python itself definitely qualify. However, the python ecosystem is not only python, there are lots of python packages that still lack arm64 optimizations or native arm64 support. Most of them work, being python, but every once in a while you'd run into a package that does not just work on arm64 without some tweaking.

In my case, "we" referred to a working group of ARM contractors working on a very specific higher-level project (at the time we were doing some cloud/openstack/k8s stuff, which indirectly required some python packages). Note that the way contractors were assigned to projects at the time meant that sometimes you'd run into overlaps, e.g. some packages we needed had already been ported by a different team of contractors for a different project a while back. Sometimes multiple teams would hit the same missing port at the same time; most of the time we could sync and team up by escalating through the right channels, but sometimes we'd just end up duplicating work items and end up with two different sets of patches/ports.

About wheels, someone else in my team handled that, so I'm not that familiar with the issue - it might have been a particular package that was lacking arm64 wheels, or it might have been impractical to use a different repository only for arm64 (in general we'd try to avoid using separate resources based on architecture, and we definitely preferred the arch-agnostic approaches, e.g. debian repository URLs are arch-independent, dockerhub manifests allowed using the same container image names, etc.). We were interested in PyPI at the time (a couple of years ago); maybe the situation has improved meanwhile, but we just accepted compilation at install time as a good enough solution.


For what it's worth, I think Apple sent the Homebrew devs ARM Mac Minis for building and when the Homebrew devs asked for more, they sent more. So they're doing something, even if they're not doing as much as one would hope.


To be honest, I don't blame Apple for this. I blame the ARM ecosystem which is very fragmented, each company working with ARM is contributing to the stuff they are interested in and that's it.

Lots of contractors and always shuffling/changing projects they work on.


Isn't Apple doing the exact same thing with their proprietary ARM ISA extensions?


Their ISA extension is an ML-specific one, and macOS runs fine with it disabled. Their public compilers do not support it either.

You are supposed to use it through Accelerate.framework, which exposes a more traditional interface to that capability.


> Their ISA extension is an ML-specific one

Apple Silicon has more than one ISA extension.

There is also the x86 memory ordering extension used by Rosetta 2.

Maybe there are yet others too.


> There is also the x86 memory ordering extension used by Rosetta 2.

That one isn't really a requirement either, and is handled fully in kernel mode.

Wkdm and friends are handled fully in kernel mode.

APRR? Not a strict requirement for user-mode, JIT regions are just left as RWX without it.


Apple enforces APRR on macOS; you need to use their APIs to work within the confines of W^X.


There are more but they are not exposed to applications. Apple wants you to ship standard arm64 code.


They also submitted some patches to MacPorts, and I think Homebrew, though I couldn't find a reference offhand. So I suppose it's always possible to argue they could do more, but code and hardware are a long way from nothing.


Why would Apple invest in third-party toolchains? Aren't they incentivized to only support Xcode, Swift, and other tools that are as tightly bound to their ecosystem as possible?


Because they sell hardware and commoditize its complement. Having a good dev experience is part of that, be that a good story for Mac-only software (Swift/Xcode; YMMV on whether this is better than other technologies, but that's the intent) or having your POSIX software mostly just work.


Apple uses community tools internally for some things.


They can't. They've put themselves on a path where they can't abide by the GPL. GPLv3 says you must preserve the right to run the software, and Apple takes that away from their customers and then sells it back.

So Apple nopes out of it. They rely on the initial boost they got from GPL software. They do some MIT/BSD-licensed stuff (and publish some of it, but not everything). Every year less and less happens, unless it's their own language/etc.

They do have nice gradients and round corners though. :)


> They can't. They've put themselves on the path where they can't abide by the GPL.

PyPy is MIT licensed.


I'm talking about apple investing in non-apple/community software.


You're talking about GPL3 software, which is irrelevant here. They clearly do contribute to MIT-licensed "non-apple/community software", as I demonstrated up thread, so what's the point of this "they can't" BS.


just remember it all started with stuff like gcc afaik.


>They rely on the initial boost they got from GPL software

You mean BSD? That's how Darwin is licensed.


I believe the GP is talking about the software macOS ships with, not macOS itself.

macOS ships with some GPL2 software, along with its BSD/MIT/Apache-licensed Darwin layer. (The GP is presumably implying that having this GPL2ed software "built-in", helped macOS entrench itself as a useful POSIX for developers — at least before tools like MacPorts/Homebrew came along to make acquiring this type of software "after-market" simple.)

For any GPL2 software macOS makes use of that then transitions to GPL3 licensing, Apple can't actually adopt the GPL3ed version (exactly because of the "runs anywhere" clause in the GPL3) and so instead, macOS keeps the old, GPL2ed version of the software around, left to rot. Eventually, if it's something critical, Apple removes their dependency on said software, replacing it with something that's not GPLed.

This is the story of Bash on macOS: it was regularly updated for as long as it was GPL2-licensed; then it got "stuck" on the last GPL2 version when GNU transitioned Bash to GPL3 licensing. The Bash on macOS remained on that old version for the longest time, before finally being swapped out for Zsh in Catalina.


also they were technically in violation of the GPL with bash. They made some modifications and did not release them all. (try to compile bash without rootless.h)


Presumably, they are on https://opensource.apple.com/source/bash/bash-118.40.2/ although sources for the latest macOS are usually not released in a timely manner (no Big Sur at the moment, for instance)


rootless.h has never been made available to the public as far as I know.


I just found that the new dump of sources for 11.0.1 has a version of bash that doesn't #include <rootless.h>


that's odd... what does Apple have to gain from a GPL violation here?


Nobody has forced them to release it, so until then they get to keep some magic to themselves, presumably.


In the beginning they packaged a bunch of stuff, both MIT/BSD and GPL. At some point they stopped shipping GPL software. The Darwin stuff is published, but not everything. The code shared has become less over time even as Apple has grown enormously.


They didn't stop shipping _GPLv2_ software. They just never shipped GPLv3 software.


Emacs is gone IIRC


Parts of Darwin are APSL.


Yes. That. I am done with macOS for good. My first Mac was a Classic II, back in '92. My last will be a MB Pro 2018 (actually 2020, but already repaired once and broken beyond working condition again). It was my third MB Pro in 3 years, and every single one had to be repaired once before breaking down shortly thereafter. The company I work for has to replace around 1-2% of their MB Pro fleet per month; 6-10 devices are always in repair (an additional 1-2%).

I have notified my superior that I will not be able to work properly in January when I am back from a 2-month LOA, during which I privately switched everything over to Win10 with WSL2 running Ubuntu.

I am not missing a thing currently. OK - I know at some point I will probably start missing Keynote.


Amazon AWS will soon add support for the M1 hardware to the macOS EC2 managed offering announced a few weeks ago. AWS also offers free credits to prominent Open Source projects.

It's not available yet, but it should be a good option once the M1 instances become available; you can immediately start the process of applying for the credits.

This way you would get access to the required hardware without any costs and without having to maintain it. They are designed from the ground up towards such CI/development use cases.

You can also use the same credits for getting access to Intel and ARM Linux, as well as Windows EC2 instances. This may improve your project's CI and build times.

Disclaimer: I work at AWS.


This is no use unless you reduce the minimum tenancy. Might as well buy a Mac Mini and shove it on a shelf.


I'm not working closely with the Mac EC2 or Open Source departments, so I may be wrong, but the project should get credits covering a certain monthly bill; the amount would be requested when applying for the credits.

It shouldn't matter what type of instances are launched and the credits cover any other services, not just the EC2 compute, as long as the bills are less than the monthly credits.


There are far more use cases like this than you can imagine. The longer I work for AWS, the more I realize this...


what if the free credits cover 24x7 tenancy


PyPy volunteers are being too nice asking for a couple of thousand dollars. Don't be afraid to ask for enough money to support development and maintenance for this new platform, purchase the hardware, and cover replacement income for the opportunity cost. $50K minimum is more reasonable. Get the funding you deserve from these free-riding companies using the software.


This seems like the kind of thing that Apple themselves are in a good position to help with. Seems pretty improbable unfortunately.


I imagine that PyPy is way too obscure for Apple to care about it. Amazon would probably care a lot more if they're planning on adopting more ARM in AWS.


In the Python community it is a pretty well known implementation.


I don't doubt that many Python people have heard of Pypy, but in my experience not many companies actually use Pypy in production


Unfortunately no, JITs in other communities get much more love than PyPy does.


I was under the impression that Apple doesn't really care for JIT compilers.


They do: on macOS, and where iOS is concerned, JSCore.

PyPy doesn't get much love from the Python community overall, which is too focused on CPython + C, no matter the platform.

Hence efforts like Julia and Chapel spring to life.


They ship multiple JITs, so I don’t really understand your comment.


They don't allow anyone else to ship a JIT on iOS, and you must set special security settings in your app on macOS to ship one.


For sure, but it's not used as much on user Arm machines compared to server applications, which Apple doesn't have much of a stake in ATM.


Also in a good position to help: the people or organizations making these requests. They know making software isn't a free lunch.


It is, if someone else funds it. It's a commons dilemma of sorts.


> Either we get sufficient money, and maybe support, and then we can do it quickly; or PyPy will just remain not natively available on M1 hardware for the next 3-5 years

Out of curiosity, what is a ballpark, a rough estimate of the "sufficient money" for a project like this?


I too became frustrated reading the article and trying to figure out what they were asking for.

Protip: patrons and sponsors do not want to know all the details, and certainly not up-front. What they want to know is what you're asking for, what you offer in return, and then (briefly) how one leads to the other. Thus:

We make Pypy, which does X and is used by Y people, including projects such as Z, z, and z'. We'd love to bring Pypy to Apple's exciting new M1 hardware, but that needs resources.

We're looking for $CASH_MONEY to buy some M1 hardware that we can develop on, test against, and eventually run the repository on. With #NUM_MACHINES available, we believe we can have PyPy available to all M1 developers by %CALENDAR_MONTH. We also like eating and drinking, and if you send us more money we will spend some of it on !GOOD_TIME(HAVING).

The whole approach of 'we're not asking for much, just x would help' is how volunteers talk to each other, minimizing their ask so that reorienting effort is not too burdensome for anyone. But how you talk to each other is not a good way to talk to external readers, because it requires effort to parse what you are getting at, and then further effort to negotiate and arrange everything, which is a distraction from their own concerns. Many people would rather just give you some cash and sign up for a monthly progress update.

There's a reason services like Kickstarter work so well: they save the donors time and entanglement. Just set up 'PyPy on Mac M1 for $5000?' or similar and make it easy for people to throw money at you.


I agree -- I'm fortunate enough to be able to consider just buying an M1 mini and sending it to someone in order to support this, but it's not clear who I would send it to, nor if that would even be helpful.


From the rest of the blog post it seems enough to buy a single M1 machine with 8+ GB of RAM, so my estimate is that it should be a bit over $1k for a basic Mac Mini + shipping + Apple Care.


I don't think we should exclude funds for developer time from the calculation; at least that's what I understood from this sentence in the post:

> I would do it myself for a small amount of money.


Ah true, I got confused with "It can be either a machine provided and maintained by a third party, or alternatively a pot of money big enough to support the acquision of a machine and ongoing work of one of us." and conflated the "pot of money" for the machine with the total being asked for, my bad.


That assumes they work for free


I wish I could contribute more generally, but I think the PyPy project is a great cause and will put down a small amount to help with this.

I've come to the realization that, to get more performance out of Python, the most pythonic way is to simply use PyPy, rather than try to hack your way around issues like attribute access requiring dict lookups, access to locals() being faster than globals(), etc.
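
For reference, this is the sort of CPython-specific trick being alluded to (a sketch): binding a global into a local (here via a default argument) turns each lookup into a fast local access instead of dict lookups. On PyPy the JIT makes this mostly unnecessary.

    import math

    def slow(xs):
        return [math.sqrt(x) for x in xs]   # global + attribute lookup per iteration

    def fast(xs, sqrt=math.sqrt):           # bound once, at definition time
        return [sqrt(x) for x in xs]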


> and will put down a small amount to help with this

I'd politely ask you to reconsider. You'd basically be subsidising work that benefits the World's highest-capitalised company, to whom the investment in this port would be a fraction of a second's profit.


The money isn't for Apple, it's for PyPy developers, who don't work for Apple.


Indirectly it pretty much is. If software works on Apple hardware people wish to buy it.

A FOSS boycott of noncollaborating hardware vendors is LONG overdue. In my opinion there's too much catering to companies that do not wish to be helped.

Though we can't forget that aarch64 support also helps other ARM vendors and, theoretically, open hardware. So there's that.


By your logic every hardware vendor should pay for the development of all software (or at least all open source software) that aims to run on that hardware. Which is plain ridiculous. Apple is pretty collaborating in this transition. Not paying for development of every remotely popular project under the sun does not make them “noncollaborating”.

Apple ships a Python.framework and a user-facing python3 installation through CLT, so I expect them to contribute to CPython. And they did. They don’t ship anything remotely related to PyPy, so I don’t expect them to do anything. A gesture would be nice, but that’s being nice.


> By your logic every hardware vendor should pay for the development of all software (or at least all open source software) that aims to run on that hardware.

If it takes special work, yes.

It's ungrateful to just demand compatibility but provide none back; Apple hardware is increasingly hostile to 3rd-party code running on it. They're not worth it.


What the heck?

"Boycotting noncollaborating hardware vendors by FOSS is LONG overdue"

Is this serious? What does Apple even use PyPy for?

We need to boycott APPLE because of a group making software they don't use? Huh?

I suppose we will need to boycott my home builder because they don't support my mattress company that works with the house they built me?

Apple uses Python. They ship python3. They contribute to Python. That gets folks into Python, that gets folks interested in ARM64, so Python work and golang and others start targeting ARM64 more, which in turn will probably help Microsoft and AWS whenever they inevitably release ARM products into production. That's the traditional open source cycle. We don't boycott these folks.

One issue - Apple WILL NOT ship GPLv3. Many other companies are EXTREMELY careful about shipping GPLv3 (I wouldn't be surprised if Google was a strict hell-no on GPLv3). So there is going to be some fragmenting as the GNU folks and others put out things like Bash as GPLv3. Probably from the same folks that boycott, so it may not matter in the end.


> I suppose we will need to boycott my home builder because they don't support my mattress company that works with the house they built me?

I suppose you do need to boycott the home builder if they wish to sleep in your bed and also place demands on it.

They want 3rd party software, yet make it increasingly harder to both run and develop it on their hardware.

In the same category is Nvidia, so much effort goes into supporting their hardware but they don't really wish to play ball for more than a decade now. Why bother with such cunts? There are other vendors that are much nicer.


I believe they can, just not freely: https://opensource.google/docs/thirdparty/licenses/


The purpose is to help users who want to run PyPy not to help Apple.


His point is that users who want to run PyPy can already do so on hardware from vendors that enable people to run whatever software they want on their devices instead of providing black boxes that the community has to reverse engineer.


The issue seems to be around Apple’s/MacOS’s calling conventions on ARM64. These are documented.


You're still missing the point. It's not about having a particular API documented. You cannot run Linux with the M1's GPU without significant reverse engineering effort. Open source developers do not like to support vendors who do not make their hardware equally available for the open source community to use because it encourages people to use closed platforms. See Nvidia for another example.

Instead, it's better to provide that value-add on open platforms.


> Though, we can't forget that, aarch64 support also helps other ARM vendors, open hardware, theoretically. So there's that.

PyPy (claims to, at least) already supports arm64; this would just be Apple-silicon specific work.


Yep, and that's not worth doing for free.


For reference, Apple's piggy bank (Braeburn) currently has about $250 billion on the books.


I've never been able to find a good use case for pypy. Why would you ever want to run CPU bound tasks in python? I'm sure there are arguments about the huge ecosystem, not having to rewrite code in another language, etc. But most of the widely used stuff in python already has underlying compiled code for the heavy lifting. Also, given the large amount of parallelism in modern hardware architecture, and python's lackluster concurrency support, I just don't see a reason for using it.


I used Luigi [1] to automate data processing at a previous job. It's a simple job queue with a UI. You request jobs from it, and then run them for minutes or hours, so it shouldn't normally be a bottleneck and it makes sense to use a language that's quick and easy to write.

It's written in Python and works fine to process thousands of jobs per day. Once you start having tens of thousands of jobs in the queue, it gets slow enough that it can back things up. This compounds the problem, eventually resulting in the whole thing crashing.

By switching the interpreter to PyPy, I was able to keep the data pipeline running at that scale without having to rewrite anything.

[1] https://github.com/spotify/luigi
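
For anyone who hasn't used it, a minimal Luigi task looks something like this (an illustrative sketch, not the pipeline described above); Luigi schedules whichever tasks' outputs are missing:

    import luigi

    class ProcessFile(luigi.Task):
        name = luigi.Parameter()

        def output(self):
            # The task is considered done once this target exists.
            return luigi.LocalTarget(f"processed/{self.name}.out")

        def run(self):
            with self.output().open("w") as f:
                f.write(f"processed {self.name}\n")

    if __name__ == "__main__":
        luigi.build([ProcessFile(name="example")], local_scheduler=True)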


> Why would you ever want to run CPU bound tasks in python? ... But most of the widely used stuff in python already has underlying compiled code for the heavy lifting.

Haven't you just answered your own question? We know that people want to run CPU bound tasks in Python so much that they went to the effort of writing native modules because they couldn't do it in Python.

> python's lackluster concurrency support

This is a common misconception - Python actually has fully concurrent threads already.


Not fully. The GIL is always held when executing Python bytecode, because the interpreter isn't thread-safe. Dropping the GIL (enabling parallelism) only happens in native code that explicitly releases it.


> Not fully.

This isn't true - they are fully concurrent.

The GIL prevents parallelism - not concurrency.


> The GIL prevents parallelism - not concurrency.

When a Python thread is holding on to the GIL (running Python bytecode), how many other Python threads can concurrently run in the same process?

The answer is zero.

Sure, the interpreter releases the GIL every n bytecode ops, and C extensions can release the GIL before doing anything IO-bound and reacquire it (i.e. wait for it) afterwards, but that isn't full concurrency, in my book.


I think you're describing parallelism. The thread is about concurrency. I think you'll find this definition of concurrency matches industry-standard definitions like Padua's.


The broader point is: What good is it if it cannot effectively utilize modern multi-core hardware?


This is a commonly held misconception. Concurrency actually implies that computations can be reordered without changing the final outcome, and does not imply parallel execution. This is related to parallelism in that concurrent computations can be run in parallel for speed up.


> When a Python thread is holding on to the GIL (running Python bytecode), how many other Python threads can concurrently

You've been tricked by jargon. It's a common misconception. In this context, "concurrency" has a specific meaning that is different from the everyday one you're using in this sentence.

For a good introduction to what the two words mean in a software engineering context, check out this written version of Rob Pike's talk, "Concurrency is not Parallelism."

https://rakhim.org/summary-of-concurrency-is-not-parallellis...

The very tl;dr summary is:

> Concurrency is composition of independently executing things. . . Parallelism is simultaneous execution of multiple things.

When Python's threading model was implemented, parallelism just wasn't much of a concern. CPUs had a single core and could therefore only be working on one thing at a time. (In a macro sense; pipelining and superscalar architectures were still a thing, but not super relevant here.) Multithreading was not a way to do multiple things at once, it was just a way to ensure that some long-running calculation would not cause the program to lock up by, e.g., preventing it from responding to event queues in a timely manner. This was done, not by running things in parallel, but by switching back and forth among them very quickly.

Python's GIL was designed for this kind of situation. It's there to ensure that nothing bad happens if one of those context switches happens in the middle of a sensitive operation. Which is, strictly speaking, a concurrency concern and not a parallelism concern.

(It's also possible to have parallel work that is not concurrent, in which case locks are not necessary. But just because it's common for parallelism and concurrency to co-occur does not mean that they are the same thing.)
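
A minimal demonstration of concurrency without parallelism (a sketch): four I/O-bound threads overlap their waiting and finish in about one second rather than four, even though the GIL ensures no two of them ever execute Python bytecode at the same instant.

    import threading
    import time

    def io_task(i):
        time.sleep(1)  # the GIL is released while blocking
        print(f"task {i} done")

    threads = [threading.Thread(target=io_task, args=(i,)) for i in range(4)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"elapsed: {time.perf_counter() - t0:.1f}s")  # ~1s, not ~4s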


What does fully concurrent mean, exactly? Which commonly used programming languages have threads that are not fully concurrent?


I think it’s binary - it’s either fully concurrent or not concurrent at all.


I guess I incorrectly understood “fully concurrent threads” to mean threads that actually run in parallel, like Java threads. The redundant word “fully” threw me off; apologies.


A well-tuned web application layer is CPU bound at scale - your database is well designed and not a source of latency, and the lack of concurrency support in a language doesn’t matter if the interpreter is so much slower than context switches, which is absolutely true for CPython.

There are many sites and services where a rewrite in a new language is just not viable, and I still would recommend Python-everywhere to startups doing things remotely associated with data. So PyPy would be a tide that would lift many boats.


I switched a high traffic Flask web app to PyPy a couple of years ago and we saw substantially faster response times across the board, and much higher task throughput from our background worker machines, many of which were pegged 24/7.

We had so much less baseline load afterwards that we were able to scale down a bunch of instances. The transition only took a few hours of effort fixing one or two incompatible dependencies, so it paid for itself in savings quickly, especially vs an approach of trying to rewrite the slowest bits in a faster language.


> Why would you ever want to run CPU bound tasks in python?

0. Because you don't have time to deal with the mess that C++ has become, and the amount of please-repeat-yourself-a-million-times crap you have to deal with (cmake files, header files, they give us a goddamn spaceship operator but not basic necessities like string split/join methods)

1. There are many use cases where faster execution time is nice to have, e.g. when results can be cached for a long time, or if it's a one-off data analysis script, but human time is far more expensive. If it costs $1000 more in engineer hours to write C++ instead of Python for that script that's only going to be run 10 times, that isn't a worthwhile tradeoff. Hell you could buy a new GPU for that money.

2. Because the same exact file can be deployed on arm32, arm64, and x86

3. Because CPU-intensive stuff is largely already optimized by numpy, numba, tensorflow, pytorch, etc.


I can either spend a lot of time rewriting my already tested and working code if my application scales to the point of hitting a CPU bottleneck OR I can just try using PyPy.


But why write it in python in the first place?


Python makes it quick to write code, test it and get it out the door.

Other languages don't have that same cycle, and Python has a LARGE amount of freely available packages that are able to help launch even quicker by not having to necessarily write everything oneself.

So at that point it comes down to what are you fastest in?


Python, given its ecosystem of packages, is quick to write functioning code in. Its dynamic nature allows for quick prototyping and refactoring without requiring massive pre-planning and cognitive overhead.


I've done it for one-off data munging scripts - processing archives of one format of data to another, etc. Python is easy to write and for these tasks, you can get PyPy to execute at twice the speed for basically no cost.


FWIW, you don't have to be faster than the bear.

Strictly speaking, "CPU bound" is not an adjective that can be used to describe tasks. It's one that describes a particular program solving a particular task under a particular configuration. I've done no analysis on this myself, but I would be more than willing to believe that a CPU-bound job might be only a CPython-to-PyPy's worth of speedup away from instead being memory- or disk-bound.

It can be hard to tell the difference, too, since being stalled out while waiting on the memory controller shows up as CPU activity in htop.


The Intel VTune profiler [0] bolts onto Linux's perf subsystem and offers a very nice way of assessing whether the CPU is stalled on memory (or cache) or is spending its time computing. I guess it's a nice GUI on top of (nowadays) standard Linux tracing interfaces, but I really did not dig in enough.

If you are after deep profiling, you should definitely give it a try. My recollection is totally positive.

[0] https://software.intel.com/content/www/us/en/develop/tools/o...


Killing efforts like PyPy is what creates the Julias and Chapels of the future.


As others mentioned, it's generally an easy choice to write one-off data processing & analysis stuff in. I can get all the multithreading support I need with the multiprocessing library.

For me this always just consists of reading a bunch of files in and then doing some basic aggregation on them. Years ago I benchmarked Python vs. PyPy and found no real benefit to it.

Here is the link to that if you'd like to read it:

http://www.hydrogen18.com/blog/unpickling-buffers.html


Code can end up being CPU bound that wasn't meant to be. For example, having to use a database driver or some other low-level library written in Python can make your web app pretty CPU-bound. The main issue with PyPy is compatibility for me.


I guess they're not asking for individual donations in this post, but still, the "donations page" link in the sidebar 404s. It appears from https://www.pypy.org/howtohelp.html that https://opencollective.com/pypy is the correct link.


How much money and how do we contribute?


I'd add gRPC to the wishlist too: https://github.com/grpc/grpc/issues/4221


So they just need $900?


Apple should pay some of their vast hoard of cash.


[flagged]


I'm not sure why you feel this, but it's far off the mark.

In audio-land (where I work) we're seeing great performance on the new processor, and little or no porting effort, just a recompilation to get things working nicely. What's not to like?


What? I'll try not to be sarcastic, but there's already tons of stuff running on M1 Macs, a lot of it just after recompiling. And even for what cannot be recompiled, Rosetta 2 works wonders for most use cases. The only problematic point is AVX instructions in binaries.

What makes you think otherwise?


The constant stream of articles from large companies about how they have not yet been able to get their applications running.

Running under a virtual machine definitely does not count.


Wait, I thought large companies did not count. Large companies have loads of users who rely on their applications; they are much more conservative than small developers, and have longer procedures for testing and validation before each release. Some lag is expected, and their applications are usable in the meantime, anyway.

You’ve got things completely backwards. Virtual machines running on M1 Macs at the moment are ARM, they don’t emulate x86_64 (yeah, I know, there are experimental versions of some emulators; none of them are ready yet). So a virtual machine is not going to help you running Intel binaries.

Rosetta 2 is a translator (takes binary, spits out binary once, which is then run), and has nothing to do with a virtual machine.

Besides, what matters is that the resulting executable work, and with more than adequate performance. You install programs to make them run and do useful things, not just for the intellectual pleasure of having done it The One True Way, which is bullshit anyway. Do you also go trolling that Linux is terrible because Python scripts are interpreted?


> I thought large companies did not count.

If they are having trouble, then individuals will too. As for Rosetta 2, you're right; I wasn't making the distinction between virtualizing and attempting to blindly translate one binary architecture to another. The second is much more unreliable.

A script interpreter is an entirely different thing than binary architecture translation and I think you know that.


Rosetta 2 is not a virtual machine. It does binary translation, mostly ahead of time.


JetBrains has DataGrip and RubyMine already... and apparently just got IntelliJ on M1. https://blog.jetbrains.com ... and reading that list, there's a fair bit more that's been built.

The other thing is... all of the iPhone and iPad apps "work" (once you extract them from the iDevice) - it's a lot more than just the core apps.


[flagged]


My coworkers who switched over did so for the extra-long battery life, responsiveness, and faster compilation times.


Yeah, the battery life is amazing: so far, I just don’t worry about whether my laptop is plugged in.


And I recently put new brakes on my car and they're really responsive. But it has little to do with the fact that my car's software based clock often thinks it's Feb 43rd.

Hardware can be pretty good and the software can still be extremely lacking.


At this point, all the development stuff I do with my laptop basically just works on the M1, and it’s virtually indistinguishable from my Intel Mac.


What gave you that impression?


Apple?


Who uses PyPy in production?


Who uses Apple Silicon in production?


Truly my own ignorance... help me out! Is this port strictly for Apple silicon, ~ARM64, or is that the same thing? Where would this leave support for say, AWS Graviton instances?


PyPy already supports ARM64. This is additional work needed to support macOS 11 on the M1. Apple changed some things that impact PyPy, like register usage and FFI calling conventions. They updated clang to handle this, but PyPy's JIT emits assembler directly and so requires work to support the M1.


> Apple changed some things that impact PyPy, like the register uses and ffi calling conventions.

I thought everyone who used 64-bit ARM used ARM's AAPCS64 (https://github.com/ARM-software/abi-aa/blob/master/aapcs64/a...), so the register usage and FFI calling convention should be the same as on Linux and Windows. What did Apple do differently that would affect the PyPy JIT?


There are some slight nuances that affect the JIT. https://developer.apple.com/documentation/xcode/writing_arm6...


Link is broken; asks user to enable remote code execution.


Please don’t call working links “broken”. It links to the right place, regardless of whether you like what happens when you get there.


In addition to a slightly modified ABI, Apple enforces W^X in hardware.


AFAIK, the only notable difference is with arguments spilled on the stack being packed on macOS (except variadic arguments). If all your arguments are int or larger, that means essentially no difference. (source: I worked on making Firefox work on M1. We didn't need changes to XPCOM, for instance, although theoretically, we'd need some. I don't think the JIT needed changes either. Funnily enough, I had to fix libffi for variadic arguments, and the libffi shipped in macOS is broken)


varargs in general is significantly different, I think it's even encoded as a char * in C++ name mangling.


There are enough engineers posting here from well paid FAANG roles to fund this effort...



