
Just FYI, I looked at PyTorch for the first time just now, and unfortunately they require macOS users to build it from source in order to get CUDA support:

https://pytorch.org/get-started/locally/
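(For anyone curious, a quick way to check what your install actually supports -- a minimal sketch using the standard torch API:)

    import torch

    print(torch.cuda.is_available())  # False with the stock macOS binaries
    print(torch.version.cuda)         # None when PyTorch was built without CUDA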

Please, if someone at PyTorch is reading this, put in a request to make CUDA support the default on macOS.

Also, it looks like PyTorch doesn't currently support OpenCL:

https://github.com/pytorch/pytorch/issues/488

I can't tell from the issue comments whether it's been added yet, or whether they plan to use Intel's oneAPI or something similar.

To me, these are prerequisites for switching to PyTorch. Hopefully someone can clarify the state of these. Thanks!




Hi, I'm a PyTorch maintainer.

NVIDIA has dropped CUDA support for macOS: http://www.cgchannel.com/2019/11/nvidia-drops-macos-support-...

This had been pretty evident for a few years, and it's one of the top reasons we don't provide official binaries with CUDA support -- the maintainer overhead was way too much. We did work to make sure it still builds with CUDA support from source (with a continuous build), but once CUDA 10.3 or 11 releases, we'll have to drop that too.


Ah, thanks for that. One of my biggest concerns right now is that, since SIMD won out in the performance wars and has come to be dominated by the video game industry and proprietary players like NVIDIA, we're missing out on a whole possible branch of evolution in computer science.

For one, we don't have easy access to MIMD, so we can't easily or cheaply experiment with our own simulations for things like genetic algorithms.

20 years ago I wanted to go into AI research and build a multicore FPGA design (say, 1000+ cores) where each core could run its own instance of an OS, or at the very least an isolated runtime for something like Lisp. But the world has gone in a completely different direction. That's great, given all the recent advances in machine learning, but it's like comparing rasterization (what we have) to ray tracing (what we could have had). Current implementations are orders of magnitude more complex than they need to be. I've written about this a bunch:

https://news.ycombinator.com/item?id=17759391

https://news.ycombinator.com/item?id=17419917

So short of that, I hope PyTorch can at least provide a performant, cross-platform SIMD implementation. I had hoped OpenCL would be that, but maybe it's too much like OpenGL, and we need something a level of abstraction higher, where you can do vector processing without worrying about buffers and moving data between the CPU and GPU.
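(For what it's worth, PyTorch's API is already at roughly that level of abstraction -- a minimal sketch, with illustrative sizes, using the standard torch device API:)

    import torch

    # Pick whatever accelerator is available; fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    a = torch.randn(1_000_000, device=device)  # allocated directly on the device
    b = torch.randn(1_000_000, device=device)

    c = a * b + 2.0         # elementwise ops dispatched to the device backend
    total = c.sum().item()  # .item() copies the scalar result back to the host

Data lives wherever the tensor was created, and explicit transfers reduce to .to(device) and .item() -- no manual buffer management.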


> Please, if someone at PyTorch is reading this, put in a request to make CUDA support the default on macOS.

It's unlikely this will ever happen. Apple no longer officially supports NVIDIA drivers, and even TensorFlow no longer lists macOS as having official GPU support [0].

Don't hold your breath.

[0]: https://www.tensorflow.org/install/gpu


Are you really doing GPU training on your home laptop? I absolutely get why CUDA support for macOS isn't a priority.


I would if I could - I have an external GPU at home. Unfortunately, Apple is (not without reason) angry at NVIDIA, so they dropped support for NVIDIA in macOS. I'd have to use Windows, which is a big no-no for me. Obviously PyTorch can't support it.



