
How does this compare with using the Hugging Face `diffusers` package with MPS acceleration through PyTorch Nightly? I was under the impression that it also used CoreML under the hood to convert the models so they run on the Neural Engine.
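For reference, the usual diffusers-on-MPS setup looks roughly like this; a minimal sketch, with the model ID and options only illustrative, not taken from the thread:

    # rough sketch of running diffusers on Apple Silicon via the MPS backend
    # (model ID "CompVis/stable-diffusion-v1-4" is just an example)
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
    pipe = pipe.to("mps")            # runs on the GPU through Metal, not the Neural Engine
    pipe.enable_attention_slicing()  # commonly recommended to reduce memory pressure on M1

    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("out.png")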



It doesn't. MPS largely runs on the GPU. As of a few weeks ago, PyTorch's MPS implementation was also still incomplete. This is about 3x faster.
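To see that MPS is just a PyTorch device backend (GPU via Metal), separate from CoreML and the Neural Engine, a minimal check, assuming a recent PyTorch build:

    # minimal sketch: MPS is a PyTorch device, not a CoreML/ANE path
    import torch

    device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
    x = torch.randn(3, 3, device=device)
    print(device, x.device)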


Is it? I just ran it on my M1 MacBook Air and am getting 3 it/sec, the same as I was getting with Stable Diffusion on the M1. Maybe I'm doing something wrong?


That's surprising to me, although I last looked about 3 weeks ago, and MPS support is a moving target. It is a plain M1, without Pro or Ultra, right? Also, diffusers supports backends other than PyTorch.



