
I have heard that DirectML was a somewhat easier story, but allegedly has worse performance (and obviously it's Windows only...). But I'm not entirely surprised that setup is somewhat easier on Windows, where bundling everything is an accepted approach.
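(For context, not from the parent comment: the PyTorch route to DirectML is the torch-directml package, which sidesteps a vendor compute stack entirely; that's a big part of why the setup story is lighter. A minimal sketch, assuming a Windows box with a DX12-capable GPU:

    # Rough sketch: run a PyTorch op on the DirectML backend.
    # Requires: pip install torch-directml
    import torch
    import torch_directml

    dml = torch_directml.device()     # first available DirectML adapter
    x = torch.ones(2, 3).to(dml)      # tensor lives on the GPU via DirectML
    y = (x * 2).cpu()
    print(y)
)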

With AMD's official 15GB(!) Docker image, I was now able to get the A1111 UI running. With SD 1.5 and 30 sample iterations, generating an image takes under 2s. I'm still struggling to get InvokeAI running.
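For anyone curious what those settings correspond to outside the web UI, here's a minimal diffusers sketch with the same parameters (SD 1.5, 30 steps). The model ID and prompt are just placeholders, and ROCm's PyTorch build exposes the GPU through the "cuda" device name:

    # Minimal Stable Diffusion 1.5 generation on ROCm PyTorch.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # any SD 1.5 checkpoint works
        torch_dtype=torch.float16,
    ).to("cuda")                            # ROCm GPUs show up as "cuda" here

    image = pipe(
        "a watercolor painting of a lighthouse",
        num_inference_steps=30,             # the 30 sample iterations above
    ).images[0]
    image.save("out.png")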




That has to include the model(s), no?

Also, nothing is easier on Windows. It's a wonder that anything works there, except for the power of recalcitrance.

Not dogging Windows users, but once your brain heals, it just can't go back.


It actually doesn't include the models! The image is Ubuntu with ROCm and a number of ML libraries, such as Torch, preinstalled.
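A quick way to confirm what the image ships is to check the PyTorch build inside the container; a small sketch (torch.version.hip is only populated on ROCm builds):

    # Sanity check: is this the ROCm build of PyTorch, and can it see the GPU?
    import torch

    print(torch.__version__)            # e.g. a +rocm-suffixed version on ROCm builds
    print(torch.version.hip)            # HIP/ROCm version string; None on CUDA/CPU builds
    print(torch.cuda.is_available())    # ROCm GPUs also report through the "cuda" API
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))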

> Also, nothing is easier on Windows.

As much as I, too, dislike Windows, I still have to disagree. I have encountered (proprietary) software which was much easier to get working on Windows. For example, Cisco AnyConnect with SmartCard authentication has been a nightmare for me on Linux.



