We're headquartered in the US, but have manufacturing in Taiwan. You are correct that there is no change in tariffs or pricing for our EU (and other non-US) customers.
That would be great. I’ve been hacking at ROCm and using Ryzen iGPUs for industrial scenarios, and the HX chipsets look like a massive improvement over what you’d get from folks like ASRock Industrial.
We took the Ryzen AI Max, which is nominally a high-end laptop processor, and built it into a standard PC form factor (Mini-ITX). It’s a more open/extensible mini PC using mobile technology.
I love the look of it and if I were in the market right now it would be high on the list, but I do understand the confusion here - is it just a cool product you wanted to make or does it somehow link to what I assumed your mission was - to reduce e-waste?
A big part of our mission is accessibility and consumer empowerment. We were able to build a smaller/simpler PC for gamers new to the platform that still leverages PC standards, and the processor we used also makes local inference of large models more accessible to people who want to tinker with them.
Considering the Framework Desktop or something like it for a combo homelab / Home Assistant / HTPC. The new gen of AMD APUs looks to be the sweet spot for a lot of really interesting products.
And given that some people are worried about malicious software in certain brands of mini PCs on the market, having a more trusted product around will also be an asset.
This is made worse by Windows 11 retail images being years behind on Wi-Fi drivers. We recommend that customers use Rufus to create Windows USB installers to bypass the network requirements, because Windows 11 doesn’t come with functioning drivers for any of the last few generations of Wi-Fi cards we use.
Note that none of the PCIe interfaces on Strix Halo are larger than x4. The intent is to allow multiple NVMe drives and a Wi-Fi card. We also used PCIe for 5Gbit Ethernet.
Feedback: That x4 slot looks like it's closed on the end. Can we get an open-ended slot there instead so we can choose to install cards with longer interfaces? That's often a useful fallback.
Definitely! ROCm is getting really solid for inference. LM Studio (and therefore the underlying llama.cpp) works out of the box already, and we see AMD pushing forward on PyTorch and other areas rapidly.
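For anyone curious what "works out of the box" looks like in practice, here's a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp). Two assumptions on my part: the package was installed with the ROCm/HIP backend enabled (the exact CMake flag has changed across versions, so check the llama-cpp-python docs), and the GGUF model path below is just a placeholder for whatever model you've downloaded.

    # Minimal local-inference sketch with llama-cpp-python.
    # Assumes the package was built with the ROCm/HIP backend enabled;
    # see the llama-cpp-python docs for the current CMAKE_ARGS flag.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/example-7b.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload all layers to the GPU
        n_ctx=4096,       # context window size
    )

    out = llm("Q: Why run LLM inference locally? A:", max_tokens=64)
    print(out["choices"][0]["text"])

The appeal of these APUs for this workload is the large pool of unified memory: with all layers offloaded, models can fit that would exhaust the VRAM of a similarly priced discrete GPU.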