SSE itself tops out at 4.2, but Tremont does support the newer SHA extensions.
AVX appears to be the only missing feature, but AVX isn't standard across Intel's other lines either: the Pentium line doesn't support it, for example, even though those chips use Skylake and newer microarchitectures.
So you can't assume AVX support today, and you still won't be able to assume it going forward. Why does this matter?
Operating systems move threads between cores. If those cores support different instruction set features, a thread migrated to a low-feature core can hit an illegal-instruction trap even though it correctly checked for the feature before using it.
Maybe standard application code all runs on the big core, and the little cores are like the system assist processors on IBM mainframes - used to run OS jobs or specific application support code, to free up the main processor for other work (or idling).
For this to work, there would need to be a small set of undemanding tasks that account for a lot of machine time. Feeding video to the GPU? All sorts of GUI compositing and housekeeping? Handling network connections in the browser?
I don't think this is a good explanation, but it's fun to think about.
A lot of things are background tasks until suddenly, without warning, the user ends up waiting for them to happen. Take your example of handling network connections, for example: this can definitely be a background thing! Your computer might want to keep an IMAP connection open, periodically poll a CalDAV server, sync photos with your phone, etc., and all of these would be very reasonable things to run on low-power CPU cores. Kick them over to a wimpy core, throttle down the frequency to its most power-efficient setting, insert big scheduler delays to coalesce timer wake-ups, whatever. Good stuff.
But what happens when the user opens up a photo viewer app and suddenly wants those photos to be synced right now?
If your code is running on a recent iPhone -- the heterogeneous-core platform I'm most familiar with -- then the answer is that the kernel will immediately detect the priority inversion when a foreground process does an IPC syscall, bump up the priority of the no-longer-background process, probably migrate it to the fastest core available, and make it run ASAP. Then, once the process no longer has foreground work to do, it can go back to more power-efficient scheduling.
This kind of pattern is super common, and it would be way more annoying and perilous to try to split tasks into always-foreground and always-background.
The standard operating system model today is mostly to do what the application(s) request and then get out of the way quickly. I'd expect to cede all cores to applications most of the time, rather than reserving the low power cores for OS tasks.
That would be pretty disappointing if they can't handle normal processes too. It's easy to end up with a bunch of processes that are using small amounts of CPU but aren't dedicated background tasks.
Even just looking at a browser, I might have half a dozen generic tab processes open, each using a small amount of CPU. But then I navigate one to a game, and I want that particular tab to get near-exclusive access to the big core while the others use only the little cores.