I'm enthusiastic about BitNet and the potential of low-bit LLMs - the papers show impressive perplexity scores matching full-precision models while drastically reducing compute and memory requirements. What's puzzling is we're not seeing any major providers announce plans to leverage this for their flagship models, despite the clear efficiency gains that could theoretically enable much larger architectures. I suspect there might be some hidden engineering challenges around specialized hardware requirements or training stability that aren't fully captured in the academic results, but would love insights from anyone closer to production deployment of these techniques.
I think that since training must happen on a non-bitnet architecture, tuning towards bitnet is always a downgrade of its capabilities, so they're not really interested in it.
But maybe they could be if they offered cheaper plans, since its efficiency is relatively good.
I think the real market for this is for local inference.
I find it a little confusing as well. I wonder if it's because so many of these companies have gone all in on the "traditional" approach that deviating now seems like a big shift?
People are almost certainly working on it. The people who are actually serious and think about things like this are less likely to just spout out "WE ARE BUILDING A CHIP OPTIMIZED FOR 1-BIT" or "WE ARE TRAINING A MODEL USING 1-BIT" etc, before actually being quite sure they can make it work at the required scale. It's still pretty researchy.
Sorry for a stupid question, but to clarify: even though it is a 1-bit model, it is supposed to work with any type of embeddings, even ones taken from larger LLMs (in their example, they use HF1BitLLM/Llama3-8B-1.58-100B-tokens)? I.e. it doesn't have an embedding layer built in and relies on embeddings provided separately?
Can anyone help me understand how this works without special bitnet precision-specific hardware? Is special hardware unnecessary? Maybe it just doesn't reach the full bitnet potential without it? Or maybe it does, with some fancy tricks? Thanks!
The major benefit would be the significant decrease in memory consumption rather than the compute itself. The main bottleneck in current LLM infrastructure is typically memory bandwidth, and that's why the chip industry is going crazy over HBM. Compute optimization certainly helps, but this is useful even without any hardware changes.
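Rough numbers, assuming a hypothetical 7B-parameter model (my own back-of-envelope illustration, not from the article):

    // Back-of-envelope comparison of weight bytes that must stream from memory
    // per full pass over the weights. The 7B parameter count is an assumption
    // for illustration; the ~1.58-bit ternary format is modeled as a simple
    // 2-bits-per-weight packing (4 weights per byte).
    #include <cstdio>

    int main() {
        const double params        = 7e9;                // assumed model size
        const double fp16_bytes    = params * 2.0;       // 16 bits per weight
        const double ternary_bytes = params * 2.0 / 8.0; // 2 bits per weight, packed

        std::printf("fp16 weights:    %.1f GB\n", fp16_bytes / 1e9);
        std::printf("ternary weights: %.1f GB\n", ternary_bytes / 1e9);
        std::printf("reduction:       %.0fx fewer bytes per weight pass\n",
                    fp16_bytes / ternary_bytes);
        return 0;
    }

That's roughly 8x less data to move for the weights alone, which is exactly where the bandwidth bottleneck bites, and it needs no new silicon.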
While fancy hardware would make it faster, what you are comparing it to is a bunch of floating-point and large-number multiplications. I believe in this case they just use a lookup table:
If one value is 0, it is 0.
If the signs are different, it is -1.
If the signs are the same, it is 1.
I’m sure those can be done with relatively few instructions on far less power-hungry hardware.
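For the weight-times-activation case it's even simpler than a general table, since the weight is just -1/0/+1 and the activation stays an integer. A minimal sketch of the idea (my own illustration, assuming unpacked ternary weights and int8 activations; the actual bitnet.cpp kernels pack the weights and use SIMD lookup tables):

    // Illustrative ternary dot product: every "multiply" collapses to
    // skip / subtract / add, so no floating-point or integer multiplier is needed.
    // This is a sketch of the idea, not the optimized bitnet.cpp kernel.
    #include <cstdint>
    #include <cstddef>

    int32_t ternary_dot(const int8_t* weights,      // values in {-1, 0, +1}
                        const int8_t* activations,  // quantized activations
                        std::size_t n) {
        int32_t acc = 0;
        for (std::size_t i = 0; i < n; ++i) {
            if (weights[i] == 1)       acc += activations[i];  // +1: add
            else if (weights[i] == -1) acc -= activations[i];  // -1: subtract
            // weight == 0: contributes nothing, no work at all
        }
        return acc;
    }

The whole inner loop is adds, subtracts, and compares, which is why it already runs fine on plain CPUs; dedicated hardware would mostly just do the same thing wider, with the weights kept packed.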
I'm glad Microsoft uses Bash in the example instead of their own Windows shells. As a user I would like to have something like "Git Bash" for Windows built into the system as the default shell.
WSL is where it's at today. It's not quite what you're asking for, as it is a separate virtual OS, but the integration is so tight that it feels like you're using your favorite shell natively in Windows.
> integration is so tight that it feels like you're using your favorite shell natively in Windows
WSL1 certainly felt that way; WSL2 just feels like any other virtualization manager and basically works the same. Not sure why people sing the praises of WSL2. I gave it a serious try for months, but there is a seemingly endless list of compatibility issues which I never had with VMware or VirtualBox, so I just went back to those instead and the experience is more or less the same.
Probably because it has relatively painless GPU sharing with passthrough. As far as I know, that sort of feature requires a hypervisor-level VM, which is not something you get with VirtualBox.
Someone correct me if I'm wrong, but I think you can use a KVM or QEMU backend for VirtualBox and that way get GPU pass-through. Probably not out of the box though.
The WSL2 GPU passthrough is more like a virtual GPU than KVM-style device passthrough. I believe it's effectively a device-specific Linux userland driver talking to a device-specific Windows kernel driver, with a Linux kernel shim bridging the two. If I recall correctly, the Linux userland drivers are actually provided by the Windows driver.
Not sure what you mean by “default shell”. The default shell on Windows is this: https://en.wikipedia.org/wiki/Windows_shell. I don’t suppose you mean booting into Bash. Windows doesn’t have any other notion of a default shell.
Neat. Would anyone know where the SDPA kernel equivalent is? I poked around the repo, but only saw some form of quantization code with vectorized intrinsics.