Here's a project I've been working on for the last few months.
It's a new (I think) algorithm that lets you adjust smoothly - and in real time - how many calculations you'd like to do during inference of an LLM.
It seems that it's possible to do just 20-25% of the weight multiplications instead of all of them and still get good inference results.
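To give a rough idea of what "doing only some of the multiplications" means, here's a toy numpy sketch. This is not the actual Metal implementation in the repo (which is organized so it never even has to look at the skipped weights); it just shows the effect of keeping only the largest individual products and summing those:

```python
# Toy illustration only: approximate y = W @ x by keeping the top `effort`
# fraction of the individual products x_j * W[i, j], ranked by absolute value.
# For clarity we score every product here; a real implementation would not.
import numpy as np

def approx_matvec(W: np.ndarray, x: np.ndarray, effort: float) -> np.ndarray:
    """W: (out, in) weights, x: (in,) activations, effort in (0, 1]."""
    contrib = W * x                       # all individual products x_j * W[i, j]
    scores = np.abs(contrib)
    k = int(effort * scores.size)         # how many products we're willing to keep
    if k >= scores.size:
        return contrib.sum(axis=1)
    threshold = np.partition(scores.ravel(), -k)[-k]
    mask = scores >= threshold            # keep only the largest contributions
    return (contrib * mask).sum(axis=1)

rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096)).astype(np.float32) / 64
x = rng.standard_normal(4096).astype(np.float32)
exact = W @ x
approx = approx_matvec(W, x, effort=0.25)    # ~25% of the multiplications
print(np.corrcoef(exact, approx)[0, 1])      # correlation with the exact result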
I implemented it to run on M1/M2/M3 GPUs. The matrix multiplication approximation itself can be pushed to run about 2x faster before the quality of the output collapses.
The overall inference speed is just a bit faster than Llama.cpp's, because the rest of the implementation could be better, but with further development I think it could become a new method of speeding up inference - in addition to quantization.
You could call it ad-hoc model distillation :)
You can change the speed / accuracy of a model at will, in real time.
Oh, and as a side effect, the data format also lets you choose how much of the model to load into memory. You can decide to skip, say, 10/20/40% of the least important weights.
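Here's a hypothetical sketch of what that load-time trimming amounts to (the repo's actual on-disk format is its own thing; assume here that each row is stored as values plus column indices, sorted by descending magnitude, so you can simply stop reading each row early):

```python
# Hypothetical load-time trimming sketch: weights stored by descending |value|,
# so skipping the least important ones just means reading fewer entries per row.
import numpy as np

def load_trimmed_row(values: np.ndarray, indices: np.ndarray, dim: int, keep: float) -> np.ndarray:
    """values/indices: one row's weights pre-sorted by descending |value|; keep in (0, 1]."""
    n = max(1, int(keep * len(values)))   # e.g. keep=0.8 skips the 20% least important weights
    row = np.zeros(dim, dtype=values.dtype)
    row[indices[:n]] = values[:n]         # only the first n entries ever need to be read
    return row

# Example: a row of 8 weights, stored by descending magnitude.
vals = np.array([0.9, -0.7, 0.5, 0.4, -0.3, 0.2, 0.1, -0.05], dtype=np.float32)
idxs = np.array([3, 0, 6, 1, 7, 2, 5, 4])
print(load_trimmed_row(vals, idxs, dim=8, keep=0.75))  # the 2 smallest weights stay zero
```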
It's implemented for Mistral, and it's also been lightly tested on Mixtral and Llama. It works on FP16 for now; Q8 is in the works.
The algorithm is described here, and the implementation is open source.
https://kolinko.github.io/effort/
I know these are bold claims, but I hope they survive the scrutiny :)
A search for model pruning turns up many results, including https://arxiv.org/abs/2305.11627, which discusses "magnitude pruning" as a baseline and refers to https://arxiv.org/pdf/2301.00774.pdf, which asserts in the introduction:
> First, as shown in Figure 1, SparseGPT can induce uniform layerwise sparsity of up to 60% in e.g. the 175-billion-parameter variant of the OPT family (Zhang et al., 2022), with minor accuracy loss. By contrast, the only known one-shot baseline which easily extends to this scale, Magnitude Pruning (Hagiwara, 1994; Han et al., 2015), preserves accuracy only until 10% sparsity, and completely collapses beyond 30% sparsity.
I don't like how these papers promote their own methods by implementing a baseline poorly and then describing their own approach in heavy mathematical jargon; the OP's blog post is a breath of fresh air in making the method accessible to people with very little background.
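For context, the magnitude-pruning baseline they compare against is roughly this simple (a sketch that zeroes the globally smallest weights; layerwise variants apply the same idea per layer):

```python
# Rough sketch of global magnitude pruning: zero out the smallest-|w| weights
# until the target sparsity is reached.
import numpy as np

def magnitude_prune(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of W with the `sparsity` fraction of smallest-magnitude weights zeroed."""
    flat = np.abs(W).ravel()
    k = int(sparsity * flat.size)             # number of weights to drop
    if k == 0:
        return W.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = W.copy()
    pruned[np.abs(W) <= threshold] = 0.0      # ties may drop slightly more than k
    return pruned

W = np.random.default_rng(1).standard_normal((1024, 1024)).astype(np.float32)
print((magnitude_prune(W, 0.3) == 0).mean())  # ~0.3 of the weights are now zero
```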