Red-staters are the kings of scams, MLMs, pyramid schemes, and most other forms of terrible, good-for-nothing businesses. This is the party of the payday loan shark and of the buy-here-pay-here used car scammer.
Correct. Folks here don't even begin to talk about Dave & Buster's or Chuck E. Cheese, with their own terrible betting systems designed for kids, where the company scrip (tickets) buys sugar poison at prices that would make your local robber baron jealous!
But no one cares that we start the stupid-person-to-gambler pipeline right at that age. No one calls for banning Chuck E. Cheese. In fact, it's the opposite: we embraced those companies with horror games like Five Nights at Freddy's, whose characters were rapidly co-opted for internet degeneracy, much like the Overwatch characters.
A husky famously started a fire on a physical-knob stovetop while trying to get at a pizza sitting on top of it. There are videos (everyone survived, including the dog).
I'm going to laugh so hard if we finally find out that quantization really does hurt models, and all the claims of 99.9999% quality retention at 4-bit or lower precision were full of it!
Of all the gaslighting I see in ML/AI, the idea that quantization is almost free is up there. I don't buy it for one second, no matter how many charts you show me implying that the logprobs are 99% the same. Sure, maybe until you go past a 10K-token context window!
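For what it's worth, the kind of comparison behind those charts is easy to run yourself. Here's a minimal sketch (the model name, input file, and context lengths are my own placeholders, not anything from the charts being discussed) that measures how far a 4-bit model's per-token logprobs drift from full precision as the context grows:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: any causal LM with a long context window; loading both copies
# at once needs a lot of VRAM, so treat this as a sketch, not a harness.
model_name = "meta-llama/Llama-3.1-8B"

tok = AutoTokenizer.from_pretrained(model_name)
full = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
quant = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)

# Hypothetical long input; it needs to tokenize to >10K tokens to test
# the long-context end of the claim.
text = open("long_document.txt").read()
ids = tok(text, return_tensors="pt").input_ids

for ctx_len in (512, 2048, 10240):
    chunk = ids[:, :ctx_len].to(full.device)
    with torch.no_grad():
        lp_full = torch.log_softmax(full(chunk).logits.float(), dim=-1)
        lp_quant = torch.log_softmax(quant(chunk).logits.float(), dim=-1)
    # Compare the logprob each model assigns to the actual next token.
    tgt = chunk[:, 1:].unsqueeze(-1)
    diff = (lp_full[:, :-1].gather(-1, tgt)
            - lp_quant[:, :-1].gather(-1, tgt)).abs().mean()
    print(f"ctx={ctx_len}: mean |delta logprob| = {diff.item():.4f}")
```

If the skeptics are right, the divergence printed for the 10K-token run should be noticeably worse than for the short contexts, not a flat 1% everywhere.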
I disagree. ChatGPT is nowhere near AGI. I don't know anyone who confuses general intelligence with super intelligence.
How do you know what most people mean?
Update: I agree with the article that a reasonable amount of generality exists in ChatGPT. I dispute the intelligence part. It cannot evaluate the truth or falsity of what it spouts out, which appears to be a fundamental limitation of LLMs.
This paper seems dubious, because it flies in the face of what the ReFT/pyreft paper shows (you can train as little as 0.0001% of the parameters for 100 epochs to personalize a model on a small dataset):
Note that the OP paper is not peer reviewed yet, and while the one I linked isn't either, it has Christopher Manning (yes, the one you know from YouTube), the head of AI at Stanford, as a co-author.
In general, I think LoRA and especially ReFT should be more resistant to catastrophic forgetting, since they literally leave most of the model untouched (see the sketch below).
The Stable Diffusion community has literally tens of thousands of LoRAs that don't cripple the model at small rank.
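To make the "leave most of the model untouched" point concrete, here's a toy sketch of a single LoRA-style layer (the rank, dimensions, and init below are illustrative assumptions, not any paper's exact setup): the pretrained weight is frozen and only the low-rank factors train.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Toy LoRA layer: frozen base weight plus a trainable low-rank update."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero-init: update starts at 0
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the scaled low-rank correction (B @ A) applied to x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(4096, 4096, rank=8)
total = sum(p.numel() for p in layer.parameters())
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable fraction: {trainable / total:.4%}")  # ~0.39% of this layer
```

Since gradient updates can only move A and B, everything the frozen base weight encodes is guaranteed to stay put, which is the intuition for why these adapters resist catastrophic forgetting.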
I don't see how Christopher Manning's co-authorship shifts favour towards the other paper; this paper has Antonio Torralba as a co-author, who's also one of the big shots in AI.