The formulations in "Attention as an RNN" have issues similar to RWKV's. Fundamentally it's a question of what we call an RNN.

Personally I think it's important not to call some of these recent architectures RNNs, because they have theoretical properties that don't match (read: are weaker than) those of what we've "classically" called RNNs.

Ref: https://arxiv.org/abs/2404.08819
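To make the distinction concrete, here's a toy sketch (my own caricature of the two families; the names, shapes, and gating are made up and not either paper's exact update). The classical RNN puts a nonlinearity inside the recurrence; the newer "RNNs" use a gated linear recurrence, so each step is an affine map in the hidden state:

    import jax.numpy as jnp

    def classic_rnn_step(h, x, W, U):
        # classical RNN: the nonlinearity sits *inside* the recurrence,
        # so h_t is a nonlinear function of h_{t-1}; computing h_T
        # takes T sequential step applications
        return jnp.tanh(W @ h + U @ x)

    def linear_rnn_step(h, a_t, b_t):
        # the newer "RNN" shape (roughly RWKV/SSM-style): a gated
        # *linear* recurrence h_t = a_t * h_{t-1} + b_t, i.e. an
        # affine map in h that composes with other affine maps
        return a_t * h + b_t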

As a rule of thumb: you generally don't get parallelism for free; you pay for it with poorer expressivity.
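Here's a minimal sketch of where the parallelism comes from in the linear case (again my own toy example, with made-up values). Because affine maps compose associatively, all T hidden states of h_t = a_t * h_{t-1} + b_t can be computed with a parallel prefix scan in O(log T) depth:

    import jax
    import jax.numpy as jnp

    def combine(e1, e2):
        # compose two affine maps h -> a*h + b (apply e1, then e2):
        # a2*(a1*h + b1) + b2 = (a2*a1)*h + (a2*b1 + b2)
        a1, b1 = e1
        a2, b2 = e2
        return a2 * a1, a2 * b1 + b2

    T, d = 8, 4
    a = 0.9 * jnp.ones((T, d))                            # toy decay gates
    b = jax.random.normal(jax.random.PRNGKey(0), (T, d))  # toy inputs

    # all T states of h_t = a_t * h_{t-1} + b_t (zero initial state),
    # via an associative scan: O(log T) parallel depth
    h = jax.lax.associative_scan(combine, (a, b))[1]

    # sequential reference, for checking
    h_ref, state = [], jnp.zeros(d)
    for t in range(T):
        state = a[t] * state + b[t]
        h_ref.append(state)
    assert jnp.allclose(h, jnp.stack(h_ref), atol=1e-5)

The tanh in the classic step is exactly what breaks this trick: a nonlinear recurrence doesn't compose into an associative op, so there's no analogous log-depth scan, and that sequential dependence is also where the extra expressivity comes from.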



