mrgaro | 29 days ago | on: Prompt caching for cheaper LLM tokens
Hopefully you can write the teased next article about how the feedforward and output layers work. The article was super helpful for me to get a better understanding of how LLM GPTs work!
samwho | 29 days ago
Yeah! It’s planned for sure. It won’t be the very next one, though; I’m taking a detour into another aspect of LLMs first.
I’m really glad you liked it, and seriously, the resources I link at the end are fantastic.