Hacker News
lawlessone 10 days ago | on: New LLM optimization technique slashes memory cost...
Doesn't training require inference? So I guess it would help there too?
boringg 10 days ago
Yeah, but training requires the larger-memory data center infrastructure, not the deployment setup.
fzzzy 10 days ago
Training doesn't require inference. It uses back-propagation, a different algorithm.
bitvoid 10 days ago
Backpropagation happens after some number of inferences. You need to run inference to compute a loss, which you then backpropagate from.
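The point in the last comment can be sketched in a few lines. This is a minimal toy example (not from the thread, using made-up data and a plain linear model) showing that each training step first runs inference (a forward pass) to get predictions, computes a loss from them, and only then backpropagates:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))            # toy inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                          # toy targets

w = np.zeros(4)                         # model parameters
lr = 0.1

def forward(w, X):
    """The 'inference' step: run the model on inputs."""
    return X @ w

losses = []
for _ in range(50):
    pred = forward(w, X)                # 1. inference (forward pass)
    err = pred - y
    loss = np.mean(err ** 2)            # 2. loss from those predictions
    grad = 2 * X.T @ err / len(X)       # 3. "backprop": dloss/dw
    w -= lr * grad                      # 4. gradient step
    losses.append(loss)
```

The gradient in step 3 is only defined with respect to the predictions from step 1, so a technique that makes the forward pass cheaper helps training as well.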