htrp 4 months ago | on: GPT-4o
Did we ever get confirmation that GPT-4 was a fresh training run, as opposed to increasingly complex training on more tokens on top of the base GPT-3 models?
saliagato 4 months ago
GPT-4 was indeed trained on the GPT-3 instruct series (davinci, specifically). GPT-4 was never a newly trained model.
whimsicalism 4 months ago
What are you talking about? You are wrong, for the record.
fooker 4 months ago
They have pretty much admitted that GPT-4 is a bunch of 3.5s in a trenchcoat.
whimsicalism 4 months ago
They have not. You probably read "MoE" in some pop article about what it means without having any clue.
matsemann 4 months ago
If you know better, it would be nice of you to provide the correct information, not just refute things.
whimsicalism 4 months ago
GPT-4 is a sparse MoE model with ~1.2T params. This is all public knowledge and immediately precludes the two previous commenters' assertions.
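
For readers unfamiliar with the term: a sparse MoE layer routes each token to a small subset of "expert" feed-forward networks, so the total parameter count can be enormous while per-token compute stays modest. Below is a minimal illustrative sketch in PyTorch; all sizes are toy values and the SparseMoE class is hypothetical, since OpenAI has never published GPT-4's actual configuration.

    # Minimal sketch of a sparse (top-k routed) mixture-of-experts layer.
    # Toy sizes only; not GPT-4's real (unpublished) configuration.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseMoE(nn.Module):
        def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
            super().__init__()
            self.top_k = top_k
            # A learned router scores each token against every expert.
            self.router = nn.Linear(d_model, n_experts)
            # Each expert is an ordinary feed-forward block.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):  # x: (tokens, d_model)
            logits = self.router(x)                        # (tokens, n_experts)
            weights, idx = logits.topk(self.top_k, dim=-1) # pick k experts/token
            weights = F.softmax(weights, dim=-1)           # renormalize over the k
            out = torch.zeros_like(x)
            # Only the chosen experts run per token: that sparsity is why the
            # total parameter count can dwarf the per-token compute cost.
            for k in range(self.top_k):
                for e in range(len(self.experts)):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
            return out

    tokens = torch.randn(16, 64)
    print(SparseMoE()(tokens).shape)  # torch.Size([16, 64])

Note this is still one jointly trained model with shared routing, not "a bunch of 3.5s in a trenchcoat" and not a continuation of a GPT-3 checkpoint.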